I recently had the privilege of co-hosting a webinar on AWS container security with the very knowledgeable Curtis Rissi, Senior Solutions Architect at Amazon Web Services. Our comprehensive, one-hour discussion covered everything from common container security mistakes to best practices, all from the insightful perspective of an AWS insider and an MDR expert. Here’s an overview of what we discussed, which I hope can aid you on your path to securing your AWS containers.
AWS Container Security Overview
Containers provide increased efficiency, portability, and scalability. Compared to virtual machines, they have a smaller footprint/attack surface and provide an additional layer of security by isolating applications. However, containerized environments are still susceptible to malicious attacks between containers or within the shared resources of the underlying host. Therefore, you need a strong AWS container security strategy that includes:
- 360-degree awareness of the container
- Full understanding of how it interacts with its environment
- Automated governance policies woven into the continuous integration/continuous delivery (CI/CD) pipeline
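The third point, automated governance in the CI/CD pipeline, often takes the form of a policy gate that blocks a build when an image scan reports findings above an allowed severity. The sketch below is hypothetical: the severity names and the shape of the `findings` records are assumptions for illustration, not any particular scanner’s output format.

```python
# Hypothetical CI/CD policy gate: fail the build when an image scan
# reports findings above an allowed severity threshold.
# Severity names and the `findings` record shape are illustrative
# assumptions, not a specific scanner's output format.

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate_build(findings, max_allowed="MEDIUM"):
    """Return (passed, violations) for a list of {"id", "severity"} findings."""
    limit = SEVERITY_RANK[max_allowed]
    violations = [f for f in findings
                  if SEVERITY_RANK.get(f["severity"], 0) > limit]
    return (len(violations) == 0, violations)

# Example: a single HIGH finding blocks the pipeline.
passed, violations = gate_build(
    [{"id": "CVE-2021-0001", "severity": "HIGH"},
     {"id": "CVE-2021-0002", "severity": "LOW"}],
)
```

Because the gate runs on every pipeline execution, the policy applies uniformly whether you are deploying ten containers or ten thousand.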
Pitfalls of Container Security
Planning an AWS container security strategy can be trial-and-error, but security isn’t something you should chance — it requires knowledge and careful strategy. Gain a leg up on your container security journey by avoiding these six common missteps:
1. Starting with a customer-facing or mission-critical application
- Don’t take a chance on starting something that can negatively impact the customer, business processes, or revenue streams if you get it wrong.
- Do start with something that is safe to fail and offers leeway to get it right. Fail quickly, recover, and learn with minimal splash. For example, start with a small back-end batch-processing task instead of building an ecommerce site.
2. Focusing too much on the containers themselves
- Don’t make the mistake of thinking only about the containers when securing the ecosystem. The underlying container management hosts are equally important.
- Do assess the security of all components, regularly scan for vulnerabilities, and keep all parts of the system up-to-date while also monitoring for threats.
3. Ignoring automation as a table-stakes requirement
- Don’t treat automation as optional; at scale, manual processes can’t keep up with a broad container ecosystem.
- Do leverage automation through every stage of the lifecycle to rapidly scale from dozens of containers to thousands. Build security, operations, introspection, and monitoring into it.
4. Assuming code libraries are safe
- Don’t blindly trust any third-party code. For example, SMS plug-ins from sources that seem trustworthy can contain unintentional vulnerabilities or malicious code.
- Do maintain vigilance with governance, artifact repositories, and scanning libraries. Have approved libraries from which your applications can pull. Treat these dependencies as your own code. Version, scan, and vet them.
5. Giving containers unnecessary privileges
- Don’t allow liberal access to containers or assign rights to the containers themselves — this creates more opportunity for security risks. The more default access everything has, the more opportunity for a compromised container. It also makes it more difficult to track a breach’s entry point.
- Do wall off the containers and give out access only when critical.
6. Failing to properly vet an image
- Don’t assume an image is safe just because it comes from an apparently trusted source.
- Do perform tests on all images. Ensure you check the versions and history behind all components of your images.
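One concrete vetting control is refusing to deploy any image reference that isn’t pinned to an immutable digest, since mutable tags like `:latest` can silently change underneath you. A minimal sketch of that check, assuming a simple digest-pinning policy (the function names are illustrative, not from any AWS tool):

```python
import re

# Illustrative pre-deployment check: require container image references to
# be pinned to an immutable sha256 digest rather than a mutable tag such as
# :latest. Digest pinning is one common vetting control; the function names
# here are assumptions for this sketch.

DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_pinned(image_ref: str) -> bool:
    """True if the image reference is pinned by an immutable digest."""
    return bool(DIGEST_RE.search(image_ref))

def vet_images(image_refs):
    """Return the references that fail the digest-pinning policy."""
    return [ref for ref in image_refs if not is_pinned(ref)]
```

A check like this fits naturally into the same pipeline gate described earlier, so unpinned images never reach the orchestrator.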
Key Security Considerations
After avoiding the most common pitfalls, keep these four security considerations in mind:
- Vulnerable source containers can impact the entire platform, whether through human error or malicious changes.
  - Mitigate the risk by maintaining tight access control and integrating with an identity provider (IdP).
  - Regularly scan images, encrypt them, and use HTTPS for image transfers.
- Insecure orchestration can lead to malicious changes and give bad actors access, resulting in rogue containers, such as cryptojacking containers.
  - Mitigate the risk through tight access control and by restricting public access.
  - Ensure the technology is updated regularly (don’t assume default security is in place).
  - Monitor the configuration closely to make sure what’s running on the containers and the host is what’s expected to run.
  - Have a system in place that tracks what requests are being made. Is this the normal state, or is something malicious running in this environment?
- Vulnerable system components create insecure configurations, enabling lateral spread.
  - Mitigate the risk through tight access control.
  - Monitor host behavior and intra-container traffic through logs and the network.
  - Alert Logic offers visibility into not only who is talking to whom but also what they’re talking about.
  - In a modern architecture, the complexity of communications is impossible for a person to monitor alone; you need tooling that scans the entire stack and across the solutions.
- Application vulnerabilities, combined with overly permissive configurations, can result in compromise.
  - Mitigate the risk by regularly scanning at run time and monitoring systems and applications.
  - Segregate containers and store sensitive data separately (essentially in a vault), rotating secrets at regular intervals based on risk requirements so the same key isn’t in use for a year or more.
  - Developers don’t need to know the database password, only where the application can get it at run time. Only the app, the CI/CD platform, and possibly the host need it.
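That run-time lookup pattern can be sketched in a few lines. The `DB_PASSWORD` variable name is an assumption for illustration; in practice the value would be injected by the orchestrator or fetched from a secrets store such as AWS Secrets Manager, and it never appears in source code or in the image.

```python
import os

# Sketch of the "app fetches the secret at run time" pattern: the database
# password never appears in source or images; the container resolves it from
# its environment, which the orchestrator or a secrets manager populates.
# The variable name DB_PASSWORD is an assumption for this sketch.

def get_db_password(env=os.environ):
    password = env.get("DB_PASSWORD")
    if password is None:
        # Fail fast and loudly instead of falling back to a hardcoded default.
        raise RuntimeError("DB_PASSWORD is not set; check secret injection")
    return password
```

Failing fast when the secret is missing is deliberate: a hardcoded fallback would quietly reintroduce the very credential sprawl the pattern is meant to prevent.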
Shared Responsibility Model
AWS provides security of the cloud and the customer provides security in the cloud. Security is a shared, but not equal, responsibility. Some, but not all, of what is secured by AWS includes:
- System Image Library
- Perimeter Security
Some areas under your responsibility as the client include:
- Configuration Scanning & Management
- Log Analysis
- Threat Detection
- Access Management
- Data Encryption
- Incident Response
Consider the way most platforms work, including AWS — it’s all based on API calls. In the end, it’s imperative to log and monitor those calls. Ensure critical functions like CloudTrail are activated and stay on; for best results, rely on an automated system to turn them back on if they’re ever inadvertently turned off (in a production environment they should always be on). It’s crucial to know who is talking to what, who requested it, when, and whether they have permission.
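A CloudTrail record already carries that who/what/when. As a small illustration, the sketch below reduces one record to those three answers; the field names (`eventTime`, `eventSource`, `eventName`, `userIdentity.arn`) are standard CloudTrail record fields, but the sample event itself is fabricated for this example.

```python
# Illustrative reduction of a CloudTrail record to the who/what/when an
# analyst needs first. The field names are standard CloudTrail record
# fields; the sample event below is fabricated for this sketch.

sample_record = {
    "eventTime": "2021-06-01T12:34:56Z",
    "eventSource": "ecs.amazonaws.com",
    "eventName": "RunTask",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
}

def summarize(record):
    """Pull out who made the call, what was called, and when."""
    return {
        "who": record["userIdentity"]["arn"],
        "what": f'{record["eventSource"]}:{record["eventName"]}',
        "when": record["eventTime"],
    }
```

At production volume, of course, no one reads individual records; summaries like this feed the automated monitoring and alerting discussed above.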
How you run your own containers depends on what platform you use. Consider this:
- BYO Ecosystem
  - Run containers with your choice of orchestration
  - Offers maximum flexibility and maximum responsibility
  - Works for hybrid workloads migrating out of a datacenter into AWS; a great starting point with minimal impact on existing processes
  - Can be an onerous but viable process; you’re in control of everything from patching individual libraries to securing access controls at the network level
  - Great if you have specific niche requirements
- Amazon ECS / EKS
  - Run containers with server-level control
  - Offers reduced complexity while retaining flexibility, with easier integrations behind the scenes
  - Frees you from tracking and patching the underlying hosts that run the orchestrator, so you can focus on the workloads themselves while keeping the flexibility of the full platform
  - Security is difficult; if it isn’t your company’s core competency, managing it all yourself can be a distraction, and you shouldn’t have to
  - Meets specific needs with minimal impact and maximum benefit for your team; there’s little new to learn, and it offers a comfortable level of consistency
- Pure containers, run without managing servers
  - For specific workloads with minimum complexity
  - Takes away the need to manage the underlying server
  - Caveat: because the host is abstracted away from you, certain access isn’t available; for example, you can’t run security scans against the underlying host. Shift your mindset if using this option.
Alert Logic & AWS — Delivering Peace of Mind
Alert Logic is a long-time partner with AWS, offering managed detection and response (MDR) to deliver peace of mind from threats. We combine 24/7 SaaS security with visibility and detection coverage wherever your systems reside. Learn more about securing your AWS containers by watching the full webinar.