
AWS has become one of the most popular options for running containers because of its high reliability, strong security, and native integrations. There are multiple ways to containerize applications on AWS depending on your needs. AWS Fargate is a highly turnkey solution that offloads infrastructure management to AWS, thereby reducing the complexity of app deployment for the customer.

However, Fargate doesn’t absolve customers of all responsibility.

While AWS maintains the security of the host environment, the customer is still responsible for securing the workloads themselves: container images, task configurations, IAM permissions, and application code. To meet this responsibility, it's important to understand how AWS Fargate works and which practices you should follow to keep your deployed applications secure.

What is AWS Fargate?

AWS Fargate is a technology for use with Amazon ECS that allows you to run containers without having to manage the underlying servers. Typically, developers would have to provision, configure, and scale clusters for virtual machines to run containerized applications. AWS Fargate relieves developers of this responsibility, so they can focus on building better applications without having to manage the infrastructure on which they run.

When running Amazon ECS tasks and services with Fargate, you package your application in container images, specify the CPU, memory, networking, and IAM resources your application needs, and launch the application. AWS Fargate manages all of the underlying infrastructure.
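As a rough sketch of that workflow, the aws CLI commands below register a minimal Fargate task definition and launch it. Every name here (the `my-app` family, the ECR image URI, the account ID, the role ARN, and the subnet and security group IDs) is a placeholder you would replace with your own values:

```
# Register a minimal Fargate task definition. Fargate requires the
# awsvpc network mode and explicit CPU/memory sizing.
aws ecs register-task-definition \
  --family my-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{
    "name": "my-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "essential": true,
    "portMappings": [{"containerPort": 80}]
  }]'

# Launch the task on Fargate in your own subnet and security group.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-app \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'
```

Notice what is absent: there is no EC2 instance to provision, patch, or scale anywhere in this workflow.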

AWS Fargate offers several benefits. Because AWS abstracts away the underlying infrastructure, developers only have to be concerned with containers and building their apps. AWS picks the EC2 instance types, manages the cluster scheduling, and handles the cluster optimization. Once you take care of the container requirements and upload everything to Amazon ECS, AWS Fargate launches your containers for you and automatically scales them to your requirements. Ultimately, running containers with AWS Fargate lowers infrastructure management and application costs.

The AWS Shared Responsibility Model

AWS operates under a shared responsibility model that dictates which security controls it's responsible for and which are the customer's. In short, AWS guarantees the security of its physical facilities, network, and hardware, while the customer is responsible for securing whatever they put into the cloud through network controls, application configuration, identity and access management, and other measures.

However, the balance of responsibility shifts depending on the particular AWS service the customer is using. Concerning infrastructure security, AWS assumes more responsibility for AWS Fargate resources than it does for self-managed EC2 instances. With Fargate, AWS manages the security of the underlying instance in the cloud and the runtime that's used to run your tasks. As the customer, you're responsible for securing the application code and the configuration of the service.

Best practices to secure containers

Basic practices for securing containers in any situation apply to those deployed with Fargate as well. While this list isn’t exhaustive, it addresses the most common container security concerns.

[Related Reading: What Is Container Security?]

Use trusted images

Docker images are the fundamental unit of Docker containers. Each image is a standalone bundle of executable software that contains source code, system tools, libraries, dependencies, and everything else needed to run a Docker container. Organizations often use container repositories to share versions of particular images among their team or with the development community at large. When developers are building a containerized application, they’ll pull images from public or private container image repositories rather than build an image from scratch in order to speed up the development process.

Ultimately, an application is only as secure as the images used to build it. As with any code, an image or its dependencies can introduce vulnerabilities. To ensure the security of your containers, it's critical to acquire images only from trusted, secure sources and avoid sources that lack control policies. Images should be scanned for vulnerabilities prior to deployment and regularly afterward, and kept up to date with the latest security patches. Smaller images further reduce risk by shrinking the attack surface.
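In practice, these habits come down to pulling from a registry you trust, pinning exactly what you pulled, and scanning before you ship. The commands below sketch that flow; the Python base image is just an example, and Trivy stands in for whichever scanner your organization uses:

```
# Pull a base image from a trusted registry (Amazon ECR Public here).
docker pull public.ecr.aws/docker/library/python:3.12-slim

# Look up the image digest so builds can pin it instead of a mutable tag:
docker images --digests public.ecr.aws/docker/library/python

# In your Dockerfile, reference the digest so the build is reproducible
# and cannot silently pick up a re-tagged or tampered image:
#   FROM public.ecr.aws/docker/library/python@sha256:<digest>

# Scan before deployment (Trivy shown as one example scanner):
trivy image public.ecr.aws/docker/library/python:3.12-slim
```

Pinning by digest and scanning on every build are complementary: the digest guarantees you deploy what you scanned, and the scan tells you whether that digest is still safe to deploy.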

Limit privileges

By default, Docker containers execute using root user privileges if no user is specified. This opens access to both the containers and the host, leaving the application vulnerable to exploitation. To reduce exposure, it’s best to create a dedicated user with the least privileged access required to run containers. In cases where container processes must run as the root user, you can increase security by remapping the user to a less privileged user on the host.
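A Dockerfile fragment makes the least-privilege pattern concrete. This is a minimal sketch (the base image, user name, and entry point are illustrative), written here as a heredoc so the whole fragment is visible:

```shell
# Sketch: create a dedicated non-root user in the image and run as it.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
# Create a dedicated, unprivileged system user and group for the app.
RUN groupadd --system app && useradd --system --gid app --create-home app
WORKDIR /home/app
# Give the app user ownership of only its own files.
COPY --chown=app:app . .
# Drop root: everything from this line on runs as the unprivileged user.
USER app
CMD ["python", "main.py"]
EOF
```

The key line is `USER app`: once it appears, the container's main process no longer runs as root, so a compromised process cannot trivially modify system files inside the container.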

Prevent new privileges

Docker containers are also allowed by default to acquire new privileges after they have launched, potentially enabling an attacker to exploit a container to gain access to other parts of the container environment. It’s important to disable container processes from gaining new privileges to prevent privilege escalation attacks.
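When you control the Docker invocation directly, this protection is a single flag. The example below is a sketch of that Docker-level control (the Alpine image is just a small test image); for ECS task definitions, check the current AWS documentation for which security options your launch type supports:

```
# Launch a container that is blocked from acquiring new privileges.
# The flag stops setuid/setgid binaries and similar mechanisms from
# escalating beyond the privileges the container started with.
docker run --security-opt=no-new-privileges --rm alpine:3.19 \
  grep NoNewPrivs /proc/self/status
```

With the flag set, the kernel marks the process with `NoNewPrivs`, and that marking is inherited by every child process the container spawns.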

Monitor APIs

Containers need to communicate to deploy and run correctly, and they do this in part through APIs. Malicious actors can take advantage of poorly or incorrectly configured APIs to deploy an image and run a malicious container on the host system. To prevent this and other intrusions, it's important to properly configure and secure the APIs between the containers themselves and to monitor any API traffic that flows through your deployment pipeline.

Secure registries

A container registry is a content delivery system that is used to store and distribute container images. Whether you use a managed third-party registry or host your own, it's essential to secure your image inventory. Regularly scanning images for vulnerabilities, controlling user access, and setting up encrypted channels for connecting to the registry are some of the measures you should take to reduce security risks.
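If Amazon ECR is your registry, two of those measures are one-line settings. These are real aws CLI commands, but the repository name is a placeholder:

```
# Scan every image automatically as it is pushed to the repository.
aws ecr put-image-scanning-configuration \
  --repository-name my-app \
  --image-scanning-configuration scanOnPush=true

# Make tags immutable so a published tag cannot be silently overwritten
# with a different (possibly malicious) image.
aws ecr put-image-tag-mutability \
  --repository-name my-app \
  --image-tag-mutability IMMUTABLE
```

Tag immutability pairs well with digest pinning: together they ensure the image you reviewed is the image that runs.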

Managing your risk

Container security is a growing concern, and understanding these best practices is the first step toward securing your applications deployed with AWS Fargate. Following the best practices outlined above will help mitigate the risk of compromise; however, they are not standalone solutions.

Organizations must be positioned to identify compromise when it occurs, and robust security requires a deep level of visibility into the network that many organizations have difficulty achieving on their own.

To achieve this visibility, logs should be collected from each task, along with network traffic from the base host: to, from, and between containers. This data should then be analyzed around the clock so analysts can alert responders when unpatched, undetected, or zero-day vulnerabilities are successfully exploited, and advise on remediation actions for any compromised containers.
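The per-task log collection described above is configured in the container definition itself. The fragment below is a sketch routing task logs to CloudWatch Logs via the awslogs driver, which is supported on Fargate; the image URI, log group, and region are placeholders:

```shell
# Sketch: container definition fragment with CloudWatch logging enabled.
cat > container-logging.json <<'EOF'
{
  "name": "my-app",
  "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
  "essential": true,
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/my-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}
EOF
```

With this in place, every task writes its stdout/stderr to a predictable CloudWatch log group, giving analysts a central place to collect and correlate activity across tasks.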

This thorough approach allows Fargate adopters to disrupt threat activity early, minimizing the impact of any compromise and ensuring you get the best outcome from your Fargate deployments.

About the Author
Josh Davies

Josh Davies is a Product Manager at Alert Logic. Formerly a Security Analyst and Solutions Architect, Josh has tremendous experience working with mid-market and enterprise organisations; conducting incident response and threat hunting activities as an analyst before working with organisations to identify appropriate security solutions for challenges across cloud, on-premises and hybrid environments.
