AWS is one of the most popular options for running containers because of its high reliability, strong security, and native integrations. There are multiple ways to containerize applications on AWS depending on your needs. AWS Fargate is a turnkey solution that offloads infrastructure management to AWS, reducing the complexity of application deployment for the customer.

However, Fargate doesn’t absolve customers of all responsibility.

While AWS maintains the security of the host environment, the customer is still responsible for securing the workloads themselves, including container images, task configuration, and access controls. To successfully meet this responsibility, it’s important to understand how AWS Fargate works and what practices you should follow to ensure the security of your deployed applications.


What is AWS Fargate?

AWS Fargate is a technology for use with Amazon ECS (and Amazon EKS) that allows you to run containers without having to manage the underlying servers. Typically, developers would have to provision, configure, and scale clusters of virtual machines to run containerized applications. AWS Fargate relieves developers of this responsibility, so they can focus on building better applications without having to manage the infrastructure on which they run.

AWS Fargate offers several benefits. Because AWS abstracts away the underlying infrastructure, developers only have to be concerned with containers and building their apps: AWS picks the EC2 instance types, manages cluster scheduling, and handles cluster optimization. Ultimately, running containers with AWS Fargate lowers infrastructure management and application costs.


How Does AWS Fargate Work?

AWS Fargate removes the need for developers to provision and manage the virtual machines that run their containers, saving time and operational overhead.

When running Amazon ECS tasks and services with Fargate, you build the container image, define the images and resources your application needs, and launch the application; AWS Fargate manages the underlying infrastructure. Once you take care of the container requirements and upload everything to Amazon ECS, AWS Fargate launches your containers for you and automatically scales them to your requirements.
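As a rough sketch of that workflow using the AWS SDK for Python (boto3), the example below registers a minimal Fargate task definition and then launches it with the Fargate launch type. Every name, ARN, subnet, and security group ID shown is a hypothetical placeholder, and a real deployment would typically run behind an ECS service rather than as a one-off task.

```python
import boto3

ecs = boto3.client("ecs")

# Describe the container image and the CPU/memory the application needs.
# All identifiers below are placeholders.
ecs.register_task_definition(
    family="my-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU
    memory="512",   # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Launch the task on Fargate; AWS provisions and manages the underlying host.
ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="my-app",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```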


AWS Fargate Security and the Shared Responsibility Model

AWS operates under a shared responsibility model that dictates which security controls it’s responsible for and which are the customer’s responsibilities. Principally, it states that AWS guarantees the security of its physical facilities, network, and hardware, and the customer is responsible for securing whatever they put into the cloud through network controls, application configurations, identity and access management, and other measures.

However, the balance of responsibility shifts depending on the particular AWS service the customer uses. Concerning infrastructure security, AWS assumes more responsibility for AWS Fargate resources than it does for self-managed EC2 instances. With Fargate, AWS manages the security of the underlying instance in the cloud and the runtime that’s used to run your tasks. As the customer, you’re responsible for securing the application code and the configuration of the service.

Basic practices for securing containers in any situation apply to those deployed with Fargate. While this list isn’t exhaustive, it addresses the most common container security concerns.

[Related Reading: What Is Container Security?]


AWS Fargate Best Practices: Securing Containers


Use trusted Docker images

Docker images are the fundamental unit of Docker containers. Each image is a standalone bundle of executable software that contains source code, system tools, libraries, dependencies, and everything else needed to run a Docker container. Organizations often use container repositories to share versions of particular images among their team or with the development community at large. When developers are building a containerized application, they’ll pull images from public or private container image repositories rather than build an image from scratch in order to speed up the development process.

Ultimately, an application is only as secure as the images used to build it. As with any code, an image or its dependencies can introduce vulnerabilities. To ensure container security, it’s critical to acquire images only from trusted, secure sources and avoid sources that lack control policies. Images should be scanned for vulnerabilities prior to deployment and regularly afterward, and kept up to date with the latest security patches. Using smaller, minimal images further reduces the attack surface.
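If your images live in Amazon ECR, one way to enforce this in a build pipeline is to scan the image and block deployment on serious findings. The sketch below assumes a hypothetical repository and tag and uses ECR’s built-in scanning via boto3; other scanners can be substituted.

```python
import boto3

ecr = boto3.client("ecr")
repo, tag = "my-app", "1.0.0"  # hypothetical repository and tag

# Trigger a vulnerability scan (ECR can also scan automatically on push).
ecr.start_image_scan(repositoryName=repo, imageId={"imageTag": tag})

# Wait for the scan to finish, then read the severity summary.
ecr.get_waiter("image_scan_complete").wait(
    repositoryName=repo, imageId={"imageTag": tag}
)
findings = ecr.describe_image_scan_findings(
    repositoryName=repo, imageId={"imageTag": tag}
)
severities = findings["imageScanFindings"].get("findingSeverityCounts", {})

# Block the deployment if the image carries critical or high-severity CVEs.
if severities.get("CRITICAL", 0) or severities.get("HIGH", 0):
    raise SystemExit(f"Image {repo}:{tag} failed the scan: {severities}")
```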

Limit Docker container privileges

By default, Docker containers execute using root user privileges if no user is specified. This opens access to both the containers and the host, leaving the application vulnerable to exploitation. To reduce exposure, it’s best to create a dedicated user with the least privileged access required to run containers. In cases where container processes must run as the root user, you can increase security by remapping the user to a less privileged user on the host.
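In an ECS/Fargate task definition, these controls map to fields on the container definition. The sketch below (family, image, and user ID are hypothetical) runs the container as a dedicated non-root user, makes the root filesystem read-only, and drops all Linux capabilities:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-app-least-privilege",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0",
        "essential": True,
        "user": "1001",                   # run as a dedicated non-root user
        "readonlyRootFilesystem": True,   # no writes to the container filesystem
        "linuxParameters": {
            "capabilities": {"drop": ["ALL"]}  # remove all Linux capabilities
        },
    }],
)
```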

Prevent new Docker container privileges

Docker containers are allowed by default to acquire new privileges after they have launched, potentially enabling an attacker to exploit a container to gain access to other parts of the container environment. It’s important to disable container processes from gaining new privileges to prevent privilege escalation attacks.
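At the Docker level, this is the no-new-privileges security option. The sketch below uses the Docker SDK for Python against a hypothetical image; on Fargate you don’t invoke Docker directly, so treat it as an illustration of the setting itself and check the current ECS documentation for how it is expressed (if at all) for your launch type.

```python
import docker

client = docker.from_env()

# Start a container that cannot acquire privileges beyond those it started
# with (for example via setuid binaries). The image name is a placeholder.
container = client.containers.run(
    "registry.example.com/my-app:1.0.0",
    detach=True,
    user="1001",                              # non-root user
    security_opt=["no-new-privileges:true"],  # block privilege escalation
)
print(container.id)
```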

Monitor APIs

Containers need to communicate to deploy and run correctly, and they do this in part through APIs. Malicious actors can take advantage of poorly or incorrectly configured APIs to deploy an image and run a malicious container on the host system. To prevent this and other intrusions, it’s important to properly configure and secure the APIs used between the containers themselves, as well as any API traffic that will eventually reach your pipeline.
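Because Fargate tasks use awsvpc networking, each task gets its own elastic network interface and security groups, which gives you a straightforward way to limit which callers can reach a container’s API at all. The sketch below (VPC, security group IDs, and port are hypothetical) allows only traffic from the security group attached to the calling tasks:

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"         # placeholder VPC
frontend_sg_id = "sg-0123456789abcdef0"  # SG attached to the calling tasks

# Security group for the backend tasks; it starts with no ingress rules.
backend_sg_id = ec2.create_security_group(
    GroupName="fargate-backend-api",
    Description="Backend task API, reachable only from frontend tasks",
    VpcId=vpc_id,
)["GroupId"]

# Allow only the frontend tasks' security group to call the API port.
ec2.authorize_security_group_ingress(
    GroupId=backend_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": frontend_sg_id}],
    }],
)
```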

Secure container registries

A container registry is a content delivery system that is used to store and distribute container images. Whether you use a managed third-party registry or host your own, it’s essential to secure your image inventory. Some measures to take to reduce security risk include regularly scanning images for vulnerabilities, controlling user access, and setting up encrypted channels for connecting to the registry.
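If you host your images in Amazon ECR, several of these measures can be switched on when the repository is created. The sketch below (repository name is a placeholder) enables scan-on-push, immutable tags, and KMS encryption at rest; access is then controlled further through IAM and repository policies, and ECR endpoints are reached over TLS.

```python
import boto3

ecr = boto3.client("ecr")

ecr.create_repository(
    repositoryName="my-app",                            # placeholder name
    imageScanningConfiguration={"scanOnPush": True},    # scan every pushed image
    imageTagMutability="IMMUTABLE",                     # tags cannot be overwritten
    encryptionConfiguration={"encryptionType": "KMS"},  # encrypt images at rest
)
```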


Managing Your Risk in AWS Fargate

Container security is a growing concern, and understanding these best practices is the first step to securing your applications deployed with AWS Fargate. Following the best practices outlined above will help mitigate the risk of compromise; however, they are not standalone solutions.

Organizations must be positioned to identify compromise when it occurs, and robust security requires a deep level of visibility into the network that many organizations have difficulty achieving on their own.

To achieve this visibility, logs should be collected from each task, along with network traffic from the base host and traffic to, from, and between containers. The data should then be analyzed 24/7 so analysts can alert responders when any unpatched, undetected, or zero-day vulnerabilities are successfully exploited and advise on remediation actions for the compromised container(s).
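For the task-level logs, one common approach on Fargate is the awslogs log driver, which ships each container’s stdout/stderr to CloudWatch Logs where it can be centralized and analyzed. The fragment below (log group, region, and names are placeholders) shows the relevant piece of a container definition as it would be passed to register_task_definition:

```python
# Fragment of an ECS container definition that sends container stdout/stderr
# to CloudWatch Logs via the awslogs driver (all values are placeholders).
container_definition = {
    "name": "web",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0",
    "essential": True,
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/my-app",      # pre-created log group
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "web",
        },
    },
}
```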

This thorough approach allows Fargate adopters to disrupt threat activity early, minimizing any potential impact and ensuring you get the best outcome from your Fargate deployments.

Free AWS Security Assessment

Fortra’s Alert Logic MDR provides container security solutions for AWS ECS, EKS, and Fargate.



About the Author
Josh Davies
Josh Davies is the Principal Technical Product Marketing Manager at Alert Logic. Formerly a security analyst and solutions architect, Josh has extensive experience working with mid-market and enterprise organizations, conducting incident response and threat hunting activities as an analyst before working with businesses to identify appropriate security solutions for challenges across cloud, on-premises, and hybrid environments.

