How to Use Terraform to Deploy Containers to ECS Fargate

Erik A. Ekberg
8 min read · Mar 9, 2023


Amazon Web Services (AWS) Fargate is a serverless solution for container orchestration.

But to get from “Hello localhost” to “Hello world” there are many networking and infrastructure hoops to jump through.

Terraform makes setting up and deploying Docker images to AWS ECS repeatable, but https://serverless.tf/ makes it easy.

An allegory for the value of Infrastructure as Code and container orchestration, pictured as sheet music.

Step 1: Create an Application

We want to create a simple “Hello world!” application using Node and Docker.

First, create a new folder for this project and then run npm init to scaffold our new Node application.

Then run npm install express@^4.18.2 to install Express, which we will use to accept and respond to HTTP requests.

Next, we will create an index.js file that returns our "Hello world!" message whenever our server is hit.

// ./index.js
const express = require('express')

const app = express()

app.get('/', (_req, res) => {
  res.send(`
    This "Hello world!" is powered by Terraform AWS Modules!
    The ISO datetime right now is ${new Date().toISOString()}.
  `)
})

app.listen(process.env.PORT, () => {
  console.log(`Listening on port ${process.env.PORT}`)
})

Next we need to containerize our “Hello world!” app so that we can build and push our Docker image to AWS.

To do this, we need a Dockerfile that will copy over our application files, install our dependencies like express, and lastly run our application.

# ./Dockerfile
FROM node:18-alpine

WORKDIR /app

ENV \
  MY_INPUT_ENV_VAR=dockerfile-default-env-var \
  NODE_ENV=production \
  PORT=8080

EXPOSE ${PORT}

COPY package*.json ./
RUN npm ci

COPY . .

CMD ["node", "index.js"]

We must also prevent Docker from copying sensitive data into the image, such as our Terraform state files.

We do this by creating a .dockerignore file and explicitly telling Docker which exact files we want to COPY over through a whitelist.

# ./.dockerignore
# Ignore all files when COPY is run
*

# Un-ignore only these specific files so that when COPY
# is run only these specific files are copied over
!index.js
!package-lock.json
!package.json

Step 2: Authenticating Docker through Terraform

We will make Terraform authenticate our local Docker client so that we can repeatably build and push images to AWS.

Let's now create a main.tf file where we will

  1. Install the hashicorp/aws and kreuzwerker/docker Terraform providers
  2. Copy over some values from our Dockerfile that we will use later
  3. Use Terraform to authenticate our local Docker client

# ./main.tf
terraform {
  required_version = "~> 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.56"
    }
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

locals {
  container_name = "hello-world-container"
  container_port = 8080 # ! Must match the EXPOSE port in our Dockerfile
  example        = "hello-world-ecs-example"
}

provider "aws" {
region = "ca-central-1" # Feel free to change this

default_tags {
tags = { example = local.example }
}
}

# * Give Docker permission to push Docker images to AWS
data "aws_caller_identity" "this" {}
data "aws_ecr_authorization_token" "this" {}
data "aws_region" "this" {}
locals { ecr_address = format("%v.dkr.ecr.%v.amazonaws.com", data.aws_caller_identity.this.account_id, data.aws_region.this.name) }
provider "docker" {
  registry_auth {
    address  = local.ecr_address
    password = data.aws_ecr_authorization_token.this.password
    username = data.aws_ecr_authorization_token.this.user_name
  }
}

With our main.tf set up, let's run terraform init to install our providers and then terraform apply to make sure everything runs without any syntax or other errors.

As we go along, if you ever want to destroy everything we have built, just run terraform destroy.

Step 3: Build and Push

With our local Docker client authenticated, we can now use Terraform to build our Node application's image and push it to an AWS Elastic Container Registry (ECR) repository.

In the below code snippet we will

  1. Create an ECR repository
  2. Build our container image locally with our Docker client
  3. Use our local Docker client to push our container image to our ECR repository

# ./main.tf

module "ecr" {
source = "terraform-aws-modules/ecr/aws"
version = "~> 1.6.0"

repository_force_delete = true
repository_name = local.example
repository_lifecycle_policy = jsonencode({
rules = [{
action = { type = "expire" }
description = "Delete all images except a handful of the newest images"
rulePriority = 1
selection = {
countNumber = 3
countType = "imageCountMoreThan"
tagStatus = "any"
}
}]
})
}

# * Build our Image locally with the appropriate name so that we can push
# * our Image to our Repository in AWS. Also, give it a unique timestamped tag.
resource "docker_image" "this" {
  name = format("%v:%v", module.ecr.repository_url, formatdate("YYYY-MM-DD'T'hh-mm-ss", timestamp()))

  build { context = "." } # Path to our local Dockerfile
}

# * Push our container image to our ECR repository.
resource "docker_registry_image" "this" {
  keep_remotely = true # Do not delete old images when a new image is pushed
  name          = resource.docker_image.this.name
}

Here, we are adding a new module from https://serverless.tf/.

Every time we add a new module or provider to our code we must always run terraform init again to download and install this new dependency.

So if we run terraform init and then terraform apply we should see Terraform create our ECR repository.

If you log into AWS at this time and navigate to ca-central-1 (or whatever region you entered into region = … in the aws provider) you should see an ECR repository named hello-world-ecs-example.

Within that repository, you should also see one container image with an image tag like 2023-03-08T08-43-21.

We are using timestamp() to generate a new container image tag every time we run terraform apply; so each time we run terraform apply going forward we should see a new image with a new tag in our ECR.

For demonstration purposes, we are also adding an ECR Lifecycle Policy so that you can save money by letting AWS automatically delete old images we no longer need.
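
If you want to confirm exactly which image reference Terraform built and pushed, you can surface it with a Terraform output. The image_name output below is my own sketch, not part of the original example, though name is a real attribute of docker_registry_image:

# ./main.tf

# * Optional helper: print the full image reference (repository URL plus
# * timestamp tag) that Terraform pushed on the last apply.
output "image_name" { value = resource.docker_registry_image.this.name }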

Step 4: Setup Networks

With our application container now in AWS, we need to set up several layers of networking inside AWS so that our application is reachable from the internet.

Terraform AWS Modules, maintained at https://serverless.tf/, makes this simple through the terraform-aws-modules/vpc/aws module, which we will add to our main.tf file.

# ./main.tf

data "aws_availability_zones" "available" { state = "available" }
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.19.0"

azs = slice(data.aws_availability_zones.available.names, 0, 2) # Span subnetworks across 2 avalibility zones
cidr = "10.0.0.0/16"
create_igw = true # Expose public subnetworks to the Internet
enable_nat_gateway = true # Hide private subnetworks behind NAT Gateway
private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24"]
single_nat_gateway = true
}

Since we are installing a new module, we again must run terraform init and then terraform apply to provision our VPC in AWS.

Our module "vpc" does a lot for us under the hood by setting up internet gateways (igw), NAT gateways, and various subnetworks to route incoming and outgoing traffic across multiple AWS availability zones for redundancy.

If you want to understand all the networking required, check out The Fargate/Terraform tutorial I wish I had by Jimmy Sawczuk.
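
If you just want to spot-check what the module created, you can temporarily output the subnet IDs and cross-reference them in the AWS VPC dashboard. These output blocks are my own additions, though public_subnets and private_subnets are real outputs of the VPC module:

# ./main.tf

# * Optional helpers: list the IDs of the subnetworks the VPC module created.
output "public_subnet_ids" { value = module.vpc.public_subnets }
output "private_subnet_ids" { value = module.vpc.private_subnets }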

Step 5: Make a Website

With our VPC setup, we now need to make our application accessible to the internet with an AWS Application Load Balancer (ALB).

Again, Terraform AWS Modules is here to help us out.

For our ALB, we will

  1. Accept HTTP requests from the internet
  2. Forward HTTP requests from the internet to our container_port where our container will eventually be running
  3. Allow our container and other resources to make HTTP requests to external services

# ./main.tf

module "alb" {
source = "terraform-aws-modules/alb/aws"
version = "~> 8.4.0"

load_balancer_type = "application"
security_groups = [module.vpc.default_security_group_id]
subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id

security_group_rules = {
ingress_all_http = {
type = "ingress"
from_port = 80
to_port = 80
protocol = "TCP"
description = "Permit incoming HTTP requests from the internet"
cidr_blocks = ["0.0.0.0/0"]
}
egress_all = {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
description = "Permit all outgoing requests to the internet"
cidr_blocks = ["0.0.0.0/0"]
}
}

http_tcp_listeners = [
{
# * Setup a listener on port 80 and forward all HTTP
# * traffic to target_groups[0] defined below which
# * will eventually point to our "Hello World" app.
port = 80
protocol = "HTTP"
target_group_index = 0
}
]

target_groups = [
{
backend_port = local.container_port
backend_protocol = "HTTP"
target_type = "ip"
}
]
}

Again, we need to run terraform init and terraform apply; afterward, you should see a new ALB running inside the AWS EC2 dashboard.
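
By default, the target group health-checks registered targets on the traffic port. If you want to tune that behavior, this version of the ALB module accepts a health_check map inside each target_groups entry. Here is a hedged sketch of how the target_groups argument above could look; the path and matcher values are just illustrative:

# * Replacement for the target_groups argument in module "alb" above.
target_groups = [
  {
    backend_port     = local.container_port
    backend_protocol = "HTTP"
    target_type      = "ip"

    # * Optional: customize how the ALB decides our container is healthy.
    health_check = {
      path    = "/"
      matcher = "200"
    }
  }
]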

Step 6: Create Our Fargate Cluster

With all our networking infrastructure now set up and provisioned through Terraform, we can move on to building our AWS Fargate cluster to run and auto-scale our application.

# ./main.tf

module "ecs" {
source = "terraform-aws-modules/ecs/aws"
version = "~> 4.1.3"

cluster_name = local.example

# * Allocate 20% capacity to FARGATE and then split
# * the remaining 80% capacity 50/50 between FARGATE
# * and FARGATE_SPOT.
fargate_capacity_providers = {
FARGATE = {
default_capacity_provider_strategy = {
base = 20
weight = 50
}
}
FARGATE_SPOT = {
default_capacity_provider_strategy = {
weight = 50
}
}
}
}
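
If you want to confirm the cluster was created, you can output its identifier. This output block is my own addition, reusing the module's cluster_id output (the same attribute we will reference again in Step 8):

# ./main.tf

# * Optional helper: expose the cluster identifier so we can confirm the
# * cluster exists in the AWS ECS dashboard or with the AWS CLI.
output "ecs_cluster_id" { value = module.ecs.cluster_id }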

Step 7: Define Our Task

We must tell AWS how to run our container, so we need to define an ECS Task that tells our ECS Cluster how much CPU and memory our container needs.

We also need to tell our ECS Cluster which port our application will be running on.

# ./main.tf

data "aws_iam_role" "ecs_task_execution_role" { name = "ecsTaskExecutionRole" }
resource "aws_ecs_task_definition" "this" {
container_definitions = jsonencode([{
environment: [
{ name = "NODE_ENV", value = "production" }
],
essential = true,
image = resource.docker_registry_image.this.name,
name = local.container_name,
portMappings = [{ containerPort = local.container_port }],
}])
cpu = 256
execution_role_arn = data.aws_iam_role.ecs_task_execution_role.arn
family = "family-of-${local.example}-tasks"
memory = 512
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
}
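
Note that the data "aws_iam_role" block above assumes a role named ecsTaskExecutionRole already exists in your account (AWS commonly creates it the first time you use the ECS console). If your account does not have it, here is a minimal sketch that creates an equivalent role yourself; you would then reference this resource instead of the data source:

# ./main.tf

# * Minimal ECS task execution role, in case your account does not already
# * have one. It lets ECS pull our image from ECR and write logs for us.
resource "aws_iam_role" "ecs_task_execution_role" {
  name = "ecsTaskExecutionRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}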

If we run terraform init and terraform apply again, we should see our Fargate cluster and a task definition for our "hello-world-container" in the AWS ECS dashboard.

An ECS Task Definition can have many revisions, which are essentially different configurations of the same task.

For example, if we modified how much CPU our container needs, a new task revision would be created.

In our case, since we generate a new container image tag every time we run terraform apply, we also create a new task revision on every terraform apply.

We don’t need to worry about these task revisions since Terraform is managing our infrastructure and will always use the correct one.
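
If you are curious to watch the revision number climb, you can output it after each apply. This output block is my own addition, though revision is a real attribute of aws_ecs_task_definition:

# ./main.tf

# * Optional helper: show which task definition revision Terraform
# * registered on the last apply.
output "task_revision" { value = resource.aws_ecs_task_definition.this.revision }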

Step 8: Run Our Application

Now, with everything set up, we can finally run our container.

# ./main.tf

resource "aws_ecs_service" "this" {
cluster = module.ecs.cluster_id
desired_count = 1
launch_type = "FARGATE"
name = "${local.example}-service"
task_definition = resource.aws_ecs_task_definition.this.arn

lifecycle {
ignore_changes = [desired_count] # Allow external changes to happen without Terraform conflicts, particularly around auto-scaling.
}

load_balancer {
container_name = local.container_name
container_port = local.container_port
target_group_arn = module.alb.target_group_arns[0]
}

network_configuration {
security_groups = [module.vpc.default_security_group_id]
subnets = module.vpc.private_subnets
}
}

If you run terraform apply and click on the cluster in the AWS ECS dashboard you should see our task in a pending state.

When our container is finished spinning up, it should say running.

Step 9: Visit Our Website

With everything set up, we can now output and visit the URL created by our ALB.

# ./main.tf

# * Output the URL of our Application Load Balancer so that we can connect to
# * our application running inside ECS once it is up and running.
output "url" { value = "http://${module.alb.lb_dns_name}" }

Run terraform apply and, after Terraform finishes, you should be able to visit the URL output to the terminal.

At the URL, you should see our “Hello World!” Node application running!

Step 10: Deploy a Change

Make a change to our index.js file and run terraform apply again.

// ./index.js
...

app.get('/', (_req, res) => {
  res.send(`
    Now this is pod conducting!
  `)
})

...

After a couple of minutes you should see a new container spinning up: look for the pending and running status labels in the ECS Cluster dashboard.

When the new container is running, visit the same output URL you visited earlier and you should see your changes!

Also, feel free to view the complete example on GitHub.
