Terraform ECS Fargate Example
Amazon Web Services (AWS) Fargate is a serverless solution for container orchestration, but getting even a “Hello world” application from development to the internet means jumping through many networking and infrastructure hoops. Once those hoops are codified through Terraform, though, deploying iterations becomes simple. To get AWS Fargate working swiftly, we will walk through how to use the Terraform AWS Modules from https://serverless.tf/ to set up and deploy to AWS Fargate.
Step 1: Create an application
We want to create a simple “Hello world!” application using Node and Docker. First, run npm init to scaffold our new Node application, then run npm install express@^4.18.2 to install Express, which we will use to run our application. Next, we will create an index.js file that returns Hello world! whenever our server is hit.
// ./index.js
const express = require('express')
const app = express()

app.get('/', (_req, res) => {
  res.send(`
    This "Hello world!" is powered by Terraform AWS Modules!
    The ISO datetime right now is ${new Date().toISOString()}.
  `)
})

app.listen(process.env.PORT, () => {
  console.log(`Listening on port ${process.env.PORT}`)
})
Next, we need to containerize our application so that we can build and push it to AWS. We will do this through Docker. We need a Dockerfile that will copy over our application files, install our dependencies like express, and ultimately run our application.
# ./Dockerfile
FROM node:18-alpine

WORKDIR /app

ENV \
  MY_INPUT_ENV_VAR=dockerfile-default-env-var \
  NODE_ENV=production \
  PORT=8080

EXPOSE ${PORT}

COPY package*.json ./
RUN npm ci

COPY . .
CMD ["node", "index.js"]
Before moving on to Terraform, we need to prevent Docker from copying sensitive data, like our Terraform state files, into our container. We will do this by writing a .dockerignore file that tells Docker exactly what to copy.
# ./.dockerignore
# Ignore all files when COPY is run
*
# Un-ignore these specific files so that they are
# copied over when COPY is run
!index.js
!package-lock.json
!package.json
Step 2: Authenticate Docker through Terraform
To push our container from our computer to AWS, we need to authenticate Docker through Terraform. Let's create a main.tf file where we will:
- Install the hashicorp/aws and kreuzwerker/docker Terraform providers
- Copy over some values from our Dockerfile that we will use later
- Use Terraform to authenticate Docker
# ./main.tf
terraform {
  required_version = "~> 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.56"
    }
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

locals {
  container_name = "hello-world-container"
  container_port = 8080 # ! Must be the same port from our Dockerfile that we EXPOSE
  example        = "hello-world-ecs-example"
}
provider "aws" {
region = "ca-central-1" # Feel free to change this
default_tags {
tags = { example = local.example
}
}
# * Give Docker permission to pusher Docker Images to AWS.
data "aws_caller_identity" "this" {}
data "aws_ecr_authorization_token" "this" {}
data "aws_region" "this" {}
locals { ecr_address = format("%v.dkr.ecr.%v.amazonaws.com", data.aws_caller_identity.this.account_id, data.aws_region.this.name) }
provider "docker" {
registry_auth {
address = local.ecr_address
password = data.aws_ecr_authorization_token.this.password
username = data.aws_ecr_authorization_token.this.user_name
}
}
With our main.tf set up, let's run terraform init to install our providers and terraform apply to ensure everything runs correctly. If you ever want to destroy everything we have built, just run terraform destroy.
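If you want to confirm what Terraform computed for the registry address, you can temporarily surface it as an output. This is just an optional sketch, not part of the original example; it reuses the local.ecr_address value we defined above.
# ./main.tf
# * Optional: surface the ECR registry address we computed above so we
# * can eyeball it after running terraform apply.
output "ecr_address" { value = local.ecr_address }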
Step 3: Build and push
With Docker authenticated, we can now use Terraform to build and push our container to AWS Elastic Container Registry (ECR). In the code snippet below we will
- Create an ECR repository
- Build our container locally
- Push our locally built container to our ECR
# ./main.tf
module "ecr" {
  source  = "terraform-aws-modules/ecr/aws"
  version = "~> 1.6.0"

  repository_force_delete = true
  repository_name         = local.example
  repository_lifecycle_policy = jsonencode({
    rules = [{
      action       = { type = "expire" }
      description  = "Delete all images except a handful of the newest images"
      rulePriority = 1
      selection = {
        countNumber = 3
        countType   = "imageCountMoreThan"
        tagStatus   = "any"
      }
    }]
  })
}
# * Build our Image locally with the appropriate name so that we can push
# * our Image to our Repository in AWS. Also, give it a random image tag.
resource "docker_image" "this" {
  name = format("%v:%v", module.ecr.repository_url, formatdate("YYYY-MM-DD'T'hh-mm-ss", timestamp()))

  build { context = "." } # Path to our Dockerfile
}

# * Push our Image to our ECR.
resource "docker_registry_image" "this" {
  keep_remotely = true # Do not delete old images when a new image is pushed

  name = resource.docker_image.this.name
}
Every time we add a new Terraform module we need to run terraform init again. So if we run terraform init and then terraform apply, we should now see an ECR repository with the name hello-world-ecs-example and one container image inside with a tag like 2023-03-08T08-43-21.
We are using timestamp() to generate a new container image tag every time we run terraform apply. So each time we run terraform apply, we should see a new tag in our ECR.
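Because timestamp() changes on every run, Terraform will rebuild and push an image on every terraform apply even when nothing in our application changed. If you would rather rebuild only when the source files change, one hypothetical alternative (not part of the original example) is to derive the tag from a hash of the files Docker copies in:
# ./main.tf
# * Hypothetical alternative to timestamp(): tag images with a hash of the
# * build inputs so unchanged code is not rebuilt and re-pushed.
locals {
  image_tag = sha1(join("", [for f in ["Dockerfile", "index.js", "package.json", "package-lock.json"] : filesha1(f)]))
}

# Then, in resource "docker_image" "this" above, the name would become:
# name = format("%v:%v", module.ecr.repository_url, local.image_tag)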
Step 4: Set up networks
With our application container now in AWS, we need to set up various networks within AWS so that our application is reachable from the internet. Terraform AWS Modules, maintained by https://serverless.tf/, makes this simple.
# ./main.tf
data "aws_availability_zones" "available" { state = "available" }

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.19.0"

  azs                = slice(data.aws_availability_zones.available.names, 0, 2) # Span subnetworks across multiple availability zones
  cidr               = "10.0.0.0/16"
  create_igw         = true # Expose public subnetworks to the Internet
  enable_nat_gateway = true # Hide private subnetworks behind NAT Gateway
  private_subnets    = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets     = ["10.0.101.0/24", "10.0.102.0/24"]
}
If we again run terraform init and terraform apply, we should see a new VPC in our AWS dashboard.
This module "vpc"
is doing a lot for us under the hood. If you want a more detailed understanding of what networks are required and how to setup those networks to run AWS Fargate clusters, I recommend checking out The Fargate/Terraform tutorial I wish I had by Jimmy Sawczuk.
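If you are curious about what module "vpc" actually built, you can expose a few of its outputs. The output names below come from the terraform-aws-modules/vpc module; this block is purely an optional peek, not required for the rest of the tutorial.
# ./main.tf
# * Optional: peek at a few of the resources module "vpc" created for us.
output "vpc_id" { value = module.vpc.vpc_id }
output "public_subnets" { value = module.vpc.public_subnets }
output "nat_public_ips" { value = module.vpc.nat_public_ips }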
Step 5: Make a website
With our VPC set up, we now need to make our application accessible to the internet with an AWS Application Load Balancer (ALB). Again, Terraform AWS Modules is here to help us out. For our ALB, we will
- Accept HTTP requests from the internet
- Forward HTTP requests from the internet to our container_port where our container will eventually be running
- Allow our container and other resources to make HTTP requests to external services
# ./main.tf
module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 8.4.0"

  load_balancer_type = "application"
  security_groups    = [module.vpc.default_security_group_id]
  subnets            = module.vpc.public_subnets
  vpc_id             = module.vpc.vpc_id

  security_group_rules = {
    ingress_all_http = {
      type        = "ingress"
      from_port   = 80
      to_port     = 80
      protocol    = "TCP"
      description = "HTTP web traffic"
      cidr_blocks = ["0.0.0.0/0"]
    }
    egress_all = {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  http_tcp_listeners = [
    {
      # ! Defaults to the "forward" action for the target group
      # ! at index = 0 in the target_groups input below.
      port               = 80
      protocol           = "HTTP"
      target_group_index = 0
    }
  ]

  target_groups = [
    {
      backend_port     = local.container_port
      backend_protocol = "HTTP"
      target_type      = "ip"
    }
  ]
}
Again, we can run terraform init and terraform apply and see a new ALB instance running somewhere inside the AWS EC2 dashboard.
Step 6: Create our Fargate cluster
With all our networking infrastructure now set up and provisioned through Terraform, we can move on to actually building an AWS Fargate cluster to run and auto-scale our container.
# ./main.tf
module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "~> 4.1.3"

  cluster_name = local.example

  # * Run at least 20 tasks on FARGATE (base = 20), then split any
  # * remaining tasks 50/50 between FARGATE and FARGATE_SPOT.
  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        base   = 20
        weight = 50
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }
}
Step 7: Define our task
For AWS Fargate to know how to run our container, we need to define an AWS ECS Task.
# ./main.tf
data "aws_iam_role" "ecs_task_execution_role" { name = "ecsTaskExecutionRole" }

resource "aws_ecs_task_definition" "this" {
  container_definitions = jsonencode([{
    environment = [
      { name = "NODE_ENV", value = "production" }
    ],
    essential    = true,
    image        = resource.docker_registry_image.this.name,
    name         = local.container_name,
    portMappings = [{ containerPort = local.container_port }],
  }])
  cpu                      = 256
  execution_role_arn       = data.aws_iam_role.ecs_task_execution_role.arn
  family                   = "family-of-${local.example}-tasks"
  memory                   = 512
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
}
If we run terraform init and terraform apply at this point, we should see our Fargate cluster and our "hello-world-container" task in the AWS ECS dashboard.
A task can have many revisions, which are essentially different versions of a task. Since we are generating a new container tag every time we run terraform apply, this also causes a new task revision to be made that points to this new tag. We don’t need to worry about this revision number.
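That said, if you would like to watch the revision number tick up, one optional sketch (not part of the original example) is to output it:
# ./main.tf
# * Optional: surface the task revision so we can watch it increment on
# * every terraform apply (each new image tag creates a new revision).
output "task_revision" { value = aws_ecs_task_definition.this.revision }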
Step 8: Run our application
Now, with everything set up, we can finally run our container.
# ./main.tf
resource "aws_ecs_service" "this" {
  cluster         = module.ecs.cluster_id
  desired_count   = 1
  launch_type     = "FARGATE"
  name            = "${local.example}-service"
  task_definition = resource.aws_ecs_task_definition.this.arn

  lifecycle {
    ignore_changes = [desired_count] # Allow external changes to happen without Terraform conflicts, particularly around auto-scaling
  }

  load_balancer {
    container_name   = local.container_name
    container_port   = local.container_port
    target_group_arn = module.alb.target_group_arns[0]
  }

  network_configuration {
    security_groups = [module.vpc.default_security_group_id]
    subnets         = module.vpc.private_subnets
  }
}
If you run terraform apply and click on the cluster in the AWS ECS dashboard, you should see our task in a pending state. When our container is finished spinning up, it should say running.
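If you would rather have terraform apply block until the service settles instead of refreshing the dashboard, the aws_ecs_service resource supports a wait flag. The sketch below shows only the extra argument; it is an optional tweak to the resource from Step 8, not something the original example uses.
# ./main.tf
resource "aws_ecs_service" "this" {
  # ... (configuration from Step 8 unchanged) ...

  # * Optional: make terraform apply wait until the deployed tasks reach
  # * a steady state before returning.
  wait_for_steady_state = true
}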
Step 9: Visit our website
# ./main.tf
# * Output the URL of our Application Load Balancer so that we can connect to
# * our application running inside ECS once it is up and running.
output "url" { value = "http://${module.alb.lb_dns_name}" }
Run terraform apply and, after Terraform is finished running, you should be able to visit the output URL to see our Hello World application running!
Step 10: Deploy a change
To see a deployment in action, let's update the message our application returns.
// ./index.js
...
app.get('/', (_req, res) => {
  res.send(`
    Now this is pod conducting!
  `)
})
...
Run terraform apply and give it a couple of minutes to spin up the new container, watching for that pending and then running state in the cluster dashboard. When the new container is running, visit the output URL to see your changes live.
You can view the complete example at https://github.com/1Mill/example-terraform-ecs/tree/main/examples/terraform-aws-modules.