AWS - API Gateway to ECS via VPC Link

This week, I’ve been working with [0] to set up some infrastructure for a project we’re working on.

We’re using API Gateway with the Custom Domains feature so we can run all our APIs on the same domain, e.g. `[1]`.

Most of our APIs are serverless: Lambdas running in response to API Gateway events, which keeps the time spent managing infrastructure to a minimum. For this project, though, we wanted to slot in a service which runs as a Docker container on an ECS cluster inside a VPC.

Up until recently, the way to do this was to publish your backend service on the Internet and then route to it via API Gateway. That was a bit rubbish, because your service ended up addressable directly on the Web as well as through the API Gateway.

“VPC Link” provides a way for API Gateway to connect to a private (internal) load balancer inside your VPC, but the only type of load balancer supported is a Network Load Balancer.

Network Load Balancers are very simple, but this simplicity places some restrictions on your design.
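As a rough illustration, an internal NLB for this setup might look something like the sketch below. This is an assumption about shape, not configuration from the project: the names, subnets and port are placeholders, written in the same Terraform 0.11-era interpolation style as the rest of this post, and it assumes a recent enough AWS provider to support `aws_lb` with `load_balancer_type = "network"`.

```hcl
# Sketch only: an internal NLB that VPC Link can attach to.
resource "aws_lb" "internal_nlb" {
  name               = "${var.environment}-${var.application}-nlb"
  internal           = true       # not addressable from the Internet
  load_balancer_type = "network"  # the only type VPC Link supports
  subnets            = ["${var.private_subnet_ids}"]
}

# NLB listeners forward raw TCP straight to a target group.
resource "aws_lb_listener" "api" {
  load_balancer_arn = "${aws_lb.internal_nlb.arn}"
  port              = 80
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.api.arn}"
  }
}

resource "aws_lb_target_group" "api" {
  name        = "${var.environment}-${var.application}-tg"
  port        = 10000      # a fixed host port, as discussed below
  protocol    = "TCP"
  vpc_id      = "${var.vpc_id}"
  target_type = "instance" # the NLB targets the EC2 instances directly
}
```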

Application Load Balancers (ALBs) integrate really well with Elastic Container Service (ECS). On the ALB, you can configure a listener on a port which routes to a service running on the cluster.

This allows an ECS node to choose a port on the machine to open up from a range of available ports (see [2]), routing traffic from that external port to a port open on the Docker container.

ALB supports routing to ECS natively, so when you’re using an ALB, a security group which allows access on the listener port can be attached to the ALB itself, without having to worry about the dynamic nature of the ports allocated within the ECS cluster.
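For contrast, this ALB-native wiring looks roughly like the sketch below. The resource names and ports here are assumptions for illustration, not taken from the project in this post:

```hcl
# Sketch only: with an ALB target group, ECS registers each task's
# dynamically-allocated host port with the target group for us.
resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = "${aws_ecs_cluster.ecs_cluster.id}"
  task_definition = "${aws_ecs_task_definition.api.arn}"
  desired_count   = 2

  load_balancer {
    target_group_arn = "${aws_lb_target_group.api.arn}"
    container_name   = "api"
    container_port   = 8080  # host port is chosen dynamically per task
  }
}
```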

I mention this because prior to VPC Link being available, this is what I would have done to make a service available on the Web.

Network Load Balancers (NLBs) don’t support this behaviour at all. They route at the network level, based on some health checks the load balancer carries out, and they just target instances. Therefore, there’s no concept of a security group on an NLB, and no concept of routing to a service on ECS.

As a result, if you want to connect API Gateway to your private ECS cluster, you need to make sure that the service you’re sharing is available on a fixed port in the task definition:

[
    {
        "name": "${name}",
        "image": "12327816371.dkr.ecr.eu-west-2.amazonaws.com/repository:${tag}",
        "cpu": 128,
        "memory": 128,
        "entryPoint": [],
        "environment": [
            {
                "name": "CONNECTION_STRING",
                "value": "${db_connection}"
            }
        ],
        "portMappings": [
            {
                "containerPort": 8080,
                "protocol": "tcp",
                "hostPort": 10000
            },
            {
                "containerPort": 7777,
                "protocol": "tcp",
                "hostPort": 10001
            }
        ],
        "volumesFrom": [],
        "links": [],
        "mountPoints": [],
        "essential": true
    }
]

You also then need to make sure that the EC2 instances which make up your ECS cluster have a security group configured to allow access to this port:

resource "aws_launch_configuration" "ecs_cluster" {
  name                 = "${var.environment}-${var.application}-ecs"
  image_id             = "${lookup(var.ecs_amis, var.region)}"
  instance_type        = "${var.ecs_instance_type}"
  key_name             = "${var.key_name}"
  iam_instance_profile = "${aws_iam_instance_profile.ecs_cluster.name}"
  security_groups      = ["${aws_security_group.ecs-cluster-group.id}", "${aws_security_group.ecs-api.id}"]
  user_data            = "#!/bin/bash\necho ECS_CLUSTER=${aws_ecs_cluster.ecs_cluster.name} >> /etc/ecs/ecs.config"

  lifecycle {
    create_before_destroy = true
  }
}
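That second security group, `aws_security_group.ecs-api`, needs to open the fixed host ports from the task definition. A sketch of what it might contain is below; the source CIDR is an assumption and should cover the subnets the NLB lives in, since NLBs preserve the caller’s source address when targeting instances:

```hcl
# Sketch only: allow the fixed host ports through to the cluster instances.
resource "aws_security_group" "ecs-api" {
  name   = "${var.environment}-${var.application}-ecs-api"
  vpc_id = "${var.vpc_id}"

  ingress {
    from_port   = 10000
    to_port     = 10001            # the fixed hostPorts from the task definition
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]  # placeholder: your VPC / NLB subnet range
  }
}
```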

Setting up the “VPC Link” itself can’t be done using Terraform as of December 2017 (support is tracked as an issue at [3]), so you’re going to have to configure it manually for now.
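If you’d rather script the manual step than click through the console, the AWS CLI can create the link against the internal NLB. The name, ARN and link ID below are placeholders:

```shell
# Create the VPC Link, pointing at the internal NLB's ARN.
aws apigateway create-vpc-link \
  --name my-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:eu-west-2:123456789012:loadbalancer/net/my-nlb/abc123

# Provisioning is asynchronous; poll until the status reads AVAILABLE.
aws apigateway get-vpc-link --vpc-link-id abcde1
```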