
Building and shipping .NET Core 2.0 applications on Circle CI with Docker and Amazon ECR

There are generally a few parts to shipping a .NET Core application:

  • Compiling it (turning the source code into something that can run)
  • Distributing it to where it needs to run (in the cloud, someone else’s computer etc.)
  • Running it (using the .NET Core runtime)

My team has been using .NET Core applications running on Docker in production for 6 months now and we’re sure that Docker helps with compiling, distributing and running our apps.

I’ve got a few helpful files to share, but thought it might be worth explaining. Code first, explanation later.

Remember to attach the AmazonEC2ContainerRegistryPowerUser IAM policy to the Circle CI user.

Compiling

This needs to be done on a machine with software development tools (sometimes known as a Software Development Kit) installed on it.

While it’s possible to distribute source code and get users to compile it, or to run it as scripts (e.g. Node.js), there are a number of reasons why you wouldn’t, for example:

  • Development tools tend to take up a lot of disk space, so if you don’t need them, great.
  • You want to minimise the security footprint of the execution environment - i.e. if someone manages to hack the server, it’s better if there aren’t already powerful development tools installed on there for them to use.
  • Compiling takes time, wastes processing power and is error-prone - if you’re shipping the same software to 4 servers, you can save 3 lots of compilation by building once and distributing the output to the other machines.

Aside from taking up a lot of disk space, Software Development Kits (SDKs) generally need to be installed onto the computer and often change how existing commands behave. For example, if you install .NET Core 2.0 tooling onto a machine with .NET Core 1.1 on it, dotnet commands will output slightly different values or do different things.

As people install new tools on their laptops, they might not realise that what works on their computer doesn’t work for everyone else, for example, because not all operating systems ship with a zip command line tool.

For this reason, it’s been quite common to use a “Build Server” which builds code on a machine with a standard configuration. If it builds on that machine, then it’s probably fine everywhere.

This is an improvement, but you’ve still got the problem of installing the dependencies (like the zip program, and the Software Development Kits) onto the build server and remembering how you configured it.

Docker helps with this by providing a way to run processes within a pre-configured environment, including dependencies. Using Docker, I can run a process inside a “container” which contains everything required to build and run .NET Core 2.0 applications while giving that process access to directories on my computer which contain my source code.

Microsoft provide Docker “images” for this purpose, such as microsoft/dotnet:2-sdk, which contains everything you need to build a .NET Core 2.0 application. (If you’re building an ASP.NET Core application, there’s an image optimised for that too).

The basic process is to run the Docker command line tool to “mount” the source code directories into the build container so that the build process can access them, creating build outputs (DLLs) on the local machine disk as dotnet build executes.
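Something like this does the trick - a rough sketch, assuming the project sits in the current directory and gets published to an out folder (the real paths and commands live in the repo):

    # Build inside the SDK container. -v mounts the current directory into the
    # container at /app and -w makes that the working directory, so the build can
    # see the source and write its outputs back to the host disk.
    docker run --rm \
      -v "$(pwd)":/app \
      -w /app \
      microsoft/dotnet:2-sdk \
      dotnet publish -c Release -o out
    # The published DLLs end up in ./out on the host once the container exits.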

Anyone with Docker installed should then be able to build the code in the container without having to install the .NET SDKs etc. on their machine. Happy days. Now, how do I get the build outputs to someone else?

Distribution

In the past, .NET applications from big companies were distributed as MSI packages with graphical installers that installed dependencies like the .NET Framework as needed. This takes a lot of effort to do (anyone remember WiX?), so commonly developers distributed a zip file full of the compiled code and a Word document or README file containing instructions of what else to install, and how to configure the application (e.g. how to set values in the Web.Config).

This is highly tedious. It’s much better if the application contains all of its dependencies so you can just run it, rather than having to follow complicated steps.

Alternative approaches involve packaging up .NET applications into zips or similar, installing an agent on each machine you intend to deploy to and then managing it that way, or making a base VM image that downloads software when it starts up. Still, quite a bit of bespoke tinkering each time.

Regardless, it’s also annoying if you have to download massive binary files just to update a few MB of software.

Docker helps here by allowing you to produce a “Docker image” made up of various filesystem “layers” and by providing a way of distributing those filesystem layers to people who want to download your software.

Each layer is uniquely identified by its hash, so if you’ve already downloaded a layer and have it saved on your computer, it doesn’t get downloaded again. In this way, if you run two containers which are based on top of microsoft/dotnet:2-runtime on the same computer, the filesystem layers which make up microsoft/dotnet:2-runtime will only be downloaded once, saving bandwidth and disk space.

This is much better than sharing 4GB+ VM images.

To make a Docker image, you need to “inherit” a base image and then lay your changes on top using a Dockerfile. In the case of .NET, Microsoft provide the microsoft/dotnet:2-runtime image which contains everything needed to run .NET Core 2.0 applications, so the Dockerfile just needs to add the build outputs from the first step into the runtime container and set what DLL the container should run.
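As a sketch of what such a Dockerfile might contain (the project’s real one lives in the repository - the DLL name here is just a guess based on the ./updater/Updater project path):

    # Start from Microsoft's .NET Core 2.0 runtime image, copy in the published
    # build output, and tell Docker which DLL to run when the container starts.
    FROM microsoft/dotnet:2-runtime
    WORKDIR /app
    COPY out/ .
    ENTRYPOINT ["dotnet", "Updater.dll"]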

You can see these steps in the Makefile (docker build -t storedata-updater ./updater/Updater). Note that the publish.sh file is inside the ./updater/ directory.

To distribute the image, you need to push it to a Docker registry which allows other people to download it. Docker provides its own (Docker Hub) - just like GitHub etc., it’s free for open source projects, but if you want to control access to your images (as you often will), then you need to pay.

Amazon also provides ECR, which you can use instead of Docker’s default registry. This is handy if you want to simplify your billing, control the data sovereignty of your Docker images, save a few quid, or avoid setting up access control for yet another cloud service, since AWS ECR uses Amazon’s IAM system for access control.

Since we’re building our outputs in Circle CI, we need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables in Circle CI so that the build process can use the AWS command line tooling to push to ECR.

I always set up a dedicated user in AWS for Circle CI which limits what it can do. In this case, to push images, the AmazonEC2ContainerRegistryPowerUser managed policy is plenty.

Once this is in place, you can hopefully see in the Makefile how easy it is to get Circle to log in to the registry and push the newly built image up to Amazon ECR. Your other AWS users can get access via an IAM role.
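Roughly speaking, those steps amount to something like this (a sketch rather than the Makefile’s exact contents - the account ID is elided as elsewhere in this post and the tag is illustrative):

    # Execute the docker login command that aws ecr get-login prints, then tag
    # and push the image to the ECR repository.
    # (Newer Docker clients need --no-include-email on get-login.)
    $(aws ecr get-login --region eu-west-2)
    docker tag storedata-updater:latest 465xxxxxxx611.dkr.ecr.eu-west-2.amazonaws.com/storedata-updater:latest
    docker push 465xxxxxxx611.dkr.ecr.eu-west-2.amazonaws.com/storedata-updater:latest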

Running

Great, so we now have a Docker image uploaded to ECR. After this, it’s easy enough to log in to AWS ECR by executing the output of the aws ecr get-login command in a terminal, and then running docker run with the name of the image you want to run, e.g. docker run 465xxxxxxx611.dkr.ecr.eu-west-2.amazonaws.com/storedata-updater:20
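Put as commands (same elided account ID and tag as the example above):

    # Execute the login command that aws ecr get-login prints, then run the image.
    $(aws ecr get-login --region eu-west-2)
    docker run 465xxxxxxx611.dkr.ecr.eu-west-2.amazonaws.com/storedata-updater:20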

The particular example app (storedata-updater) I’ve mentioned was designed to run on an on-premises server as a background process, but it’s really just a scheduled task that takes 80 seconds to do its job. In the end, I made a few modifications and set it running on AWS Lambda instead, where it runs once every 24 hours - more on that in a future post…