Clone the project repo to follow along…
In part one of this series we walked through using Docker to containerize a simple Node.js app and verified our shiny new container worked as expected. Unfortunately, containers aren’t much use if we can’t get them deployed to start delighting users with the sheer awesomeness of our brilliant code.
In the container world, the path to production involves some sort of registry… While it sounds fancy, a registry isn’t much different from the simple web servers historically used to host RPMs or DEBs (essentially just another tool to help you maintain the ITIL concept of a Definitive Software Library). At heart, registries are still highly scalable web servers, conveniently tweaked to work with the Docker tool chain. There are many registries to choose from, and the right choice will depend on your requirements.
If you just want to get started quickly, the publicly hosted Docker Hub is a common starting point. Even when self-hosting, you will often pull a lot of starter images from Docker Hub. When you need more control over where your images are stored and integration with custom tooling, commercial offerings such as Sonatype’s Nexus Repository are available (full disclosure, I work there!). If you’re already plugged into the AWS ecosystem, Elastic Container Registry (ECR) is a natural pairing for Elastic Container Service (ECS) apps. Since this series is focused on shipping containerized apps atop ECS, let’s walk through using ECR.
Working with containers, the cloud, or most anything in a post-Twelve-Factor App world involves a lot of environment manipulation. While not strictly related to AWS or ECR, I want to take a minute to briefly show how I reduce the effort it takes to maintain per-project environment settings. In the end this slight detour will be useful in our tour of ECR, since we’ll use several convenience scripts which rely on environment variables.
There are several tools that can help us here… You might have used autoenv, be familiar with Node.js’ `.env` files (a similar idea in a different context), or even have a custom setup letting you skip this entire section. In this project we’ll use direnv.
The concept is very simple, but will save us a lot of time. A shell hook (read the installation docs to get this right for your specific shell) automatically sources configuration files as we change into directories, and unloads any exported variables when we leave those directories. This means we can save per-project settings in `.envrc` files and not have to remember all these details or waste time exporting them again and again!
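For example, on zsh the hook is a single line in your `~/.zshrc` (see the direnv installation docs for the equivalent in bash, fish, and other shells):

```bash
# ~/.zshrc — load direnv's shell hook so .envrc files are sourced on cd
eval "$(direnv hook zsh)"
```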
Since this is an aside, I’m not going to re-cover how to configure or use direnv – their site has good documentation you can follow for that. I only want to point out two things. First, as soon as you create a `.envrc`, you should also `echo .envrc >> .gitignore`. While not everything in `.envrc` is necessarily secret, secrets often reside there. You want to be sure it’s never committed to source control!
Second, things which are truly secret may not need to reside directly in `.envrc` at all. For example, you could keep truly sensitive values in something like Ansible Vault, then include commands in your direnv configuration to pull them from the vault when exporting into the environment (with direnv conveniently removing those bits when you leave the project’s work directory).
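As a hypothetical sketch (the `secrets.txt` file and `~/.vault-pass` password file are assumptions for illustration), a `.envrc` entry pulling a secret from Ansible Vault might look like:

```bash
# .envrc — hypothetical example; assumes secrets.txt is encrypted with
# Ansible Vault and ~/.vault-pass holds the vault password.
# direnv unsets DB_PASSWORD automatically when you leave the directory.
export DB_PASSWORD="$(ansible-vault view --vault-password-file ~/.vault-pass secrets.txt)"
```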
For now, it’s enough to know some of the “magic” you’ll see happening below is thanks to direnv…
Similar to other AWS services, there are a few prerequisites needed before you can interact with ECS or ECR. I’m not going to detail all of those here. For one thing, you probably already know them if you’re reading this. For another, it would make this more of a book than a blog post. Still, I do want to be explicit about a few things you’ll need.
Aside from an AWS account, you should have an IAM user or assumed role with administrator access. If you want to get more advanced and lock things down more than that, you can use the linked doc to specifically grant the permissions required to interact with ECS and ECR. The important thing is to adhere to the best practice of not using your root account, and being aware that whatever user you do use needs permission to access ECR (or nothing we try below will work).
If you’re using EC2 for other things, you may already have a Key Pair allowing you to SSH into instances. You won’t have to worry about key pairs in this series, because we’re going to ship our ECS app using the Fargate launch type (leveraging AWS-managed shared infrastructure). If you decide to use the EC2 launch type (hosting your containers on EC2 instances you manage), you will need an associated key pair for administrative tasks.
From a network perspective, you’ll need a VPC and Security Group. For experimentation, assuming you haven’t deleted it, you can just use the default VPC. For production services you’ll likely want to create a dedicated VPC. Our container instances will need a security group which grants access to any ports we wish to expose. In our example this will just be HTTP over port 80/tcp.
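If you’d rather script this part too, a minimal sketch using the AWS CLI (installed and configured in the next step) might look like the following – the group name is a placeholder and the IDs are hypothetical:

```bash
# Create a security group in your VPC (substitute your own VPC ID)
aws ec2 create-security-group \
  --group-name hello-world-sg \
  --description "Allow HTTP to our containerized app" \
  --vpc-id vpc-0123456789abcdef0

# Allow inbound HTTP on 80/tcp (substitute the group ID returned above)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```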
Since we’re going to use some scripts which wrap the AWS CLI, you also need that installed and configured. If you’re on a Mac, the former is as simple as `brew install awscli`. For the latter, just run `aws configure` and follow the prompts.
Finally, the good stuff! Let’s get the [app we containerized last time](/blog/posts/thinking-inside-the-box) pushed into ECR. Unfortunately we can’t just `docker push`, since we’ll need to figure out the right registry and how to authenticate… Luckily AWS makes this easy!
The default registry associated with a region and account can be derived from the account ID and region name. The AWS CLI provides a `get-login-password` command we can use to authenticate with the docker CLI.
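Strung together, that looks something like this (shown with my sample account ID and region – substitute your own):

```bash
# Pipe a short-lived ECR auth token straight into docker login;
# the username for ECR is always "AWS"
aws ecr get-login-password --region us-east-2 | \
  docker login --username AWS --password-stdin 012345678901.dkr.ecr.us-east-2.amazonaws.com
```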
Our project repo provides convenience scripts wrapping the requisite AWS commands. These rely on a few environment variables. As mentioned above, I’ll be using direnv to auto-export needed bits. Here’s my .envrc:
```bash
export PROFILE="personal"
export REGION="us-east-2"
export AWS_ACCOUNT_ID="012345678901"
export REPO_URI="012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world"
```
If it’s the first time you’ve configured `.envrc`, you’ll need to `direnv allow` to enable exporting its contents. This ensures random (untrusted) projects which include `.envrc` files can’t easily pwn you! Once allowed, future exports will be automatic until the file contents change.
Don’t worry, we’ll see how to get the `REPO_URI`… but notice how the location of our default registry is easily derived from our account ID and region? The provided `ecr-login` script simply wraps `get-login-password` and `docker login` (along with bits provided by `.envrc`) to simplify authentication:
```
❯ ./scripts/ecr-login
Login Succeeded
```
Once authenticated, we’re still not ready to push our image.
First, we need to create a repository to hold our image. This is similar to Docker Hub or other registries, where you have per-project or service-related repositories to keep images organized and appropriately secured. The `ecr` sub-command of the AWS CLI has a `create-repository` option for this. Since that requires a few arguments we don’t want to remember each time, another wrapper helps… simply provide the name of the repository to create:
```
❯ ./scripts/ecr-create-repo hello-world
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-2:012345678901:repository/hello-world",
        "registryId": "012345678901",
        "repositoryName": "hello-world",
        "repositoryUri": "012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world",
        "createdAt": "2020-03-14T20:35:54-04:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": true
        }
    }
}
```
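The wrapper isn’t doing anything exotic – it boils down to something like this (a sketch; scan-on-push matches the output above, and the profile/region come from `.envrc`):

```bash
# ecr-create-repo (sketch): create a repository with image scanning on push
aws ecr create-repository \
  --repository-name "$1" \
  --image-scanning-configuration scanOnPush=true \
  --region "$REGION" \
  --profile "$PROFILE"
```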
Now you see where we got `REPO_URI` – take the value of `repositoryUri` and add it to your `.envrc` (as you saw above in mine). Be sure to `direnv allow` so `REPO_URI` can be used by our scripts below.
Now that we’ve authenticated and have a repository created, the actual push is entirely handled by Docker. I almost always end up getting the tag format wrong at least once or having to refer to the documentation, so I use another script. This one takes a little longer to run since it has to transfer our image contents over the network, and it takes the name of the image to push (remember how we created `node-aws-ecs-app` in the last part of this series using `docker build`?):
```
❯ ./scripts/ecr-push-image node-aws-ecs-app
The push refers to repository [012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world]
41578a476644: Pushed
c0f1bc56f021: Pushed
a2790008003b: Pushed
6149ccf9029b: Pushed
45dee76bb180: Pushed
d8511336706e: Pushed
5280d2327565: Pushed
77d806cfa004: Pushed
930c8bc01816: Pushed
5216338b40a7: Pushed
latest: digest: sha256:bfa585daf12805676876acd680b82f7c090f82c468b0ddb3b58d368d6d52277b size: 2409
```
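Under the covers, the push script is little more than `docker tag` followed by `docker push` (a sketch, assuming the argument handling described below; see the repo for the real script):

```bash
#!/usr/bin/env bash
# ecr-push-image (sketch): tag a local image for our ECR repository and push it
IMAGE="$1"          # local image name, e.g. node-aws-ecs-app
TAG="${2:-latest}"  # optional tag, defaulting to latest

docker tag "${IMAGE}:${TAG}" "${REPO_URI}:${TAG}"
docker push "${REPO_URI}:${TAG}"
```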
By default the script just uses the `latest` tag, but you can provide a tag name as the second argument if you want to push a different version. Since pulling is only a single command plus the registry/image name (and is more often done by a remote service – ECS in our case), I haven’t wrapped `docker pull`. However, if you want to verify the image you just pushed is actually available (trust but verify!), you can:
```
❯ docker pull ${REPO_URI}:latest
latest: Pulling from hello-world
Digest: sha256:bfa585daf12805676876acd680b82f7c090f82c468b0ddb3b58d368d6d52277b
Status: Image is up to date for 012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world:latest
012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world:latest
```
We’ve officially published our sample app to ECR, so it can be pulled by ECS tasks to provide a real-world service! To clean things up and avoid any financial impact, we can easily remove our image and repository:
```
❯ ./scripts/ecr-delete-image hello-world
{
    "imageIds": [
        {
            "imageDigest": "sha256:bfa585daf12805676876acd680b82f7c090f82c468b0ddb3b58d368d6d52277b",
            "imageTag": "latest"
        }
    ],
    "failures": []
}

❯ ./scripts/ecr-delete-repo hello-world
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-2:012345678901:repository/hello-world",
        "registryId": "012345678901",
        "repositoryName": "hello-world",
        "repositoryUri": "012345678901.dkr.ecr.us-east-2.amazonaws.com/hello-world",
        "createdAt": "2020-03-14T20:35:54-04:00"
    }
}
```
A note on our cleanup scripts… they are more liberal than you probably want to be in production. On the one hand, this is because we’re testing in a sandbox. On the other, it’s a good lesson to always read scripts before running them!
You can pass a tag as the second argument to `ecr-delete-image`, but it defaults to `latest` and uses `batch-delete-image` with `--image-ids` (maybe you want to require a tag name instead). `ecr-delete-repo` uses `--force`. This means the delete image step was technically unnecessary, since the delete repo script will wipe out the repo even if it still contains images (thankfully not the default behavior). This makes for easy cleanup while experimenting, but might not be what you want.
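For reference, the wrapped commands boil down to something like this (a sketch; the argument defaults are assumptions based on the behavior described above):

```bash
# ecr-delete-image (sketch): delete one tag, defaulting to latest
aws ecr batch-delete-image \
  --repository-name "$1" \
  --image-ids imageTag="${2:-latest}"

# ecr-delete-repo (sketch): --force removes the repo even if images remain
aws ecr delete-repository \
  --repository-name "$1" \
  --force
```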
The point of these wrappers was not to give you something to copy and paste. I’d like you to be mindful of the wrapped commands, take time to understand how they work, and then tune them for your environment. Whether you use shell scripts, Ansible, Terraform or some other tool, the idea is the same: once you figure out how to solve a problem with the AWS console or CLI, you can automate the minutiae to reduce future effort. Hopefully these give you a good starting point for your own scripts – just be aware of what they contain.
With only a few commands we’ve managed to push our sample app into the cloud. While not serving users just yet, we’ve got our code in a container registry (ECR) accessible by ECS. In the next installment of this series we’ll look at preparing the Task Definitions used to define container instances we can expose as a bona fide service. Be sure to check back next time as we continue our ECS journey!