Using AWS IAM and S3 for loading config information into node docker containers running on ECS

This article shows how to set up a docker image using node and load your config info for each environment from IAM-protected s3. The node base image is missing a couple of packages the AWS install scripts expect, so a few modifications to the AWS example scripts are needed. Sometimes we have a lot of config variables for each environment (dev, qa, prod) that need to be loaded into our node applications. For example, let's say we use a config file with all the information needed for a specific environment.

// dev.deploy.cnf
export NODE_ENV=dev
export BASE_URL=https://dev.test.com
export MONGO_URL=mongodb://dev.mongo.test.com:27017
export REDIS_HOST=dev.redis.test.com
export REDIS_PORT=6379
export API_SERVER=https://dev.api.test.com
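
Once this file is sourced, the values are plain environment variables, which is how the node app will read them. A quick illustration from a shell (assuming node is installed locally):

source dev.deploy.cnf
node -e 'console.log(process.env.MONGO_URL)'   # mongodb://dev.mongo.test.com:27017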

Instead of maintaining these variables inside ECS task definitions, we can use s3 to hold our config files. By using IAM, we can grant read-only access to a single role, which the task assumes when it starts.

Create s3 Bucket with Limited Access

Create a new s3 bucket with your config files in it. Block off access to everyone except the IAM role you set up in the next step.

/deploy-configs
dev.deploy.cnf
qa.deploy.cnf
stage.deploy.cnf
prod.deploy.cnf
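
If you prefer the command line, here's a rough sketch with the aws cli. A newly created bucket is private by default, so only the role from the next step (and your own admin credentials) will be able to read it. Bucket names are globally unique, so deploy-configs is just a stand-in for your own name.

aws s3 mb s3://deploy-configs

aws s3 cp dev.deploy.cnf s3://deploy-configs/dev.deploy.cnf
aws s3 cp qa.deploy.cnf s3://deploy-configs/qa.deploy.cnf
aws s3 cp stage.deploy.cnf s3://deploy-configs/stage.deploy.cnf
aws s3 cp prod.deploy.cnf s3://deploy-configs/prod.deploy.cnf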

Create IAM Role

Follow this page to set up an IAM role and policy that has access to your s3 bucket.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html
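
If you'd rather script it, here's a minimal sketch with the aws cli of a role that ECS tasks can assume, plus a read-only policy for the bucket. The names deploy-config-role and deploy-config-read and the <account-id> placeholder are made up for this example; adjust them to whatever you create by following the guide.

# Task role that ECS tasks are allowed to assume
aws iam create-role \
  --role-name deploy-config-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Read-only access to the config bucket
aws iam create-policy \
  --policy-name deploy-config-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::deploy-configs",
        "arn:aws:s3:::deploy-configs/*"
      ]
    }]
  }'

aws iam attach-role-policy \
  --role-name deploy-config-role \
  --policy-arn arn:aws:iam::<account-id>:policy/deploy-config-read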

Create Docker Container

Here’s my setup using node:latest.

// Dockerfile
FROM node:latest

MAINTAINER davin.ninja

# Install aws cli
# (node:latest lacks the Python headers the installer needs, hence python-dev)
RUN apt-get update && apt-get -y install python-dev curl unzip

# cd only lasts for a single RUN, so chain the download and install steps
RUN cd /tmp && \
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
    unzip awscli-bundle.zip && \
    ./awscli-bundle/install && \
    rm -rf awscli-bundle awscli-bundle.zip

RUN mkdir -p /src/app
WORKDIR /src/app

# Install app dependencies
COPY package.json /src/app
RUN npm install

COPY . /src/app

# Expose port
EXPOSE 80

# Overwrite the entry-point script
COPY app.start.sh /app.start.sh
ENTRYPOINT ["/app.start.sh"]

What you need to be concerned about in this file is installing the aws-cli. Since we're using node as our base image, some of the sample code from AWS doesn't work as-is, so we need to install a few dependencies first. Note that we're installing python-dev rather than python: the bundled installer compiles PyYAML, which needs the Python headers (see the error at the end of this post). After installing the aws cli, we change our ENTRYPOINT (or CMD) to point to a new bash script, app.start.sh.

Create Start Script

// app.start.sh
#!/bin/bash
set -e

# Pull the config for this environment from s3
/root/.local/lib/aws/bin/aws s3 cp s3://deploy-configs/${DEPLOY_ENV}.deploy.cnf deploy.cnf

# Load the config into the environment, then start the app
source deploy.cnf

exec node /src/app/app.js

After creating app.start.sh, make it executable with chmod +x app.start.sh. This script downloads the config file from s3, based on the task's DEPLOY_ENV, then loads it and starts your app. Here the app is started with the node command, but yours will be whatever your ENTRYPOINT or CMD was in your Dockerfile. Note the location of the aws binary; we didn't add it to our $PATH.
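
Before wiring this into ECS, you can build and smoke-test the image locally. Outside of ECS there's no task role to assume, so you have to pass credentials that can read the bucket; the image tag my-node-app and the port mapping are just examples.

docker build -t my-node-app .

# No task role outside ECS, so pass credentials explicitly for this test
docker run --rm -p 8080:80 \
  -e DEPLOY_ENV=dev \
  -e AWS_ACCESS_KEY_ID=<key> \
  -e AWS_SECRET_ACCESS_KEY=<secret> \
  my-node-app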

Create ECS Task

Finally, you need to create a task definition that uses your docker image. In order for the container to have access to s3, set the task role to the IAM role we set up earlier. Also, set the environment variable DEPLOY_ENV to whichever environment the task is for; you'll need a separate task definition for each environment. After that, run your task and check that your config file was downloaded successfully.
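
Here's a trimmed-down sketch of registering such a task definition with the aws cli. The family, container name, image, and role ARN are placeholders for this example; the parts that matter here are taskRoleArn and the DEPLOY_ENV environment variable.

aws ecs register-task-definition \
  --family my-node-app-dev \
  --task-role-arn arn:aws:iam::<account-id>:role/deploy-config-role \
  --container-definitions '[
    {
      "name": "my-node-app",
      "image": "<your-registry>/my-node-app:latest",
      "memory": 256,
      "essential": true,
      "portMappings": [{ "containerPort": 80 }],
      "environment": [{ "name": "DEPLOY_ENV", "value": "dev" }]
    }
  ]'

Because the role is attached to the task itself, the container never needs long-lived AWS keys baked into the image or the task definition.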

This link has some info on setting up the docker container.
https://aws.amazon.com/blogs/security/how-to-manage-secrets-for-amazon-ec2-container-service-based-applications-by-using-amazon-s3-and-docker/

The Error

Since most people arrive here from Google, here's the shortened error you get when you follow the sample code from the link above without installing python-dev on the node image.

Running cmd: /usr/bin/python virtualenv.py --python /usr/bin/python /root/.local/lib/aws
Running cmd: /root/.local/lib/aws/bin/pip install --no-index --find-links file:///awscli-bundle/packages awscli-1.11.85.tar.gz

  x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c ext/_yaml.c -o build/temp.linux-x86_64-2.7/ext/_yaml.o
  ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
   #include "Python.h"
                      ^
  compilation terminated.
  error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

Failed to build PyYAML

    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c ext/_yaml.c -o build/temp.linux-x86_64-2.7/ext/_yaml.o
    ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
     #include "Python.h"
                        ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ----------------------------------------
  Failed building wheel for PyYAML
Command "/root/.local/lib/aws/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-7eYg13/PyYAML/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-qVo4u0-record/install-record.txt --single-version-externally-managed --compile --install-headers /root/.local/lib/aws/include/site/python2.7/PyYAML" failed with error code 1 in /tmp/pip-build-7eYg13/PyYAML/

The command '/bin/sh -c ./awscli-bundle/install' returned a non-zero code: 1