Continuous Delivery With AWS Beanstalk, CodePipeline and Terraform

Jul 2, 2018

One mistake I made with some of my early projects was starting off with manual deployments. I thought that getting it up and running and delivered was the most important goal, and that a manual deployment would reach it the quickest. Having delivered a significant number of projects since then, I now completely disagree.

Always automate from the beginning. You will thank yourself later.

Automating first provides a number of clear advantages. The most obvious is that you don’t have to build the automation later. As your project grows, so will its infrastructure requirements, and it becomes increasingly difficult to automate once you’ve forgotten the manual steps you took to get your application up and running. A less obvious benefit is that you can reuse your automation scripts on future projects to reduce zero-to-deployment time. Finally, you can combine your automation with modules created by the community to get a best-practice setup with little effort.

In this post we’ll focus on continuous delivery, a system that perpetually deploys our application when we update our central source code repository. If we were to pseudocode a continuous delivery solution, it might look something like this:

// Step 1: Listen for source updates
sourceCode.on('update', (source) => {
  // Step 2: Build artifact from source
  const buildResults = build(source);
  if(buildResults.failed) return notify('Build failed!');

  // Step 3: Ensure tests pass
  const testResults = test(buildResults.artifact);
  if(testResults.failed) return notify('Your code tests failed!');

  // Step 4: Get approval for deployment
  const approvalResults = requestApproval(buildResults.artifact);
  if(approvalResults.denied) return notify('Your deployment was denied!');

  // Step 5: Deploy artifact
  const deploymentResult = deploy(buildResults.artifact);
  if(deploymentResult.failed) return notify('Your deployment failed!');

  // Step 6: Notify stakeholders that deployment worked
  notify('Deployment succeeded!');
});

And we’re done!

Just kidding. Setting up continuous delivery requires orchestrating quite a few different tools. Builds have to run in an isolated environment, tests have to run in an isolated environment, and deployment has to push built artifacts to running servers without downtime. We have to set all of this up.

Let’s do it.

Prerequisites

Make sure you have the following software and services installed and configured:

  • Git, plus a GitHub account with a repository for this project
  • Node.js and npm
  • The AWS CLI, configured with credentials for your AWS account
  • Terraform

All of the commands in this blog post assume you’re using a Mac or Linux. These commands will probably work on Windows, but they may require some modifications.
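
You can quickly confirm that the command-line tools are available (exact versions will vary):

git --version
node --version
npm --version
aws --version
terraform version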

Step 1: Build an application

We need to build an application that serves HTTP traffic. Let’s create a NodeJS application to use as an example. You can use anything here, so feel free to build something else instead.

Create a new directory and point it at your GitHub repository:

mkdir incredible-website
cd incredible-website
git init
echo "# Incredible website!" >> README.md
git add README.md
git commit -m "Initial commit."
git remote add origin git@github.com:FindAPattern/incredible-website.git
git push -u origin master

Make sure to replace git@github.com:FindAPattern/incredible-website.git with your own repository. Unless we’re good friends, you probably won’t be able to push to mine!

Next, let’s create a NodeJS project:

npm init
npm install --save express pm2
mkdir src

Express is a popular NodeJS framework for building web applications. PM2 is a popular NodeJS process manager for running web applications in production.

Initialize PM2:

$(npm bin)/pm2 init

This creates a file in the root named ecosystem.config.js. Configure PM2 to run our application by replacing the contents of ecosystem.config.js with:

module.exports = {
  apps : [{
    name: "app",
    script: "./src/app.js",
    env: {
      NODE_ENV: "development",
    },
    env_production: {
      NODE_ENV: "production",
    }
  }]
}

Now let’s build a simple web server. Create a file at src/app.js with the following contents:

const express = require('express');
const app = express();

const DEFAULT_PORT = 8081;
const PORT = process.env.PORT || DEFAULT_PORT;

app.get('/', (req, res) => res.send('Incredible website!'));

app.listen(PORT, () => console.log(`Incredible website listening on ${PORT}`));

We didn’t use the PORT environment variable here arbitrarily. AWS Beanstalk uses this environment variable to tell the application which local port to listen on. Your application won’t receive traffic on Beanstalk without it!

Test out your application by running the following, and then opening a browser to http://localhost:8081/:

$(npm bin)/pm2-dev start src/app.js
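
Since Beanstalk will pick the port for us, it’s worth confirming that the override actually works. For example, with an arbitrary free port:

PORT=3000 $(npm bin)/pm2-dev start src/app.js

Then browse to http://localhost:3000/ instead.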

Press Control+C to stop the server. Update package.json to include commands for running the process on Beanstalk:

{
  "name": "incredible-website",
  "version": "1.0.0",
  "description": "",
  "main": "src/app.js",
  "scripts": {
    "start": "$(npm bin)/pm2-runtime start ecosystem.config.js --env production",
    "build": "echo \"Nothing to build!\"",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/FindAPattern/incredible-website.git"
  },
  "author": "",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/FindAPattern/incredible-website/issues"
  },
  "homepage": "https://github.com/FindAPattern/incredible-website#readme",
  "dependencies": {
    "express": "^4.16.3",
    "pm2": "^2.10.4"
  }
}
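
As a quick sanity check, you can run the production start command locally; this is the same command Beanstalk will run. Press Control+C when you’re done:

npm start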

Lastly, we need to add a build specification that tells Amazon how to build our project. We don’t actually have anything to build, so I’ve set up the npm run build command to echo some text. If you need to build client-side JavaScript, this is the command you’d do it in. Regardless, create a buildspec.yml file in the root of your project and write the following in it:

version: 0.2

phases:
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm install
  build:
    commands:
      - echo Building files
      - npm run build
      - rm -rf node_modules
artifacts:
  files:
    - '**/*'
  base-directory: '.'
  discard-paths: no
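
Commit and push everything so far, so the pipeline we build later has a complete project to work with:

git add -A
git commit -m "Add application, PM2 config, and build spec"
git push origin master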

Our application is ready. Now let’s configure it to work with Beanstalk.

Step 2: Set up Terraform

We need to set up Terraform before we can create any infrastructure. By default, it doesn’t know anything about our AWS account, and it stores state locally. Let’s fix that.

Let’s create a new Terraform project:

mkdir deployment
cd deployment
terraform init

You should separate your deployment files from your application repository. I’m combining them in this post to simplify the tutorial.

Now let’s create a bucket to store Terraform state. I’ll name it terraform-artifacts-bucket for this tutorial, but you’ll have to pick something unique, since AWS requires globally unique names for buckets.

aws s3api create-bucket --acl private --bucket terraform-artifacts-bucket
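
Note that this works as-is in us-east-1; in any other region, S3 requires an explicit location constraint, for example:

aws s3api create-bucket --acl private --bucket terraform-artifacts-bucket --create-bucket-configuration LocationConstraint=eu-west-1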

Terraform recommends enabling bucket versioning so that we can recover the state file if something goes wrong. Let’s do that as well:

aws s3api put-bucket-versioning --bucket terraform-artifacts-bucket --versioning-configuration Status=Enabled
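
You can verify that versioning took effect:

aws s3api get-bucket-versioning --bucket terraform-artifacts-bucket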

Great. Now we can use the bucket to store Terraform state. Create a file named terraform.tf in the root of your project, and write the following in it:

terraform {
  backend "s3" {
    bucket = "terraform-artifacts-bucket"
    key    = "incredible-website/terraform.tfstate"
    region = "us-east-1"
  }
}

Now run the following command to initialize the backend:

terraform init

It should create a .terraform folder in the root of your project.

Now that we have a backend configured, let’s configure our project to use our AWS user. Add the following to terraform.tf:

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

This tells Terraform to use the access key and secret key from our local project variables. We’ll have to define those, since they don’t exist yet.

Create a file named variables.tf with the following contents:

variable "access_key" {}
variable "secret_key" {}
variable "region" {
  default = "us-east-1"
}

Tell Terraform what values to use by creating a file named terraform.tfvars with the following contents:

access_key = "your-aws-access-key-here"
secret_key = "your-aws-secret-key-here"

We’ll have to tell Terraform to initialize the aws provider by running the following command:

terraform init

Lastly, just in case you’re storing this project in Git (you should be!), let’s tell Git to ignore our sensitive Terraform files by creating a file named .gitignore with the following contents:

**/.terraform/*
*.tfstate
*.tfstate.*
crash.log
*.tfvars

Make sure you do not check any API keys into your repository! For simplicity, we’ve stored sensitive keys in a .tfvars file. Terraform recommends storing them in environment variables.
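
If you’d rather follow that recommendation, remove the keys from terraform.tfvars and export them as TF_VAR_-prefixed environment variables instead, which Terraform reads automatically:

export TF_VAR_access_key="your-aws-access-key-here"
export TF_VAR_secret_key="your-aws-secret-key-here"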

Step 3: Set up Beanstalk

Our build process will require access to CodeBuild, CodePipeline, EC2, and Beanstalk. Let’s create a role for that using Terraform. Create a file in your Terraform repository named roles.tf containing the following:

resource "aws_iam_role" "build" {
  name = "incredible-website-build-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codebuild.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "elasticbeanstalk.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "beanstalk_policy" {
  name = "incredible-website-beanstalk-policy"
  role = "${aws_iam_role.build.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*"
      ]
    },
    {
      "Sid": "XRayAccess",
      "Action":[
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "CloudWatchLogsAccess",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogStream"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk*"
      ]
    },
    {
      "Sid": "CloudWatchCodeBuildLogsAccess",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogStream"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:*:*:log-group:/aws/codebuild*"
      ]
    },
    {
        "Sid": "AllowPassRoleToElasticBeanstalk",
        "Effect": "Allow",
        "Action": [
            "iam:PassRole"
        ],
        "Resource": "*",
        "Condition": {
            "StringLikeIfExists": {
                "iam:PassedToService": "elasticbeanstalk.amazonaws.com"
            }
        }
    },
    {
        "Sid": "AllowCloudformationOperationsOnElasticBeanstalkStacks",
        "Effect": "Allow",
        "Action": [
            "cloudformation:*"
        ],
        "Resource": [
            "arn:aws:cloudformation:*:*:stack/awseb-*",
            "arn:aws:cloudformation:*:*:stack/eb-*"
        ]
    },
    {
      "Sid": "LoadBalancer",
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:*" 
      ],
      "Resource": [
        "*"
      ]
    },
    {
        "Sid": "AllowDeleteCloudwatchLogGroups",
        "Effect": "Allow",
        "Action": [
            "logs:DeleteLogGroup"
        ],
        "Resource": [
            "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk*"
        ]
    },
    {
        "Sid": "AllowS3OperationsOnElasticBeanstalkBuckets",
        "Effect": "Allow",
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::elasticbeanstalk-*",
            "arn:aws:s3:::elasticbeanstalk-*/*"
        ]
    },
    {
        "Sid": "AllowOperations",
        "Effect": "Allow",
        "Action": [
            "autoscaling:AttachInstances",
            "autoscaling:CreateAutoScalingGroup",
            "autoscaling:CreateLaunchConfiguration",
            "autoscaling:DeleteLaunchConfiguration",
            "autoscaling:DeleteAutoScalingGroup",
            "autoscaling:DeleteScheduledAction",
            "autoscaling:DescribeAccountLimits",
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeLoadBalancers",
            "autoscaling:DescribeNotificationConfigurations",
            "autoscaling:DescribeScalingActivities",
            "autoscaling:DescribeScheduledActions",
            "autoscaling:DetachInstances",
            "autoscaling:PutScheduledUpdateGroupAction",
            "autoscaling:ResumeProcesses",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:SuspendProcesses",
            "autoscaling:TerminateInstanceInAutoScalingGroup",
            "autoscaling:UpdateAutoScalingGroup",
            "cloudwatch:PutMetricAlarm",
            "ec2:AssociateAddress",
            "ec2:AllocateAddress",
            "ec2:AuthorizeSecurityGroupEgress",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:CreateSecurityGroup",
            "ec2:DeleteSecurityGroup",
            "ec2:DescribeAccountAttributes",
            "ec2:DescribeAddresses",
            "ec2:DescribeImages",
            "ec2:DescribeInstances",
            "ec2:DescribeKeyPairs",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeSubnets",
            "ec2:DescribeVpcs",
            "ec2:DisassociateAddress",
            "ec2:ReleaseAddress",
            "ec2:RevokeSecurityGroupEgress",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:TerminateInstances",
            "ecs:CreateCluster",
            "ecs:DeleteCluster",
            "ecs:DescribeClusters",
            "ecs:RegisterTaskDefinition",
            "elasticbeanstalk:*",
            "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
            "elasticloadbalancing:ModifyTargetGroup",
            "elasticloadbalancing:ConfigureHealthCheck",
            "elasticloadbalancing:CreateLoadBalancer",
            "elasticloadbalancing:DeleteLoadBalancer",
            "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
            "elasticloadbalancing:DescribeInstanceHealth",
            "elasticloadbalancing:DescribeLoadBalancers",
            "elasticloadbalancing:DescribeTargetHealth",
            "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
            "elasticloadbalancing:DescribeTargetGroups",
            "elasticloadbalancing:RegisterTargets",
            "elasticloadbalancing:DeregisterTargets",
            "iam:ListRoles",
            "logs:CreateLogGroup",
            "logs:PutRetentionPolicy",
            "rds:DescribeDBInstances",
            "rds:DescribeOrderableDBInstanceOptions",
            "rds:DescribeDBEngineVersions",
            "sns:ListTopics",
            "sns:GetTopicAttributes",
            "sns:ListSubscriptionsByTopic",
            "sqs:GetQueueAttributes",
            "sqs:GetQueueUrl",
            "codebuild:CreateProject",
            "codebuild:DeleteProject",
            "codebuild:BatchGetBuilds",
            "codebuild:StartBuild"
        ],
        "Resource": [
            "*"
        ]
    }
  ]
}
POLICY
}

resource "aws_iam_instance_profile" "build" {
  name = "incredible-website-build-profile"
  role = "${aws_iam_role.build.name}"
}

resource "aws_s3_bucket_policy" "artifacts" {
  bucket = "${aws_s3_bucket.artifacts.id}"
  policy =<<POLICY
{
  "Version": "2012-10-17",
  "Id": "incredible-website-artifacts-policy",
  "Statement": [
    {
      "Sid": "incredible-website-access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_iam_role.build.arn}"
      },
      "Action": ["s3:ListBucket"],
      "Resource": ["${aws_s3_bucket.artifacts.arn}"]
    },
    {
      "Sid": "incredible-website-child-access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_iam_role.build.arn}"
      },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["${aws_s3_bucket.artifacts.arn}/*"]
    }
  ]
}
POLICY
}

Feel free to look through the exact policies set up in this file. They’re broader than they ought to be for an isolated production application, but they’re fine for our purposes.

Now that we have a role and a profile for our build process to use, let’s define the Beanstalk application. Create a file named application.tf and add the following to it:

resource "aws_elastic_beanstalk_application" "app" {
  name        = "incredible-website"
  description = "Application for the incredible website."
}

resource "aws_elastic_beanstalk_environment" "production" {
  name                = "production"
  application         = "${aws_elastic_beanstalk_application.app.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v4.5.1 running Node.js"

  setting {
    namespace = "aws:elbv2:listener:80"
    name = "ListenerEnabled"
    value = "true"
  }

  setting {
    namespace = "aws:elbv2:listener:80"
    name = "Protocol"
    value = "HTTP"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name = "LoadBalancerType"
    value = "application"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name = "NODE_ENV"
    value = "production"
  }

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name = "IamInstanceProfile"
    value = "${aws_iam_instance_profile.build.name}"
  }

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name = "InstanceType"
    value = "t2.small"
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name = "MinSize"
    value = "1"
  }
}

output "url" {
  value = "${aws_elastic_beanstalk_environment.production.cname}"
}

This file specifies the parameters of our Beanstalk application. It sets it up to:

  • Run NodeJS
  • Use an HTTP load balancer on port 80
  • Set the Node environment to production
  • Autoscale up from a single instance
  • Use small instances

Time to set up our deployment pipeline!

Step 4: Set up CodeBuild

Our code pipeline depends on AWS CodeBuild to build its deployments, so we’ll need to set that up first. AWS CodeBuild takes a source repository and builds it by running the commands specified in that repository’s buildspec.yml file. When it’s done building, it stores the output in an S3 bucket, which CodePipeline uses to deploy it.

Set that up by creating a build.tf file with the following contents:

resource "aws_s3_bucket" "artifacts" {
  bucket = "incredible-website-artifacts"
  acl    = "private"
}

resource "aws_codebuild_project" "build" {
  name = "incredible-website-project"
  description = "Builds the client files for the incredible-website environment."
  build_timeout = "5"
  service_role = "${aws_iam_role.build.arn}"

  artifacts = {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image = "aws/codebuild/nodejs:7.0.0"
    type = "LINUX_CONTAINER"

    environment_variable {
      "name"  = "S3_BUCKET"
      "value" = "${aws_s3_bucket.artifacts.bucket}"
    }
  }

  source {
    type = "CODEPIPELINE"
    buildspec = "buildspec.yml"
  }
}

This creates a CodeBuild project that reads and writes artifacts through CodePipeline. We’ll have to set up CodePipeline to test it, so let’s do that.

Step 5: Set up CodePipeline

Our last step is to set up CodePipeline. This can be a bit tricky, because for private GitHub repositories it requires generating an access token, which Terraform picks up from the GITHUB_TOKEN environment variable. Let’s set up GitHub first.

First, let’s create some variables to store our GitHub account information. Add the following to the variables.tf in your deployment directory:

variable "github_organization" {}
variable "github_repository" {}
variable "github_branch" {}

This tells Terraform what variables it can expect to use when creating resources. Let’s store some values in them.

Add the following to the terraform.tfvars file in your deployment directory:

github_organization = "FindAPattern"
github_repository = "incredible-website"
github_branch = "master"

You should replace the repository information with your own. Otherwise you may feel quite let down to see my incredible website when you run a deployment.

Now let’s generate a personal access token to give CodePipeline permission to access our private repository. GitHub has an excellent article on how to do this, so I’ll give you that instead of reproducing it here: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/.

It’s time to create our actual pipeline. Fortunately, that’s pretty simple now that we’ve configured all of the appropriate underlying services. Create a file named pipeline.tf in the root of your project and add the following to it:

resource "aws_codepipeline" "pipeline" {
  name     = "incredible-website-pipeline"
  role_arn = "${aws_iam_role.build.arn}"

  artifact_store {
    location = "${aws_s3_bucket.artifacts.bucket}"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]

      configuration {
        Owner      = "${var.github_organization}"
        Repo       = "${var.github_repository}"
        Branch     = "${var.github_branch}"
      }
    }
  }

  stage {
    name = "Build"

    action {
      name = "Build"
      category = "Build"
      owner = "AWS"
      provider = "CodeBuild"
      input_artifacts = ["source"]
      output_artifacts = ["artifact"]
      version = "1"

      configuration {
        ProjectName = "${aws_codebuild_project.build.name}"
      }
    }
  }

  stage {
    name = "Deploy"

    action {
      name = "Deploy"
      category = "Deploy"
      owner = "AWS"
      provider = "ElasticBeanstalk"
      input_artifacts = ["artifact"]
      version = "1"

      configuration {
        ApplicationName = "${aws_elastic_beanstalk_application.app.name}"
        EnvironmentName = "${aws_elastic_beanstalk_environment.production.name}"
      }
    }
  }
}

Run the following to give it a spin:

GITHUB_TOKEN=your-github-token-here terraform apply

See your site running in a browser by running the following command:

open http://$(terraform output url)
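
If the page doesn’t load right away, the environment may still be launching. You can check its health from the CLI:

aws elasticbeanstalk describe-environments --application-name incredible-website --environment-names production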

Step 6: Test your continuous delivery pipeline

Navigate back to your application repository and change the contents of src/app.js to the following:

const express = require('express');
const app = express();

const DEFAULT_PORT = 8081;
const PORT = process.env.PORT || DEFAULT_PORT;

app.get('/', (req, res) => res.send('My deployment pipeline works!'));

app.listen(PORT, () => console.log(`Incredible website listening on ${PORT}`));

Push it to GitHub:

git add .
git commit -m "Improved application message"
git push origin master

Wait five minutes, and then re-open your browser:

open http://$(terraform output url)

Step 7: Clean up

All of the instances and resources we created cost money. Make sure to destroy them by running the following command:

terraform destroy
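
One thing terraform destroy won’t touch is the state bucket we created by hand at the start. If you’re finished with it entirely, you can delete it too; note that because versioning is enabled, you may need to purge old object versions before the bucket will delete:

aws s3 rm s3://terraform-artifacts-bucket --recursive
aws s3 rb s3://terraform-artifacts-bucket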

One final note: we didn’t include a testing stage here. You can always add tests to your buildspec.yml, which will cause the build to fail if any unit tests don’t pass. CodePipeline also supports a dedicated Test stage that integrates with CodeBuild. I’ll leave that as an exercise for you.

And that’s it. Happy coding!

You can get a copy of the application code here, and the deployment code here.
