Continuous Delivery Using Google Kubernetes Engine and Google Cloud Build

Jul 25, 2018 13:56 · 1506 words · 8 minute read DevOps GKE Kubernetes Cloud Build

Yesterday, Google announced their new product: Cloud Build. This announcement came right after I had just spent a couple weeks writing about how to automate deployments in Kubernetes using Jenkins. It took me about 30 minutes to port over all of that code to Cloud Build, so I decided to write this post instead.

Cloud Build’s approach to continuous delivery is pretty simple. You create a “Build Trigger”, which tells Cloud Build which repository to watch for changes. Whenever you push a tag or push a branch, Cloud Build pulls the source, looks for a “cloudbuild.yaml” file in the root, and then follows the instructions in that file to run a deployment.

Simple enough. So what do those instructions look like? I’m glad you asked!

The instructions contain a list of commands, each referencing what Google calls a “Cloud Builder”. These builders are actually Docker images to which you pass arguments in order to execute build steps.

For example, you can execute a Docker build command using the “gcr.io/cloud-builders/docker” image. A “cloudbuild.yaml” for that command may look like this:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/superb-website:1.0.$BUILD_ID', '.']

If you were to put this in the root of your repository, Google would:

  • Detect the change
  • Pull the source
  • Set the working directory to the newly cloned repository
  • Run docker build -t gcr.io/$PROJECT_ID/superb-website:1.0.$BUILD_ID .

You can see a list of the built-in Cloud Builders in the GoogleCloudPlatform/cloud-builders repository on GitHub.

Now that we have an idea of how this works, let’s build an application from scratch.


Create a cluster

First, let’s log in to Google Cloud, create a project, and set it as our active project:

gcloud auth login
gcloud projects create YOUR_PROJECT_NAME_HERE --name="Cloud Build Example"
gcloud config set project YOUR_PROJECT_NAME_HERE

Before we can create a cluster, we need to enable the Kubernetes Engine API. You can do that through the “APIs & Services” section of the Cloud Console.
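If you prefer to stay in the terminal, you should also be able to enable the API with gcloud directly:

```shell
# Enable the Kubernetes Engine API for the active project.
gcloud services enable container.googleapis.com
```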

Next, let’s create our cluster:

gcloud container clusters create cloud-build-example \
      --zone us-central1-b \
      --enable-autorepair \
      --num-nodes 2 \
      --enable-autoscaling \
      --min-nodes 2 \
      --max-nodes 4

Let it chug for a few minutes while it creates the cluster. Meanwhile, we can start building the application.
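Once the cluster is up, fetch its credentials so that the kubectl commands later in this post talk to the right cluster:

```shell
# Configure kubectl to point at the new cluster.
gcloud container clusters get-credentials cloud-build-example \
    --zone us-central1-b
```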

Build the Application

We’ll build a simple NodeJS application and push it to GitHub.

mkdir hello-world
cd hello-world
npm init
npm install --save express

Now that we have a base application set up, create a file named index.js with the following content:

const express = require('express');
const app = express();

const PORT = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello, World!');
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
Let’s tell NPM how to start our server. Open package.json and add a “start” command to it. It should look something like this:

{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.16.3"
  }
}
Test it out by running npm start and opening http://localhost:8080 in a browser.

You should see a webpage displaying “Hello, World!”.

Push Application to GitHub

Go to GitHub and create an empty repository. We’ll use this to store the code. Once you have that created, you can push your code by running the following:

git init
echo "node_modules" > .gitignore
git add .
git commit -m "Initial commit"
git remote add origin YOUR_REPOSITORY_URL_HERE
git push -u origin master

Dockerize our Application

Since we’re going to run this in Kubernetes, we’ll need to run our application in a Docker container. To do that, we’ll need to create a Dockerfile, and for testing we’ll create a docker-compose.yml file.

First, let’s create the Dockerfile:

FROM node:8

WORKDIR /usr/src/site/
COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080

CMD ["npm", "start"]

Next, let’s create the docker-compose.yml:

version: '3'
services:
  web:
    build: .
    volumes:
      - .:/usr/src/site
      - /usr/src/site/node_modules
    ports:
      - 8080:8080

Test it out by running:

docker-compose up

Open a browser to http://localhost:8080 to make sure it worked.

Deploy Application Manually

We’ll deploy the first version manually, and then use Cloud Build to update the image in the existing deployment. Let’s figure out what we need to get this running:

  • deployment.yml – This will store the information about how to create instances of our container.
  • service.yml – This creates a local domain by which other Kubernetes resources can access instances of our container.
  • ingress.yml – This creates a load balancer that exposes our local service to the internet.

Let’s build them.

First, let’s build and push our image:

gcloud auth configure-docker
docker build -t gcr.io/YOUR_PROJECT_NAME_HERE/hello-world:1.0.0 .
docker push gcr.io/YOUR_PROJECT_NAME_HERE/hello-world:1.0.0

The first line allows Docker to push to our Google Container Registry.

Now create a file in k8s/deployment.yml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: web
        image: gcr.io/YOUR_PROJECT_NAME_HERE/hello-world:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080

Make sure to replace YOUR_PROJECT_NAME_HERE with your actual project name.

Now, create our service at k8s/service.yml:

kind: Service
apiVersion: v1
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  type: NodePort
  ports:
  - protocol: TCP
    nodePort: 32131
    port: 80
    targetPort: 8080

And finally, create our ingress at k8s/ingress.yml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  labels:
    last_updated: "1"
spec:
  rules:
  - host: YOUR_DOMAIN_HERE
    http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80

Now create everything by running:

kubectl apply -f k8s/

You’ll have to add a CNAME record pointing your domain to your new load balancer to test that this works. You can get your load balancer IP by running kubectl get ingress until you see it appear underneath the ADDRESS column.

Once your DNS propagates, you should be able to visit your website via the domain you entered into the ingress.

Adding Google Cloud Build Instructions

Now for the fun part: setting up Google Cloud Build. We’ve already defined in our Dockerfile how to build our image, so we’ll need to instruct Google Cloud Build to follow the steps we just manually used to deploy.

Create a file in the root of your repository named cloudbuild.yaml and put the following contents into it:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-world:1.0.$BUILD_ID', '.']
  timeout: 180s
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/hello-world:1.0.$BUILD_ID']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - hello-world
  - web=gcr.io/$PROJECT_ID/hello-world:1.0.$BUILD_ID
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=cloud-build-example'

If you don’t want to hardcode your cluster and zone, you can use Cloud Build’s variable substitution instead. For simplicity, I’m leaving that out of this post.

Let’s break this down.

The first step builds the image. It replaces the project ID automatically, and it adds the build ID (a randomly generated guid) onto the end of the tag so that Kubernetes will know to pull a new image.

The second step pushes the image to your container registry.

The third step manually overrides the image for the web container within the hello-world deployment. This causes Kubernetes to pull the new image and deploy it automatically.
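For reference, that third step is roughly equivalent to running the following by hand (the build ID shown here is a placeholder):

```shell
# Swap the image of the "web" container in the hello-world deployment,
# which triggers a rolling update.
kubectl set image deployment hello-world \
    web=gcr.io/YOUR_PROJECT_NAME_HERE/hello-world:1.0.SOME_BUILD_ID
```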

Finally, push all of your changes to GitHub:

git add .
git commit -m "Added CloudBuild."
git push origin master

Enabling Google Cloud Build

By default, Cloud Build is disabled. You will need to enable it by navigating to the “APIs & Services” subsection of your Cloud Console.

Once you do that, Cloud Build is enabled, but it cannot access your Kubernetes cluster. You’ll need to give it access. Do this by:

  • Open your Cloud Console to the “IAM & admin” subsection.
  • Click the “IAM” section.
  • Click the pencil icon next to the member ending in “@cloudbuild.gserviceaccount.com” (the Cloud Build service account).
  • Select “Add New Role”.
  • Find “Kubernetes Engine Admin” and add it.
  • Click “Save”.

Adding a Build Trigger

This is the last step!

  • Navigate to the Cloud Console -> Cloud Build -> Build Triggers section.
  • Click “Create trigger”.
  • Click “GitHub”.
  • Click “Continue”.
  • Grant Cloud Build access to GitHub.
  • Select your repository; read and then accept the license agreement.
  • Type the following into the “Branch (regex)” field: ^master$
  • Under “Build configuration,” select “cloudbuild.yaml”
  • Click “Create trigger”

That’s it!

Test it out by pushing some changes to your repository; within a minute or two, it should get pushed to your live infrastructure.

Cleanup (optional)

You probably don’t want to continue paying for this, so make sure to delete your cluster by running:

gcloud container clusters delete cloud-build-example --zone us-central1-b


That wasn’t too bad. This is still all pretty new, so I’m sure it will be changing soon enough, but I like the simplicity so far. It seems like making custom Cloud Builders adds a lot of potential for doing things like using templated Kubernetes resource files instead of manual image overrides, deploying to multiple environments, automated rollbacks, etc…

You can access all of the code for this post in the accompanying GitHub repository.

Let me know what you think, and happy coding!
