Easy CI and CD for Docker with Bitbucket Pipelines

Vladimir Akopyan · Published in Quickbird · 11 min read · Feb 12, 2018

Cloud services like Microsoft Azure do a great job of providing runtimes for webapps — you can deploy a web server from Visual Studio with a couple of clicks. But many applications don’t fit that mould — I found myself needing to deploy a custom MQTT server for IoT applications instead of paying a fortune for messaging services like PubNub. This is where containers come in handy.

Containers are much easier to manage than virtual machines, and lend themselves to really sweet CD workflows. These are my notes on setting up CD with Docker, Bitbucket, DotNet Core and Azure Container Instances, and issues that I’ve encountered.

As a refresher, the Continuous X paradigms, in order of increasing complexity, are:

  • Continuous Testing — unit tests are run on every commit.
  • Continuous Integration — testing the connected App + Server + DB.
  • Continuous Delivery — preparing the system for one-click deployment. This is what we are trying to accomplish.
  • Continuous Deployment — going all in. A commit to staging updates the test server; a commit to master updates the production server.

Steps in this tutorial:

  • 1 — Setup Project Locally
  • 2 — Put it in Docker
  • 3 — Push it to Dockerhub
  • 4 — Setup Pipelines
  • 5 — Setup Workflow
  • 6 — Deploy to Azure

Working Results

1 — Setup Project Locally

This project is an MQTT broker written in C# and running on .Net Core. The server itself is built with the MQTTnet library and is quite easy to use — you can set up topics, respond to messages from clients, save them to a database, etc.

If you are not working with C#, the tutorial works the same way with Node or any other framework — just skip to the Docker section.

Get the server working

Create a DotNet Core console application and call it ‘MQTTserver’. Add the MQTTnet library with Nuget:

Install-Package MQTTnet
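If you prefer the cross-platform dotnet CLI over the Visual Studio Package Manager Console, the equivalent command (assuming the project folder is named MQTTserver, as in this tutorial) is:

```shell
# Add the MQTTnet NuGet package to the MQTTserver project
dotnet add MQTTserver package MQTTnet
```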

The meat of the program

  • Start up a server on port 1883 (the standard MQTT port) with no authentication.
  • Send out a heartbeat message once a second on the topic ‘heartbeat’ saying ‘I am alive for N seconds’.
  • Close the program if the user types in ‘quit’.

Normally I would have ‘Press any key to exit’, but when I tested that in Docker, Console.ReadLine would return a null string as soon as the container started, causing the application to shut down or crash with a NullReferenceException.

Place the following code inside class Program. Namespace is skipped for readability.

static void Main(string[] args)
{
    Console.WriteLine("Server Started!");
    Task.Run(async () => await StartServer()).Wait();
}

private static async Task StartServer()
{
    var optionsBuilder = new MqttServerOptionsBuilder()
        .WithConnectionBacklog(100)
        .WithDefaultEndpointPort(1883);
    var mqttServer = new MqttFactory().CreateMqttServer();
    await mqttServer.StartAsync(optionsBuilder.Build());

    var ct = new CancellationTokenSource();
    var heartbeatTask = Task.Run(async () => await ServerHeartbeat(ct.Token, mqttServer));

    Console.WriteLine("Type 'quit' to exit");
    while (true)
    {
        //Docker has a habit of sending random shit into stdin
        string input = Console.ReadLine();
        if (input != null && input.Contains("quit"))
            break;
    }
    ct.Cancel();
    await heartbeatTask;
    await mqttServer.StopAsync();
}

private static async Task ServerHeartbeat(CancellationToken token, IMqttServer server)
{
    long heartbeat = 0;
    while (token.IsCancellationRequested == false)
    {
        var message = new MqttApplicationMessageBuilder()
            .WithTopic("heartbeat")
            .WithPayload($"I am alive for {heartbeat} seconds")
            .WithAtMostOnceQoS()
            .WithRetainFlag(false)
            .Build();
        await server.PublishAsync(message);
        await Task.Delay(1000);
        heartbeat++;
    }
}

Test the application manually

Download any MQTT client, for example MQTTfx, connect to localhost with default settings and subscribe to the topic heartbeat. Beware: topic names are case sensitive. You should see messages coming in like clockwork. You can also connect from several clients and try ping-ponging MQTT messages — it should work like any other broker.

Setup Testing

Quality tests are important in Serious Business™, but for this exercise we will just mock up unit tests and pretend that’s all of our testing done. Add this class to the MQTTserver project. I placed it inside Program.cs for convenience.

public class Util
{
    public static int TestMeIAddNumbers(int A, int B)
    {
        return A + B;
    }
}

Now add a Tests project to the solution using the xUnit framework. Add the MQTTserver project to its references. Namespace is skipped for readability.

using System;
using Xunit;
using MQTTserver;

public class UnitTest1
{
    [Theory]
    [InlineData(5, 1, 6)]
    [InlineData(7, 1, 8)]
    [InlineData(7, 11, 18)]
    public void Test1(int A, int B, int result)
    {
        int returned = Util.TestMeIAddNumbers(A, B);
        Assert.True(returned == result);
    }
}

Your solution directory should look like this:

|   .gitignore
|   bitbucket-pipelines.yml
|   MQTTserver.sln
|   README.md
|
+---MQTTserver
|       Dockerfile
|       MQTTserver.csproj
|       Program.cs
|
\---Tests
        Tests.csproj
        UnitTest.cs

We will produce the Dockerfile and bitbucket-pipelines.yml later in the tutorial. I hope you know what to do with .gitignore and README.md :)
If you open a command line in the solution root, tests should work as follows:

PS E:\Dev\cd_demo> dotnet test Tests
Build started, please wait...
Build completed.
Test run for E:\Dev\cd_demo\Tests\bin\Debug\netcoreapp2.0\Tests.dll(.NETCoreApp,Version=v2.0)
Microsoft (R) Test Execution Command Line Tool Version 15.3.0-preview-20170628-02
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
[xUnit.net 00:00:00.6315795] Discovering: Tests
[xUnit.net 00:00:00.7379109] Discovered: Tests
[xUnit.net 00:00:00.7913502] Starting: Tests
[xUnit.net 00:00:00.9461767] Finished: Tests
Total tests: 3. Passed: 3. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 1.9637 Seconds

Build your application for deployment before proceeding to Docker

PS E:\Dev\cd_demo> dotnet publish MQTTserver -c release -o build
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
MQTTserver -> E:\Dev\cd_demo\MQTTserver\bin\release\netcoreapp2.0\MQTTserver.dll
MQTTserver -> E:\Dev\cd_demo\MQTTserver\build\

2 — Put it in Docker

Don’t use Visual Studio to ‘Add Docker’ to your project — as usual Microsoft over-complicated the shit out of it, hid the complexity behind ‘clever’ tools, and when the tools break you are left in the lurch.

Let’s create a Dockerfile inside the MQTTserver folder. This file tells docker how to build an image.

FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY /build .
EXPOSE 1883
ENTRYPOINT ["dotnet", "MQTTserver.dll"]

  • We start off with an image that already has the DotNet 2.0 runtime. You can see all DotNet-related images produced by Microsoft.
  • Then we set the working directory to the /app folder inside the container.
  • We copy the compiled MQTTserver application from the /build folder on your computer into the /app folder inside the container.
  • Open port 1883.
  • On start, the container will run MQTTserver. If the program quits, the container stops.
  • The containerisation process works the same for any executable — you pick an image that has the runtime you need, copy in the files you need, and start your program.

Build the image and name it mqtt_image. Image names are lower-case only.

docker build . -t mqtt_image

‘Container’ and ‘image’ are often used interchangeably and confused. An image is read-only; the run command creates an instance of it — a container. You can run a hundred containers from the same image. Containers tend to start and die quickly and often; the image stays.

Now we can test it. Even though the port is specified in the Dockerfile, you have to publish it when creating a container. You can map the container’s internal port to a different external port.

docker run --name mqtt_container -p 1883:1883 mqtt_image
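For example, if host port 1883 is already taken, you can map the container’s internal 1883 to an arbitrary external port such as 8883 (a sketch — the container name mqtt_container2 and port 8883 are arbitrary choices for illustration):

```shell
# Internal port 1883 is published as host port 8883;
# MQTT clients on the host must then connect to port 8883.
docker run --name mqtt_container2 -p 8883:1883 mqtt_image
```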

3 — Push to Container Registry

You will need an account with DockerHub, Azure Container Registry, or a private registry. First you have to log in:

docker login <server> -u <username> -p <password>

  • Usernames are case sensitive.
  • If you have a private registry, such as Azure Container Registry, you need to specify the server. If you are using DockerHub, you have to omit the server!

The Dockerhub server is index.docker.io, but if, god forbid, you enter

docker login index.docker.io -u <username> -p <password>

it will report “login successful”, but when you try to push the image you will be greeted with denied: requested access to the resource is denied.
You Have To omit it! Why? Because fuck you, that’s why!

Now you need to tag the image before uploading. That means naming it so that it’s globally addressable and findable, plus adding a tag. A repository will typically hold several related images with different tags — they might be different releases, versions, or whatever you want them to be.

If you use DockerHub

docker tag <ImageName> <dockerhubUsername>/<RepositoryName>:<tag>
LIKE SO:
docker tag mqtt_image clumsypilot/cd_bitbucket_demo:from_pc

If you use another server

docker tag <ImageName> <Servername>/<whatever>/<projectname>:<tag>
LIKE SO:
docker tag mqtt_image coolcompany.azurecr.io/test/mqtt_image:from_pc

Final push

docker push clumsypilot/cd_bitbucket_demo:from_pc

You should see the resulting container image in DockerHub.

Now you should be able to get this image from any computer:

docker pull clumsypilot/cd_bitbucket_demo:from_pc

4 — Setup Bitbucket Pipelines

Bitbucket is one of the cheapest private repository services, and it comes with 500 minutes of Pipelines runtime — a service that basically copies the contents of your repo into a docker container and runs the contents of bitbucket-pipelines.yml as bash commands.

Create a bitbucket repository and upload the code you have so far. Go to settings and enable pipelines.

# This is a sample build configuration for .NET Core.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: microsoft/dotnet:2.0-sdk

pipelines:
  default:
    - step:
        name: Build MQTTserver
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet publish MQTTserver -c release -o build
        artifacts:
          - MQTTserver/build/**
    - step:
        name: Run Tests
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet test Tests
    - step:
        name: Publish the container
        caches:
          - dotnetcore
        script:
          - cd MQTTserver
          - docker login -u $dockerHubUsername -p $dockerHubPWD
          - docker build . -t mqtt_image
          - docker tag mqtt_image clumsypilot/cd_bitbucket_demo:from_bitbucket
          - docker push clumsypilot/cd_bitbucket_demo:from_bitbucket
        services:
          - docker

Successful pipelines build and push docker images from the repo to the container registry.
  • Pipelines are made up of steps, and each step has a name. Some people will put everything into one step. I prefer to break things down because then it’s clear, at a glance, what has failed.
  • These steps are basically the same CLI commands we practiced previously on the local computer to compile, containerise and push the application.
  • The main cause of pipeline errors for me was getting lost in the files and directories. You can place an ls command into the YAML file and it will print out the contents of the folder. Most other bash commands work too.
  • If Step 2 fails, Pipelines will not proceed to Step 3, so put them in a sensible order — first build, then test, then deploy.
  • The artifacts section allows you to save the results of one step for later steps to access. Here we save the build results.
  • You can add service containers to test your application with databases, etc.
  • Use the online validator to find syntax errors.
  • Go to Settings > Environment Variables and use those to store passwords. Here we store $dockerHubPWD as a variable in Bitbucket settings, and avoid having sensitive information in the repository.
  • By default Visual Studio will create Dockerfiles with a multistage build. Those don’t make sense in Pipelines, where we can keep the built application artifacts from Step 1. I found that a ‘simple’ docker build took 25s while a multistage docker build took 1m 40s.
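If you do get lost in the build container’s filesystem, a throwaway debugging step can help. This is only a sketch — the step name is hypothetical, and you would delete it once things work:

```yaml
- step:
    name: Debug paths
    script:
      - pwd                      # where has Pipelines dropped us?
      - ls -la                   # what did the clone bring in?
      - ls -la MQTTserver/build  # did the artifacts from the build step survive?
```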

5 — Setup Workflow

Picture stolen from http://nvie.com/posts/a-successful-git-branching-model/

A problem with the current setup is that the image in the container registry will be overwritten even if we push commits to a dev branch.

What if we are using that container in production? We want the image to update only when we merge code into the master branch! As of February 2018 there appears to be only a partial solution.

Option 1 — Branch Steps

You can specify branches in the YAML file, and have separate steps for master and steps for other branches. But then the master branch will only execute ‘its own’ steps, and not the others. This seems to be a requested and discussed feature, but for the moment the issue is not fixed.

That’s unfortunate because we want all branches to run build and test. So you get something like this:

pipelines:
  branches:
    master:
      - step:
          name: Build MQTTserver
          .....
      - step:
          name: Run Tests
          .....
      - step:
          name: Publish the container
          .....
    develop:
      - step:
          name: Build MQTTserver
          ....
      - step:
          name: Run Tests
          ....

Option 2 — Branch Tags

There is a number of environment variables available in Pipelines, and $BITBUCKET_BRANCH is the one that caught my attention. We can use it to tag images — each branch will get its own image in the repository. The master branch will have an image tagged master, and that’s the one that should be used for production; all others will be separate.

One gotcha is that branch names can contain uppercase letters and special characters, whereas Docker repository names must be lower-case and tags are restricted to
[a-zA-Z0-9_][a-zA-Z0-9_.-]*
So we have to play with bash a bit — the following takes the Bitbucket branch name, strips and replaces the forbidden characters, converts everything to lower case, and saves the result to the tag variable.

tag=$(echo "$BITBUCKET_BRANCH" | tr -dc '[:alnum:]\n\r-_' | tr '/*' '_'| tr '[:upper:]' '[:lower:]')

Slashes (/) are often used in branch names as a way of organising them, say Feature/x and Feature/y. They are not allowed in DockerHub repository names, but they are allowed in private registries such as Azure Container Registry. Tweak the script to your needs. If you are on Windows, Git Bash can be used to test this.
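To sanity-check the sanitization, you can run the same tr chain over a few sample branch names in any bash shell. The sanitize wrapper below is just a hypothetical helper for illustration; the tr chain inside it is the one used in the pipeline:

```shell
#!/bin/bash
# Wrap the pipeline's tr chain in a helper so we can feed it sample branch names.
sanitize() {
  echo "$1" | tr -dc '[:alnum:]\n\r-_' | tr '/*' '_' | tr '[:upper:]' '[:lower:]'
}

sanitize "master"             # -> master
sanitize "Feature/New-MQTT"   # -> feature_new-mqtt
sanitize "hotfix/v1.2"        # -> hotfix_v1.2
```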

image: microsoft/dotnet:2.0-sdk

pipelines:
  default:
    - step:
        name: Build MQTTserver
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet publish MQTTserver -c release -o build
        artifacts:
          - MQTTserver/build/**
    - step:
        name: Run Tests
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet test Tests
    - step:
        name: Publish the container
        caches:
          - dotnetcore
        script:
          - cd MQTTserver
          - docker login -u $dockerHubUsername -p $dockerHubPWD
          - docker build . -t mqtt_image
          - path="clumsypilot/cd_bitbucket_demo:"
          - tag=$(echo "$BITBUCKET_BRANCH" | tr -dc '[:alnum:]\n\r-_' | tr '/*' '_' | tr '[:upper:]' '[:lower:]')
          - echo "container name is $path$tag"
          - docker tag mqtt_image "$path$tag"
          - docker push "$path$tag"
        services:
          - docker

You will end up with a pile of container images in the registry, so pick your poison.

6 — Deploy to Azure Container Instances

At the time of testing, Azure Container Instances wouldn’t accept underscores in container names for some reason when using the portal UI, so I had to use the Cloud Shell CLI instead.

Once you’ve opened Cloud Shell, you need to create a new resource group for the Container Instances — they won’t just live alongside other services in an existing resource group!

New-AzureRmResourceGroup -Name mqttserver -Location westeurope

Once you have a new resource group, create the container instance:

New-AzureRmContainerGroup -ResourceGroupName mqttserver -Name mqttserver -Image clumsypilot/cd_bitbucket_demo:master -OsType Linux -Port 1883 -IpAddressType Public
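If you prefer the cross-platform Azure CLI over the AzureRM PowerShell cmdlets, the equivalent commands should look roughly like this (a sketch against Azure CLI 2.x syntax — check az container create --help for the current flags):

```shell
# Create the resource group, then the container instance from the pushed image.
az group create --name mqttserver --location westeurope
az container create \
    --resource-group mqttserver \
    --name mqttserver \
    --image clumsypilot/cd_bitbucket_demo:master \
    --os-type Linux \
    --ports 1883 \
    --ip-address Public
```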

You will get a print-out with the IP address of the new container, and you should be able to find it in the normal cloud UI.

You can now use MQTTfx to connect to the server and listen for the heartbeat: open the settings in MQTTfx and change the IP to that of the container.

To remove the container, run:

Remove-AzureRmContainerGroup -ResourceGroupName mqttserver -Name mqttserver
