As I mentioned in a recent post, I manage my blog using a static site generator. While this is great to a point—static site generators can handle effectively infinite traffic, they’re stupidly cheap to run, and I can use whatever editor I feel like—the downside is that I lose tons of features I used to have with dynamic blog engines. For example, while it’s almost true that I can use any editor I want, I don’t have a web-hosted editor like I would with WordPress or MovableType, and I likewise can’t trivially add any sort of dynamic content. Most of what I lose I can live without, but one thing that is genuinely annoying, and which has even bitten me in the past, is that I can’t publish without being on a computer that has both my SSH keys and the publishing toolchain installed. Not only is that inconvenient; it also means that publishing output can vary depending on which machine I use for a given publishing run.[1]

There’s a pretty easy fix for that: add continuous deployment. If it’s good enough for real software, it’s good enough for a personal blog. I can set up a single, consistent deployment environment on some server, drive all the deploys through that, and call it a day. The problems here are that a) setting up a continuous integration server is annoying, and b) I am lazy. There are cloud-hosted CI servers, but most of them are either overly complex or too expensive for me to justify using for my personal blog.

Enter Bitbucket. I’m already using them, since they’re far and away the best Mercurial hosting game in town these days, and they recently[2] added a new feature called Bitbucket Pipelines that fits all my requirements: it’s cloud-hosted, it’s easy to use, and, most importantly, it costs me nothing.[3]

And I’m glad I looked, because getting everything running turned out to be stupidly easy.

Step one: write the Dockerfile

Bitbucket Pipelines wants to base your deployment on a Docker image, so I had to write one. Thankfully, it’s so easy to make Docker images these days that pretty much everyone is making them—even when there is no conceivable reason why they should. So let’s set one up.

To deploy my blog, I need at least four things: Hugo, Pygments, rsync, and SSH. It took me a couple tries to get the Dockerfile just right (mostly because I straight-up forgot rsync and SSH on the first go), but the result is literally five lines, total:

FROM alpine:3.6

RUN apk add --no-cache bash git go libc-dev python py2-pip rsync openssh-client
RUN pip install pygments
RUN go get -u github.com/gohugoio/hugo

About the only thing remotely interesting here is that I’m using Alpine Linux, which I selected because it seemed to be what the cool kids were using these days, and because it was one of the smallest base Docker images I could find. I’m honestly not sure whether bash is needed (I suspect /bin/sh would’ve been just fine), but I originally wrote my deployment script for bash, and I’m too lazy to figure out whether I used any bashisms, so let’s just toss that in there anyway. What’s a paltry 34 MB between friends?

Tons of places host Docker images for free these days, and Bitbucket can use any of them; I kept it simple and pushed it to my Docker Hub account.[4]
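
For completeness, building and pushing the image is just the standard two-step below. I’m assuming you’ve already run docker login, and you’ll obviously want to substitute your own Docker Hub account for mine; the tag just needs to match whatever you reference in the pipeline config later.

$ docker build -t bpollack/blag-builder:latest .
$ docker push bpollack/blag-builder:latest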

Step two: write the build script

I actually already had a build script,[5] so all I really had to do was tweak it slightly to be run on something other than my personal machine. The result’s genuinely not interesting, but for completeness, the functional part of it looks like this:

#!/bin/bash

# Normal boilerplate (see e.g. https://sipb.mit.edu/doc/safe-shell/)
set -euo pipefail
IFS=$'\n\t'

# Add $GOPATH to the path so Hugo will be present
export PATH=$(go env GOPATH)/bin:$PATH
hugo --cleanDestinationDir
rsync -av --delete public/ publisher@bitquabit.com:/var/www/blag/

Again, nothing interesting here. We’re at exactly ten lines, and even that only because I added some comments and some blank lines for readability. I called this file build and stored it unceremoniously in the root of my blog repository.

Step three: test it…if you feel like it

Since we’re going to deploy files to a real server in an automated fashion, the next step is to test everything.

Or not. It’s your server; I’m not gonna tell you what to do.

Myself, I decided to half-ass it a bit. Pipelines just launches your Docker image, copies your project into the container, sets your project to be the current directory, and begins running your script. I can do that:

$ docker run -it --volume=C:/Users/b/src/blag:/blag --entrypoint=/bin/bash bpollack/blag-builder:latest
$ cd /blag
$ ./build

The first line says to run the Docker image we just built as an interactive container (-i) attached to my terminal (-t), mount the Windows directory C:\Users\b\src\blag at /blag in the container, and then launch bash once the container is ready. In the next two lines, I demonstrate my amazing CS skills by changing to the appropriate directory and running the script, proving that, even in this advanced day and age, I can still play the part of a computer.

This of course failed at the push step due to SSH keys not being set up (more on that in a second!), but otherwise seemed to work fine, so it’s good enough for me. Onwards!

Step four: create the pipeline

The pipeline spec is really simple: you give it a Docker image (which we just made), a condition for when to run (I’ll just have it run whenever there’s a new changeset, which is the default), and the steps to run when that condition is met (in our case, a single step: the build script we just wrote). So that file, in its entirety, is:

image: bpollack/blag-builder:latest

pipelines:
  default:
    - step:
        script:
          - ./build

Granted: being YAML, this looks like the result of an editor with broken indentation rules. But it’s at least pretty self-explanatory: we give it a Docker image (it defaults to pulling from Docker Hub, which is great, because that’s where we pushed ours), we give it one pipeline, called default, and we give that pipeline the sole job of running a one-line script that calls the real build script we wrote together under the previous heading after much struggle. Commit this as a file called bitbucket-pipelines.yml in the root of your repository and push.

Step five: add relevant SSH keys

Congratulations! If you did everything perfectly at this point, Bitbucket will create your pipeline, run the build, and it will fail!…because you don’t allow random people to push stuff to your server over SSH.[6] Fair enough. For reasons I’m honestly not entirely clear on, Bitbucket won’t let you specify SSH keys to use for Pipelines until at least one pipeline exists. But now that we’ve got a pipeline—it’s the one that just failed—you’re good.

In your repository, click on the Settings tab, and then, under the Pipelines heading, there’s an entry called SSH Keys. Still with me? Good. These are SSH keys that will be loaded into your Docker container right before your script runs, and which will be used to push code to your server. I recommend following their advice: generate a key with them, then add the public half of that key to the ~/.ssh/authorized_keys file of the appropriate user account on your server. You’ll also need to tell Bitbucket which servers you’ll be using these keys with, so it can detect if your server gets swapped out and avoid deploying your precious secrets to some nefarious machine.
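
In case it helps, the server-side half of that looks roughly like this (assuming, as in the rsync line earlier, that the deploy account is called publisher, and with the placeholder standing in for whatever public key Bitbucket generated for you):

$ ssh publisher@bitquabit.com
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ echo 'ssh-rsa AAAA...rest of the Bitbucket-generated public key...' >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys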

(Incidentally, I recommend using those Bitbucket keys only with a heavily locked-down account that’s dedicated purely to handling the deploy, but how to do that is a bit outside the scope of this particular post.)

Step six: you were actually done at step five

That’s it; we’re done. You do need to either re-run the failed pipeline manually at this point or push a dummy changeset to confirm it all works, but everything should honestly Just Work™.
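
If you go the dummy-changeset route, Mercurial will even let you create a changeset that touches no files, so you don’t have to invent a fake edit; if I’m remembering the config knob correctly, that’s:

$ hg commit --config ui.allowemptycommit=1 -m "Poke Bitbucket Pipelines"
$ hg push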

And that’s really it; a hair over twenty lines of code got you free continuous deployment. You can get fancier at this point if you’d like (I’m probably going to make sure the pipeline runs only when certain bookmarks are moved, rather than on every push, for example), but those are the fundamentals. Three short files, each ten lines or less.


  1. I briefly had what I guess could qualify as an outage when I accidentally ran a deploy on a machine that didn’t have Pygments installed—which promptly deleted every single code snippet on the site. Oops. ↩︎

  2. Relatively speaking; the feature went into beta in March 2016. ↩︎

  3. It’s not free-free, but you get 50 minutes of build time with the free account, and building my blog with Pipelines takes about 16 to 25 seconds, so I figure I’ll be fine for a while. ↩︎

  4. I won’t stop you from using this image, but I really discourage you from doing so; I make zero guarantees I won’t do horrible things to it in the future. ↩︎

  5. Two, actually—one for Windows and one for Unix—but since the Windows Subsystem for Linux has stabilized, all the Windows one does is call the Unix one. ↩︎

  6. I sincerely hope. ↩︎