
Introducing github-ci

Why another CI/CD tool

Continuous Integration and Continuous Delivery are two essential practices in today's software development, and plenty of free tools and commercial services exist, but in almost all cases they require special configuration files with a custom syntax. 1

Many organizations use Github flow (also known as the fork + pull model) and its derivatives (e.g. C4) as a development process to enforce the above practices, leveraging Github's integrations with those tools. 2

Moreover, virtually every software project is now dockerized or developed using docker-compose to set up a local, production-like environment.

The need for a Docker-based, organization-wide, Github-integrated CI/CD server that was simple, reliable and free pushed me to develop github-ci.

Features, or “convention over configuration”

github-ci is an opinionated continuous integration and delivery application (CI/CD) for Github organizations.

It’s a free alternative to Docker Cloud’s Automated builds and Automated tests services that can be deployed in the cloud or on your own servers, allowing you to build projects behind your firewall (if that matters to you).

It works in exactly one way, following these steps on every commit, every pushed tag, and every pull request change:

  1. it builds the Dockerfile included in the repository
  2. if the build is successful, it runs the docker-compose.test.yml included in the repository and evaluates the exit code of the service named sut
  3. if the tests also pass, it sets the Github commit status to success
  4. it pushes the Docker image built in the first step to a registry
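Following that convention, a minimal docker-compose.test.yml could look like this (the service contents below are illustrative; only the sut name and its exit code matter):

```yaml
# The CI runs this file and uses the exit code of the `sut`
# service as the test result (0 = success).
version: "2"
services:
  sut:
    build: .
    command: npm test
```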

When pushing an image to the registry, it uses as the image tag:

  • the git tag
  • or the PR number
  • or the branch name

So 1.2.0, PR-10 and master are all possible Docker image tags.
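The tag-selection order above can be sketched as a small function (the names and event shape here are illustrative, not github-ci's actual internals):

```javascript
// Pick a Docker image tag from the triggering event, in priority order:
// git tag, then pull request number, then branch name.
function imageTag(event) {
  if (event.gitTag) return event.gitTag;             // e.g. "1.2.0"
  if (event.prNumber) return `PR-${event.prNumber}`; // e.g. "PR-10"
  return event.branch;                               // e.g. "master"
}

console.log(imageTag({ gitTag: '1.2.0' }));  // "1.2.0"
console.log(imageTag({ prNumber: 10 }));     // "PR-10"
console.log(imageTag({ branch: 'master' })); // "master"
```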

During the entire build/test process, you can follow the build output through the Github status links on PRs and commits.

How it’s made

github-ci is a small Node application that uses Bull, a Redis-based job queue, to execute builds asynchronously and manage their parallelism. NGINX is used to serve the build logs.
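The core idea — a queue that runs at most N builds at a time — can be sketched in plain Node (this is an illustration of the pattern only; github-ci itself delegates it to Bull, which persists jobs in Redis):

```javascript
// Minimal in-memory job queue with bounded parallelism.
// Jobs are functions returning promises; at most `concurrency`
// of them run at once, the rest wait in FIFO order.
function createQueue(concurrency) {
  const waiting = [];
  let running = 0;

  function next() {
    while (running < concurrency && waiting.length > 0) {
      const { job, resolve, reject } = waiting.shift();
      running += 1;
      Promise.resolve()
        .then(job)
        .then(resolve, reject)
        .then(() => { running -= 1; next(); });
    }
  }

  return {
    add(job) {
      return new Promise((resolve, reject) => {
        waiting.push({ job, resolve, reject });
        next();
      });
    },
  };
}
```

With Bull, the equivalent is roughly `queue.process(concurrency, handler)` plus `queue.add(jobData)`, with Redis providing persistence across restarts.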

It’s written in JavaScript because, as noted by Venkat Subramaniam in his “Rediscovering JavaScript” book:

JavaScript is the English of the programming world—it’s native to some people, it’s arguably the most widely used language, and the language itself has heavily borrowed from other languages, for greater good.

Everybody needs CI/CD, so it’s better that everybody can also contribute to it.

Advantages

  • config files: describe every build and test process using well-known syntaxes (Dockerfile and docker-compose)
  • reproducible: builds and tests are fully reproducible in development environments
  • hackable: free software, with fewer than 400 lines of JavaScript
  • self-hosted: today’s servers are cheap and powerful

Current status

The current status of the project is stable hack.

This means that it started as a one-day hack and has since been used for months, running ~2,000 builds. During this time, some minor fixes have been made.

Limitations

The project’s main limitation is the shared host where the app and the builds run: both are executed on the same Docker machine. This will probably be fixed in future releases using something like lxd/lxc to isolate the Docker instances. 3

Keep in mind that this project aims to be simple, avoiding any useless complexity, so every improvement needs to be evaluated carefully.

Just because you can doesn’t mean you should

But I need to build [choose a technology] packages too

Basically, you can build anything you want by exploiting Dockerfile multi-stage builds together with globally available, customizable build arguments (e.g. npm registry token, passwords, …). In these cases you use only the Dockerfile to build and test your package; to skip the test and image-push stages for some repositories, just add them to the dedicated configuration.
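As an illustration, a multi-stage Dockerfile using a build argument to reach a private npm registry might look like this (the NPM_TOKEN argument name and stage layout are hypothetical, not a github-ci requirement):

```dockerfile
# Build stage: the token is only visible here, used to install
# and test, then removed.
FROM node:8 AS build
ARG NPM_TOKEN
WORKDIR /app
COPY package.json .
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc \
 && npm install \
 && rm -f .npmrc
COPY . .
RUN npm test

# Final stage: only the built app is shipped; the token never
# appears in this image's layers.
FROM node:8-alpine
WORKDIR /app
COPY --from=build /app .
CMD ["node", "index.js"]
```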

Have a look at the examples directory for more details.

History

In the last ten years I have used almost every major CI/CD tool and service available: I started with Jenkins in 2011, used Heroku for some projects in 2013, tested Travis CI in 2014, and have used Codeship and CircleCI since 2015.

That same year, while using docker and docker-compose heavily for local development, I started to explore the available “hosting for containers” services and ended up using Tutum to run my containers online.

Tutum offered a Docker based service with a Heroku like experience: it streamlined the build, the deployment and the orchestration of Docker containers using a good UI and a CLI.

One of the best features available in Tutum was “build & testing for Repositories”: it used the project’s Dockerfile as a build spec and a docker-compose.test.yml file for setting up and running repository’s tests in a customizable environment.

As the Tutum team wrote in the post announcing that feature:

Tutum CI/CD design goals were to provide a flexible, agnostic, automatable and locally replicable solution for build & tests, which mirror the goals for the Docker project.

In other words: solve the build & test problem, well. 4

After three months Docker Inc. acquired Tutum and six months later it was renamed Docker Cloud. After another two years, Docker Inc. killed Docker Cloud.

Now, repository building and testing are still available as paid services among Docker Cloud’s remains, but I’ve experienced too many problems using them (slowness, timeouts, zombie builds). 5