CI/CD

I forgot what was interesting about the new server, so I’ll be missing some things.

First of all, I found a VPS with 512 MB of RAM, a 10 GB SSD and one vCore for 1€ a month, which is probably better than having the rpi at home. Some advantages:

  • Everything is much faster. I suppose the virtual CPU is way better than the little ARM processor in an rpi 1 B+, and the SSD is faster than an SD card
  • No downtime due to internet cuts (in theory)
  • IPv6 and rDNS
  • I can use fail2ban, as I do not have a very intelligent router that thinks the best thing to do is to mask all internet traffic as if it came from itself
  • More secure for my home network

But that is not the great improvement. The real improvement is behind the scenes, on the deployment side. Before, the git repository was pushed from my local machine to the server, where a remote with a post-update hook (roughly like the sketch after this list) built and copied the website. This had some disadvantages:

  • Building was done on the server, which was not the most powerful machine and shouldn’t need to have all the dependencies installed. After all, I chose jekyll because it generates static websites; if I wanted the server to do the work I could have used wordpress or something similar
  • The only way to know which version was deployed was to clone the repo from the server, which required SSH access to the deployment user and had no easy way to add more users. To have a public mirror of the deployment branch, it would have to be pushed to two remotes (the server and the public one) every time.
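
For reference, the old hook was conceptually something like this. This is a minimal sketch, not the original script; the paths, branch name and build command are assumptions:

    #!/bin/sh
    # post-update hook in the bare repo on the server (hypothetical paths)
    # Check out the pushed tree, rebuild the site and publish it.
    GIT_WORK_TREE=/srv/blog-src git checkout -f master
    cd /srv/blog-src || exit 1
    bundle exec jekyll build --destination /var/www/blog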

The solution to all these problems: GitLab CI/CD runners. I say GitLab because that’s what I’m using, but any service that has runners should work. The deploy process is now the following:

  1. Modify and test the website locally
  2. (optional but highly recommended) Push the new master branch to the repo
  3. Either locally or from the web interface, merge master into deploy (sketched below)
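
In shell terms, the whole process is roughly this (branch names as in the list above; the remote name origin is an assumption):

    # step 2: publish the changes
    git push origin master
    # step 3: fast-forward deploy to master; this push triggers the pipeline
    git checkout deploy
    git merge master
    git push origin deploy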

And that’s it, the website will automatically be updated. How? Internally, whenever something is pushed to deploy, the following happens (a configuration sketch follows the list):

  1. GitLab spins up the ruby docker image and builds the website; the result is exposed as an artifact. The vendor folder is cached to avoid having to install all the ruby dependencies each time.
  2. GitLab spins up another image that has ssh and rsync installed. In it:
    1. It adds the known hosts and an SSH key from GitLab CI/CD variables, which are set from the web interface and not stored anywhere in the repo
    2. It does some magic so that the SSH key works
    3. It uploads the website using rsync
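
My actual file isn’t reproduced here, but a .gitlab-ci.yml implementing that pipeline would look roughly like this. The image tags, variable names (SSH_PRIVATE_KEY, SSH_KNOWN_HOSTS, DEPLOY_USER, DEPLOY_HOST) and paths are assumptions; the ssh-agent dance is the approach from the GitLab documentation:

    stages:
      - build
      - deploy

    build:
      stage: build
      image: ruby
      cache:
        paths:
          - vendor/              # keep installed gems between runs
      script:
        - bundle install --path vendor
        - bundle exec jekyll build -d public
      artifacts:
        paths:
          - public               # expose the built site to the deploy job
      only:
        - deploy

    deploy:
      stage: deploy
      image: alpine              # any small image works once ssh and rsync are added
      before_script:
        - apk add --no-cache openssh-client rsync
        # the "magic": load the key from a CI/CD variable into an agent
        - eval $(ssh-agent -s)
        - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
        - mkdir -p ~/.ssh
        - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
      script:
        - rsync -rtvz --delete public/ "$DEPLOY_USER@$DEPLOY_HOST:/var/www/site/"
      only:
        - deploy

The only: deploy entries are what restrict the pipeline to pushes against the deploy branch, which is why merging master into deploy is all it takes.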

You have to take a bit of care to make sure that the user you deploy as can only access the files it needs, and that it sets the correct group and permissions on them (thanks, setgid), but this system will serve me much better if I end up using servers for collaborative projects.
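
The kind of setup I mean is a setgid directory shared with the web server’s group, something along these lines; the user, group and path names here are made up for illustration:

    # one-time setup on the server (hypothetical names)
    addgroup www-blog
    adduser deployer www-blog
    chown -R deployer:www-blog /var/www/site
    chmod -R g+rwX /var/www/site
    # setgid bit on directories: files created by rsync inherit the www-blog group
    find /var/www/site -type d -exec chmod g+s {} +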