Dockerising this website

I dockerised this website in an attempt to learn more about docker, and in the process created a reason to dockerise.

Why

My initial inspiration for the dockerisation was a desire to run my own RSS aggregator (I was sick of the mandatory old reader feed that they add for using their service. Tiny Tiny RSS is what I ended up with). I decided that I didn't want to spend the time adding the nginx configuration, PHP configuration etc., and that instead I would spend significantly more time dockerising everything in the hope that in the future it would be super easy to spin up self-hosted apps and personal experiments.

Beginnings

I started by setting up nginx-proxy and letsencrypt-nginx-proxy-companion. Combined with a correctly configured domain name, these watch the (docker) network they are on, automagically reverse proxy a subdomain to any containers exposing the right ports (and with the right environment variables), and acquire Let's Encrypt SSL certificates for that subdomain!

For example, if I start an nginx container with port 80 exposed and the environment variables VIRTUAL_HOST=foo.tyers.io and LETSENCRYPT_HOST=foo.tyers.io, then it becomes immediately available on the internet at https://foo.tyers.io.
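
As a sketch (assuming the proxy containers are already running and attached to a docker network called web-network, as in the compose file later in this post), that looks something like:

docker run -d \
  --name foo \
  --network web-network \
  -e VIRTUAL_HOST=foo.tyers.io \
  -e LETSENCRYPT_HOST=foo.tyers.io \
  nginx

No -p flag is needed: the nginx image already exposes port 80, which is all nginx-proxy needs to see.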

I ran immediately into an issue though. My server was a Scaleway instance with an ARM processor. One thing docker containers do not seem to deal particularly well with is different processor architectures, and I could not get nginx-proxy running on ARM. This, however, would provide the perfect test bed for the dockerisation. I started a new x86-64 server and immediately got the nginx-proxy and letsencrypt containers running.

Hosting the site

With the automatic reverse proxy and SSL taken care of, I just had to actually run a site. This site is run using a PHP CMS, so it seemed to make sense that I would need an nginx container to handle routing and a PHP container to handle the PHP.

Seemed simple enough. I started an nginx container and mounted some volumes: my site's folder to /var/www/tyers.io within the container, and the nginx config file to /etc/nginx/conf.d/tyers.io.conf. The nginx image exposes port 80 by default, so I just had to add the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables to get nginx-proxy to pick it up.
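
As a docker run sketch (the paths are placeholders; the compose file later in the post shows the real layout):

docker run -d \
  --name tyers \
  --network web-network \
  -v /path/to/site:/var/www/tyers.io \
  -v /path/to/site/config/nginx/tyers.io.conf:/etc/nginx/conf.d/tyers.io.conf \
  -e VIRTUAL_HOST=tyers.io \
  -e LETSENCRYPT_HOST=tyers.io \
  nginx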

Similarly, I started the php-fpm container, again with the site folder mounted at /var/www/tyers.io. I think it's important that the site is mounted at the same path within each container, as the PHP server gets given a filepath by nginx and does not know that it is inside a separate container (or inside a container at all).
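
And the php-fpm side, a sketch assuming the official php:fpm image:

docker run -d \
  --name php-fpm \
  --network web-network \
  -v /path/to/site:/var/www/tyers.io \
  php:fpm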

For both of these containers I made sure that they were on the same docker network as each other and as the reverse proxy containers. Not only does this enable the proxy containers to pick up the containers that need reverse proxying, it also allows them all to reach each other's exposed ports at container-name:port. This is super convenient: my php-fpm container is called php-fpm, so in my site's nginx config I just need to point the fastcgi_pass parameter at php-fpm:9000. The other advantage of the network is that I am then not exposing the containers' ports on my host.
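
A sketch of the relevant part of the site's nginx config (not my exact file, but the shape of it; server_name and root match the mounts described above):

server {
    listen 80;
    server_name tyers.io;
    root /var/www/tyers.io;
    index index.php;

    location ~ \.php$ {
        # "php-fpm" is the container name, resolved over the shared docker network
        fastcgi_pass php-fpm:9000;
        include fastcgi_params;
        # this path is handed to php-fpm as-is, which is why the site
        # must be mounted at the same path in both containers
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}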

Unfortunately this did not work immediately. I was getting PHP errors related to inaccessible files. A bit of googling revealed that the file permissions on my mounted volumes were causing the issue.

File system permissions in Docker

Docker containers run on the same kernel as the host, but the OS and userland that they run can be completely different (as long as they are all capable of running on the host kernel). This is why Linux distributions are happy running inside each other as containers with minimal fuss, whereas running Linux docker containers on OSX and Windows involves secretly running a Linux VM (though Windows has Hyper-V now).

This is important because file system permissions are handled by the kernel. A user's UID and GID give them a particular level of access to files. Interestingly, usernames are handled in userland, so whilst UIDs/GIDs are the same inside a docker container, usernames are not necessarily! For a docker container accessing mounted volumes, the UID of the program in the container needs to have access to those files on the host.
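
A quick way to see the username/UID split (assuming a running container named php-fpm; output will vary by image and host):

# inside the container: what UID sits behind the www-data username?
docker exec php-fpm id www-data

# on the host: which username (if any) owns that UID?
getent passwd 33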

It lives!

In my case this was pretty simple, as both php-fpm and nginx run by default as the www-data user in their containers. As mentioned, the username is not what matters, but it means they were both running as UID 33 in their respective containers, and on my host the UID of the www-data user was also 33. So a quick chown -R www-data:www-data later and my site was up and running!
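
On the host (the path being wherever the site is mounted from):

sudo chown -R www-data:www-data /path/to/site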

Adding other sites and services

Once you've jumped the few hurdles for the first containers it gets a lot easier to add new things. I wanted to host another PHP CMS site on the same server and, as mentioned, to add Tiny Tiny RSS (also a PHP-based thing).

For each one it's as easy as adding a new nginx container with the site and nginx config mounted as volumes, as sketched below. The only minor complexity is then having to restart the php-fpm container, as you need to mount the new site in there too.
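
For example, a hypothetical Tiny Tiny RSS container (the subdomain and paths are placeholders) might look like:

docker run -d \
  --name ttrss \
  --network web-network \
  -v /path/to/ttrss:/var/www/ttrss \
  -v /path/to/ttrss.conf:/etc/nginx/conf.d/ttrss.conf \
  -e VIRTUAL_HOST=rss.tyers.io \
  -e LETSENCRYPT_HOST=rss.tyers.io \
  nginx

# remember to also mount /path/to/ttrss into the php-fpm container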

Docker compose

Docker compose allows you to write down these docker machinations in a yml config file. This makes them easy to start and stop in a repeatable manner. It also adds some neat things like managing dependencies between containers: if my nginx website containers rely on the php-fpm container being started, then docker-compose will start them in the correct order.

Here's my basic example for starting this site:

version: '3'
services:
  php-fpm:
    build: /path/to/dockerfile/ # just installs extra deps for ttrss
    container_name: php-fpm
    image: php-fpm
    volumes:
      - /path/to/site:/var/www/tyers.io
    networks:
      - web-network
    restart: always

  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/letsencrypt/certs/:/etc/nginx/certs:ro
      - ng_vhost:/etc/nginx/vhost.d
      - ng_html:/usr/share/nginx/html
      - ng_dhparam:/etc/nginx/dhparam
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - web-network
    depends_on:
      - php-fpm
    restart: always

  nginx-letsencrypt:
    container_name: nginx-letsencrypt
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - ng_vhost:/etc/nginx/vhost.d
      - ng_html:/usr/share/nginx/html
      - /etc/letsencrypt/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - web-network
    depends_on:
      - nginx-proxy
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    restart: always

  tyers:
    container_name: tyers
    image: nginx:1.15.3
    volumes:
      - /path/to/site/config/nginx/tyers.io.conf:/etc/nginx/conf.d/tyers.io.conf
      - /path/to/site:/var/www/tyers.io
    environment:
      - VIRTUAL_HOST=rhystyers.com,www.rhystyers.com,tyers.io,www.tyers.io
      - LETSENCRYPT_HOST=rhystyers.com,www.rhystyers.com,tyers.io,www.tyers.io
    networks:
      - web-network
    depends_on:
      - nginx-proxy
      - nginx-letsencrypt
      - php-fpm
    restart: always

networks:
  web-network:
    driver: bridge

volumes:
  ng_html:
  ng_vhost:
  ng_dhparam:

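With that saved as docker-compose.yml, the whole stack starts and stops with one command:

docker-compose up -d   # start everything in dependency order, building php-fpm if needed
docker-compose down    # stop and remove the containers (named volumes survive)
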
Awesome, we have a working server with services that can easily be started, stopped, and moved to new servers! It's not perfect though: having to edit the php-fpm service each time I add a new PHP project is not ideal, for example.

docker-composer

Docker-composer? A spelling mistake? No! A script I wrote to make managing interdependent docker services a lot easier. Read about it here.