Docker Orchestration from the Couch

When I began self-hosting, I decided very early on that I was going to do it with Docker. Docker gives me a good way to manage deployments without worrying about how different service dependencies might interact, or how installation might vary depending on the distro I chose for hosting. I had also been using Docker at work and had become quite familiar with it. Another major plus for someone indecisive, who may wish to move to a new VPS or server at some point, is that images and containers are portable.

First attempt

Since Docker has become so popular, finding and setting up the services themselves was fairly simple: there is a lot of documentation and tooling to help with getting started and with advanced configuration. At first I was hosting on a VPS, where I started by writing a few compose files and some simple config files to run services. I began with Nextcloud for contacts and calendar, Wekan for projects, TTRSS for news, and Gogs for Git, managing each individually in a very ad hoc manner. To make things a little easier, I wrote a simple shell script, which I called compose-all.sh, to execute compose commands in all the service subdirectories.
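
Each service directory held a compose file. As a minimal sketch (the tag, ports, and volume here are illustrative, not my actual config), the Gogs one looked something like:

version: '2'
services:
  gogs:
    image: gogs/gogs
    restart: unless-stopped
    ports:
      # Web UI and SSH for Git
      - '3000:3000'
      - '3022:22'
    volumes:
      - ./data:/data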

Later, I decided to begin hosting more Docker services from home, on my Raspberry Pi and my NAS. To make managing services across three hosts easier, I moved all my Docker configs into a single repo with a subdirectory for each host. Then, to make restarting and updating easier, I updated my script to detect the current host and use the right configuration. I could now pull this configuration repo on any of my hosts and just run compose-all.sh to bring every service up to date.
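
The repo layout ended up looking roughly like this (the host directory names here are illustrative; they just need to match each machine's $HOSTNAME):

srv-configs/
├── compose-all.sh
├── vps/
│   ├── enabled
│   ├── nextcloud/
│   │   └── docker-compose.yml
│   └── gogs/
├── pi/
└── nas/

And the script itself: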

#!/bin/bash
# Run a docker-compose subcommand against every enabled service on this host
config_dir=$(dirname "$0")
work_dir="$config_dir/$HOSTNAME"
if [ ! -d "$work_dir" ]; then
    work_dir="$config_dir/configs"
fi
if [ ! -d "$work_dir" ]; then
    echo "Unknown hostname..."
    exit 1
fi

# source services (defines the `services` array for this host)
source "$work_dir/enabled"

for service in "${services[@]}"; do
    (cd "$work_dir/$service" && docker-compose "$@")
done
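
The enabled file that the script sources is just a shell fragment defining the services array, so each host opts in to its own set of services. Usage then looks like:

# <host>/enabled -- which services run on this host
services=(nextcloud wekan ttrss gogs)

# from the repo root, on any host
./compose-all.sh pull
./compose-all.sh up -d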

This was certainly handy, but it would be even handier to be able to sit at a single computer and push my configuration to all my hosts in one go. To do that, I decided to take a look at deploying with Ansible.

Ansible

I went through a few iterations of my Ansible setup. The initial one was to have a playbook that would clone my config repo on every host and then execute my compose-all.sh script there to pull, build, and up each service.
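
That first pass amounted to something like this sketch (not my exact tasks; service_root is a variable pointing at wherever services live on each host):

  tasks:
    - name: Clone srv-configs on each host
      git:
        repo: ssh://git@gogs.app.thefij:3022/IamTheFij/srv-configs.git
        dest: '{{ service_root }}'
        accept_hostkey: yes

    - name: Pull, build, and up all services
      command: '{{ service_root }}/compose-all.sh {{ item }}'
      with_items:
        - 'pull'
        - 'build --pull'
        - 'up -d'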

Generally, I try to open source or publish as much of my code and configuration as possible. Unfortunately, much of the service configuration includes things like secrets. This led me to look into moving some of the secrets into Ansible Vault. Overall, this would be pretty simple if I were all in on Ansible; however, I wanted to be sure that whatever I set up would still work if I manually pulled and deployed services in the future. That meant the configuration couldn't be fully encrypted; I had to encrypt only the secrets. I also had to find a way to insert the secrets into the configuration safely, without causing issues should I decide to deploy a service without Ansible. To do this, I moved all my secrets into environment variables. This lets Ansible pass env variables to the host, but would let me fall back to .env files, should I give up on Ansible.
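
Concretely, the compose file references variables and docker-compose substitutes them from the environment, or from a .env file sitting next to it. A sketch using Wekan (the credentials are obviously made up):

# docker-compose.yml -- no secrets checked in
version: '2'
services:
  wekan:
    image: wekanteam/wekan
    environment:
      - MONGO_URL=${MONGO_URL}

# .env -- kept out of the public repo
MONGO_URL=mongodb://wekan:hunter2@wekandb:27017/wekan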

So now, how to get the environment variables to the host when running the services? Initially I passed some in through the role being executed on the server to up the services, but that meant all my variables were mashed together in one command rather than split by service. To remedy this, I moved all my environment variables into .env files in a private configs repo with the same folder structure as the non-secret configuration. I then encrypted each of the .env files with Vault and let Ansible decrypt and deploy them to the hosts. Just to be safe, I wrote some ansible-pre-commit hooks to ensure I didn't forget to encrypt any of these sensitive files.
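
Encrypting a file in place is a one-liner, and ansible-vault can view or edit it later without leaving plaintext on disk (the path here is illustrative):

ansible-vault encrypt nas/nextcloud/.env
ansible-vault view nas/nextcloud/.env
ansible-vault edit nas/nextcloud/.env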

Now my playbook looks something like:

  tasks:
    - name: Clone srv-configs
      git:
        repo: ssh://git@gogs.app.thefij:3022/IamTheFij/srv-configs.git
        dest: './files/srv-configs'
        recursive: yes
        accept_hostkey: yes
      delegate_to: localhost
      run_once: true

    - name: Archive configs
      archive:
        path: './files/srv-configs/{{ ansible_hostname }}/'
        dest: './files/srv-configs-{{ ansible_hostname }}.tar.gz'
      delegate_to: localhost

    - name: Copy srv-configs to remote host
      unarchive:
        src: './files/srv-configs-{{ ansible_hostname }}.tar.gz'
        dest: '{{ service_root }}/configs'

    - name: Copy compose all
      copy:
        src: './files/srv-configs/compose-all.sh'
        dest: '{{ service_root }}/'
        mode: '+x'

    - name: Clone srv-configs-private
      git:
        repo: ssh://git@gogs.app.thefij:3022/IamTheFij/srv-configs-private.git
        dest: './files/srv-configs-private'
        recursive: yes
        accept_hostkey: yes
      delegate_to: localhost
      run_once: true

    - name: Copy srv-configs-private to remote host
      copy:
        src: './files/srv-configs-private/{{ ansible_hostname }}/'
        dest: '{{ service_root }}/configs'
        decrypt: yes

    - name: Compose All (pull, build, up)
      command: "{{ service_root }}/compose-all.sh {{ item }}"
      environment:
        # Docker path for QNAP
        PATH: '/share/CACHEDEV1_DATA/.qpkg/container-station/bin:{{ ansible_env.PATH }}'
      args:
        chdir: '{{ service_root }}'
      with_items:
        - "pull"
        - "build --pull"
        - "up -d"

This minimizes the dependencies required on each host (key for my QNAP NAS) and still lets me manage each server either directly or via my Ansible playbook. There are definitely some things lacking. For instance, handling deletion of files is not trivial, and a single update forces deployment of every service on the host.

There's probably a much simpler way to do this if I'm truly willing to commit to a single orchestration platform, but I've got issues.