Since I'm hosting my own web services on a small number of servers, I really want to be able to do so with the simplest, most widely supported tooling available. For me, that means running nearly everything with Docker and keeping as much config as possible in Docker Compose.

So far this has been a spectacularly simple way to configure and schedule long-running services. All my services sit in a single repo with their configuration, and running docker-compose is all I need to start, stop, or even update any of them.
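A slice of that repo might look something like this (service and image names here are purely illustrative, not my actual config):

```yaml
# docker-compose.yml -- an illustrative sketch
version: "3"

services:
  blog:
    image: nginx:alpine
    restart: unless-stopped
    volumes:
      - ./blog/site:/usr/share/nginx/html:ro
```

From there, docker-compose up -d starts everything, docker-compose stop shuts it down, and docker-compose pull && docker-compose up -d updates images in place.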

I use Traefik as a reverse proxy, which allows each service to configure its own routes using container labels. Everything required to run, monitor, and back up my services sits in a neat little bundle.
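As a sketch, routing via labels looks something like this (the hostname and the Traefik v2 label syntax are my assumptions, not taken from the original setup):

```yaml
services:
  blog:
    image: nginx:alpine
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.blog.rule=Host(`blog.example.com`)"
      - "traefik.http.services.blog.loadbalancer.server.port=80"
```

Traefik's Docker provider watches the Docker socket and picks these labels up automatically, so each service's routing config lives right next to the service it routes.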

Until it's not...

Let's Encrypt!

Recently, I decided I wanted to switch away from a self-signed certificate for my home network to a certificate from Let's Encrypt. To do that without exposing any of my private network services to the public, I needed to perform a DNS challenge to generate a wildcard (star) cert. This is something that Traefik actually does support as well, but there are complications with my specific configuration.

Some of my services are forwarded publicly, and others are not. This could be handled by explicitly blocking/whitelisting IP addresses at the service level, but that's very tedious and easy to misconfigure. Instead, I chose to create a new entrypoint on a different port for public access. I expose that port via forwarding from my router, while the other, internal-only port stays protected. If I want a service available from the public internet, I just add the public entrypoint to the service and it's done.
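In Traefik v2's static configuration, that split might be sketched like this (the port numbers and entrypoint names are examples, not my real values):

```yaml
# traefik.yml -- static config sketch
entryPoints:
  internal:
    address: ":443"    # only reachable on the LAN; not forwarded
  public:
    address: ":8443"   # forwarded from the router to the internet
```

A service then opts in to public exposure with a single label, e.g. traefik.http.routers.myapp.entrypoints=internal,public.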

Wait... But why scheduled jobs?

Unfortunately, with Traefik I'm unable to use an auto-generated certificate for multiple entrypoints.

To work around this, I figured I could generate my own wildcard cert in another container and then tell Traefik to use that cert for both entrypoints. I only need to make sure that the cert is renewed every couple of months to keep it from lapsing.
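With Traefik v2's file provider, pointing Traefik at an externally generated cert could look roughly like this (the file paths are assumptions):

```yaml
# dynamic config file watched by Traefik's file provider
tls:
  certificates:
    - certFile: /certs/wildcard.crt
      keyFile: /certs/wildcard.key
```

Traefik serves a cert from its store on any entrypoint whose SNI matches, so one renewed wildcard file can cover both the internal and public ports.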

This is something that Docker Compose doesn't really handle, at least out of the box. To do this, I would have to bundle something like cron inside my images and drive it with environment variables (as done with docker-duplicity-cron), or give up and schedule with an external tool like the system cron or Yelp's Tron.

Well shoot. That would make my setup much less tidy! At this point I started looking for something that would, like Traefik, inspect my containers for scheduling instructions. I found nothing.

Enter Dockron

Dockron (Github mirror) (Docker Hub) is a simple way to execute short-running containers on a schedule via Docker labels using minimal configuration.

Deploy Dockron to your server (with access to a Docker socket), then start a new, short-running container with a schedule specified in a label. For example, run docker run -l dockron.schedule="* * * * *" busybox echo hello and watch your logs. Every minute, your container will start up again and exit.
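In Compose terms, Dockron and a scheduled job might be sketched like this (the iamthefij/dockron image name is an assumption; check the project README for the current one):

```yaml
services:
  dockron:
    image: iamthefij/dockron   # assumed Docker Hub image name
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  hello:
    image: busybox
    command: echo hello
    labels:
      - "dockron.schedule=* * * * *"   # standard cron: once a minute
```

Dockron watches the socket for containers carrying a dockron.schedule label and restarts them on that schedule; the hello container itself just runs once and exits.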

It's that simple! Now long-running and batch containers can be deployed in a single, self-bootstrapping bundle to any of my hosts.

Future plans

As with Minitor, I plan to keep Dockron a very simple tool with minimal features. I hope to add (or accept a PR for) things like retries or error handling, but if you want built-in alerting or scheduling of dependent jobs, this is probably not the tool for you.