Making Docker Testing Easier (For Me)

So…I’ve dockerized most of everything in my home lab at this point. It makes things nice in a lot of ways:

  1. Easy IaC configuration
  2. Separation of applications and their data, making backup and recovery much, much easier
  3. Separation of application requirements and dependencies from the base OS I want to run (Rocky Linux)

Basically, I now have all the advantages that the containerization of applications has brought to modern-day IT.¹

However, there is one major thing that makes testing new containerized services difficult: the IP:port “fun” that comes with Docker. To test something, I either have to use my desktop host and hit the service on some non-standard mapped port (port 8800, anyone?), or I have to spin up a disposable Docker VM from a template image so that I can test a web service on port 80, where it belongs.
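The port-mapping approach looks like this; 8800 is just an arbitrary free port on the host:

$ docker run -d --name httpd_test -p 8800:80 httpd

…and then I have to remember to browse to port 8800 instead of port 80.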

I know that a lot of what I want to test will run as its own service (or group of services) rather than as a bunch of microservices (another reason I’m not thinking about k8s yet), so I want an easy way to test services running under Docker close to the way I’d interact with them “in production” (as much as anything in a home lab ever is).

To do that, I found a great networking plugin for Docker called docker-net-dhcp. This plugin allows Docker containers to sit directly on a bridged network and get their IP addresses from the DHCP server already on that network.

Want to run the httpd container? Set it up and run this:

$ docker run -d --name httpd_test --network lan httpd

(In my case, I named the bridged network lan; check the docs for how to set it up.)
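For reference, setting it up boils down to installing the plugin and creating a network on top of an existing host bridge. This is a sketch from memory, with br0 standing in for whatever bridge your LAN NIC is attached to; the plugin’s README has the authoritative invocation and tag:

$ docker plugin install ghcr.io/devplayer0/docker-net-dhcp:release-linux-amd64
$ docker network create -d ghcr.io/devplayer0/docker-net-dhcp:release-linux-amd64 \
    --ipam-driver null -o bridge=br0 lan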

The container will get an IP on the lan (e.g. 192.168.1.155), and going to http://192.168.1.155 gets us the It works! we expect:

$ lynx -dump http://192.168.1.155
                                   It works!

However, it took a ton of effort to get this to finally happen. I installed the plugin and containers just would not get an IP. I checked traffic with tcpdump, I checked the logs of the plugin, I checked the logs on the DHCP server. Nada. Nothing.
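If you’re debugging the same thing, watching the bridge for DHCP traffic is the quickest check (again, br0 is whatever your bridge is called):

$ tcpdump -ni br0 port 67 or port 68

A working exchange shows the usual DISCOVER/OFFER/REQUEST/ACK dance; I saw nothing at all.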

The problem I ran into is documented excellently here.

(Yes, that page talks about things in the context of libvirt and VMs, but the same thing happens with Docker and bridges.)

The short version: the net.bridge.bridge-nf-call-* sysctls don’t exist until the bridge netfilter module is loaded, and that doesn’t happen until Docker creates its bridges, long after boot-time sysctl processing. When the module does load, it defaults to pushing bridged traffic through iptables, where Docker’s rules eat the DHCP exchange. My solution was a bit more baroque than what they suggest, but it works nonetheless: I created a systemd service that runs after Docker starts up and resets the sysctl settings on the machine to where they need to be.

[Unit]
Description=Late reset of sysctl variables
Wants=multi-user.target
# Run after Docker so the bridge module (and its sysctls) actually exist
After=docker.service

[Service]
# One-shot: re-apply everything under /etc/sysctl.d and friends
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/sysctl --system
TimeoutSec=90s

[Install]
WantedBy=multi-user.target

I call it sysctl-fix.service, and after sticking it into /usr/local/lib/systemd/system/ I can get systemd to locate and enable it as follows:

$ systemctl daemon-reload

$ systemctl enable sysctl-fix
Created symlink /etc/systemd/system/multi-user.target.wants/sysctl-fix.service → /usr/local/lib/systemd/system/sysctl-fix.service.
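Enabling only wires the unit up for the next boot; to apply it immediately, start it once by hand:

$ systemctl start sysctl-fix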

Now things are good to go.

I rebooted the box to make sure the fix survives a reboot, and here’s the proof:

$ sysctl -a | grep -i bridge | grep -i table

net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0

Boom.
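For completeness, those values come from a drop-in under /etc/sysctl.d/; mine boils down to something like this (the filename is arbitrary):

# /etc/sysctl.d/99-bridge.conf
# Don't shove bridged traffic through ip(6)tables; arptables stays at its default of 1
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0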

There is probably a much, much better way to do this (perhaps preloading the bridge module into the initramfs so that the sysctl settings “stick” properly?). However, if there is, I haven’t seen it yet…
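If you want to experiment, the standard mechanism for this sort of thing (per sysctl.d(5)) is a modules-load.d entry, which loads the module at boot before systemd applies the sysctl settings. Untested by me, so consider it a sketch:

# /etc/modules-load.d/br_netfilter.conf
br_netfilter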

  1. At some point, I may go ahead and use Kubernetes, but I just can’t get past the fact that it still feels way too much like overkill for what I need, even something reasonably sized like k3s.