
Ben Cromwell

# Narrowing Docker IP Ranges for Local Development with Docker Compose

## The problem / background

- You have a limited IP address space to use for Docker.
- Docker’s default pool is a /12.
- For Compose, it allocates /16s from that pool by default.
- This gives each Compose project 65,536 IPs.
- That’s likely overkill for most folks’ needs.
- It also effectively limits you to 16 Compose projects (see the quick arithmetic below).
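
To make those numbers concrete, here’s the arithmetic in shell form (purely illustrative):

```sh
# A /12 holds 2^(32-12) addresses; a /16 holds 2^(32-16).
echo $(( 1 << (32-12) ))                     # 1048576 addresses in the pool
echo $(( 1 << (32-16) ))                     # 65536 addresses per Compose project
echo $(( (1 << (32-12)) / (1 << (32-16)) ))  # 16 projects before the pool is exhausted
```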

The other problem we’d like to solve is to provide some space outside of the allocation pool for Compose projects that want to define their own IP ranges. Like you’d do for DHCP, we want a pool and some reserved space.

These static ranges can clash with each other, and clashes are more likely in a wider address space. They can still occur despite what we do here; that’s a problem solved with some form of centralised documentation. You may want to do something similar for allocating forwarded ports, so that they don’t clash and you can provide one set of ports for everyone in your organisation to use.

## Our goal: configure a more suitable address space for Docker

Our goal is to configure the Docker daemon to use a more suitable address range for our needs. We’ll also leave some blank space that can’t be allocated automatically, so that those aforementioned projects have some choices that won’t interfere with the daemon’s choices.

## Cleanup

I’m not going to assume you’re starting from a clean slate, so we’ll need to start by clearing out any networks currently in use. A simple restart of the Docker daemon won’t reassign them to the new address space from our updated configuration, so we need to do it manually.

We also don’t want to remove our volumes, with their hard-earned test data.

This can be achieved thusly:

```sh
docker compose down --remove-orphans --volumes=false
```

Rinse and repeat for each project.
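
If you have a lot of projects, a loop saves some typing. A minimal sketch, assuming each Compose project lives in its own directory under a hypothetical ~/projects (adjust the path to your layout):

```sh
# Bring down every Compose project under ~/projects, keeping volumes.
for dir in ~/projects/*/; do
  (cd "$dir" && docker compose down --remove-orphans)
done
```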

If you have networks created outside of Docker Compose, which you can check with `docker network ls`, you may need to remove them manually.

Check their address space with `docker network inspect net_name`.

Delete them with `docker network rm net_name`.
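
If you only want the subnet rather than the full inspect output, a `--format` template narrows it down:

```sh
# Print just the subnet(s) of a network.
docker network inspect net_name --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
```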

Following that brief aside, let us continue to configure our daemon.

## Config

The daemon’s configuration file, `/etc/docker/daemon.json`, doesn’t exist by default; you may also need to create its parent directory.

I’ve gone with the following configuration but you can of course use whatever you need.

I’m aiming for roughly half the available space to be open for allocation by the daemon, with the other half reserved. There’s so much headroom for what I need that I’m not worried about making it any more complex than this, so I’ve gone with the three CIDRs you see before you.

`/etc/docker/daemon.json`:

```json
{
  "default-address-pools": [
    {
      "base": "172.17.0.0/16",
      "size": 24
    },
    {
      "base": "172.18.0.0/15",
      "size": 24
    },
    {
      "base": "172.20.0.0/14",
      "size": 24
    }
  ]
}
```

After adding or modifying this file, restart your Docker daemon with `sudo systemctl restart docker`, or the equivalent for your system.
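
To sanity-check the new pools, create a throwaway network and see which subnet it gets:

```sh
docker network create pool-test
docker network inspect pool-test --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
# Expect a /24 from the new pools; the exact subnet depends on what’s already allocated.
docker network rm pool-test
```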


NixOS interlude!

If you’re on NixOS, you would do this in the configuration for Docker:

```nix
  virtualisation.docker = {
    enable = true;
    daemon = {
      settings = {
        default-address-pools = [
          {
            base = "172.17.0.0/16";
            size = 24;
          }
          {
            base = "172.18.0.0/15";
            size = 24;
          }
          {
            base = "172.20.0.0/14";
            size = 24;
          }
        ];
      };
    };
  };
```

To check the result, see where in the Nix store your config file is:

```sh
sudo systemctl status docker
```

You’ll see `--config-file=/nix/store/{id}-daemon.json` by its CGroup. Take a look at the file if you want to check the results.

NixOS interlude ends!


We’ve now carved up the available space into an area for automatic allocations and an area for static allocations.

We’re also telling Docker to allocate /24s instead of its default /16s. This is what gives us the headroom we need: we’re trading IPs per Compose project for the ability to have more projects in total. /24s are nice and simple, 256 IPs per project is likely plenty, and we get 1792 of them to play with. This is where you might opt for a different size, to balance the trade-off to what you need it to do.
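
That 1792 figure is just the three pools’ /24 counts added together:

```sh
# /24s per pool: a /16 yields 2^8, a /15 yields 2^9, a /14 yields 2^10.
echo $(( 2**(24-16) + 2**(24-15) + 2**(24-14) ))  # 256 + 512 + 1024 = 1792
```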

We can therefore start letting our projects define static ranges from 172.24.0.0 onwards.

This vastly increases the total number of ranges we can use over the default behaviour and gives us predictability over what we can use for projects that need to define their own ranges.

## A quick note on the bridge IP

As part of our choice of ranges, we are also preserving the prior bridge IP.

As we’re on Linux and Docker doesn’t automatically inject `host.docker.internal`, we may have some stuff that relies on 172.17.0.1 to reach the host machine. For instance, when running Caddy inside Docker and reverse proxying various services back to the host IP. When I first tested this with a single wider range, it ended up making that IP inaccessible.

The default bridge IP of 172.17.0.1 appears to be preserved by ensuring the first IP out of the pool would be that address. If this isn’t the case for you, try setting the bridge IP manually by adding the following to `daemon.json`:

  "bip": "172.17.0.1/16",

I’m a bit unsure about what’s been going on with the bridge IP. My machines have all used 172.17.0.1 as their bridge IP, but messing around with it now, commenting and uncommenting the pools in the Nix config and rebuilding, I can see it using 172.16.0.1. In any case, I’ve always thought 172.17.0.1 was Docker’s internal standard default value and that it was predictable and usable (and safe). Maybe that was a wrong assumption. At least there is the option to set the bip and ensure it’s deterministic.
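
To see which address your bridge actually ended up with, ask the kernel or Docker itself:

```sh
# The docker0 interface on the host:
ip -4 addr show docker0
# Or via Docker’s view of the default bridge network:
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
```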

## Onwards, to testing

To test this, we’ll create a quick Docker Compose project.

```sh
mkdir docker-network-test
cd docker-network-test
```

Create a `compose.yaml` file:

```yaml
---
networks:
  ns_test:
    name: ns_test
    ipam:
      driver: default
      config:
        - subnet: 172.31.0.0/24
          gateway: 172.31.0.1

services:
  hello:
    image: alpine
    command: 'ip a'
    networks:
      - ns_test
```

Now we can run `docker compose up` and immediately see the IP output from our hello service.

```text
Attaching to hello-1
hello-1  | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
hello-1  |     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
hello-1  |     inet 127.0.0.1/8 scope host lo
hello-1  |        valid_lft forever preferred_lft forever
hello-1  |     inet6 ::1/128 scope host
hello-1  |        valid_lft forever preferred_lft forever
hello-1  | 2: eth0@if226: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
hello-1  |     link/ether ea:68:29:03:11:3b brd ff:ff:ff:ff:ff:ff
hello-1  |     inet 172.31.0.2/24 brd 172.31.0.255 scope global eth0
hello-1  |        valid_lft forever preferred_lft forever
hello-1 exited with code 0
```

All is well and it’s using 172.31.0.2 with a /24.

This works despite the fact that 172.31.0.0/24 is outside the configured pools, which cover 172.17.0.0 through 172.23.255.255. That is exactly what we want.
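
You can double-check that the network got exactly the subnet we asked for:

```sh
docker network inspect ns_test --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
# 172.31.0.0/24 gw 172.31.0.1
```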

Note that if you are still getting errors after this along the lines of:

```text
Network my_network Error
failed to create network my_network: Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
```

Then you still have a network around somewhere using an address in the former space. Go back to `docker network ls`, use `docker compose ls` to find other Compose projects running on your system, and get inspecting.
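
A loop over every network makes the offender easy to spot:

```sh
# Print each network’s name alongside its subnet(s).
for net in $(docker network ls -q); do
  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
done
```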