This has tripped me up too many times now, I need to make a note for future reference.
With Ubuntu 16.04 and devicemapper as the docker storage driver, occasionally after a system boot the docker service won't start. The logs (`systemctl status docker.service`) show something like:
```
Dec 05 15:18:23 xxx systemd[1]: Failed to start Docker Application Container Engine.
Dec 05 15:18:23 xxx systemd[1]: docker.service: Unit entered failed state.
Dec 05 15:18:23 xxx systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 05 15:18:23 xxx systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Dec 05 15:18:23 xxx systemd[1]: Stopped Docker Application Container Engine.
Dec 05 15:18:23 xxx systemd[1]: docker.service: Start request repeated too quickly.
Dec 05 15:18:23 xxx systemd[1]: Failed to start Docker Application Container Engine.
Dec 05 15:18:39 xxx systemd[1]: Stopped Docker Application Container Engine.
```
You can react to this in one of two ways: spend ages poking around, trying to start the service and find an explanation, or just run `lvscan` and discover that the volume being used by devicemapper is inactive, which is why docker can't start:
```
  ACTIVE   '/dev/ubuntu-vg/root' [51.76 GiB] inherit
  ACTIVE   '/dev/ubuntu-vg/swap_1' [8.00 GiB] inherit
  inactive '/dev/docker/thinpool' [57.00 GiB] inherit
```
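If you want to pick out the inactive volumes rather than eyeball the list, the `lvscan` output can be filtered with a one-liner. As a minimal sketch, the output from above is fed in as a shell variable so the filtering can be shown without root access; on a real system you would pipe `sudo lvscan` in instead:

```shell
# Sample lvscan output (copied from the listing above) so the filter
# can be demonstrated without running lvscan as root.
lvscan_output="  ACTIVE   '/dev/ubuntu-vg/root' [51.76 GiB] inherit
  ACTIVE   '/dev/ubuntu-vg/swap_1' [8.00 GiB] inherit
  inactive '/dev/docker/thinpool' [57.00 GiB] inherit"

# Print the device path of every logical volume whose state is "inactive".
printf '%s\n' "$lvscan_output" | awk '$1 == "inactive" { print $2 }'
# → '/dev/docker/thinpool'
```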
The solution is to run `vgchange -ay <name_of_volume_group>` (or just `vgchange -ay` if you don't need to be specific), which activates the volume and will then allow you to start the docker service. `<name_of_volume_group>` can be retrieved via `vgs`.
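The recovery steps can be sketched as a small helper. This is an illustrative dry-run wrapper, not a real tool: `recover_docker_storage` is a hypothetical name, `docker` is the volume group name taken from the lvscan output above (it may differ on your system), and the `DRY_RUN` guard just prints the commands so the sequence can be shown without root:

```shell
# Hypothetical helper sketching the fix: activate the volume group,
# then start docker. DRY_RUN=1 (the default here) only prints the
# commands; set DRY_RUN=0 and run as root to actually execute them.
recover_docker_storage() {
  vg="${1:?usage: recover_docker_storage <volume_group>}"
  run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi; }
  run vgchange -ay "$vg"        # activate all logical volumes in the VG
  run systemctl start docker    # docker can now find its thinpool
}

recover_docker_storage docker
# → would run: vgchange -ay docker
# → would run: systemctl start docker
```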
Worth noting that the fix here is nothing magical, it’s just the standard way you activate a volume. The crux of the issue is why the volume was deactivated in the first place – it’s seemingly a race condition during the boot process (I’ve seen suggestions of various culprits), but I’ve yet to pin down the exact context.