Snapd in Docker

Today we advise developers on macOS to build with snapcraft in Docker, but offer no way to test the resulting snap. A proper solution would involve LXD in a lightweight VM, with a macOS-native snapcraft talking to it. However, I first wanted to see whether I could get snapd running inside a Docker for Mac container.

Unfortunately, that didn’t quite work:

$ cat Dockerfile
FROM ubuntu:16.04
ENV container docker
RUN apt-get update
RUN apt-get install -y snapd squashfuse
RUN systemctl enable snapd
CMD [ "/sbin/init" ]

Build it:

$ docker build -t snapd .

Run it:

$ docker run --name=snapdtest \
      -ti --rm \
      --tmpfs /run --tmpfs /run/lock \
      --tmpfs /tmp --cap-add SYS_ADMIN --device=/dev/fuse \
      --security-opt apparmor:unconfined \
      -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d \
      snapd

Shell into it and try to install a snap:

$ docker exec -it snapdtest /bin/bash
$ snap install core
error: cannot perform the following tasks:
- Mount snap "core" (1577) ([start snap-core-1577.mount] failed with exit status 1: Job for snap-core-1577.mount 
failed. See "systemctl status snap-core-1577.mount" and "journalctl -xe" for details.
$ systemctl status snap-core-1577.mount
● snap-core-1577.mount - Mount unit for core
   Loaded: loaded (/etc/systemd/system/snap-core-1577.mount; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-04-06 08:47:53 UTC; 28s ago
    Where: /snap/core/1577
     What: /var/lib/snapd/snaps/core_1577.snap
  Process: 2033 ExecMount=/bin/mount /var/lib/snapd/snaps/core_1577.snap /snap/core/1577 -t fuse.squashfuse -o ro,allow_other (code=exited, status=32)

Apr 06 08:47:53 968ad9239e74 systemd[1]: Mounting Mount unit for core...
Apr 06 08:47:53 968ad9239e74 mount[2033]: mount: wrong fs type, bad option, bad superblock on /var/lib/snapd/snaps/core_1577.snap,
Apr 06 08:47:53 968ad9239e74 mount[2033]:        missing codepage or helper program, or other error
Apr 06 08:47:53 968ad9239e74 mount[2033]:        In some cases useful info is found in syslog - try
Apr 06 08:47:53 968ad9239e74 mount[2033]:        dmesg | tail or so.
Apr 06 08:47:53 968ad9239e74 systemd[1]: snap-core-1577.mount: Mount process exited, code=exited status=32
Apr 06 08:47:53 968ad9239e74 systemd[1]: Failed to mount Mount unit for core.
Apr 06 08:47:53 968ad9239e74 systemd[1]: snap-core-1577.mount: Unit entered failed state.

Skipping the mount call works [ed. clarifying: calling squashfuse instead of mount -t fuse.squashfuse]:

$ squashfuse  -o ro,allow_other /var/lib/snapd/snaps/core_1577.snap /snap/core/1577 
$ ls /snap/core/1577/
bin  boot  dev  etc  home  lib  lib64  media  meta  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var  writable
$ uname -a
Linux 968ad9239e74 4.9.13-moby #1 SMP Sat Mar 25 02:48:44 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Any ideas?


Maybe snapd is attempting to mount with the kernel squashfs driver, but the FUSE-based driver is needed inside the Docker environment? It might help to take a look at the useFuse() logic to see whether all of its conditions are met inside the Docker environment.

Both snapd and Docker require significant access to tune the underlying operating system so that the applications running on top of them can be confined. It’ll definitely take some experimentation and tweaking cycles, probably on both ends, to make that work reliably.


The original post shows the mount command used and it is fuse.squashfuse. I had a chat with @evan and he clarified this to me:

If I use mount it fails, if instead I use squashfuse it works

So maybe we can switch to that syntax for mounting the snap?

That’s being done by systemd itself. We’re simply declaring Type=fuse.squashfuse in the mount unit.

Stéphane Graber will likely be able to help here since he made snapd work inside LXD.

I’ve run into this issue while attempting to create a ‘classic’ snap in Docker.

classic confinement requires the core snap to be installed. Install it by running `snap install core`.

Obviously, since snapd is not running in the Docker container, there is no way to just install ‘core’.

Are there any public Dockerfiles that could help me here?

You can set SNAPCRAFT_SETUP_CORE in your docker run. This hasn’t been documented much so far, because it’s only something one would do on a disposable system. I guess a docker run fits that description.
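For example, a sketch of such a docker run (the mounted working directory and the snapcraft invocation are assumptions here; the snapcore/snapcraft image appears later in this thread). It echoes the command instead of executing it, so it can be inspected without Docker installed:

```shell
#!/bin/sh
# Sketch: build a snap in a disposable snapcraft container, letting
# snapcraft fetch and unpack the core snap itself via SNAPCRAFT_SETUP_CORE
# (no snapd is running inside the container to install it).
# Echoes the command for illustration; drop the echo to actually run it.
set -- docker run --rm -v "$PWD":/build -w /build \
    -e SNAPCRAFT_SETUP_CORE=1 snapcore/snapcraft snapcraft
echo "would run: $*"
```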

@evan any thoughts on how to get this into the existing CI documentation for docker?

works fine. Thank you.

@sergiusens Where should I look for this information once it is documented, or are there other places?

@sergiusens if we always need to do it for Docker, could we check for /.dockerenv inside snapcraft and set SNAPCRAFT_SETUP_CORE accordingly?


Yes, that works, so .dockerenv would always be in CWD?

@sergiusens always in /.dockerenv:

Alternatively (and perhaps a bit more likely to survive future changes), you can check /proc/1/cpuset for “docker”:

 docker run snapcore/snapcraft sh -c "cat /proc/1/cpuset | grep '^/docker' " 
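Both checks can be wrapped in a small helper; the function name below is made up for illustration, and the cpuset strings in the usage lines are sample values rather than live output (on a real system you would pass "$(cat /proc/1/cpuset)"):

```shell
#!/bin/sh
# Sketch: decide from a cpuset string (the contents of /proc/1/cpuset)
# whether we are inside a Docker container. Function name is hypothetical.
in_docker_cpuset() {
  case "$1" in
    /docker*) return 0 ;;   # e.g. /docker/<container-id>
    *)        return 1 ;;
  esac
}

# Sample values for illustration:
in_docker_cpuset "/docker/968ad9239e74" && echo "in docker"
in_docker_cpuset "/" || echo "on the host"
```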

you did not install the fuse package and thus are missing /sbin/mount.fuse … so systemd can’t actually run mount with -t fuse.$fusefs …

smells like squashfuse simply misses a dependency here …
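As I understand the helper lookup: for an fstype of the form fuse.&lt;subtype&gt;, mount delegates to /sbin/mount.fuse (shipped by the fuse package on Ubuntu 16.04), which in turn runs the &lt;subtype&gt; binary. The name splitting can be illustrated with plain parameter expansion (variable names are just for illustration):

```shell
#!/bin/sh
# Sketch of the fuse.<subtype> name splitting behind the helper lookup.
fstype="fuse.squashfuse"
helper="/sbin/mount.${fstype%%.*}"   # strip everything after the first dot
subtype="${fstype#*.}"               # strip up to and including the first dot
echo "$helper"    # /sbin/mount.fuse
echo "$subtype"   # squashfuse
```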

Thanks @ogra. This is working:

$ cat Dockerfile
FROM ubuntu:16.04
ENV container docker
ENV PATH /snap/bin:$PATH
ADD snap /usr/local/bin/snap
RUN apt-get update
RUN apt-get install -y snapd squashfuse fuse
RUN systemctl enable snapd
CMD [ "/sbin/init" ]


$ cat snap
#!/bin/sh -e

while ! kill -0 "$(pidof snapd)" 2>/dev/null; do
  echo "Waiting for snapd to start."
  sleep 1
done

/usr/bin/snap "$@"

$ chmod +x snap

Now, build it:

$ docker build -t snapd . 

Run it:

$ docker run --name=snapd -ti -d \
  --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
  --privileged \
  -v /lib/modules:/lib/modules:ro \
  snapd   # --privileged is [1]; the /lib/modules mount is [2]

And install some snaps:

$ docker exec -it snapd snap install emoj
$ docker exec -it snapd emoj success
✔  ✅  ☑  📚  👌  🎓  💰


  1. --privileged: otherwise systemd complains about /sys not being writable when reloading udev rules (ConditionPathIsReadWrite=/sys was not met)

  2. -v /lib/modules:/lib/modules:ro: otherwise strictly confined snaps fail to execute:
    $ docker exec -it snapd emoj
    cannot perform operation: mount --rbind /lib/modules /tmp/snap.rootfs_NCx2ET//lib/modules: No such file or directory


Can I run graphical programs from such environment?

$ sudo docker exec snappy snap install ohmygiraffe
$ sudo docker exec snappy ohmygiraffe
AL lib: (WW) ReadALConfig: Ignoring XDG config dir: 
AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"
ALSA lib conf.c:3750:(snd_config_update_r) Cannot access file /usr/share/alsa/alsa.conf
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM default
AL lib: (EE) ALCplaybackAlsa_open: Could not open playback device 'default': No such file or directory
Could not open device.
/snap/ohmygiraffe/3/bin/launch_omg: line 59:   574 Segmentation fault      $SNAP/usr/bin/love $SNAP/

you will likely need some extra options to bind mount the bits that allow access to PulseAudio on the outside of the container, like:

-e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native
-v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native

and perhaps also:

--device /dev/snd

(though i’d hope the last one is already there)
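Putting those options together with the run command from earlier in the thread gives something like the sketch below (the XDG_RUNTIME_DIR default is an assumption about a typical desktop session; the script echoes the command instead of executing it):

```shell
#!/bin/sh
# Sketch: the earlier docker run plus PulseAudio and ALSA passthrough.
# Echoes the command for illustration; drop the echo to actually run it.
XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/1000}"
set -- docker run --name=snapd -ti -d \
  --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
  --privileged \
  -v /lib/modules:/lib/modules:ro \
  -e PULSE_SERVER="unix:${XDG_RUNTIME_DIR}/pulse/native" \
  -v "${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native" \
  --device /dev/snd \
  snapd
echo "would run: $*"
```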

@ogra, as I understand it, sound isn’t the critical part here. I’m afraid the more important issue is the lack of access to the host’s graphical system from inside Docker containers. I saw some tricks with X11 socket passthrough to overcome this. I don’t know if anything has changed since then…

all the errors above refer to audio only, do you also get them when running ohmygiraffe without docker ?

yes, maybe the crash is indeed due to sound, but my point is: even if I solve the sound problem, I would still be stuck with the graphical one, wouldn’t I?
(I didn’t get a chance to try without Docker, but I think it would be OK)

How is this progressing? I can’t find anything more recent on this. As someone who develops on a Mac but ships to Linux distributions, I’d love to have a way to get snaps working, either under Docker or, ideally, natively, as that makes the edit/build/test cycle much shorter.

Have you tried Multipass?