It could be made to, and if it did, then once we were sure that all the processes were killed, snapd could unload the profiles. I believe this was always the intent, but it doesn't do that yet.
A workaround for this bug on Ubuntu Core 16 is to change the overlay2 storage-driver back to aufs:
In short:
$ sudo sed -i 's/overlay2/aufs/' /var/snap/docker/current/config/daemon.json
$ sudo snap restart docker
done
See https://git.launchpad.net/~docker/+git/snap/commit/?h=bugfix/change-aufs-overlay2
In detail:
With storage-driver overlay2
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:92695bc579f31df7a63da6922075d0666e565ceccad16b59c3374d2cf4e8e50e
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:109: jailing process inside rootfs caused \\\"permission denied\\\"\"": unknown.
$ sudo su
root$ journalctl --no-pager -e -k | grep apparmor | grep -v kmod | grep snap.docker.dockerd
Apr 25 21:50:47 localhost.localdomain kernel: audit: type=1400 audit(1556229047.116:15): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.docker.dockerd" pid=1374 comm="apparmor_parser"
Apr 25 21:50:52 localhost.localdomain kernel: audit: type=1400 audit(1556229052.260:46): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.docker.dockerd" pid=1547 comm="apparmor_parser"
Apr 25 21:50:57 localhost.localdomain kernel: audit: type=1400 audit(1556229057.708:88): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.docker.dockerd" pid=1771 comm="apparmor_parser"
Apr 25 21:51:01 localhost.localdomain kernel: audit: type=1400 audit(1556229061.208:97): apparmor="STATUS" operation="profile_load" profile="snap.docker.dockerd" name="docker-default" pid=1856 comm="apparmor_parser"
Apr 25 21:55:23 localhost.localdomain kernel: audit: type=1400 audit(1556229323.728:103): apparmor="DENIED" operation="open" profile="snap.docker.dockerd" name="/system-data/var/snap/docker/common/var-lib-docker/overlay2/6cba9d1d59f62094649efe897713fade57986b030ed22a80210053af583c49fc/diff/" pid=2110 comm="runc:[2:INIT]" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Change the docker storage-driver from overlay2 to aufs:
$ sudo sed -i 's/overlay2/aufs/' /var/snap/docker/current/config/daemon.json
$ sudo snap restart docker
With storage-driver aufs
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:92695bc579f31df7a63da6922075d0666e565ceccad16b59c3374d2cf4e8e50e
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
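To confirm which storage driver the daemon actually ended up with after the switch, docker info can report it directly (a quick sanity check; after the change above the expected output would be aufs):
$ sudo docker info --format '{{.Driver}}'
aufs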
Hi, I would like to use the docker snap. However, I don't want to put everything in my home folder, since that is on an NVMe SSD.
So I thought I could just make a symlink in my home directory pointing somewhere else, but I guess the confinement is too smart for that, because it does not seem to work.
If you could confirm this, then I shall need to move back to normal docker.
It should be smart enough to be able to use a bind-mount instead of a symlink (that usually works for other snaps, but docker might be special here)
@ogra is correct: you can use a bind mount to make data from outside the home folder available somewhere in your home folder.
I fear I don’t really understand. But I also see that I might not have explained my situation well enough.
So I have the docker snap running.
Then I want to run gitlab in a docker container.
Normally I use volumes and specify something like /opt/gitlab/config:/etc/gitlab to mount a directory of the host in the container. That does not work with the snap, since it can only access things in ~/.
If I do /home/user/gitlab/config:/etc/gitlab, all works as expected. However, if /home/user/gitlab is a symlink to, say, /opt/gitlab, it does not work anymore.
I think you are referring to adding a bind-mount to the snap yaml, but I would rather not build my own. But then again, I just might not understand what you mean.
For your example of /opt/gitlab/config, you would (as root) perform a bind mount that gives /home/user/gitlab/config a view into /opt/gitlab/config, that is to say run:
$ sudo mount --bind /opt/gitlab/config /home/user/gitlab/config
Note this will only last for your current session; to perform this at every boot, you can add an entry to your /etc/fstab like this:
/opt/gitlab/config /home/user/gitlab/config none defaults,bind 0 0
Note that the above was not tested so you may need to adjust it slightly, etc.
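With the bind mount in place, the container can then be started against the path inside your home directory; a rough, untested sketch (the gitlab/gitlab-ce image name here is just an illustration, not something from this thread):
$ sudo docker run -d --name gitlab -v /home/user/gitlab/config:/etc/gitlab gitlab/gitlab-ce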
Ah, thanks! I only knew about bind mounts in the context of containers.
@ijohnson why doesn't the lxd snap have this limitation? We can mount disks from anywhere with it.
Using home or media to access things does not mean this is more secure: do you know what files are in media or home? No, only the ops team does.
You should not try to replace the ops team's job.
We do know that these (/home and /media/$USER/) are typically the only places a user can access without privilege escalation in a default install, so snapd gained interfaces for these directories so that snaps can access user data.
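If you want to check which of those interfaces a given snap actually plugs, and connect them manually where allowed, snapd can do both; a small sketch (whether the docker snap declares a particular plug depends on its packaging):
$ snap connections docker
$ sudo snap connect docker:home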
I think I missed something.
The LXD snap, which has some privileges, can create unprivileged containers that can access anything on the host filesystem if you mount it into the container.
What prevents the docker snap from doing the same?
LXD has a special interface specifically for its use, called lxd-support. This likely provides different rules than the docker snap is gaining.
OK, but why do two container providers, both packaged as snaps by Canonical and doing broadly the same thing, not use the same approach to create containers as simply as possible?
Why is one complicated and the other not, since Canonical is building both?
Can’t get the docker snap to work. After reading this whole thread and trying various things, still no joy. I’m running on btrfs for my filesystem and have tried “overlay2” and “btrfs” storage drivers as well as the default.
I seem to get the furthest with the --edge channel for docker and the overlay2 driver: I can get a sensible response from docker info, but docker run hello-world fails on all drivers.
All throw differing errors.
⚡ snap info core
name: core
summary: snapd runtime environment
publisher: Canonical✓
contact: snaps@canonical.com
license: unset
description: |
The core runtime environment for snapd
type: core
snap-id: 99T7MUlRhtI3U0QFgl5mXXESAiSwt776
tracking: stable
refresh-date: 9 days ago, at 12:21 BST
channels:
stable: 16-2.40 2019-08-12 (7396) 92MB -
candidate: 16-2.40 2019-07-17 (7396) 92MB -
beta: 16-2.41~pre1 2019-08-20 (7640) 93MB -
edge: 16-2.41~pre1+git1439.bebc78f 2019-08-22 (7654) 93MB -
installed: 16-2.40 (7396) 92MB core
I'm happy to try and debug this further, but there seem to be so many variables that I'm not sure where to start.
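A reasonable place to start is the same AppArmor check used earlier in this thread, plus asking the daemon which driver it actually picked up (same commands as above, repeated here for convenience):
$ sudo journalctl --no-pager -e -k | grep apparmor | grep snap.docker.dockerd
$ sudo docker info --format '{{.Driver}}'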
A quick drive-by report:
docker build failed when trying to build an image from source files in a temp directory. Example error message: unable to prepare context: path "/tmp/tmpiriw68qk_blah" not found.
Uninstalling the Docker snap and reinstalling via apt solved the issue.
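That error is consistent with snap confinement: a strictly confined snap sees its own private /tmp, so a build context under the host's /tmp is not visible to the snap's docker client. If you want to keep the snap, a sketch of a workaround is to place the build context under your home directory instead (the paths and image name below are just placeholders):
$ mkdir -p ~/build-context
$ cp -r /path/to/sources/. ~/build-context/
$ sudo docker build -t myimage ~/build-context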
How can we change the data-root directory? I'm a newcomer to snaps but not to Linux and Docker. I tried to follow the guide at https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux, but I'm unable to set this via the daemon.json file because it's already set as a command-line parameter for the docker snap. Docker treats this as a conflict (the same directive given both as a flag and in the JSON configuration), and the fix suggested at https://docs.docker.com/config/daemon/ won't work here since this is a snap.
Normal startup log:
$ systemctl status snap.docker.dockerd.service --no-pager --full
● snap.docker.dockerd.service - Service for snap application docker.dockerd
   Loaded: loaded (/etc/systemd/system/snap.docker.dockerd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/snap.docker.dockerd.service.d
           └─docker-mods.conf
   Active: active (running) since Tue 2020-05-12 15:56:28 CEST; 20s ago
 Main PID: 16014 (dockerd)
    Tasks: 38 (limit: 413)
   CGroup: /system.slice/snap.docker.dockerd.service
           ├─16014 dockerd -G docker --exec-root=/var/snap/docker/423/run/docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/var/snap/docker/423/run/docker.pid --config-file=/var/snap/docker/423/config/daemon.json
           ├─16060 containerd --config /var/snap/docker/423/run/docker/containerd/containerd.toml --log-level error
           ├─16228 /snap/docker/423/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9001 -container-ip 172.17.0.2 -container-port 9000
           ├─16239 containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d47fcd1c2a0d9c3daaf2565b9742cefa0844e9620857fc2619792ff01179c82f -address /var/snap/docker/423/run/docker/containerd/containerd.sock -containerd-binary /snap/docker/423/bin/containerd -runtime-root /var/snap/docker/423/run/docker/runtime-runc
           └─16269 /portainer
Short error log after having defined data-root in daemon.json:
$ systemctl status snap.docker.dockerd.service
● snap.docker.dockerd.service - Service for snap application docker.dockerd
   Loaded: loaded (/etc/systemd/system/snap.docker.dockerd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/snap.docker.dockerd.service.d
           └─docker-mods.conf
   Active: failed (Result: exit-code) since Tue 2020-05-12 15:25:22 CEST; 22min ago
  Process: 15857 ExecStart=/usr/bin/snap run docker.dockerd (code=exited, status=1/FAILURE)
 Main PID: 15857 (code=exited, status=1/FAILURE)
Long error log:
sudo journalctl -xe --no-pager
May 12 15:25:19 ubuntu-core systemd[1]: Started Service for snap application docker.dockerd.
-- Subject: Unit snap.docker.dockerd.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit snap.docker.dockerd.service has finished starting up.
--
-- The start-up result is RESULT.
May 12 15:25:19 ubuntu-core kernel: aufs aufs_fill_super:912:mount[15697]: no arg
May 12 15:25:19 ubuntu-core kernel: overlayfs: missing 'lowerdir'
May 12 15:25:19 ubuntu-core sudo[15667]: pam_unix(sudo:session): session closed for user root
May 12 15:25:19 ubuntu-core docker.dockerd[15679]: unable to configure the Docker daemon with file /var/snap/docker/423/config/daemon.json: the following directives are specified both as a flag and in the configuration file: data-root: (from flag: /var/snap/docker/common/var-lib-docker, from file: /media/dockerdaemon/data-root/)
May 12 15:25:19 ubuntu-core systemd[1]: snap.docker.dockerd.service: Main process exited, code=exited, status=1/FAILURE
May 12 15:25:19 ubuntu-core systemd[1]: snap.docker.dockerd.service: Failed with result 'exit-code'.
May 12 15:25:20 ubuntu-core systemd[1]: snap.docker.dockerd.service: Service hold-off time over, scheduling restart.
May 12 15:25:20 ubuntu-core systemd[1]: snap.docker.dockerd.service: Scheduled restart job, restart counter is at 1.
<removed restarts 1-4 as the reason is the same>
May 12 15:25:21 ubuntu-core systemd[1]: snap.docker.dockerd.service: Main process exited, code=exited, status=1/FAILURE
May 12 15:25:21 ubuntu-core systemd[1]: snap.docker.dockerd.service: Failed with result 'exit-code'.
May 12 15:25:22 ubuntu-core systemd[1]: snap.docker.dockerd.service: Service hold-off time over, scheduling restart.
May 12 15:25:22 ubuntu-core systemd[1]: snap.docker.dockerd.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Automatic restarting of the unit snap.docker.dockerd.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
May 12 15:25:22 ubuntu-core systemd[1]: Stopped Service for snap application docker.dockerd.
-- Subject: Unit snap.docker.dockerd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit snap.docker.dockerd.service has finished shutting down.
May 12 15:25:22 ubuntu-core systemd[1]: snap.docker.dockerd.service: Start request repeated too quickly.
May 12 15:25:22 ubuntu-core systemd[1]: snap.docker.dockerd.service: Failed with result 'exit-code'.
May 12 15:25:22 ubuntu-core systemd[1]: Failed to start Service for snap application docker.dockerd.
-- Subject: Unit snap.docker.dockerd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit snap.docker.dockerd.service has failed.
--
-- The result is RESULT.
I’m running this on Ubuntu Core 18.04 so the primary partition is quite limited:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            178M     0  178M   0% /dev
tmpfs            38M  7.8M   30M  21% /run
/dev/sda3       3.5G  891M  2.5G  27% /writable
/dev/loop0       55M   55M     0 100% /
/dev/loop1      211M  211M     0 100% /lib/modules
tmpfs           187M  4.0K  187M   1% /etc/fstab
tmpfs           187M     0  187M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           187M     0  187M   0% /sys/fs/cgroup
tmpfs           187M     0  187M   0% /media
tmpfs           187M     0  187M   0% /var/lib/sudo
tmpfs           187M     0  187M   0% /tmp
tmpfs           187M     0  187M   0% /mnt
/dev/loop2      896K  896K     0 100% /snap/pc/36
/dev/loop3       94M   94M     0 100% /snap/core/9066
/dev/loop4       28M   28M     0 100% /snap/snapd/7264
/dev/loop5       23M   23M     0 100% /snap/snapd/5754
/dev/loop6      211M  211M     0 100% /snap/pc-kernel/383
/dev/loop7       55M   55M     0 100% /snap/core18/1668
/dev/loop8      5.9M  5.9M     0 100% /snap/nano/27
/dev/sdb1       125G  143M  119G   1% /media/docker
/dev/sda2        50M  2.4M   47M   5% /boot/efi
tmpfs            38M     0   38M   0% /run/user/1000
/dev/loop9      121M  121M     0 100% /snap/docker/423
/dev/loop10     384K  384K     0 100% /snap/rsync/9
Currently running docker in --devmode but will switch to stable once I get the data-root working.
Is it possible for you to define the data root in the daemon.json file, @ijohnson? For example mine is:
$ cat daemon.json
{
  "tlsverify": true,
  "tlscacert": "/media/docker/daemon/ca.pem",
  "tlscert": "/media//docker/daemon/server-cert.pem",
  "tlskey": "/media/docker/daemon/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
  "data-root": "/media/dockerdaemon/data-root/",
  "log-level": "error",
  "storage-driver": "overlay2"
}
So I imagine the updated release can contain:
"data-root": "/var/snap/docker/common/var-lib-docker",
in order for us to easily modify this setting.
Please advise.
Hi, I no longer maintain the docker snap; @tianon is the new maintainer. However, I can say that what you're trying to do won't work OOTB today, because whatever directory docker uses to store its data needs to be accessible by the dockerd daemon from strict confinement. Your example path of /media/dockerdaemon/data-root/ would not be accessible from strict confinement unless the docker snap starts declaring the removable-media interface plug, which perhaps @tianon could do to enable this use case. In addition to adding that plug, the default declaration of the data root would need to be moved from the command line to the JSON file as well, which should probably only be done for new, fresh installs to avoid modifying people's existing daemon.json files, as they are usually customized I think.
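Purely as an illustration of what that would involve if both of those changes landed (none of this works with today's docker snap; the data-root path is just the example from above):
$ sudo snap connect docker:removable-media
$ cat /var/snap/docker/current/config/daemon.json
{
  "data-root": "/media/dockerdaemon/data-root/",
  "storage-driver": "overlay2"
}
$ sudo snap restart docker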
Using a ZFS root filesystem, the Docker snap is occasionally unable to remove datasets; this seems intermittent and I haven't isolated the exact scenario that triggers it yet.
I also needed to manually configure dockerd to use the ZFS driver for storage before it would even start.
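For anyone hitting the same thing, that manual step follows the same pattern as the aufs workaround earlier in this thread (a sketch only; the exact contents of your daemon.json will vary, and zfs here is the storage driver name Docker documents for ZFS):
$ cat /var/snap/docker/current/config/daemon.json
{
  "log-level": "error",
  "storage-driver": "zfs"
}
$ sudo snap restart docker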
Thanks for this.
@tianon: any thoughts on this?
Docker now fails with permission issues inside containers.