Snapd apparmor profiles not being applied in LXD containers with lxc.apparmor.profile=unconfined when host is rebooted

Hey folks. I’m working on a bug in canonical-kubernetes. For context, in order to make Kubernetes work in LXD containers, we have a special LXD profile that does a few things, most notably setting lxc.apparmor.profile=unconfined.
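For reference, the relevant part of such a profile looks roughly like this (a sketch only; the real canonical-kubernetes profile sets additional keys, and the profile name here is made up):

```yaml
# Hypothetical LXD profile sketch -- the key line for this issue is raw.lxc.
name: kubernetes-worker
config:
  security.privileged: "true"
  raw.lxc: |
    lxc.apparmor.profile=unconfined
```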

Initially everything works, but when the host machine is rebooted, all calls to snap executables within the containers fail:

$ /snap/bin/kube-controller-manager -h
cannot change profile for the next exec call: No such file or directory
snap-update-ns failed with code 1

This affects all snaps within the LXD containers, including snaps with confinement: strict and confinement: classic.

The output of sudo aa-status (link to full output) within the containers does not show any profiles for the snaps installed within them, e.g. kube-controller-manager.

I can work around it by manually reloading apparmor profiles:

$ sudo apparmor_parser /var/lib/snapd/apparmor/profiles/*
$ /snap/bin/kube-controller-manager -h
<command succeeds>

It’s not clear to me why the profiles aren’t being applied on reboot. I have confirmed that this issue is not present when using a standard LXD profile, so it is definitely specific to our profile. Unfortunately, we’re pretty much stuck with this profile for now due to requirements in Kubernetes itself.

I think this issue is recent - we have been able to reboot LXD machines in the past without the snaps giving us too much trouble.

Any ideas why this might be happening, or how we might be able to fix it? Thanks.

What host kernel and what guest OS are you using?

Host machine:

$ uname -a
Linux ip-172-31-63-188 4.4.0-1060-aws #69-Ubuntu SMP Sun May 20 13:42:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.4 LTS
Release:	16.04
Codename:	xenial

Guest machine:

$ uname -a
Linux test2 4.4.0-1060-aws #69-Ubuntu SMP Sun May 20 13:42:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.4 LTS
Release:	16.04
Codename:	xenial

Thanks. After a reboot, what is the output of this command in an affected container: sudo aa-status.

After reboot, in the container:

$ sudo aa-status
apparmor module is loaded.
56 profiles are loaded.
50 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/4650/usr/lib/snapd/snap-confine
   /snap/core/4650/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/lxc-start
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///sbin/dhclient
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/bin/lxc-start
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/NetworkManager/nm-dhcp-client.action
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/NetworkManager/nm-dhcp-helper
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/connman/scripts/dhclient-script
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/lxd/lxd-bridge-proxy
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/snapd/snap-confine
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>:///usr/sbin/tcpdump
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>://lxc-container-default
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>://lxc-container-default-cgns
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>://lxc-container-default-with-mounting
   :lxd-juju-3b7be8-0_<var-snap-lxd-common-lxd>://lxc-container-default-with-nesting
   lxc-container-default
   lxc-container-default-cgns
   lxc-container-default-with-mounting
   lxc-container-default-with-nesting
   lxd-juju-3b7be8-0_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-0_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-1_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-2_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-3_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-4_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-5_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-6_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-7_</var/snap/lxd/common/lxd>
   lxd-juju-6527f9-8_</var/snap/lxd/common/lxd>
   snap-update-ns.conjure-up
   snap-update-ns.core
   snap-update-ns.kubectl
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
6 profiles are in complain mode.
   snap.conjure-up.conjure-down
   snap.conjure-up.conjure-up
   snap.conjure-up.hook.configure
   snap.conjure-up.juju
   snap.conjure-up.juju-wait
   snap.kubectl.kubectl
1 processes have profiles defined.
1 processes are in enforce mode.
   /sbin/dhclient (907) 
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Here is also the list of installed snaps in the container:

$ snap list
Name                     Version    Rev   Tracking  Developer  Notes
cdk-addons               1.10.3     399   1.10      canonical  -
core                     16-2.32.8  4650  stable    canonical  core
kube-apiserver           1.10.3     406   1.10      canonical  -
kube-controller-manager  1.10.3     396   1.10      canonical  -
kube-scheduler           1.10.3     405   1.10      canonical  -
kubectl                  1.10.3     405   1.10      canonical  classic

Correct me if I’m wrong, but if you switch an LXD container to unconfined mode it disables apparmor stacking, and then the profiles you see on the inside of the container are the same as those on the outside of the container. Long ago we said that this is an unsupported configuration, simply because we cannot make it work in general.

Is this the case here? CC @jdstrand

Zyga, that’s correct - under this configuration, all of the containers and the host are using the same set of apparmor profiles.

It’s understandable that this is a configuration y’all can’t reasonably support. Unfortunately, we’re stuck with it for now to make kubelet work under LXD.

Do you at least have any quick pointers or advice for me while I’m looking into workarounds? How do the snapd apparmor profiles normally get loaded on boot?

Ok, thanks for confirming.

Assuming that using something other than ‘unconfined’ causes the container to use stacked profiles, the only way you can reasonably pull this off is, instead of using ‘unconfined’, to define another profile for your container that is wide open. Eg, on the host (untested):

$ cat /etc/apparmor.d/my-temporary-kubelet-container
#include <tunables/global>
profile "my-temporary-kubelet-container" (attach_disconnected,mediate_deleted,complain) {
  capability,
  change_profile,
  dbus,
  file,
  network,
  mount,
  remount,
  umount,
  pivot_root,
  ptrace,
  signal,
  unix,
}

$ sudo apparmor_parser -r /etc/apparmor.d/my-temporary-kubelet-container 

Then launch with lxc.apparmor.profile=my-temporary-kubelet-container. Assuming simply not using ‘unconfined’ is enough, this should allow it to reboot and have the profiles loaded.
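For completeness, wiring that into the container might look something like this (the profile and container names here are hypothetical, and this is untested):

```shell
# Load the wide-open profile on the host, then point the container's
# LXD profile at it instead of "unconfined" (names are made up).
sudo apparmor_parser -r /etc/apparmor.d/my-temporary-kubelet-container
lxc profile set kubernetes-worker raw.lxc "lxc.apparmor.profile=my-temporary-kubelet-container"
lxc restart test2
```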

If that doesn’t work, then you would need to add a systemd unit to the guest, or modify /etc/rc.local or similar, to run your sudo apparmor_parser -r /var/lib/snapd/apparmor/profiles/* command. The downside to this approach is that the policy in the container may interfere with the host’s processes if the same snaps are installed but have different interface connections, etc.
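The systemd-unit route mentioned above could be sketched like this inside the guest (untested; the unit name is made up):

```ini
# /etc/systemd/system/snapd-apparmor-reload.service (hypothetical name)
[Unit]
Description=Reload snapd AppArmor profiles after boot
# Run after apparmor and snapd have done their normal startup work
After=apparmor.service snapd.service

[Service]
Type=oneshot
# ExecStart does not expand globs itself, so go through a shell
ExecStart=/bin/sh -c 'apparmor_parser -r /var/lib/snapd/apparmor/profiles/*'

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable snapd-apparmor-reload.service` so it runs on every boot.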

Thanks, much appreciated. Manually loading /var/lib/snapd/apparmor/profiles/* is what I found on my own - hacky, but seems to do the trick.

I’ll look into your suggestion to make our own profile for the container instead of setting it to unconfined. Sounds a lot cleaner. Thanks again.

I know it has been a while since the last post in this thread, but this seems worth a try. I followed the instructions from the site on configuring microk8s inside LXD containers (microk8s.io/docs/lxd), using the suggested profile for the containers. The steps worked fine inside the container after the first launch.

The problem appeared once the host was powered off and back on, not when a container was simply restarted. Every time I ran a command such as microk8s.inspect or microk8s.status after restarting the containers, the output was like the below:

snap-confine has elevated permissions and is not confined but should be

Then running the hack below, followed by a container restart, solved the problem. The question is: why is the profile not persistent?

sudo apparmor_parser /var/lib/snapd/apparmor/profiles/*

Similar issues come up elsewhere on the forums, e.g.:
discuss.linuxcontainers.org/t/snap-confine-has-elevated-permissions-and-is-not-confined-but-should-be/8380