Hey folks. I’m working on a bug in canonical-kubernetes. For context, in order to make Kubernetes work in LXD containers, we have a special LXD profile that does a few things, most notably setting a handful of security-related options on the containers.
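(For illustration, a rough sketch of the sort of profile I mean; the exact keys ours sets may differ, and the profile/container names here are placeholders:)

$ lxc profile create k8s
$ lxc profile set k8s security.privileged true   # placeholder keys; our real profile sets more
$ lxc profile set k8s security.nesting true
$ lxc profile add my-container k8s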
Initially everything works, but when the host machine is rebooted, all calls to snap executables within the containers fail:
$ /snap/bin/kube-controller-manager -h
cannot change profile for the next exec call: No such file or directory
snap-update-ns failed with code 1
This affects all snaps within the LXD containers, regardless of confinement mode (including confinement: strict).
The output of sudo aa-status (link to full output) within the containers does not show any profiles for the snaps installed within them, e.g. kube-controller-manager.
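(Another way to check the same thing, in case it’s useful: the kernel’s own list of loaded profiles, which I’d expect to agree with aa-status here:)

$ sudo grep snap /sys/kernel/security/apparmor/profiles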
I can work around it by manually reloading the AppArmor profiles:
$ sudo apparmor_parser /var/lib/snapd/apparmor/profiles/*
$ /snap/bin/kube-controller-manager -h
<command succeeds>
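In case anyone else hits this, here’s roughly how that workaround could be automated at boot (an untested sketch; the unit name is made up, and I’d much rather fix the root cause):

$ sudo tee /etc/systemd/system/snapd-apparmor-reload.service <<'EOF'
[Unit]
Description=Reload snapd AppArmor profiles (workaround)
After=apparmor.service

[Service]
Type=oneshot
# the shell is needed so the glob expands; -r replaces already-loaded profiles
ExecStart=/bin/sh -c 'apparmor_parser -r /var/lib/snapd/apparmor/profiles/*'

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl enable snapd-apparmor-reload.service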
It’s not clear to me why the profiles aren’t being applied on reboot. I have confirmed that this issue is not present when using a standard LXD profile, so it is definitely specific to our profile. Unfortunately, we’re pretty much stuck with this profile for now due to requirements in Kubernetes itself.
I think this issue is recent; we have been able to reboot LXD machines in the past without the snaps giving us this kind of trouble.
Any ideas why this might be happening, or how we might be able to fix it? Thanks.