In Multipass, we use sshfs to mount directories on the host into a virtual machine instance. I’m trying to build an sshfs snap to use in these instances so we can provide mounts for Core-based VMs, and also to get the latest and greatest sshfs and fuse code, since the version in the archive is rather out of date.
I’ve made quite a bit of progress getting the snap strictly confined, but I’m running into an issue that I’ve yet to overcome. I have added the fuse-support, mount-observe, and network plugs for the sshfs app and have them connected after installing the snap. I also run the command via sudo, since user mounts are not supported, and the mount target is in a directory allowed by the AppArmor profile.
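For reference, the connections are made along these lines (a sketch; that the snap and app are both named sshfs is an assumption):

```shell
# Connect the three interfaces after installing the snap.
# "sshfs" as both the snap and app name is an assumption; the guard makes
# this a no-op on systems without snapd so the snippet stays runnable.
if command -v snap >/dev/null 2>&1; then
  for plug in fuse-support mount-observe network; do
    sudo snap connect "sshfs:$plug"
  done
fi
status=$?
```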
When strictly confined, everything starts up and the sshfs process is running, but the mount is not listed when running sudo mount, nor is there anything in the target directory. There are no denials or warnings in the systemd journal either.
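A quick way to compare the host’s view of the mount table with what the sshfs process itself sees (a sketch; the pgrep pattern for finding the process is an assumption):

```shell
# Compare the host's mount table with the one seen by the sshfs process;
# a fuse entry present only in the latter would mean the mount landed in
# a private mount namespace. The "sshfs" process name is an assumption.
pid=$(pgrep -x sshfs 2>/dev/null | head -n 1)
if [ -n "$pid" ]; then
  sudo grep fuse "/proc/$pid/mountinfo"
fi
grep fuse /proc/self/mountinfo || echo "no fuse mounts visible on host"
```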
If I make the snap with devmode confinement and install it with --devmode, I then get an EPERM error on /dev/fuse when sshfs tries to start. Here is the exact error:
fuse: failed to open /dev/fuse: Operation not permitted
Again, nothing in the systemd journal from apparmor or seccomp.
Lastly, if I use classic confinement, then it all works as expected.
Any ideas on how I can debug this further or what may be going wrong?
Almost certainly you’re hitting the device cgroup. If you install the snap in devmode and don’t connect any interfaces does it work (connecting some interfaces will turn on the device cgroup)?
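You can see whether a device cgroup got set up for the snap by looking under the devices controller (a sketch; the snap.sshfs.sshfs name assumes the snap and app are both called sshfs, and this is the cgroup-v1 layout):

```shell
# If snap-confine put the process in a device cgroup, /dev/fuse has to be
# whitelisted in its devices.list, otherwise opening it fails with EPERM.
cg=/sys/fs/cgroup/devices/snap.sshfs.sshfs
if [ -d "$cg" ]; then
  cat "$cg/devices.list"
else
  echo "no device cgroup at $cg"
fi
```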
Yes, that is the name of the snap. I removed that directory, but I still hit the same issue. Also, probably as expected, that directory is recreated when I issue the sshfs command.
So if the directory is recreated then you still have an interface connected which is triggering the device cgroup creation. Can you provide a list of all interfaces you have under plugs in your snapcraft.yaml?
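You can also dump what snapd currently thinks is connected (a sketch; "sshfs" as the snap name is an assumption):

```shell
# List the snap's plugs and their connection state; guarded so the snippet
# is a no-op on systems without snapd. "|| true" keeps it harmless when the
# snap isn't installed.
if command -v snap >/dev/null 2>&1; then
  snap connections sshfs || true
fi
status=$?
```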
Also, I did sudo snap disconnect ... to disconnect the previously connected interfaces. There is no need to restart snapd or anything for that to really take effect, right?
just a side note here … the fuse-support interface does not support unmounting, only mounting … if your multipass setup relies on dynamically mounting/unmounting fuse-support will likely not help you.
Right, I’ll have to see what the behavior is after getting through the current hurdle, but we don’t explicitly unmount anything when issuing a multipass umount. We only kill the running sshfs process and the corresponding sftp server on the host, and that has worked well so far.
It is possible that sshfs does an umount when catching SIGTERM, so that might be an issue. Although I’m a bit puzzled as to why one cannot unmount a fuse share…
Hmm, okay so if that file doesn’t exist then snap-confine shouldn’t be setting up a new device cgroup. At this point I’d recommend trying the following:
removing the snap
re-building the snap without any interfaces declared in plugs
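The steps above, roughly as commands (the snap name and the built-file name are assumptions; guarded so it’s a no-op on systems without snapd/snapcraft):

```shell
# Remove the installed snap, rebuild with no plugs declared, reinstall.
if command -v snap >/dev/null 2>&1 && command -v snapcraft >/dev/null 2>&1; then
  sudo snap remove sshfs
  # ...after deleting the plugs entries from snapcraft.yaml:
  snapcraft
  sudo snap install --devmode ./sshfs_*.snap
fi
status=$?
```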
Well, I’m getting even worse results now: the main sshfs process is not running at all. I see it stuck in /snap/bin/sshfs, so it’s not even getting as far as before. And I still don’t see anything in the systemd journal about what is happening.
Ok, after destroying the instance and installing the sshfs snap in devmode without any plugs defined, I can get the main sshfs process running. However, just like when the snap was strictly confined, everything is up and running but the mount is not working: it does not show up in sudo mount, nor is anything in the target directory.