Getting EPERM when trying to open /dev/fuse

Hi All!

In Multipass, we use sshfs to mount directories on the host into a virtual machine instance. I’m trying to build an sshfs snap to use in these instances so we can provide mounts for Core-based VMs and also get the latest and greatest sshfs and fuse code, since what’s in the archive is rather out of date.

I’ve made quite a bit of progress getting the snap strictly confined, but I’m running into an issue that I’ve yet to overcome. I have added the fuse-support, mount-observe, and network plugs for the sshfs app and have them connected after installing the snap. I also run the command via sudo since user mounts are not supported and have the mount in a directory allowed by the apparmor profile.
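
For reference (names here match my setup), the interfaces are connected with the usual snap connect syntax, the slot side being the implicit system slot:

sudo snap connect sshfs:fuse-support
sudo snap connect sshfs:mount-observe
sudo snap connect sshfs:network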

When strictly confined, everything starts up and the sshfs process is running, but the mount is not listed when running sudo mount, nor is there anything in the target directory. No denials or warnings show up in the systemd journal either.

If I make the snap with devmode confinement and install it with --devmode, I then get an EPERM error on /dev/fuse when sshfs tries to start. Here is the exact error:

fuse: failed to open /dev/fuse: Operation not permitted

Again, nothing in the systemd journal from apparmor or seccomp.

Lastly, if I use classic confinement, then it all works as expected.

Any ideas on how I can debug this further or what may be going wrong?

Many thanks!

Almost certainly you’re hitting the device cgroup. If you install the snap in devmode and don’t connect any interfaces, does it work? (Connecting some interfaces will turn on the device cgroup.)
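
For example, something along these lines (sshfs_*.snap standing in for your locally built snap file):

sudo snap remove sshfs
sudo snap install --devmode ./sshfs_*.snap
snap connections sshfs    # check what, if anything, ended up connected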

Hey!

I removed the snap, installed it again without connecting any of the interfaces, and I still get the same issue.

What is the output of ls /sys/fs/cgroup/devices?

$ ls /sys/fs/cgroup/devices
cgroup.clone_children  cgroup.sane_behavior  devices.deny  notify_on_release  snap.sshfs.sshfs  tasks
cgroup.procs           devices.allow         devices.list  release_agent      system.slice      user.slice
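
(As an aside, the devices.list file under snap.sshfs.sshfs shows what that device cgroup currently allows; /dev/fuse is char device 10:229, so you’d want to see an entry covering it, or the catch-all a *:* rwm:)

cat /sys/fs/cgroup/devices/snap.sshfs.sshfs/devices.list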

Is your snap named sshfs perchance? I forgot that snapd won’t remove the device cgroup on snap remove (see https://bugs.launchpad.net/snapd/+bug/1803210).

To manually remove the device cgroup run:

sudo rmdir /sys/fs/cgroup/devices/snap.sshfs.sshfs/

Then (with all the interfaces disconnected) try running the command again and see if it works.

Yes, that is the name of the snap. I removed that directory and still get the same issue. Also, probably as expected, that directory is recreated when I issue the sshfs command.

So if the directory is recreated, then you still have an interface connected which is triggering the device cgroup creation. Can you provide a list of all the interfaces you have under plugs in your snapcraft.yaml?

Here is the apps stanza from the snapcraft.yaml:

apps:
  sshfs:
    command: bin/launch-sshfs
    environment:
      LD_LIBRARY_PATH: $SNAP/lib/$SNAPCRAFT_ARCH_TRIPLET
      PATH: $SNAP/sbin:$SNAP/bin:$PATH
    plugs:
    - fuse-support
    - mount-observe
    - network

Also, this is the list of all interface connections on the system:

$ sudo snap connections --all
Interface      Plug                      Slot          Notes
fuse-support   sshfs:fuse-support        -             -
log-observe    snappy-debug:log-observe  :log-observe  -
mount-observe  sshfs:mount-observe       -             -
network        sshfs:network             -             -

Also, I did sudo snap disconnect ... to disconnect the previously connected interfaces. There is no need to restart snapd or anything for that to really take effect, right?

just a side note here … the fuse-support interface does not support unmounting, only mounting … if your multipass setup relies on dynamically mounting/unmounting, fuse-support will likely not help you.

Right, I’ll have to see what the behavior is after getting through the current hurdle, but we don’t explicitly unmount anything when issuing a multipass umount. We only kill the running sshfs process and the corresponding sftp server on the host and that has worked well so far :grin:

It is possible that sshfs does an umount when catching SIGTERM, so that might be an issue. Although I’m a bit puzzled as to why one cannot unmount a fuse share…

Does the file /etc/udev/rules.d/70-snap.sshfs.rules exist? If so can you remove that file, remove the cgroups dir, and then run your snap again?

Nope, doesn’t exist.

Hmm, okay, so if that file doesn’t exist then snap-confine shouldn’t be setting up a new device cgroup. At this point I’d recommend trying the following (rough commands sketched after the list):

  1. removing the snap
  2. re-building the snap without any interfaces declared in plugs
  3. rebooting the machine
  4. installing the snap in devmode
  5. running the snap again
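
Roughly, with sshfs_*.snap standing in for the rebuilt snap and the final line being whatever sshfs invocation you normally use:

sudo snap remove sshfs
# rebuild with the plugs list removed from snapcraft.yaml, then:
sudo reboot
# after the reboot:
sudo snap install --devmode ./sshfs_*.snap
sudo /snap/bin/sshfs <user>@<host>:<dir> <mountpoint>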

Well, I’m getting even worse results now. The main sshfs process is not running at all; I see it stuck at /snap/bin/sshfs, so it’s not even getting as far as before. And I still don’t see anything in the systemd journal about what is happening.

Ok, after destroying the instance and installing the sshfs snap in devmode without any plugs defined, I can get the main sshfs process running. However, just as when the snap was strictly confined, everything is up and running but the mount is not working: it does not show up in sudo mount, nor is there anything in the target directory.

Oh, I see this in journalctl now:

kernel: audit: type=1400 audit(1563295176.250:25): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="snap.sshfs.sshfs" name="/home/multipass/snap/sshfs/common/Downloads/" pid=1315 comm="sshfs"
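
For anyone following along, audit messages like this come from the kernel, so they can be watched live while reproducing the problem:

sudo journalctl -k -f | grep -E 'apparmor|audit'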