Snap-associated .mount units disable units wanted by

The title says it.

Service units wanted by or before fail because of inactive .mount units associated with each installed Snap application.

This likely is because of the LazyUnmount=yes directive in each such .mount unit.

This is especially alarming because iptables scripts typically depend on, which was specifically implemented for firewall services. See here.

Running systemd-analyze verify on affected .service units returns an error message for each Snap-associated .mount unit in the form of:

snap-bare-5.mount: Unit is bound to inactive unit dev-loop5.device. Stopping, too.

This directive likely must be removed immediately from the .mount unit associated with each Snap application, and in any event before any hoped-for deployment on Ubuntu 22.04.
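To see whether the units on disk actually carry the directive, one can grep the snap mount units. Here is a minimal, self-contained sketch: the here-doc stands in for a real unit under /etc/systemd/system (contents illustrative, not copied from a live system); on a real machine you would instead run `grep -l 'LazyUnmount=yes' /etc/systemd/system/snap-*.mount`.

```shell
# Illustrative stand-in for /etc/systemd/system/snap-bare-5.mount
cat > /tmp/snap-bare-5.mount.sample <<'EOF'
[Unit]
Description=Mount unit for bare, revision 5
[Mount]
What=/var/lib/snapd/snaps/bare_5.snap
Where=/snap/bare/5
Type=squashfs
Options=nodev,ro,x-gvfs-hide
LazyUnmount=yes
EOF
# Count occurrences of the suspect directive (prints 1 for this sample)
grep -c 'LazyUnmount=yes' /tmp/snap-bare-5.mount.sample
```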

why would an unmount option (that is really only there to not stall your shutdown) have any influence on mounting …

the message is pretty clear, your system seems to have deactivated loop devices for whatever reason …

also, you should always include the full output of snap version in such posts

I think I’m a bit confused. Do you have an example maybe?

The mount units declare, while socket units have. There’s Before=snapd.service, but that only affects the ordering and does not declare a dependency. However, snap services do have, but that makes it their dependency, not the other way round.
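To make the direction concrete, here is a sketch of the two classes of directive as they might appear in a snap mount unit (unit names illustrative; the device binding in particular is normally implicit, added by systemd at runtime rather than written in the file):

```ini
[Unit]
# Ordering only: sequences the units when both are queued;
# does not pull them in and does not propagate stop/start.
Before=snapd.service
After=zfs-mount.service

# Dependency: if dev-loop5.device goes inactive, this mount is
# stopped too ("Unit is bound to inactive unit dev-loop5.device").
BindsTo=dev-loop5.device
```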

FWIW, that line from systemd-analyze was removed from systemd and replaced with better checks. In fact, with systemd 250, verify does not complain; technically, on 248 it did not fail either.

@ogra, versions are: snap, snapd 2.54.4, series 16, Ubuntu 20.04, kernel 5.13.0-39-generic.

More to the point, LazyUnmount= may or may not be the cause. The error message states that the loop devices are inactive, which could be because they were either (a) deactivated or (b) never activated in the first place.

/proc/mounts reveals that loop-device mounts exist only for the installed Snap applications, which effectively indicates these devices are active. So the problem may instead lie in the startup sequence.
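Checking this is a one-liner. On a live system the command would be `awk '$1 ~ /^\/dev\/loop/ && $3 == "squashfs"' /proc/mounts`; below, a captured sample line stands in for /proc/mounts so the sketch is self-contained:

```shell
# Filter mounts to loop-backed squashfs entries (i.e., the snap images)
printf '%s\n' '/dev/loop5 /snap/bare/5 squashfs ro,nodev,relatime,errors=continue 0 0' |
  awk '$1 ~ /^\/dev\/loop/ && $3 == "squashfs" {print $1, "->", $2}'
# -> /dev/loop5 -> /snap/bare/5
```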

Perhaps this has something to do with ZFS, which to some extent implements SMB and NFS, both of course network-related. The mount units contain the directive After=zfs-mount.service. However, the service doesn’t exist on the system.

Another hint may be in the mount options, which include x-gvfs-hide. No telling where the binds are, but if they’re remote or otherwise…

this just means “if zfs-mount.service exists, do not start before it” … it won’t have any other effect on the unit …

yeah, else all the snap mounts would show up in every filemanager and in your panel, this should have no influence at all on mount units …

So, how do zfs mounts unmount? What is the mechanism? I have yet to find any zfs .mount units, nor have I found any directive that would seem to unmount anything zfs-related.

Also, when do the zfs mounts unmount? The zfs-related units have fairly complex, if not overwrought, activation conditions. Absent an explicit unmount directive, or a Conflicts=shutdown.target or Before=shutdown.target directive, the default ordinarily is that they unwind in reverse order. When exactly is this?
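As far as the mechanism goes, systemd.mount(5) documents that mount units with default dependencies are stopped during shutdown via umount.target; the effect is roughly as if each mount unit carried:

```ini
[Unit]
# Implicit for mount units unless DefaultDependencies=no is set:
Conflicts=umount.target
Before=umount.target
```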

When, for that matter, do the ordinary snap-related mounts (i.e., those in /etc/systemd/system) unmount?

And what happens, or could happen, in the case of a hasty or disorderly shutdown, e.g. with DefaultTimeoutStopSec=2 (in system.conf)? What happens if a running snap is a GTK or GNOME application that is still active when the mounts deactivate?
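For reference, the setting in question is a [Manager] option in /etc/systemd/system.conf; a value this low means stop jobs are killed after two seconds (excerpt illustrative):

```ini
# /etc/systemd/system.conf (excerpt matching the example above)
[Manager]
DefaultTimeoutStopSec=2
```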

i can’t tell you since i do not run a zfs root filesystem here … the zfs units only get installed when you pick a zfs filesystem during your OS installation, and the zfsutils-linux package gets installed and ships them along …

you had quite a few messed up symlinks on your filesystem in the other thread, instead of digging into snap units i’d recommend researching why your filesystem seems to be corrupt, has broken metadata or what else managed to create unit symlinks that should never ever exist …

unmounting on shutdown clearly works here on all my machines and since there was not a big outcry yet it seems to also work fine on the millions of other Ubuntu installations out there