Using automount to mount Snap images

Hi all,

Random thought I had a couple of days ago: why not use systemd automount units to mount Snap images, instead of the static mounts done today?

Rationale
Currently, snapd mounts all of its images at boot - for both the current and previous revisions of every Snap, as governed by the system refresh.retain setting (default of 2 nowadays, I believe). For a system with 10 installed Snaps (some people use many more, plus leftover packages), snapd would have to mount 20 images (with retain=2). That could:

  1. Impact system boot time. A recurring complaint on some forums. I haven’t experienced any big delays that could be traced directly to Snap, but admittedly I didn’t do any measuring after setting up my system. Nonetheless, worth checking IMHO.

  2. Create df mount-point hell. One of the most hated issues with Snaps out there - also, the one that bothers me least. Having all those images mounted renders plain df output almost unusable; resorting to command-line options or output post-processing is the common workaround. On a typical desktop system, one could see (almost) all mount points with a simple df; not so with Snaps.

  3. Waste system resources. This is somewhat controversial, but I took it from the autofs manpage (or something like it). Keeping many mount points around consumes resources the kernel needs to hold all of them ready. I’ve never had a problem with this, but on more constrained systems (I’m looking at you, IoT devices…) it might matter. Also, considering that most systems keep only a single revision of a given Snap active at a time - the exception being the experimental feature enabling two concurrent revisions - leaving the current plus one or more previous images mounted seems… a waste of resources, really.
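For reference, the common workaround for point 2 is to exclude the squashfs mounts explicitly (this assumes GNU coreutils df, where `-x` is short for `--exclude-type`):

```shell
# Hide the snap squashfs loop mounts from df output
df -h -x squashfs
```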

Experimenting
So, considering all this, I ran a small experiment: I took three of my installed Snaps, disabled their mount units and created corresponding automount units, with a small unmount timeout (one minute, really) just to ease testing. I tested two different scenarios:

  1. Automounting only “spare” revisions (not current).
  2. Automounting both current and spares.
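For illustration, the automount units I created looked roughly like the sketch below. The snap name and revision are examples, not from my actual system; note that the unit file name must match the systemd-escaped mount path (here, snap-firefox-1234.automount), and a matching .mount unit must still exist - it just isn’t pulled in at boot anymore, since systemd starts it on first access to the path:

```ini
# /etc/systemd/system/snap-firefox-1234.automount  (example name)
[Unit]
Description=Automount for snap firefox, revision 1234

[Automount]
Where=/snap/firefox/1234
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
```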

And then I rebooted. In both scenarios, to my surprise, there were no issues at all, either during boot or when starting Snap apps afterwards; everything ran smoothly. As expected, after 60 seconds the units were automatically unmounted, freeing resources and cleaning up df output.

Results

  1. Boot time impact: needs deeper evaluation. As I said, I changed only three of my Snaps (I have 30+ installed, including bases and content snaps). AFAICT, no change.

  2. df output: an obvious improvement. Again, not in my case, since my sample was too small; but it will work.

  3. System resource consumption: still to be assessed, but I believe IoT devices would benefit most.

Conclusion
I’d definitely invest some time in implementing this if I were Canonical. Keeping df cleaner is, by itself, reason enough. Cosmetic, I know, but one less reason for complaints. Also, I don’t see much sense in keeping all images mounted at all times anyway. The implementation logic seems simple - whenever installing a new revision, do what snapd already does (clean up old revisions according to the refresh.retain policy, delete old mount units and create new ones) and additionally configure an automount unit for every revision except the current one. We could go even further and add a knob to snapd for making the whole system automountable, although this would be harder - in my experiments, the system always mounted the current revision at boot, which means something is trying to access its mount point. Not sure if this is avoidable.
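The per-revision logic I describe above can be sketched in a few lines of shell. This is purely illustrative - the function name and unit layout are my own, not snapd’s real internals - but it shows the unit a "spare" revision would get (the unit file name should be the systemd-escaped mount path, which for simple names like these is just the path with slashes turned into dashes):

```shell
#!/bin/sh
# Emit an automount unit for one snap revision. Arguments:
# snap name, revision, idle timeout in seconds. Illustrative only.
make_automount() {
    name="$1"; rev="$2"; idle="$3"
    cat <<EOF
[Unit]
Description=Automount for snap $name, revision $rev

[Automount]
Where=/snap/$name/$rev
TimeoutIdleSec=$idle

[Install]
WantedBy=multi-user.target
EOF
}

# Example: a spare (non-current) revision gets a 60-second idle timeout
make_automount firefox 1234 60
```

On install, snapd would run something like this for each retained revision except the current one, write the result next to the existing .mount unit, and enable it in place of the static mount.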

The time needed to mount the packages on demand was unnoticeable (time for deeper tests, again).

I’d like to hear your thoughts on it - or know a reason it wasn’t done this way, if there’s one.

As a final note, I did some digging in the systemd code and found that the commit introducing automount units is so old that it predates the v1 release, so it shouldn’t be a problem on older systems, it seems.