Parallel snap installs

mborzecki
upcoming

#1

Parallel snap installs is a new feature that will allow installing multiple instances of a given snap.

Each instance will have a locally assigned, optional, unique key. The meaning of the local key is up to the user. For example:

  • postgres_prod
  • postgres_stage
  • postgres

are three different instances of the postgres snap. Each instance is completely independent, with a separate version, revision, track, data, etc.

The local key is implicitly assigned during installation; that is, snap install postgres_prod creates a postgres_prod instance of the postgres snap, with the local key prod. The proposed constraint on the format of the key is [a-z0-9]{1,10}. All other snap commands will work as before.
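As a rough illustration (a hypothetical helper, not snapd's actual code), splitting and validating an instance name under the proposed constraint could look like this:

```python
import re

# Proposed constraint on the local key, per the post above.
KEY_RE = re.compile(r"^[a-z0-9]{1,10}$")

def split_instance_name(instance_name):
    """Split 'postgres_prod' into ('postgres', 'prod').

    Snap names themselves cannot contain '_', so the first underscore
    separates the snap name from the local key; a name without an
    underscore is an instance with an empty key.
    """
    snap_name, sep, key = instance_name.partition("_")
    if sep and not KEY_RE.match(key):
        raise ValueError("invalid instance key: %r" % key)
    return snap_name, key

print(split_instance_name("postgres_prod"))  # ('postgres', 'prod')
print(split_instance_name("postgres"))       # ('postgres', '')
```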

To avoid breaking existing snaps, the paths and environment variables inside the snap will remain unchanged. That is, when the mount namespace is set up, /snap/hello-world_foo will be made available as /snap/hello-world. The environment variables will also be updated to reflect this; consider this mapping:

| variable         | inside snap environment            | outside                                |
|------------------+------------------------------------+----------------------------------------|
| SNAP_NAME        | hello-world                        | hello-world_foo                        |
| SNAP             | /snap/hello-world/27               | /snap/hello-world_foo/27               |
| SNAP_COMMON      | /var/snap/hello-world/common       | /var/snap/hello-world_foo/common       |
| SNAP_DATA        | /var/snap/hello-world/27           | /var/snap/hello-world_foo/27           |
| SNAP_USER_COMMON | /home/user/snap/hello-world/common | /home/user/snap/hello-world_foo/common |
| SNAP_USER_DATA   | /home/user/snap/hello-world/27     | /home/user/snap/hello-world_foo/27     |
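The mapping above amounts to stripping the instance key from the values the snap sees. A toy sketch of that rewrite (the helper name and structure are made up for illustration):

```python
def inside_view(env, snap_name, instance_name):
    """Rewrite outside-namespace values to what the snap sees inside.

    E.g. /snap/hello-world_foo/27 -> /snap/hello-world/27.
    """
    return {k: v.replace(instance_name, snap_name) for k, v in env.items()}

# Values as seen outside the snap's mount namespace.
outside = {
    "SNAP_NAME": "hello-world_foo",
    "SNAP": "/snap/hello-world_foo/27",
    "SNAP_COMMON": "/var/snap/hello-world_foo/common",
}
print(inside_view(outside, "hello-world", "hello-world_foo"))
```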

The work has already started; expect updates to be posted in this topic.


#2

I’d expect to be able to know the instance key from inside the snap, even if it’s just for debugging purposes. Having something like SNAP_LOCAL or SNAP_INSTANCE_KEY or SNAP_HKEY_LOCAL_MACHINE or something would work.


#3

This works for the mount namespace, but there are many other things that, by snapd design, work in the global namespace. This includes dbus, unix sockets, file paths outside of SNAP* directories (eg, /run/postgres.socket), signals, ptrace, device cgroup, eventually secmark (fine-grained network mediation), … In addition, there are the file paths on disk for all the different policy files (it seems clear these would all include the local key, eg /var/lib/snapd/seccomp/bpf/snap.postgres{,_prod,_staging}.postgres.src).

With the apparmor policy, to support the mount namespace as suggested and the global namespace, we’ll be required to have the profile name (ie, security label) include the local key (eg, ‘postgres_prod’) and introduce another apparmor variable (eg, SNAP_NAME_FILE) without the local key. Eg:

# for non-file accesses (global namespace, includes the local key)
@{SNAP_NAME}="postgres_prod"
# for file accesses (inside the mount namespace, no local key)
@{SNAP_NAME_FILE}="postgres"
...
profile "snap.postgres_prod.postgres" ...

and then adjust the template and interface policy accordingly.

Udev tagging uses ‘_’ in the tag name due to limitations in udev (it cannot use ‘.’ in tags). So in /etc/udev/rules.d we would have rules like:

$ cat /etc/udev/rules.d/70-snap.postgres_prod.rules
...
SUBSYSTEM=="drm", KERNEL=="card[0-9]*", TAG+="snap_postgres_prod_postgres"
TAG=="snap_postgres_prod_postgres", RUN+="/usr/lib/snapd/snap-device-helper $env{ACTION} snap_postgres_prod_postgres $devpath $major:$minor"

Note that normally the tag would be ‘snap_postgres_postgres’ but now we need to special case ‘snap_postgres_prod_postgres’ and understand the difference between 2 '_'s and 3 '_'s. This has an effect on /usr/lib/snapd/snap-device-helper and snap-confine in how they make the translation to the device cgroup which would be /sys/fs/cgroup/devices/snap.postgres_prod.postgres.
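The forward translation from security tag to udev tag is just ‘.’ → ‘_’, but the reverse is ambiguous once instance keys add a third ‘_’. A minimal sketch of the forward direction (hypothetical helpers, not the snap-confine implementation):

```python
def udev_tag(security_tag):
    """snap.postgres_prod.postgres -> snap_postgres_prod_postgres.

    udev cannot use '.' in tag names, so dots become underscores.
    """
    return security_tag.replace(".", "_")

def cgroup_path(security_tag):
    # Device cgroup path derived from the security tag, as described above.
    return "/sys/fs/cgroup/devices/" + security_tag

print(udev_tag("snap.postgres.postgres"))       # snap_postgres_postgres
print(udev_tag("snap.postgres_prod.postgres"))  # snap_postgres_prod_postgres
```

Going back from snap_postgres_prod_postgres to the security tag cannot be done by string manipulation alone: snap-device-helper has to know which instance names exist to decide where the dots go.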

Additionally, because only the mount namespace is adjusted, the postgres_prod snap at runtime will see /sys/fs/cgroup/devices/snap.postgres_prod.postgres (and not /sys/fs/cgroup/devices/snap.postgres.postgres, but this is ok because security policy doesn’t allow poking around in one’s own device cgroup).

There will be conflicts whenever instances try to create the same resource in the global namespace: binding to the same DBus well-known name (eg, “org.postgres.Server”), the same abstract socket name (eg, “@postgres.sock”), the same named socket, pipe, etc in a shared directory like /run (eg, /run/postgres.socket), or even a file in a shared directory like /etc (eg, /etc/postgres). It might be easiest in phase I to say that anything that uses slots cannot be parallel installed. Then in phase II perhaps we can say that slotting a named socket from SNAP_DATA/SNAP_COMMON is ok. This does mean that you won’t be able to parallel install many applications on classic distros (since so many will use the dbus interface to bind to a well-known name).
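The phase I restriction suggested above could be expressed as a simple pre-install check; this is only a sketch of that rule, with a made-up snap-info structure:

```python
def can_parallel_install(snap_info):
    """Phase I rule: refuse parallel installs for any snap that has slots,
    since slots are a common source of global-namespace conflicts."""
    if snap_info.get("slots"):
        return False, "snaps providing slots cannot be installed in parallel"
    return True, ""

ok, reason = can_parallel_install({"name": "postgres", "slots": ["db-socket"]})
print(ok, reason)
```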


#4

I suspect there are going to be a few issues with systemd-generated files. Desktop files, autostart, and DBus activation (not implemented yet, but it needs to be considered) will have to be looked at as well.

I mentioned DBus in the context of AppArmor in the last post, but the dbus bus policy will be affected as well since there will be multiple bus files generated in /etc/dbus-1/system.d for the same service (ie, /etc/dbus-1/system.d/snap.postgres.postgres.conf, /etc/dbus-1/system.d/snap.postgres_prod.postgres.conf, /etc/dbus-1/system.d/snap.postgres_staging.postgres.conf) but they will all have the same content (presumably, since the snap expects the same content regardless of local key).


#5

We also discussed in hangout that we’ll want to verify XDG_RUNTIME_DIR directories are correctly mounted.


#6

Does this work support classic confinement? Currently there is no mount namespace setup for classic confined snaps AFAIK.


#7

This was discussed briefly. The current idea is to support confined snaps first as those are already set up inside a mount namespace. Support for snaps with classic confinement may come later.


#8

Some notes on things I recalled/noticed while reviewing some of the initial work:

  • (obvious) Keys of the snaps map in state will be instance names now
  • We have a general choice to make: either we pass instance names into the store package, which means it should preserve them on the way out, or we make it not aware of that and just extend the SnapAction interface to let us specify instance keys. We should not mix and match though.
We need to be careful not only about places using snap names but also places using snap IDs, because we have places that assume a one-to-one correspondence between local names and snap IDs (in the new world, given a snap ID alone we don’t know which instance is intended). In some cases the fix will just be to avoid extra requests for the snap ID, but in others there will be correctness issues if we don’t move to tracking things by instance name. Some areas to be careful about:
    • fetching of assertions
    • refresh-control code
    • update code
  • Auto-aliases are dictated by the snap-declaration. If the same snap with auto-aliases is installed multiple times, only one instance can get the aliases; we need to review the corresponding code, and alias-conflict code in general, to make sure it doesn’t get confused by the same declaration/aliases applying multiple times.
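The snap-ID point can be made concrete: with parallel installs, one snap ID maps to a set of instance names, so any lookup keyed by snap ID alone is ambiguous. A toy illustration (the data model here is invented for the example):

```python
# State keyed by instance name, as the first note suggests.
snaps = {
    "postgres":      {"snap-id": "pgid123", "revision": 5},
    "postgres_prod": {"snap-id": "pgid123", "revision": 7},
}

def instances_for_snap_id(snap_id):
    """A snap ID no longer identifies a single instance."""
    return sorted(name for name, info in snaps.items()
                  if info["snap-id"] == snap_id)

print(instances_for_snap_id("pgid123"))  # ['postgres', 'postgres_prod']
```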

#9

another consideration: we should disallow parallel installs of gadgets, kernels, core and snapd

some of that tying down should also be done at the level of the model assertion itself

probably initially also bases (though I could see use cases for allowing different snaps to use different tracks of a base, mostly in development scenarios)
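A sketch of that type-based restriction (the type names follow snap.yaml’s type field; the check itself is hypothetical):

```python
# Snap types that must not be parallel-installed, per the post above
# (bases are tentatively included for the initial phase).
NON_PARALLEL_TYPES = {"gadget", "kernel", "core", "snapd", "base"}

def parallel_install_allowed(snap_type):
    return snap_type not in NON_PARALLEL_TYPES

print(parallel_install_allowed("app"))     # True
print(parallel_install_allowed("kernel"))  # False
```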


#10

All the bits have landed. The feature is supported in edge now, and most of it is in the upcoming 2.36, but we’re holding back the announcement until 2.37 so that it gets more testing and the store bits are available.

I’m working on documentation and should have something usable ready shortly.

Meanwhile, I’ve put up a simple snap with a service that can be used to play with the feature right here:


#11

I have just dropped some documentation for the feature right here: Parallel Installs