As far as I recall, there is code in snapd that handles socket activation, but many moons ago it was decided that the syntax needed rework, so we never exposed it in snapcraft.
The aforementioned bug proposes the following:
The current implementation in snapd is done with two options on the app:
- `socket` (boolean, requires `listen-stream`)
- `listen-stream` (the systemd ListenStream string)
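In snapcraft.yaml terms that looks roughly like this (app name and socket path are illustrative):

```yaml
# Rough sketch of the existing, never-exposed syntax: a boolean plus the
# systemd ListenStream value, set directly on the app.
apps:
  lxd:
    command: bin/lxd
    daemon: simple
    socket: true
    listen-stream: /var/snap/lxd/common/lxd/unix.socket
```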
This should probably be changed to something more like:
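Something along these lines, where the names and per-socket fields are purely illustrative (the lxd-unix / lxd-tcp naming is picked up again further down):

```yaml
# Illustrative only: a sockets stanza with named entries and per-socket
# settings such as permissions; none of the field names here are final.
apps:
  lxd:
    command: bin/lxd
    daemon: simple
    sockets:
      lxd-unix:
        listen-stream: /var/snap/lxd/common/lxd/unix.socket
        socket-mode: 0660
      lxd-tcp:
        listen-stream: 8443
```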
Yes indeed, that leaves the question of whether to grow the feature in steps or all in one sweep. I would say that adding socket activation without users and groups would leave us no worse off than today, with the added benefit of keeping non-essential services out of the process list.
That said, I am not aware of the grand design for this, so adding this first might be more work depending on the amount of refactoring planned to support Multiple users and groups in snaps.
If you’re going to support multiple activatable sockets, it would be helpful to make sure they are configured in a predictable order so that the daemon can use sd_listen_fds() rather than sd_listen_fds_with_names().
So maybe use an array rather than a mapping in the yaml?
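For example, something like this (again just a sketch, field names illustrative), so that the position in the list maps to the fd numbers the daemon gets back:

```yaml
# Illustrative list form: ordering is part of the configuration, the idea
# being that the first entry shows up as fd 3, the second as fd 4, and so on
# (sd_listen_fds() hands out fds starting at SD_LISTEN_FDS_START, i.e. 3),
# provided the generated unit(s) preserve that order.
apps:
  lxd:
    command: bin/lxd
    daemon: simple
    sockets:
      - listen-stream: /var/snap/lxd/common/lxd/unix.socket
        socket-mode: 0660
      - listen-stream: 8443
```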
If the configuration format is agreed upon, we (LXD) could work on implementing at least the minimal set of features that we need (what was there before plus socket permissions); the rest could be added later.
Looking at the systemd.socket man page, it seems that sockets defined in the same unit file will be provided in the order defined, but if a service depends on multiple socket units there is no guarantee which order those units will be provided in.
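So a single generated unit along these lines (names and paths illustrative) would keep the order well defined:

```ini
# Sketch only: both sockets in one snap.lxd.daemon.socket unit, so the fds
# are passed in the order of the Listen* lines (fd 3 = unix socket, fd 4 = tcp).
[Unit]
Description=Socket activation for the lxd daemon (illustrative)

[Socket]
ListenStream=/var/snap/lxd/common/lxd/unix.socket
SocketMode=0660
ListenStream=8443
Service=snap.lxd.daemon.service

[Install]
WantedBy=sockets.target
```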
Using socat in a wrapper script also gets you nicely working socket activation (this is particularly about port activation, though a unix socket is possible as well).
Should we define each socket in its own .socket file, or have a single file associated with the .service one?
The former allows finer control over each socket; for instance, the LXD snap could disable the socket on port 8443 if only the local socket is enabled.
If we go with a single .socket file, then there is no real use for the socket name (lxd-unix and lxd-tcp in the example), so we could make the sockets stanza just a list, but all sockets would be started/stopped together by systemd.
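For comparison, the per-socket variant would be two units roughly like these (unit names illustrative), where the tcp one can be stopped or disabled on its own:

```ini
# Sketch: snap.lxd.daemon.lxd-unix.socket
[Socket]
ListenStream=/var/snap/lxd/common/lxd/unix.socket
SocketMode=0660
Service=snap.lxd.daemon.service

[Install]
WantedBy=sockets.target

# Sketch: snap.lxd.daemon.lxd-tcp.socket (a separate file, shown in the same
# block for brevity) -- this is the one LXD could disable when only the
# local unix socket is wanted.
[Socket]
ListenStream=8443
Service=snap.lxd.daemon.service

[Install]
WantedBy=sockets.target
```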
I don’t know much about snap set, but the current implementation doesn’t interact with it.
Basically it lets you define a set of sockets that would cause the service to be started. What I mentioned about having separate socket files could let the service disable a .socket if it doesn’t actually listen on a specific port.
IOW you’d have to define all possible sockets that the service could listen on.
@ack Yeah, don’t worry too much about that. There are a few ideas to explore in terms of how to allow people and even the snap itself to tune the default value of this setting at runtime, but we can figure this out in a follow up and so it’s not a blocker for this to land.
The only real blocker we have now is figuring out how to define the socket filename in a safe way, per the review.
As a first pass can we just restrict the socket paths to be under the snap’s writable paths?
That would effectively be either:
- /var/snap/NAME/current/
- /var/snap/NAME/common/
Both of those should always be entirely safe and avoid any potential conflict with the host. Having this initial restriction would unblock LXD (which binds /var/snap/lxd/common/lxd/unix.socket) and would be compatible with allowing other paths (/run or such) down the line either through interfaces or another mechanism.
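Purely as an illustration of that first-pass restriction (using the sketched syntax from above):

```yaml
# Illustrative: a listen-stream under the snap's writable area passes the
# first-pass check; anything outside it would be rejected for now.
apps:
  lxd:
    daemon: simple
    sockets:
      lxd-unix:
        listen-stream: /var/snap/lxd/common/lxd/unix.socket   # under .../common/, fine
        # listen-stream: /run/lxd/unix.socket                 # outside, rejected for now
```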
@stgraber That would be the best approach actually, as it would effectively allow systemd to create anything that the snap might create anyway. The question is just how to verify it easily from that location. The vague option we discussed was to have some sort of support on snap-exec to answer that question, and then do the typical snap run → snap-confine → snap-exec dance to get an answer. It feels a bit fiddly, but I can’t see good alternatives yet. If we go that route, systemd socket files have an ExecStartPre option that runs before the socket is created. Assuming it aborts the socket creation if that command fails, we might leverage it.
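That ExecStartPre idea would be something like the following, where the verification helper is entirely hypothetical (nothing like it exists today):

```ini
# Sketch only: refuse to create the socket if the path falls outside the
# snap's writable directories, assuming a failing ExecStartPre= really does
# abort socket creation as discussed above. snap-socket-verify is a
# hypothetical helper, not an existing snapd tool.
[Socket]
ListenStream=/var/snap/lxd/common/lxd/unix.socket
SocketMode=0660
ExecStartPre=/usr/lib/snapd/snap-socket-verify lxd /var/snap/lxd/common/lxd/unix.socket
Service=snap.lxd.daemon.service
```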