Handling of the "cups" plug by snapd, especially auto-connection

@ijohnson, CUPS listening on two domain sockets is no problem at all; it has always been capable of this: two Listen ... lines in cupsd.conf pointing to the domain sockets do the trick. So (4) in your initial presentation of the proxy idea is solved by my following commit on the CUPS Snap (no change to CUPS itself needed):

It simply makes the run-cupsd script add two Listen ... lines to cupsd.conf when it detects that we will run in stand-alone mode:

Listen /run/cups/cups.sock
Listen /var/snap/cups/common/run/cups.sock
Port 631
[...]

Now the snapped CUPS listens on /var/snap/cups/common/run/cups.sock in all three modes (stand-alone, proxy, parallel), so that snapped applications can always print to this domain socket. In stand-alone mode unsnapped applications also print to the snapped CUPS, as it is then listening on /run/cups/cups.sock as well.
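The relevant part of the run-cupsd logic can be sketched like this (a simplified sketch; the real script detects the mode itself, and MODE here is just a stand-in):

```shell
#!/bin/sh
# Simplified sketch of how run-cupsd could assemble the Listen lines.
# MODE is a stand-in; the real script determines the mode on its own.
MODE=stand-alone
CONF=cupsd.conf

{
  # Only in stand-alone mode do we also serve the system-wide default
  # socket, so that unsnapped applications can print too.
  if [ "$MODE" = stand-alone ]; then
    echo "Listen /run/cups/cups.sock"
  fi
  # The snap's own socket is served in all three modes.
  echo "Listen /var/snap/cups/common/run/cups.sock"
  echo "Port 631"
} > "$CONF"

cat "$CONF"
```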

@ijohnson, @jamesh, now I only need to create the content interface which makes the snapped clients force-install the CUPS Snap and gives them access to /var/snap/cups/common/run/cups.sock for printing. At the same time the snapped clients also need access to the D-Bus notifications of CUPS (push notifications from CUPS for printer status changes, job progress, and so on; no admin tasks are possible via D-Bus), as cups-control also provides.

What I would like most here is an interface named cups containing both the content interface and the D-Bus access. Or do we have to split into cups (or cups-dbus) for the D-Bus access and cups-socket for the content interface (as in this example)?
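For illustration only, the split variant could look roughly like this hypothetical plug stanza in a client snap's snapcraft.yaml (the interface names, the content tag, and the cups D-Bus mapping are assumptions, not an existing API):

```yaml
# Hypothetical client-side plugs, assuming the split naming:
plugs:
  cups-dbus:                  # D-Bus notifications (printer/job status)
    interface: cups           # or a dedicated cups-dbus interface
  cups-socket:                # the snapped CUPS' domain socket
    interface: content
    content: cups-socket      # assumed content tag
    target: $SNAP_COMMON/cups
    default-provider: cups    # pulls in the CUPS Snap on install
```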

@ijohnson, note that the name of the environment variable is CUPS_SERVER and not CUPS_SOCKET here.
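As a reminder of how clients pick this up, CUPS_SERVER accepts a domain socket path, so redirecting a libcups-based tool is a one-liner (the path is the CUPS Snap's socket discussed above):

```shell
# libcups-based tools read CUPS_SERVER; there is no CUPS_SOCKET variable.
# Pointing it at the snap's socket redirects all subsequent CUPS calls:
export CUPS_SERVER=/var/snap/cups/common/run/cups.sock
# e.g. "lp -d some-queue file.pdf" would now go to the snapped CUPS
echo "CUPS_SERVER=$CUPS_SERVER"
```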

Sorry for taking a while to respond on this. My thoughts are still that we should keep things as simple as possible for applications, and preferably allow communication with the system CUPS instance.

Now for some specific feedback:

I think a bind-mount like the content interface provides here is going to be part of the solution, but I’m not sure we want to make use of the content interface directly. In particular:

  1. we can’t use this to talk to the host system CUPS.
  2. any application snap will need plug definition and environment variable boilerplate to make proper use of it.
  3. we can’t use it to make the CUPS socket appear at the default /run/cups/cups.sock location, since the content interface only allows bind mount targets under $SNAP/, $SNAP_DATA/, or $SNAP_COMMON/. While layouts allow creating bind mounts outside of those directories, trying to chain the content interface and layouts together is error prone.
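To illustrate point 3, reaching /run/cups would require chaining the two mechanisms, roughly like this hypothetical fragment (exactly the combination being advised against):

```yaml
# Hypothetical: the content interface may only target $SNAP/, $SNAP_DATA/,
# or $SNAP_COMMON/, so a layout is needed on top to reach /run/cups:
plugs:
  cups-socket:
    interface: content
    target: $SNAP_COMMON/cups-socket
layout:
  /run/cups:
    bind: $SNAP_COMMON/cups-socket
```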

One other option would be to modify the existing cups and cups-control interfaces to perform the mounts themselves. That would allow us to bind mount over /run/cups from some directory managed by the slot-side snap, and skip the bind mount when connecting to an implicit system slot.

As for what directory to use as the source for the bind mount, this could either be something fixed by the interface, or specified by an interface attribute on the slot. I don’t think we want to do anything fancy like perform a bind mount over /run/cups on the slot side. While that would allow an unmodified CUPS to create its socket in the normal location, it would also make it impossible to implement the “proxy cups” model, where a snapped CUPS talks to a system CUPS.

What is the advantage of having a snap being able to talk directly to the host system’s CUPS if the proxy mode of the cups snap can provide all the necessary information whilst being mediated?

This is inconvenient, but I don’t view it as a blocker or a large enough disadvantage to stop CUPS from operating in proxy mode. If it is more complicated than a few environment variables, then I think that something like a snapcraft extension and command-chain script could alleviate this situation.

I agree that using the content interface with layouts has been error-prone in the past. If you think this is a barrier to using it, then we should a) work on identifying those bugs as much as possible and b) consider that, since we are already having folks redirect where to find the cups socket via an environment variable, why not just make that environment variable point to somewhere we don’t have to use layouts with, i.e. $SNAP_COMMON? Then this is not an issue.

I’m not opposed to this at all. It sounds from @till.kamppeter’s comments like there are other things needed in addition to accessing the socket, mainly D-Bus access in order to print. Is that correct, @till.kamppeter? This does slightly change my proposal, but I still think it can be done in a very clean and always-mediating way.

I would really like to avoid having an implicit slot here as much as possible, since AIUI the cups snap can do mediation on any distro; we should therefore have every snap always use the cups snap, to the point where implicit slots are not needed.

This isn’t a problem if we require that snaps always talk to CUPS through the cups snap’s shared socket. The work from @till.kamppeter ensures that this socket will always expose everything one can do natively outside of the snap sandbox, so from my understanding there is no advantage to a snap being able to talk directly to the host’s CUPS (which, remember, may not do any mediation).

Just to take a step back, I’m not trying to make this needlessly complicated. I just know that folks have wanted an easy way to print from snaps for a long time, specifically one which doesn’t require fiddling with permissions. In this proxy mode, a few simple modifications (env vars and the like) to client applications enable those snaps to always be able to do the simple act of printing from inside a snap without needing to connect any interface, on any distro that snaps support. Whereas if we just stick to the existing state of things, with the cups slot being implicit on other distros, users still need to go and figure out how to connect interfaces and deal with permissions for “something as simple as printing”. I want to make the best experience available for all users of snaps, on any distro. If it’s a bit of extra work to make this happen, I think it’s well worth the delay to deliver the best experience for snap users wishing to print.

I have the following remarks:

  1. For plain printing and for displaying print dialogs which list the available print queues and show the user-settable options for each print queue, one only needs the domain socket of CUPS. If it is not at the standard location /run/cups/cups.sock, we need to set the CUPS_SERVER environment variable to the actual location (the CUPS Snap is ALWAYS listening on /var/snap/cups/common/run/cups.sock and it is ALWAYS mediating).
  2. D-Bus notifications are an optional service of CUPS. If a client subscribes, CUPS gives push notifications via D-Bus about changes in printer status, available queues, configuration, and job status. They are not needed for printing or for displaying standard print dialogs, and most client applications which print do not use them. They improve the possibilities, though: with them a print dialog could update on changes in real time and could also give the user a way to track their jobs in real time.
  3. The cups-control interface is a complete interface for CUPS clients: it contains both access to the CUPS domain socket and access to the D-Bus notification facility (and perhaps other features, so let us say cups-control allows access to the CUPS domain socket and the “CUPS extras”).
  4. We need to force snapped applications to print through the snapped CUPS, as the snapped CUPS always has Snap mediation while the system’s CUPS has it only in rare cases (and we cannot check this from the outside). To assure that snapped applications and unsnapped applications see the same printers on a system with an unsnapped CUPS, we need to force-install the CUPS Snap in proxy mode.
  5. The interface of CUPS is rather complex, so writing a new “mediation shell daemon” which looks/behaves like CUPS for snapped applications and passes on non-admin requests to the system’s CUPS is way more complex and bug-prone than simply using the actual CUPS (the one in the CUPS Snap) as this “mediation shell daemon”.
  6. @jamesh came up with an approach on Mattermost to use the xdg-desktop-portal which Flatpak uses. This is more or less like the “Common Print Dialog” which I tried to establish back in 2006 together with some GUI experts but did not succeed with, due to lack of available coding workforce. The applications do not have their own print dialog but call functions via D-Bus to pop up a print dialog and to actually print. This would probably solve all the problems of the print dialogs I am complaining about, but it is not viable here, as all applications would need highly invasive modifications for snapping them, and snapping command line apps for headless servers would become impossible.
  7. So there seems to be no way to protect an existing CUPS on an arbitrary distribution against admin requests by simple AppArmor and other Snap-typical sandboxing techniques. The proxy CUPS seems to be the easiest-to-implement way.

So our print interface needs:

  1. Printing from snaps is forced through the CUPS of the CUPS Snap, so that this CUPS mediates.
  2. If the main system’s CUPS is a CUPS installed with the distribution (usually not mediating), the CUPS of the CUPS Snap works as a proxy. This proxy mode is already implemented.
  3. If there is no CUPS from the distro, the CUPS of the CUPS Snap goes into stand-alone mode to work as the main CUPS. In this case the snapped CUPS also listens on /run/cups/cups.sock so that unsnapped apps can also print (this is also already implemented in the CUPS Snap).
  4. We can simply make the print interface set the CUPS_SERVER env variable to /var/snap/cups/common/run/cups.sock so that the snapped app prints to the snapped CUPS, and in addition not allow the snapped app access to /run/cups/cups.sock, to protect against apps which override the env variable and switch (or hard-code) to /run/cups/cups.sock internally.
  5. Better than (4) is that the print interface bind mounts /var/snap/cups/common/run/cups.sock to /run/cups/cups.sock in the application snaps instead of setting the CUPS_SERVER env variable. Then even badly (or maliciously) programmed apps which hard-code /run/cups/cups.sock can print, but only print, not administrate, as the job goes through the CUPS of the CUPS Snap.
  6. Whatever the print interface does, it has also to support all the “CUPS extras” which I mentioned in (3) in the beginning of this posting.
  7. The cups-control interface should NEVER route requests through a proxy CUPS, as administrative requests through the proxy CUPS are technically not possible. It should always talk directly to the main CUPS which actually executes the jobs. So it should not do any bind mounting and should ALWAYS talk to /run/cups/cups.sock. Then on distros with their own classic CUPS it talks to the distro’s CUPS, and on systems with the CUPS Snap as main CUPS (stand-alone mode) it talks to the snapped CUPS.
  8. Both the print interface and the cups-control interface should in addition support the very same “CUPS extras”, namely those of the current cups-control interface.

I hope such a set of interfaces could be implemented, especially with (5) instead of (4). Please tell me what I should do from my side for that. Thanks.

@ijohnson, @jamesh, the answer got somewhat long, but I hope this is all we need to design the correct interface for printing.

This fact is a bit unfortunate and was not clear to me from your earlier comments. I will think on how best to handle this, but an important point to remember about cups-control is that existing snaps today may be using cups-control to print, and thus will not have the new mediating socket available in their snap namespace, and we should not break these existing snap use cases.

Just as an aside, I totally agree that using portals presents the best experience, but we can’t assume that all snaps will be using portals, unfortunately. AIUI, over time more and more frameworks will support using portals by default, so we should get there eventually with most applications using portals, but until then we will have to make do.

@ijohnson, probably cups-control does not need to get changed at all, as it is probably already accessing /run/cups/cups.sock and supporting the CUPS extras.

Only cups needs to get substantially changed: it has to force-install the CUPS Snap, bind mount /var/snap/cups/common/run/cups.sock to /run/cups/cups.sock so that printing requests always go to the CUPS of the CUPS Snap, and also support the CUPS extras.

@ijohnson, I am aware of this. Most applications have their own (or their GUI toolkit’s) print dialog which uses the libcups API to talk with CUPS. If the snapper of such an app has to modify the code of the app to print via a portal, it is too much work and too bug-prone to snap the app. And if the app is non-GUI, for headless servers, it could not get snapped at all. So I am principally working on a way to snap apps without the need to modify them, and that is to provide the CUPS interface to the snaps, but mediated so that the snap cannot do anything nasty.

About the portal alternative I learned only today, when talking with @jamesh on Mattermost (Desktop channel). There I already saw that it is not the right way in most cases.

@ijohnson, @jamesh, anything I need to do for now on the CUPS Snap for this interface?

One issue with the proxy solution is that it would hide the peer credential information from the system CUPS. Is this going to make all print jobs appear to come from the user account the proxy cups runs as? How will that affect logging/auditing and quota features?

It also adds another layer where print jobs can potentially get stuck compared to non-snap apps, and the systems where it is going to provide the most benefits are also the ones least able to confine a system service.

The cups-control interface currently grants plug side apps the ability to receive signals on the org.cups.cupsd.Notifier interface. The cups interface does not.

It primarily seems to be used for asynchronous job and printer status changes. It doesn’t seem to be used by libcups, and I don’t see it used by e.g. GTK’s print API. It seems to have mainly seen use in things like Unity 7’s indicator-printer to provide completion notification independent of the app that submitted the job. I think it can safely be ignored for a “print job submission only” interface.

@jamesh, @ijohnson, the jobs are passed on from the proxy CUPS to the system’s CUPS by the special CUPS backend proxy. This backend not only passes jobs into system queues which are not shared, but it also passes on the user ID of the user who originally submitted the job to the proxy. The backend receives this ID as its second command line argument (see man backend), and before it creates the job on the system’s CUPS (like the lp command does) it calls cupsSetUser(argv[2]) to create the job with this user ID.
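The argument handling can be sketched as follows (the queue name and submission command are placeholders; the real backend uses libcups and cupsSetUser() rather than shelling out, and the faked arguments only stand in for what CUPS would pass):

```shell
#!/bin/sh
# CUPS invokes a backend as:
#   backend job-id user title copies options [file]
# (see man backend). Fake the arguments CUPS would pass, for demonstration:
set -- 42 alice "test page" 1 "" /dev/null

JOB_ID=$1
ORIG_USER=$2   # the user who originally submitted the job

# The real proxy backend calls cupsSetUser(argv[2]) via libcups before
# creating the job on the system's CUPS; with the lp command the
# equivalent would be (SYSTEM_QUEUE is a placeholder):
#   lp -U "$ORIG_USER" -d "$SYSTEM_QUEUE" "$6"
echo "re-submitting job $JOB_ID as user $ORIG_USER"
```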

@jamesh, @ijohnson, allowing the plugging application to also subscribe to CUPS’ notifications should not create any additional security risk, as there is no way to issue administrative actions on the CUPS daemon via D-Bus. Allowing this only gives the applications additional liberty in handling printing. So I am in favor of the cups interface also supporting this. Alternatively, one could create a separate, also auto-connecting cups-dbus interface and limit the cups interface to non-admin domain socket access.

Okay, so this works because the jobs end up being submitted via TCP rather than the UNIX socket, so there is no possibility of checking peer credentials. So the real server will have to trust whatever user name you put in the IPP request.

Are there many systems where TCP connections may be restricted? I can definitely see references in the Debian packaging change log of updating scripts to deal with cases where TCP has been disabled and cupsd is only listening on the UNIX socket. It’s not at all clear how common this might be though.

@jamesh, the transfer from the proxy CUPS to the system’s CUPS is not necessarily done by TCP, it is done via the access information supplied on the command line of cups-proxyd. cups-proxyd creates the queues on the proxy CUPS using the proxy backend with the access information for the system’s CUPS in the device URI. So the transfer happens via the domain socket in most cases, as usually a distribution’s CUPS has at least the domain socket active, but if the distribution’s CUPS only listens on TCP, the transfer still works.

Looking at the code, it seems that the username passed in the IPP request is only used as a fallback if there is no username from HTTP auth:

And for unix socket connections, this would come from peer credentials.

@jamesh, should I perhaps then make the proxy backend owned by root, so that the proxy CUPS runs it as root, and have the backend drop privileges to the user who originally sent the job before it creates the job? As the application snap and the snapped proxy CUPS run on the same system, the user should be available to the proxy CUPS and with it to the proxy backend.

Snapd’s strict confinement sandbox would likely prevent that kind of setuid use.