reasoning:
These are CLI tools for downloading, manipulating and serving a 40GB database holding SHA1+count records of compromised passwords. They are designed to be used by developers, typically deploying web applications on headless servers, as well as for testing on their development machines. They may form part of a multiple VM/machine hosting infrastructure; the exact architecture is decided by the end user. Performance is a key design goal due to the size of the data.
The personal-files interface will present too much performance overhead vs the native file system for a high throughput database server like this.
Similarly, the network-bind interface will neither allow the user to use my tools to serve the database in a way that suits their architecture, nor will it allow the tools to perform at their full performance potential.
Therefore I am requesting classic confinement.
Also, in a related request. I would prefer for the command names to be
hibp-download
hibp-server
etc
and not
hibp.download
hibp.server
as this will make them consistent with the more common Linux naming conventions and will match the names in other packaging contexts. Could I please request an alias to effect that?
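(For what it’s worth, users can already create such aliases manually with the snap CLI; I am asking for store-granted auto-aliases so this is the out-of-the-box behaviour. A manual sketch, assuming the snap is named hibp:

$ sudo snap alias hibp.download hibp-download
$ sudo snap alias hibp.server hibp-server
)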
I understand that strict confinement is generally preferred over classic.
What makes you think the personal-files interface would have any impact on performance? The only time this could have any impact is at snap install time when the AppArmor profile gets generated, but surely not at runtime (this is the reason AppArmor is picked: you have “normal” filesystem access, there is nothing in the way).
Can you elaborate on why network-bind would not be sufficient? It doesn’t do anything apart from allowing execution of the syscalls to create sockets; there is no further processing or overhead here compared to unconfined execution.
Also note that fitting into one of the supported categories is a hard requirement for getting classic granted
Then they will also get an error, unless I manage to pre-guess what their desired architecture is?
My tools never access files, interfaces or ports without the user explicitly specifying those, just like every UNIX cli utility ever made. But I cannot guess how they will be used, because I will get it wrong, and the user will be frustrated and give up.
Is this kind of “normal behaviour” just incompatible with what a snap can be?
The denial of opening privileged ports like 443 for unprivileged users is a kernel feature and totally not snap related.
I’m curious how this could even work when your app is not snapped; providing any service on port 443 should always require root permissions regardless of the packaging of your app, else your kernel has a pretty serious security hole…
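(For reference, the privileged-port threshold is a plain kernel tunable, entirely independent of snapd; on any reasonably modern kernel you can inspect, and even lower, it with sysctl:

$ sysctl net.ipv4.ip_unprivileged_port_start
net.ipv4.ip_unprivileged_port_start = 1024
)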
Since your app does not fit any supported category and as such can not be granted classic I’d suggest moving forward with strict confinement and using the snappy-debug tool with it to sort all confinement issues you encounter.
You can open a new topic in the Snapcraft category and attach the output of snappy-debug to get help with fixing confinement issues you do not understand
Perhaps you need to either re-read what I wrote, or recheck what you are claiming.
root, i.e. sudo as above, can certainly open privileged ports.
And that is one perfectly reasonable mode of operation of these tools. And that is precisely my point.
That is why “strict mode” can not ever work, as far as I understand.
So, I will not move forward with strict mode. I will move on to a different packaging/distribution solution. Because what you have demonstrated is that snaps can never be “servers” or general command line utilities that need any privileges which the user directly conveys to them by virtue of his privileges on the system.
Perhaps you, as in the snapcraft community, could reconsider your “categorisation system”, such that privileges conveyed to a snap on the command line can be wielded by that snap. There is an entire class of snaps which you are apparently blocking out here?
Oops, yeah, I missed the sudo in your command, but regardless of this, the network-bind interface grants your apps full access to achieve opening ports.
If you call the snapped command like above, does it not open the port when network-bind is defined and connected? That would be a bug and would require more info, e.g. the snappy-debug logs…
There are literally thousands of servers in snaps in the store, from tinkering developer snaps up to enterprise software, and they all work fine under strict confinement. Strict confinement doesn’t limit any functionality unless there is an interface missing/non-existent for a certain task; network-bind definitely provides the bits needed for what you describe and is being used all day, all over the place…
I don’t want to waste too much more of your or my time.
In strict confinement with the “network interface”, I have to pre-specify which addresses and ports the app is allowed to bind to, right? And the directories where the user can put his files, which my utilities downloaded or produced for him?
That’s like saying ls somedir cannot list that directory because the author of ls didn’t guess that you might want to access that directory. Or bind to that interface, or that port…
If that is correct that’s just not going to fly for me. It would be so counter to the spirit of “55+ years of unix command line” that it would be nothing but confusing and reflect poorly on my software.
No, you seem to completely misunderstand how confinement works: you do not have to specify anything at all. The network-bind interface operates on a kernel level and restricts the execution of syscalls; once the interface is connected, the syscalls are granted and your app works as expected.
Have you actually tried packaging it strict before asking for classic? I wonder how you got the impression you’d need to pre-specify anything (for a few of the more complex interfaces you do have to specify additional parameters, but for the vast majority defining them in your snapcraft.yaml and making sure they are connected at runtime is sufficient)
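As a sanity check you can always list what is actually connected at runtime; for a snap named hibp (illustrative), the output looks roughly like:

$ snap connections hibp
Interface     Plug               Slot           Notes
network-bind  hibp:network-bind  :network-bind  -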
I looked at the interfaces, and they all talked about specifying ~/.my-app-dir in the snapcraft.yaml, and similar for addresses/interfaces. That was enough for me to conclude that this doesn’t appear to work for these kinds of utilities. So I asked for --classic.
Maybe you want to try to tailor your docs a little more to this class of “server app” if that’s what you want on snap?
To be honest your attitude hasn’t helped, lecturing me about privileged ports… and being wrong. There is nothing worse than being arrogant and wrong.
I am not highly motivated to proceed with snap. Lots of apparently better alternatives. I might get back to it at some point…
So if I want to put ls in a snap, I have to list every directory whose contents ls might want to display? That’s what this documentation appears to suggest at first glance.
I don’t pretend that I spent hours trying; life is too short. If the docs say “prob not”, go somewhere else?
I remain unconvinced that this is not the case. And I don’t have the time or inclination to spend hours trying to hack my way into reading or writing some files, which the user has the privileges to read/write.
personal-files is meant as an exception for home, not a replacement for it. Have you tried adding home? You’d then be able to write into most directories (the exception is top-level hidden files and folders). Stuff like $HOME/downloads, $HOME/documents, $HOME/videos, etc., would be entirely accessible without defining them one at a time.
You don’t need home at all to write to $SNAP_USER_COMMON or $SNAP_USER_DATA. You could also consider, from a root daemon, downloading into $SNAP_DATA (system level) and serving the content from there, so your 40GB file is only downloaded once for however many users are on the machine; that root daemon can then also open up port 443, whilst still being confined.
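A rough sketch of what that daemon could look like in snapcraft.yaml (app and binary names are illustrative, not taken from your repo):

apps:
  server:
    command: bin/hibp-server
    daemon: simple          # runs as root, still confined
    plugs: [network, network-bind]

The daemon can then write under $SNAP_DATA (/var/snap/<snap>/current) and bind 443, all without classic.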
Whilst I can appreciate “use strict first” can come off as strong advice, we as a community have to be able to justify why we’re giving admin access to people. Once you have classic, nothing stops you updating the snap and deleting the user’s entire drive, intentionally or by mistake. That level of trust can’t be justified on simply “users might have to manually run mv once and the other options are insufferable”, especially where the attempt hasn’t been made in the first place.
Take a look at the QHTTP snap, which just wraps the Python built-in web server. It doesn’t have home and can open up ports just fine without admin (1024 or higher). I actively tell people to put the files in a specific place because it’s easier than having to guess where they might put them, and more secure too.
It can also be run with sudo and use port 80.
It’s probably the smallest example of networking I could give, being literally smaller than a single hard-drive block.
Your individual snap might be more complicated; snappy-debug will help if it’s doing something unusual. However, from an overall architecture POV, opening a webserver with content isn’t unusual and certainly not impossible to do securely.
The reason I don’t give home in that snap is explicitly because I don’t want it to be able to accidentally expose the user’s drive unexpectedly should they use it wrong. I don’t see “use a specific directory” as a weakness but an advantage. There is zero confusion on where to put the files; there is one answer. (Or two, if you need ports below 1024.) But there’s nothing actually stopping me (aside from not declaring the permission) from letting the user pick randomly in their $HOME, except that it can’t be a top-level hidden file/folder, which is what personal-files handles.
What options? The network-bind interface has no options and has not changed in 10 years…
There is nothing to document about it which is why the documentation has not had to be touched in years…
You simply define it in your yaml and that’s it; it is a simple toggle that tells the kernel to allow the bind and socket syscalls, nothing else… It even connects automatically
That may be true for “autostart” or other “desktop” apps…
but I don’t see some, IMO necessary, differentiation between “GUI, autorun, or click-to-run apps” and CLI sysadmin apps, which are explicitly and manually run. When the user tells the latter category of app to “DO X” and it refuses, that is just obstructive. He might run it as sudo, and it will still refuse. That is just silly. sudo can do anything; that’s a rule that should not and must not be violated.
And anyway, to “delete the drive”, wouldn’t the user need to run the app with sudo? Does a --classic app get sudo when you don’t run it as sudo? Please tell me that’s not the case, as that would be super broken?!
This does seem like a “pretend security model” to me. Aimed at people clicking in GUIs. Sysadmins have root; they need to know what they are doing, and if they don’t, they may as well copy-paste:
sudo rm -rf --no-preserve-root /
Snap doesn’t seem suited to sysadmin tools at all, IMO. Those really should be installed via apt or rpm or pacman; that still seems to be the truth. The whole “dependency hell” argument for snaps also doesn’t really apply to sysadmin tools; they tend to use few and stable/old dependencies. The “server and cloud” category seems super empty, not “thousands”. I wonder why? People just use other tools.
The only snaps I have ever installed manually are --classic. Things like cmake, because I wanted the latest version.
It’s not true though, even on other distributions where snaps aren’t commonly used. Fedora will use SELinux to similar effect. If you install Apache2 with DNF, it runs as root, but that root doesn’t get to just do anything and go anywhere. Have a look here for this in the wild:
https://community.spiceworks.com/t/selinux-blocking-httpd/321982
Explicitly as an example, Apache2 in Fedora has a predefined list of paths it can access. It’s not functionally much different from what Snap is enforcing.
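(e.g., that list of paths is visible and extensible with the standard SELinux tooling; to allow httpd into an extra directory you’d run something like the following, with /srv/mydata just being an example path:

$ sudo semanage fcontext -a -t httpd_sys_content_t "/srv/mydata(/.*)?"
$ sudo restorecon -R /srv/mydata
)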
No, classic apps don’t automatically run as sudo.
They do automatically update though, and updates may define a systemd daemon that runs automatically, and that daemon runs as root, unconfined. Nothing stops a classic snap from doing this beyond trust in the publisher.
One of my snaps is used in a pretty professional setting, being university-level education in physics. It’s had development for 30 years and is built by qualified scientists with PhDs.
Unfortunately, whilst trying to replace X11 and deciding upon a WebGUI as the future-proofed tech of choice, the app embedded an HTTP server in itself to facilitate those requests. All that peer review failed to catch a trivial authentication bypass, which meant that for two years, running this tool as a normal user was enough to enable remote code execution.
That applied to every package format on every OS. But it didn’t apply equally to all, because for the users who used it as a snap, access to secrets like those in .ssh or .config was cut off. I don’t consider it an amazingly practical success case, because there’s still a lot of damage that can be done with $HOME. However, that RCE was also cut off from the network drives etc. of any users who used it. The sandboxing objectively provided some guarantees in a worst-case scenario that wasn’t expected to happen but did. We can argue how likely hackers are to try to turn on someone’s webcam and spy on them, but they couldn’t on the snap if they tried, root or no root.
Everything is security theater until it suddenly isn’t. IT spent weeks trying to get people off of the affected versions, and upstream had to revert that functionality in its entirety to redo the authentication in place, 2 years after the problem already existed.
On a more practical note, you don’t need to re-define plugs as you do on line 41, because you’re not actually adding any config. Under each app:, you can add plugs: and just use home, network, and network-bind without having to give them custom names and attributes.
Your server app then appears to be missing network; I’m not sure network-bind is a superset, and I think you do genuinely need both for that component.
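Roughly, something like this sketch (app names guessed from your command names, paths illustrative):

apps:
  download:
    command: bin/hibp-download
    plugs: [home, network]
  server:
    command: bin/hibp-server
    plugs: [home, network, network-bind]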
You can also remove the patchelf stuff; it’s needed for classic snaps, not strict ones. Your definition on GitHub is still using strict, and whilst it (probably) won’t be causing any issues, it can cause issues, and there’s no gain in doing it. The preferred route for strict snaps is the $LD_LIBRARY_PATH variable because it works more reliably than binary patching.
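i.e., instead of patchelf, something along these lines in the yaml (the exact paths depend on where your libraries land inside the snap):

apps:
  server:
    command: bin/hibp-server
    environment:
      LD_LIBRARY_PATH: $SNAP/lib/x86_64-linux-gnu:$SNAP/usr/lib/x86_64-linux-gnu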
Otherwise, I don’t see any reason your snap would struggle to serve files from $HOME (as long as it’s not e.g. $HOME/.ssh/) on ports 1024 or higher; you wouldn’t need sudo or classic.
And if people did use sudo, the effect should be similar to the QHTTP snap: $HOME now becomes /root, files are served from there instead, and you’d be able to use e.g. port 80 or 443. However, root doesn’t own your files in /home/$USER, and would have to store the files in root’s home instead.
To me, the best design for this app, given the data being 40GB and the presence of an HTTP server, and abiding by the intended snap sandboxing philosophy, would be to make the HTTP server a daemon, store the downloaded files in $SNAP_DATA, and serve them from there. Admins can control the service with snap start and snap stop, so the missing piece is just a bit of documentation encouraging them to do so.
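e.g. (service name illustrative):

$ sudo snap stop --disable hibp.server    # stop now and don't start at boot
$ sudo snap start --enable hibp.server    # re-enable and start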
A common design philosophy in similar apps is that the app doesn’t even bind to anything other than localhost, and explicitly demands that another server deal with network access beyond that, so that the main app can run with reduced permissions whilst the more complex security bits are offloaded to a component with more eyes on it, i.e., a reverse proxy.
I appreciate that ultimately peoples philosophies on how software should work is personal and a lot of it comes down to experiences and individual needs, however, with my IT admin hat on, it doesn’t seem unusual to me at all that a server is opinionated on paths, or that it refuses to use certain ports.
Try running postgresql as root. It’s hard. Chrome/ium will tell you off and actively fight it too.
Ewww… Sorry, that’s horrid. Please don’t let publishers do that! It’s remote privilege escalation, pure and simple! Considering uninstalling my --classic snaps right now.
I have been administering *nix machines in datacenters for 25 years. For sysadmin use, this security model makes no sense to me. On one hand you are refusing to let the user do what they type, even when they explicitly specify a path or other resource as root. And you, a package distribution service, are justifying removing this control from the user and the publisher because, on the other hand, you are bandying about remote privilege escalation…?!
Maybe this makes sense on the desktop, I don’t know? Even though I have also been running Ubuntu on the desktop for 25 years, I am not really as interested in or informed about what goes on there. But for the datacenter, this is not cool, IMHO.
How a service like this should typically be run, IMO, and how sysadmins of all the great daemons have been doing it for decades: explicitly start it, optionally as root; grab the, optionally privileged, resources the daemon needs; su to an unprivileged user and drop to the background (sketched below). Usually the daemon’s files would reside in a dedicated directory, typically under /var somewhere, and be owned by the user that the daemon runs under. Again, the sysadmin should be able to change that, and I have done so on many occasions. E.g., IMO, it is not really wise to let nginx own the php scripts it executes, even though that is usually the default config.
That is how, let me see, nginx and apache work by default on any *nix system I have ever administered. It’s also how I have written and/or set up dozens of small in-house daemons. Once running, these daemons should not be “automagically” stopped, restarted, or upgraded, or have their privileges remotely escalated, unless the sysadmin, as root, explicitly says otherwise, or has explicitly scheduled some periodic service to do so. And when the sysadmin wields root, they can wield it unconstrained, and not be stopped by the package distribution service or the publisher. With great power comes great responsibility…
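In pseudo-shell, the pattern I mean (the flags are illustrative, not my actual CLI; the privilege drop happens inside the daemon after it has bound the port):

$ sudo hibp-server --bind 0.0.0.0:443 --data /var/lib/hibp --user hibp
# binds 443 as root, setuid()s to 'hibp', then forks to the background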
Can I do that with a snap? I suspect the answer is “no”? Maybe we need --normal or --sysadmin. A serious suggestion: a separate, much simpler security model for snaps for the datacenter.
“Do exactly what you are told to do, and nothing more. Don’t get in the way, and don’t do automatic stuff. Principle of least surprise.”
I will respond separately to your practical comments, which are helpful.
Can’t quite remember why I added that; I think it was because it couldn’t find runtime libs. But yes, it seems to work without.
However, what does bother me still, is this snappy-debug output when I run one of the apps:
$ sudo journalctl --output=short --follow --all | sudo snappy-debug
= AppArmor =
Time: Nov 16 06:38:34
Log: apparmor="DENIED" operation="open" class="file" profile="snap.hibp.download" name="/snap/core24/609/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33" pid=519162 comm="hibp-download" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
File: /snap/core24/609/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33 (read)
Suggestion:
* adjust program to read necessary files from $SNAP, $SNAP_DATA, $SNAP_COMMON, $SNAP_USER_DATA or $SNAP_USER_COMMON
# ... and about 20 similar entries..
Clearly it is able to load these .sos because it wouldn’t run otherwise. And that path is “inside the snap”, so surely AppArmor should be fine with that?
So why the noise? How can I “shut it up”?
Is this related to LD_LIBRARY_PATH? Like it’s trying a path which is forbidden and fails, and then one which is permissible and succeeds, but the former triggers the log entry?
CORRECTION: removing patchelf fixed this issue. Thanks!
I agree with daemonisation, and I see there is some support for that. However, the point here is that you first hibp.download (download and convert to binary format)… then perhaps hibp.topn or otherwise manipulate the database for your needs, and then hibp.server.
This requires passing arbitrary file names and/or using pipes at each stage. I don’t currently see how I can do that with the snapcraft confinement and daemon options
Not sure $SNAP_DATA would work, for the same reasons: it is not clear to me how interactive file manipulation would then pass the final file “into the snap and $SNAP_DATA” so that the daemonised server can use it (see the sketch below).
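Concretely, the kind of interactive flow I mean (file names and arguments illustrative):

$ hibp.download all.sha1.bin         # fetch, convert, write wherever I say
$ hibp.topn all.sha1.bin topn.bin    # derive a smaller working set
$ hibp.server topn.bin               # serve exactly that file

where each stage must read and write paths of the user’s choosing.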
I can see some effort has been made to support some of this, but again, it is frustrating that I cannot “just do it with existing techniques”. I have an executable, I have a filesystem, I know how to set up and run a service, I know how to make it su to a lower user. I, the user, am root; just let me do these things. I can destroy my own machine, in a million different ways, easily. You are not protecting me from anything. “Confinement” is just making it all harder by forcing me to work through an abstraction layer which is not adding anything to my life. --sysadmin please?
I agree this is common and my preferred way, but other sysadmins may have a different opinion or architecture.