Snap interface for /dev/shm

To take a more direct route, I was looking for information on how to [modify or] disable the AppArmor profile.

I see that niemeyer, at Shared memory in /dev/shm rewriting ,
specifies that the enforcing code is

/{dev,run}/shm/snap.@{SNAP_NAME}.** mrwlkix,

but unfortunately I do not see that rule anywhere under /etc/, and I find it instead inside /usr/lib/snapd/snapd,
which is an ELF binary (not, as I had hoped, a plain-text configuration file). Pretty difficult to edit.

I suspect there should still be a way to disable the AppArmor profile?

@mdp

The user may easily want to access any file in /dev/shm

The whole point of the snap sandbox is to prevent that access :grin: You can disable snap sandboxing by installing gimp with: sudo snap install gimp --devmode EDIT: Don’t do this unless you want to be immediately hacked by @lucyllewy :joy:

If you do this, don’t complain when I inevitably turn evil and embed something into gimp that does nasties with all your files… :wink:

Seriously, though, --devmode is not meant as a bypass, it is meant as a debugging thing for developers. End users shouldn’t install snaps with that flag.


I would really like --classic to function as a switch that disables security confinement for non-classic snaps.

For the record, I absolutely don’t recommend doing this, but if someone is desperate enough to manually disable the AppArmor profiles, there is no other way.

The whole point of the snap sandbox is to prevent that access [to /dev/shm] :grin:

But can I ask why? I could not locate a document that justifies this purpose… Temporary files are most appropriately stored in volatile memory; having your applications work on, and hence of course access, files in /dev/shm is the most normal workflow I can think of, second in normality only to saving final files on persistent storage. I cannot see what alternative workflow the project designer had in mind, nor what risk he was trying to curb.

If you do this, don’t complain when I inevitably turn evil and embed something into gimp that does nasties with all your files… :wink:

Daniel, all applications access most of the filesystem, surely including files that I certainly do not want compromised, but this is the *nix model… unless I created one user per class of files… Do packages access /home by default? Then any of them could go rogue and vandalize! Of course, if anything like that happened it would be dire, but word would spread pretty fast. Now, in the case of image processors or browsers, it is very natural that they are made to access, for example, mounted storage (the potential bulk of their intended data), which makes them an accepted threat by their very nature…

In snap they are stored in volatile memory too, but under /dev/shm/snap.@{SNAP_NAME}.* to prevent other apps from accessing a snap’s files, and to prevent it from accessing the files of other apps.

Of course not. Otherwise sandbox wouldn’t make sense.


Of course not. Otherwise it wouldn’t make sense

Great, interesting, it is starting to make sense. In fact, I see that ‘home’ is in the list of interfaces.
But wouldn’t it make even more sense to give the ability to whitelist access to a list of directories?
As in: allow GIMP (for example) access to /media/images, /dev/shm/temp_images, /home/mypictures …
I have known snap for less than a day, so I am still in the process of getting to know and understand it, but I understood that the granularity of access stops at “big” categories (not at specific directories).

BTW, this would solve my case, because I would add some /dev/shm/shared to the whitelist and be happy thereafter.

BTW, this would solve my case, because I would add some /dev/shm/shared to the whitelist and be happy thereafter.

By the way (2), I wanted to propose the introduction of a special directory

/{dev,run}/shm/snap.shared/.**  mrwlkix,

(I am not sure if the syntax is correct.)
I wanted to, but have not: at this stage, I am not sure what the soundest proposals I can make are.
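For reference, a hedged sketch of what such a rule might look like in AppArmor syntax. This is unverified; the path and permission string simply mirror the existing per-snap rule quoted earlier in the thread:

```
# Sketch only: a hypothetical rule for a shared snap directory, modelled
# on the per-snap rule  /{dev,run}/shm/snap.@{SNAP_NAME}.** mrwlkix,
/{dev,run}/shm/snap.shared/** mrwlkix,
```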

We are researching something that will offer a feature for user-driven customization of access into individual directories, similar to what you propose. It will still take a while as we have other high priority features in front of it, but it may show up at some point.

Not clear what that gives you since you didn’t describe the semantics. But in general, any directories that are shared by default break the isolation across snaps in ways that cannot be anticipated, as others pointed out above.


Having /dev/shm as a workspace seems quite atypical. It sounds like you are using this as /tmp?

Perhaps this helps (note that while it is titled “Ubuntu Core”, it covers also the basics of snaps and their security model) ?


Having /dev/shm as a workspace seems quite atypical. It sounds like you are using this as /tmp?

Nice to meet you, Niemeyer,
that is the idea, but /tmp, compared to /dev/shm, is slower and uses the hard drive, which is unnecessary and often bad practice. The RAM disk is as fast as you can get and puts no strain on the disk.
Every time I have to operate on files that require processing and will be discarded afterwards, I place them in /dev/shm (“normal practice”). You would not write them to disk, because that is unneeded, right? The same goes, especially, for files that will be updated very frequently and need to be processed at the highest speed (“bounded practice”).

The occasion that triggered this thread was simply the conversion of a .png into a .jpg; the .png was a fresh disposable file which I had in /dev/shm . This is a case of normal practice.

But I wish to give a couple of examples that may show usual procedures where using some space in /dev/shm, shared between applications, is bounded even more than normal.

Suppose you have to populate an SQLite database. Some automation will impose thousands or millions of commits on it, as fast as possible: you place it somewhere in /dev/shm, and store it on the hard drive only after completion. Well, you could rename it /dev/shm/snap.myIDE.myDB — an acceptable hassle, especially if you have created a renamer for these cases and placed it in the popup menu of your desktop manager (although not modifying names, and placing files under their original names in specific directories, is better: the DB you created is not for the “myIDE” application, it is prepared for other ones to use; its name will probably be fairly constant and in general unrelated to “myIDE”).
Now, you may also want to check periodically what is happening while the populating process is running, with a GUI-based database manager. If the populating process, “myIDE”, and the database browser application, “myDB-GUI”, are both snap-based and observe the current segregation model, they cannot access the same file: not simultaneously, and not reasonably even sequentially, since that would mean renaming the file constantly, or finding some other hack.
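A minimal sketch of that two-tool pattern in Python, for concreteness. The file name "scratch_demo.db" and the table schema are hypothetical, and the script falls back to the ordinary temp directory when /dev/shm is unavailable:

```python
import os
import sqlite3
import tempfile

# Use the RAM-backed tmpfs when present, else the regular temp dir.
workdir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
db_path = os.path.join(workdir, "scratch_demo.db")
if os.path.exists(db_path):
    os.remove(db_path)

writer = sqlite3.connect(db_path)        # the "myIDE" populating process
writer.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")
writer.executemany("INSERT INTO t (v) VALUES (?)",
                   [("row%d" % i,) for i in range(1000)])
writer.commit()

# A second, independent tool (the "myDB-GUI" role) opens the same path.
reader = sqlite3.connect(db_path)
count = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.close()
reader.close()
os.remove(db_path)
```

With snap confinement, the second connection would fail if the two tools are separate snaps, because neither can open the other's /dev/shm/snap.&lt;name&gt;.* namespace.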

The same pattern applies to other similar cases: suppose you are extracting data from markup (XML) files, disposable, placed in /dev/shm for processing, and at the same time (before, during, after) you want to check their contents with a text editor or similar…

You need the ability to have shared spaces in the RAM disk between otherwise segregated applications. I supposed the straightforward way would have been an interface.

The purpose was to create a directory, /dev/shm/snap.shared/, shared by all snap-based applications.
If the original coder did not assume such a directory (he will not even assume snap), and the snap packager will not by regulation take it for granted, is there really a scenario in which a rogue application might peek there?
The application will not place data in it: the final user will.

Right, that’s exactly the purpose of /tmp. The /dev/shm/ directory is supposed to hold content oriented towards IPC via shared memory. More details here:

https://www.kernel.org/doc/gorman/html/understand/understand015.html

Note that this is not just a convention that users follow either. The POSIX shm functions on the standard libc on Linux (e.g. shm_open) will actually refer to /dev/shm/foo internally when one asks for “/foo”.
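This mapping can be observed from Python's multiprocessing.shared_memory module, which wraps the same POSIX calls; on Linux the backing file shows up directly under /dev/shm. The segment name "snapdemo_shm_ex" below is arbitrary:

```python
import os
from multiprocessing import shared_memory

# shm_open("/snapdemo_shm_ex") resolves to /dev/shm/snapdemo_shm_ex on
# Linux; Python's shared_memory uses that same mechanism internally.
shm = shared_memory.SharedMemory(create=True, size=16, name="snapdemo_shm_ex")
try:
    shm.buf[:5] = b"hello"                 # write through the mapping
    backing = "/dev/shm/" + shm.name       # the file the kernel exposes
    exists = os.path.exists(backing)
    data = bytes(shm.buf[:5])
finally:
    shm.close()
    shm.unlink()                           # removes /dev/shm/snapdemo_shm_ex
```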

So if the goal is using temporary files on a tmpfs (same used by /dev/shm), that’s trivially done by adding a line to /etc/fstab. That’s the real location for temporary files, and having those files in memory is a fine choice that many users make (maybe that’s why it’s named tmpfs instead of ramfs?).
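For instance, a single fstab line along these lines mounts /tmp as tmpfs (the size limit is an arbitrary example; pick one suited to your RAM):

```
# /etc/fstab — mount /tmp on a RAM-backed tmpfs
tmpfs   /tmp   tmpfs   defaults,noatime,size=2G   0  0
```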

Otherwise, abusing the incorrect location may easily end up creating that sort of conflict across tools that interpret locations according to their intended purpose.


While I fully agree with that statement, would it actually help in the snap case, given that a snap can’t see the actual system-wide /tmp dir?

If you have working files to work across several applications, the typical place to put them is in the user’s home, or some media location. We have interfaces for sharing these already, and we also are researching user customized lists, as pointed out above.

We also have the content interface, that allows a more explicit and polished sharing across defined snaps.

Several options addressing common and slightly different cases.
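For anyone reading along, a sketch of what a content-interface declaration looks like in each snap's snapcraft.yaml; the slot/plug names, the content tag, and the directory are hypothetical placeholders:

```yaml
# Producer snap's snapcraft.yaml: exposes a writable directory.
slots:
  shared-scratch:
    interface: content
    content: scratch-data
    write:
      - $SNAP_COMMON/scratch

# Consumer snap's snapcraft.yaml: mounts that directory at `target`.
plugs:
  shared-scratch:
    interface: content
    content: scratch-data
    target: $SNAP_COMMON/scratch
```

After both snaps are installed, the two are joined with `snap connect`.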


I thought /tmp is already loaded in RAM on most distros by default.

I’m having the same problem: GIMP cannot access files in my simple RAM disk, which is actually a symlink in my home directory to /dev/shm/Ramdisk

It works fine with all other programs installed through repositories. Maybe the easiest solution is to install GIMP from a repository instead of the snap store? What do you think?