Having /dev/shm as a workspace seems quite atypical. It sounds like you are using this as /tmp?
Nice to meet you, Niemeyer,
that is the idea, but /tmp, compared to /dev/shm, is slower and backed by the hard drive, which is unnecessary and often bad practice. The RAM disk is as fast as you can get and puts no strain on the disk.
Every time I have to operate on files that require processing and will be discarded afterwards, I place them in /dev/shm (“normal practice”). You would not write them to disk, because that is unneeded, right? The same goes, especially, for files that will be updated very frequently and need to be processed at the highest speed (“bounded practice”).
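To make the habit concrete, here is a minimal Python sketch. The helper name `scratch_dir` and the fallback to the regular temp directory are my own illustration, not part of any standard API:

```python
import os
import tempfile

def scratch_dir():
    """Prefer the RAM-backed /dev/shm for disposable files;
    fall back to the regular temp directory if it is unavailable."""
    shm = "/dev/shm"
    if os.path.isdir(shm) and os.access(shm, os.W_OK):
        return shm
    return tempfile.gettempdir()

# Disposable intermediate files go here instead of the hard drive.
work = scratch_dir()
path = os.path.join(work, "disposable.dat")
with open(path, "wb") as f:
    f.write(b"intermediate data, never meant to hit the disk")
os.remove(path)  # dismissed once processing is done
```

On a typical Linux system this returns /dev/shm, so the file lives purely in RAM for its whole (short) life.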
The occasion that triggered this thread was simply the conversion of a .png into a .jpg - the .png was a fresh, disposable file which I had in /dev/shm. This is a case of normal practice.
But I wish to give a couple of examples that show usual procedures where using some space in /dev/shm, shared between applications, is even more bounded than normal.
Suppose you have to populate an SQLite database. Some automation will impose thousands or millions of commits on it, as fast as possible: you place it somewhere in /dev/shm and store it on the hard drive only after completion. Well, you could rename it as /dev/shm/snap.myIDE.myDB - an acceptable hassle, especially if you have created a renamer for these cases and placed it in the popup menu of your desktop manager (although not modifying names, and placing files under their original names in specific directories, is better: that DB you created is not for the “myIDE” application, it is prepared for other ones to use - its name will probably be fairly constant and in general not related to “myIDE”).
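A sketch of that workflow in Python, assuming the stdlib sqlite3 module; the file names and the row count are illustrative, and the final destination here is a stand-in for wherever the finished database would really live:

```python
import os
import shutil
import sqlite3
import tempfile

# Build the database in the RAM disk (fall back to the temp dir if absent),
# then copy it to permanent storage only once it is complete.
ram = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
db_path = os.path.join(ram, "myDB-build.sqlite")  # illustrative name

con = sqlite3.connect(db_path)
con.execute("CREATE TABLE IF NOT EXISTS samples (id INTEGER PRIMARY KEY, value REAL)")
with con:  # batch the inserts in one transaction
    con.executemany(
        "INSERT INTO samples (value) VALUES (?)",
        ((float(i),) for i in range(10_000)),
    )
con.close()

# Only after completion does the file touch the hard drive.
final_home = os.path.join(tempfile.gettempdir(), "myDB.sqlite")  # stand-in destination
shutil.copy(db_path, final_home)
```

While the population runs, every commit hits RAM, not the disk; the single copy at the end is the only disk write.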
Now, you may also want to check periodically what is happening while the populating process is working, with a GUI-based database manager. If the populating process, “myIDE”, and the database browser application, “myDB-GUI”, are both snap-based and observe the current segregation model, they cannot access the same file - not simultaneously, and not reasonably even sequentially, since that would mean renaming the file constantly, or finding some other hack.
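What the two applications would need is nothing exotic - just two connections to the same path, one writing, one reading. A sketch, with both roles in a single script for brevity (under snap confinement each side would be a separate application, and it is exactly this shared-path access that the segregation model forbids):

```python
import os
import sqlite3
import tempfile

ram = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
db = os.path.join(ram, "shared-demo.sqlite")  # illustrative name

# The "myIDE" side: populating the database.
writer = sqlite3.connect(db)
writer.execute("CREATE TABLE IF NOT EXISTS t (n INTEGER)")
with writer:
    writer.executemany("INSERT INTO t (n) VALUES (?)", ((i,) for i in range(100)))

# The "myDB-GUI" side: a second, read-only connection to the very same file.
browser = sqlite3.connect(f"file:{db}?mode=ro", uri=True)
rows_so_far = browser.execute("SELECT COUNT(*) FROM t").fetchone()[0]

browser.close()
writer.close()
os.remove(db)
```

The read-only URI connection is the “periodic check” - harmless to the writer, but impossible if the two snaps cannot see the same /dev/shm path.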
The same pattern applies to other similar cases: suppose you are extracting data from disposable XML markup placed in /dev/shm for processing, and at the same time (before, during, after) you want to check the contents with a text editor or similar…
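The extraction side of that, as a small sketch with the stdlib XML parser (file name and contents invented for the example); a text editor would simply open the same path, if it were allowed to see it:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

ram = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
xml_path = os.path.join(ram, "disposable.xml")  # illustrative name

# A disposable markup file dropped in the RAM disk for processing.
with open(xml_path, "w") as f:
    f.write("<records><item id='1'>alpha</item><item id='2'>beta</item></records>")

# The extraction pass.
tree = ET.parse(xml_path)
values = [item.text for item in tree.iter("item")]

os.remove(xml_path)  # dismissed once processed
```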
You need the ability to have shared spaces in the RAM disk between otherwise segregated applications. I supposed the straightforward way to provide this would have been an interface.