I’m working towards a demo/prototype that will give us more information about how to proceed with OpenGL/OpenCL/Vulkan/VDPAU/CUDA runtimes, so that snap applications can benefit from drivers that were created after the snap application itself was built.
Currently snapd contains a bit of code for special support of the NVIDIA user-space libraries provided by the host. That mechanism is fragile and complex, and it only works with NVIDIA. We’d like to expand it into something more generic while making it simpler to support at the same time.
I’ve been conducting some research over the past few days and I’d like to share my straw-man plan.
GPU Support Proposal
The snapd team makes two prototype snaps, snapd-nvidia-418 and snapd-core18-mesa, for the amd64 architecture
later on: additional NVIDIA driver versions
later on: support for i386 architecture
later on: support for core16
later on: support for Wayland
later on: support for Radeon Pro drivers via snapd-core18-amdpro
The prototypes provide a gpu-support slot with the following directories:
opengl-runtime - ships libGL.so and its dependencies
opencl-runtime - TBD
vdpau-runtime - TBD
vulkan-runtime - TBD
cuda-runtime (NVIDIA-specific) - TBD
libraries common across these runtimes are placed in a dedicated shared directory
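To make the slot layout concrete, here is a hypothetical sketch of what snapd-nvidia-418 could look like if gpu-support were modelled on the existing content interface. The snap name, base, content tag, and directory names are all assumptions drawn from this proposal; none of this exists in snapd today:

```yaml
# Hypothetical snapcraft.yaml fragment for a GPU runtime snap.
# All names here follow the proposal above and are assumptions.
name: snapd-nvidia-418
base: core18
slots:
  gpu-support:
    interface: content
    content: gpu-support
    read:
      - $SNAP/opengl-runtime   # libGL.so and its dependencies
      - $SNAP/opencl-runtime
      - $SNAP/vdpau-runtime
      - $SNAP/vulkan-runtime
      - $SNAP/cuda-runtime     # NVIDIA-specific
      - $SNAP/common           # libraries shared across the runtimes
```

A real gpu-support interface would presumably add extra semantics on top of plain content sharing, but the directory layout would stay the same.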
snapd probes for the presence of a loaded NVIDIA kernel driver
there is no need to handle PCI IDs or ask the store for anything
the host system provides the kernel driver and udev rules
snapd installs either snapd-nvidia-418 or snapd-core18-mesa accordingly
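The probing step could be as simple as checking /proc/modules for the nvidia module. A minimal sketch, assuming the snap names from this proposal (the MODULES_FILE override exists only to make the sketch testable):

```shell
#!/bin/sh
# Hypothetical sketch: decide which runtime snap to install based on
# whether the NVIDIA kernel driver is loaded. Snap names are taken
# from the proposal and are assumptions, not shipping code.
pick_runtime_snap() {
    modules_file="${MODULES_FILE:-/proc/modules}"
    if grep -q '^nvidia ' "$modules_file" 2>/dev/null; then
        echo "snapd-nvidia-418"
    else
        echo "snapd-core18-mesa"
    fi
}

pick_runtime_snap
```

In practice snapd would do this in Go, but the decision itself needs nothing more than the loaded-module list, which is why no PCI ID handling or store query is required.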
Cooperating application snaps provide a gpu-support plug
snaps using classic confinement are not supported yet
when disconnected, bundled libraries are used, as today
when disconnected, NVIDIA from the host works as it does today
when connected, startup logic orders the directories from the gpu-support connection ahead of $SNAP_LIBRARY_PATH and the internally bundled libraries in LD_LIBRARY_PATH.
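The connected-case ordering could be sketched as a launcher fragment. GPU_DIR (the mount point of the connected gpu-support content) and the runtime directory names are assumptions taken from the list above:

```shell
#!/bin/sh
# Hypothetical launcher sketch: when gpu-support is connected, prepend
# the runtime directories ahead of the snap's own library path.
# $SNAP/gpu as the mount point is an assumption for illustration.
GPU_DIR="${GPU_DIR:-$SNAP/gpu}"

build_library_path() {
    path=""
    # Collect whichever runtime directories the connected slot provides.
    for d in opengl-runtime opencl-runtime vulkan-runtime vdpau-runtime; do
        if [ -d "$GPU_DIR/$d" ]; then
            path="${path:+$path:}$GPU_DIR/$d"
        fi
    done
    # Runtime directories come first, then the snap's own library path.
    echo "${path:+$path:}${SNAP_LIBRARY_PATH:-}"
}

LD_LIBRARY_PATH="$(build_library_path)${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
```

Because the dynamic linker searches LD_LIBRARY_PATH left to right, the host-matching driver libraries win over anything the snap bundled, while a disconnected plug leaves the path untouched and the snap behaves exactly as it does today.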
Perhaps yes but I wanted to focus on something that is explicitly opt-in before we venture further:
We could define the plug automatically from snapd on all the snaps that have one of the other plugs (e.g. opengl). We could then use the new wrapper replacement (forgive me, the name of the feature eludes me now), where snapd would inject the GPU-aware LD_LIBRARY_PATH changes automatically.
The downside of doing it like that is that it happens without cooperation from the snap developer and without a way for users to opt out. Perhaps that’s okay, but I wanted to avoid it for now.
Perhaps i386 is really popular, but I personally think seeing this supported on the Jetson Nano (which is arm64) would be more useful/interesting than i386. Just my 2 cents
The NVIDIA binary driver distribution does not support ARM, AFAIK. Perhaps there’s a separate bundle (and I’d love to get a Jetson Nano, but it is not available yet), but this needs to be investigated with an actual device in hand.
Actually, I’m not sure this is useful: with the Nano you’re typically just using CUDA, not the NVIDIA driver itself (think headless IoT systems), so I don’t know whether having the NVIDIA driver available to snaps running on the Nano is super useful. The one example I can think of where using the NVIDIA driver itself would be useful is digital signage, where you use the GPU on the Nano to drive a 4K digital signage display or something.
The main motivation of this is to ensure application snaps have high longevity by allowing them to work on hardware made later than the application itself. A common example would be injecting updated Mesa into the execution path of an old game so that it can run on future Intel GPUs.
+1 here. Note that the CUDA kernel drivers for the Jetson devices are fully open source and included as part of the kernels provided by NVIDIA, although, unsurprisingly, the userspace libraries are closed. With the latest changes I added to the OpenGL interface, CUDA applications can run on it; however, this bug interferes when using UC18:
The Jetson Nano is already available (and so cheap!), so it would probably be worth getting one. We already have a Core image for it too.
I would also like to mention that the Nano is most probably going to become the RPi of AI/ML…
On the GL side, I just want to mention that we’re bundling the Mesa userspace libraries with both “application” snaps and “server” snaps (both confined and classic).
Actually, I am currently working on a project that requires full-screen UI support for a kiosk point-of-sale system, and we are using the Nano. I really need this to work.
Hey @zyga-snapd and team – really interested in this topic for some of our upcoming work, specifically for the Tegra family of NVIDIA devices. Has there been any more work towards this goal?
I’m commenting because not only OpenGL would be interesting: Vulkan for the RPi 4, which is a very close milestone (and an already-closed one for the RPi 3 via rpi-vk-driver), would open many possibilities. But I see the thread is dead; I guess it’s like a letter to Santa Claus. Otherwise, @zyga-snapd, if you could shed some light on this…
Everyone, this item is not on the current (20.10) schedule. I would love to pick it up but there are higher-priority tasks that I’m assigned to. I will send updates when this changes.
… every user outside the snap ecosystem can use their RPi with Vulkan. I know that IoT and 3D graphics don’t seem to be necessarily linked issues, but the point is that the “opening” of the Broadcom GPU on the RPi lets users use libraries like VkFFT for signal processing (and that is very IoT).
@zyga-snapd, I think this should be reconsidered. It’s just my point of view.
You can always include the libs you need in your snap (see https://github.com/ogra1/omxplayer-snap, which includes the vc4 userspace libs), which is what is typically done for IoT-focused snaps anyway. There might be additional bits needed in certain interfaces, though; if you could try your app and submit the list of AppArmor denials printed by snappy-debug when using VkFFT, these can easily be extended…