I’m setting up my GTX 1060 for my Jellyfin container. I’m currently running it under Podman, but I’m moving it to the Docker snap. I’ve tried both the stable and edge installs.
$ sudo docker run -d --name jellyfin \
    --net=host \
    --volume /home/.jellyfin/docker/config:/config \
    --volume /home/.jellyfin/docker/cache:/cache \
    --mount type=bind,source=/media,destination=/media,ro=false \
    --user 1001:1001 \
    --device /dev/nvidia0:/dev/nvidia0 \
    --device /dev/nvidiactl:/dev/nvidiactl \
    --device /dev/nvidia-modeset:/dev/nvidia-modeset \
    --device /dev/nvidia-uvm:/dev/nvidia-uvm \
    --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools \
    --runtime=nvidia --gpus all \
    jellyfin/jellyfin
docker: Error response from daemon: unknown or invalid runtime name: nvidia.
channels:
latest/stable: 20.10.24 2023-05-25 (2893) 135MB -
latest/candidate: 20.10.24 2023-09-29 (2904) 135MB -
latest/beta: 20.10.24 2023-10-02 (2910) 135MB -
latest/edge: 24.0.5 2023-10-07 (2915) 136MB -
core18/stable: 20.10.17 2023-03-13 (2746) 146MB -
core18/candidate: ↑
core18/beta: ↑
core18/edge: ↑
installed: 24.0.5 (2915) 136MB -
I also tried removing --runtime=nvidia, and got this:
$ sudo docker run -d --name jellyfin \
    --net=host \
    --volume /home/.jellyfin/docker/config:/config \
    --volume /home/.jellyfin/docker/cache:/cache \
    --mount type=bind,source=/media,destination=/media,ro=false \
    --user 1001:1001 \
    --device /dev/nvidia0:/dev/nvidia0 \
    --device /dev/nvidiactl:/dev/nvidiactl \
    --device /dev/nvidia-modeset:/dev/nvidia-modeset \
    --device /dev/nvidia-uvm:/dev/nvidia-uvm \
    --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools \
    --gpus all \
    jellyfin/jellyfin
a1931bf82e62bc391ca595b42227c314aeba9d04e713a7b87f0558d70733e208
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
elbar
October 29, 2023, 1:51pm
2
Docker should have a GUI, but another problem is that it’s heavyweight: if images were built against a defined base package set (think of something like the LSB standard, as in Ubuntu 22.04 LTS), you wouldn’t need a full operating system running in the background, and moving an image to another computer could pull in only the required pieces instead of a whole OS. But that isn’t how it currently works.
Whether Nvidia is supported this way under Docker, I don’t know, but probably not…
jocado
June 27, 2024, 8:09am
3
Hi @YamiYukiSenpai
Bit of a delayed response…
Support for nvidia on Ubuntu Desktop/Server [ non Ubuntu Core ] was only recently added to the docker snap. If you install the revision from the latest/beta channel, you should have a version that works. You also have to make sure the correct nvidia drivers and the nvidia container toolkit are installed on your system.
There is some info available in the readme
I hope it works for you. Please let me know if not, and perhaps I can help.
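The steps above can be sketched as the following commands. This is a hedged sketch: the channel name comes from this post, and the nvidia-container-toolkit package name assumes NVIDIA’s apt repository is already configured on your Ubuntu host.

```shell
# Switch the docker snap to the channel that carries nvidia support
sudo snap refresh docker --channel=latest/beta

# Install the host-side toolkit (assumes NVIDIA's apt repo is set up)
sudo apt-get install -y nvidia-container-toolkit

# Confirm the host driver itself works before testing any containers
nvidia-smi
```

If nvidia-smi fails on the host, fix the driver installation first; no container-side configuration can work without it.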
Cheers,
Just
Thanks so much!
I’ll give this a spin when I can
Do I just run sudo nvidia-ctk runtime configure --runtime=docker, or am I supposed to add additional parameters to the command?
I tried to play a video, but transcoding still seems to be falling back to my Intel CPU, since I can hear it whining.
$ nvidia-smi
Thu Sep 12 02:53:51 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA T400 4GB Off | 00000000:01:00.0 Off | N/A |
| 66% 53C P0 N/A / 31W | 1MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
$ sudo nvidia-ctk runtime configure --runtime=docker --config=/var/snap/docker/current/etc/docker/daemon.json --set-as-default
WARN[0000] Ignoring runtime-config-override flag for docker
INFO[0000] Config file does not exist; using empty config
INFO[0000] Wrote updated config to /etc/docker/daemon.json
INFO[0000] It is recommended that docker daemon be restarted.
$ cat /var/snap/docker/current/etc/docker/daemon.json
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"args": [],
"path": "nvidia-container-runtime"
}
}
}
$ cat /var/snap/docker/current/etc/cdi/nvidia.yaml
---
cdiVersion: 0.5.0
containerEdits:
deviceNodes:
- path: /dev/nvidia-modeset
- path: /dev/nvidia-uvm
- path: /dev/nvidia-uvm-tools
- path: /dev/nvidiactl
env:
- NVIDIA_VISIBLE_DEVICES=void
hooks:
- args:
- nvidia-ctk
- hook
- create-symlinks
- --link
- libglxserver_nvidia.so.550.90.07::/var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
- args:
- nvidia-ctk
- hook
- update-ldcache
- --folder
- /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
mounts:
- containerPath: /run/nvidia-persistenced/socket
hostPath: /run/nvidia-persistenced/socket
options:
- ro
- nosuid
- nodev
- bind
- noexec
- containerPath: /lib/firmware/nvidia/550.90.07/gsp_ga10x.bin
hostPath: /lib/firmware/nvidia/550.90.07/gsp_ga10x.bin
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /lib/firmware/nvidia/550.90.07/gsp_tu10x.bin
hostPath: /lib/firmware/nvidia/550.90.07/gsp_tu10x.bin
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-control
hostPath: /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-control
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-server
hostPath: /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-server
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/bin/nvidia-debugdump
hostPath: /var/lib/snapd/hostfs/usr/bin/nvidia-debugdump
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/bin/nvidia-persistenced
hostPath: /var/lib/snapd/hostfs/usr/bin/nvidia-persistenced
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/bin/nvidia-smi
hostPath: /var/lib/snapd/hostfs/usr/bin/nvidia-smi
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcudadebugger.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcudadebugger.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvcuvid.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvcuvid.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.1
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.1
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-tls.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-tls.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvoptix.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvoptix.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/nvidia/nvoptix.bin
hostPath: /var/lib/snapd/hostfs/usr/share/nvidia/nvoptix.bin
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/X11/xorg.conf.d/10-nvidia.conf
hostPath: /var/lib/snapd/hostfs/usr/share/X11/xorg.conf.d/10-nvidia.conf
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
hostPath: /var/lib/snapd/hostfs/usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/glvnd/egl_vendor.d/10_nvidia.json
hostPath: /var/lib/snapd/hostfs/usr/share/glvnd/egl_vendor.d/10_nvidia.json
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/vulkan/icd.d/nvidia_icd.json
hostPath: /var/lib/snapd/hostfs/usr/share/vulkan/icd.d/nvidia_icd.json
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/share/vulkan/implicit_layer.d/nvidia_layers.json
hostPath: /var/lib/snapd/hostfs/usr/share/vulkan/implicit_layer.d/nvidia_layers.json
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.550.90.07
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.550.90.07
options:
- ro
- nosuid
- nodev
- bind
- containerPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so
hostPath: /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so
options:
- ro
- nosuid
- nodev
- bind
devices:
- containerEdits:
deviceNodes:
- path: /dev/nvidia0
- path: /dev/dri/card0
- path: /dev/dri/renderD129
hooks:
- args:
- nvidia-ctk
- hook
- create-symlinks
- --link
- ../card0::/dev/dri/by-path/pci-0000:01:00.0-card
- --link
- ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
- args:
- nvidia-ctk
- hook
- chmod
- --mode
- "755"
- --path
- /dev/dri
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
name: "0"
- containerEdits:
deviceNodes:
- path: /dev/nvidia0
- path: /dev/dri/card0
- path: /dev/dri/renderD129
hooks:
- args:
- nvidia-ctk
- hook
- create-symlinks
- --link
- ../card0::/dev/dri/by-path/pci-0000:01:00.0-card
- --link
- ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
- args:
- nvidia-ctk
- hook
- chmod
- --mode
- "755"
- --path
- /dev/dri
hookName: createContainer
path: /snap/docker/2963/usr/bin/nvidia-ctk
name: all
kind: nvidia.com/gpu
$ docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
version: '3.4'
services:
jellyfin:
image: jellyfin/jellyfin:latest
container_name: jellyfin
restart: unless-stopped
environment:
- NVIDIA_VISIBLE_DEVICES=all
ports:
- 8096:8096
network_mode: host
volumes:
- /home/docker/jellystash/jellyfin/cache:/cache
- /home/docker/jellystash/jellyfin/config:/config
- type: bind
source: /media/ExtHDD
target: /media/ExtHDD
devices:
- /dev/dri/renderD129:/dev/dri/renderD129
- /dev/nvidia-modeset:/dev/nvidia-modeset
- /dev/nvidia-uvm:/dev/nvidia-uvm
- /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
- /dev/nvidia0:/dev/nvidia0
- /dev/nvidia-caps:/dev/nvidia-caps
- /dev/nvidiactl:/dev/nvidiactl
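For comparison, once the nvidia runtime is registered with the daemon, the same service can request it directly instead of enumerating device nodes by hand. This is only a sketch under that assumption, trimmed to the GPU-relevant keys:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    runtime: nvidia              # assumes the nvidia runtime is registered with dockerd
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    network_mode: host
```

With the runtime injecting the devices and driver libraries, the explicit devices: list should no longer be needed.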
jocado
September 12, 2024, 7:04am
6
I normally don’t change the default, and just run it with --runtime nvidia. If you do that, you don’t have to modify the docker daemon config at all, and everything is set up for you.
But, more importantly, the docker daemon config file you’re modifying isn’t at the correct path. The snap uses /var/snap/docker/current/config/daemon.json
Also, you don’t need the --gpus all, so I would try without that to start with [ and just --runtime nvidia ].
Cheers,
Just
jocado:
I normally don’t change the default, and just run it with --runtime nvidia. If you do that, you don’t have to modify the docker daemon config at all, and everything is set up for you.
$ docker run --rm --runtime=nvidia ubuntu nvidia-smi
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown.
$ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown.
cat /var/snap/docker/current/config/daemon.json
{
"log-level": "error",
"runtimes": {
"nvidia": {
"args": [],
"path": "/snap/docker/2969/usr/bin/nvidia-container-runtime"
}
}
}
I was browsing the GitHub issues and found a command that you asked another person to run:
$ sudo snap logs -n 100 docker.nvidia-container-toolkit
[sudo] password for yamiyuki:
2024-10-17T22:12:27Z systemd[1]: Starting snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit...
2024-10-17T22:12:28Z docker.nvidia-container-toolkit[1239]: Running on Classic system
2024-10-17T22:12:28Z docker.nvidia-container-toolkit[1324]: lspci: Cannot open /sys/bus/pci/devices
2024-10-17T22:12:28Z systemd[1]: snap.docker.nvidia-container-toolkit.service: Deactivated successfully.
2024-10-17T22:12:28Z systemd[1]: Finished snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit.
2024-10-17T22:22:51Z systemd[1]: Starting snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit...
2024-10-17T22:22:51Z docker.nvidia-container-toolkit[5532]: Running on Classic system
2024-10-17T22:22:51Z docker.nvidia-container-toolkit[5564]: lspci: Cannot open /sys/bus/pci/devices
2024-10-17T22:22:51Z systemd[1]: snap.docker.nvidia-container-toolkit.service: Deactivated successfully.
2024-10-17T22:22:51Z systemd[1]: Finished snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit.
2024-10-17T22:23:47Z systemd[1]: Starting snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit...
2024-10-17T22:23:47Z docker.nvidia-container-toolkit[6044]: Running on Classic system
2024-10-17T22:23:47Z docker.nvidia-container-toolkit[6076]: lspci: Cannot open /sys/bus/pci/devices
2024-10-17T22:23:47Z systemd[1]: snap.docker.nvidia-container-toolkit.service: Deactivated successfully.
2024-10-17T22:23:47Z systemd[1]: Finished snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit.
2024-10-17T22:25:29Z systemd[1]: Starting snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit...
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: Running on Classic system
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: NVIDIA hardware detected: 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1ff2 (rev a1)
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: 01:00.1 Audio device: NVIDIA Corporation Device 10fa (rev ff)
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: Waiting for device to become available: /dev/nvidiactl
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: Checking device: 0/10
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: Device found
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7502]: NVIDIA ready
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:30Z" level=info msg="Auto-detected mode as \"nvml\""
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:30Z" level=info msg="Selecting /dev/nvidia0 as /dev/nvidia0"
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:30Z" level=info msg="Selecting /dev/dri/card1 as /dev/dri/card1"
2024-10-17T22:25:30Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:30Z" level=info msg="Selecting /dev/dri/renderD129 as /dev/dri/renderD129"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Using driver version 550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=warning msg="failed to get custom firmware class path: open /sys/module/firmware_class/parameters/path: permission denied"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /dev/nvidia-modeset as /dev/nvidia-modeset"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /dev/nvidia-uvm-tools as /dev/nvidia-uvm-tools"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /dev/nvidia-uvm as /dev/nvidia-uvm"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /dev/nvidiactl as /dev/nvidiactl"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.1 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.1"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/glvnd/egl_vendor.d/10_nvidia.json as /var/lib/snapd/hostfs/usr/share/glvnd/egl_vendor.d/10_nvidia.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/vulkan/icd.d/nvidia_icd.json as /var/lib/snapd/hostfs/usr/share/vulkan/icd.d/nvidia_icd.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=warning msg="Could not locate vulkan/icd.d/nvidia_layers.json: pattern vulkan/icd.d/nvidia_layers.json not found"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/vulkan/implicit_layer.d/nvidia_layers.json as /var/lib/snapd/hostfs/usr/share/vulkan/implicit_layer.d/nvidia_layers.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json as /var/lib/snapd/hostfs/usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=warning msg="Could not locate egl/egl_external_platform.d/10_nvidia_wayland.json: pattern egl/egl_external_platform.d/10_nvidia_wayland.json not found"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/nvidia/nvoptix.bin as /var/lib/snapd/hostfs/usr/share/nvidia/nvoptix.bin"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/share/X11/xorg.conf.d/10-nvidia.conf as /var/lib/snapd/hostfs/usr/share/X11/xorg.conf.d/10-nvidia.conf"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcudadebugger.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libcudadebugger.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvcuvid.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvcuvid.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-pkcs11.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-tls.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvidia-tls.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvoptix.so.550.90.07 as /var/lib/snapd/hostfs/usr/lib/x86_64-linux-gnu/libnvoptix.so.550.90.07"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /run/nvidia-persistenced/socket as /run/nvidia-persistenced/socket"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=warning msg="Could not locate /nvidia-fabricmanager/socket: pattern /nvidia-fabricmanager/socket not found"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=warning msg="Could not locate /tmp/nvidia-mps: pattern /tmp/nvidia-mps not found"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /lib/firmware/nvidia/550.90.07/gsp_ga10x.bin as /lib/firmware/nvidia/550.90.07/gsp_ga10x.bin"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /lib/firmware/nvidia/550.90.07/gsp_tu10x.bin as /lib/firmware/nvidia/550.90.07/gsp_tu10x.bin"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/bin/nvidia-smi as /var/lib/snapd/hostfs/usr/bin/nvidia-smi"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/bin/nvidia-debugdump as /var/lib/snapd/hostfs/usr/bin/nvidia-debugdump"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/bin/nvidia-persistenced as /var/lib/snapd/hostfs/usr/bin/nvidia-persistenced"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-control as /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-control"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Selecting /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-server as /var/lib/snapd/hostfs/usr/bin/nvidia-cuda-mps-server"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7560]: time="2024-10-17T22:25:31Z" level=info msg="Generated CDI spec with version 0.5.0"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7632]: time="2024-10-17T22:25:31Z" level=info msg="Loading config from /var/snap/docker/2969/config/daemon.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7632]: time="2024-10-17T22:25:31Z" level=info msg="Wrote updated config to /var/snap/docker/2969/config/daemon.json"
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7632]: time="2024-10-17T22:25:31Z" level=info msg="It is recommended that docker daemon be restarted."
2024-10-17T22:25:31Z docker.nvidia-container-toolkit[7502]: Conainter Toolkit setup complete
2024-10-17T22:25:31Z systemd[1]: snap.docker.nvidia-container-toolkit.service: Deactivated successfully.
2024-10-17T22:25:31Z systemd[1]: Finished snap.docker.nvidia-container-toolkit.service - Service for snap application docker.nvidia-container-toolkit.
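The toolkit log above says it wrote an updated `daemon.json` and that a daemon restart is recommended. With the Canonical docker snap, a restart would look something like this (a sketch; `snap restart` restarts all of the snap's services, including `dockerd`):

```shell
# Restart the snap-packaged docker daemon so it picks up the
# nvidia runtime entry the toolkit just wrote to daemon.json.
sudo snap restart docker
```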
$ snap --version
snap 2.66
snapd 2.66
series 16
ubuntu 24.04
kernel 6.8.0-47-generic
$ snap info docker
name: docker
summary: Docker container runtime
publisher: Canonical✓
store-url: https://snapcraft.io/docker
contact: https://github.com/docker-snap/docker-snap/issues?q=
license: (Apache-2.0 AND MIT AND GPL-2.0)
...
snap-id: sLCsFAO8PKM5Z0fAKNszUOX0YASjQfeZ
tracking: latest/edge
refresh-date: 31 days ago, at 10:55 UTC
channels:
latest/stable: 24.0.5 2024-09-03 (2932) 138MB -
latest/candidate: 24.0.5 2024-09-03 (2932) 138MB -
latest/beta: 27.2.0 2024-09-19 (2963) 146MB -
latest/edge: 27.2.0 2024-09-20 (2969) 146MB -
core18/stable: 20.10.17 2023-03-13 (2746) 146MB -
core18/candidate: ↑
core18/beta: ↑
core18/edge: ↑
installed: 27.2.0 (2969) 146MB -
jocado
October 23, 2024, 7:06am
This is documented behaviour:

NOTE: library path and discovery is automatically handled, but binary paths are not, so if you wish to test using something like the nvidia-smi binary passed into the container from the host, you could either specify the full path or set the PATH environment variable.
docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi
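The other option the NOTE mentions, specifying the full path instead of extending `PATH`, would look something like this (a sketch assuming the same `/var/lib/snapd/hostfs` layout shown in the toolkit logs above):

```shell
# Call nvidia-smi by its full path under the snap's hostfs mount,
# avoiding the need to modify PATH inside the container.
docker run --rm --runtime=nvidia --gpus all \
  ubuntu /var/lib/snapd/hostfs/usr/bin/nvidia-smi
```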
Cheers,
Just