Nvidia CUDA on Ubuntu Core

Regardless of whether the snap launched correctly or not, snap run --strace <snap>.<app> gives endless output.

Regarding the /dev nodes and the loaded drivers (lsmod), I counted the number of entries with wc -l but couldn’t see any difference between before the standalone launch and after it.
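For a comparison more precise than line counts, diffing full listings taken before and after the standalone launch would work; a minimal sketch (the file names are arbitrary):

ls -l /dev > /tmp/dev-before; lsmod > /tmp/mod-before
# ... launch the standalone application ...
ls -l /dev > /tmp/dev-after; lsmod > /tmp/mod-after
diff /tmp/dev-before /tmp/dev-after
diff /tmp/mod-before /tmp/mod-after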

Are you saying that CUDA works just fine when running under strace? If the application forks a worker or some such, you may need to pass -f to strace, e.g. snap run --strace='-vf' <snap>.<app>
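To keep the full trace for later inspection, the output can also be written to a file; a minimal sketch, assuming snap run forwards these options verbatim to strace (the log path is arbitrary):

snap run --strace='-vf -o /tmp/snap-trace.log' <snap>.<app>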

I meant that when launching the snap under strace, it kept producing output even after the program using the CUDA driver had crashed. This probably happens because the program needing CUDA is wrapped in a roslaunch, which automatically starts a ROS core that survives even if the main application crashes; hence the endless strace output.

Tomorrow I’ll give it a try with the other arguments, thanks!

I managed to get useful output using strace. I basically did two tests:

  • System start-up -> standalone application execution -> run the snap (this works, and subsequent calls work as well)
  • System start-up -> run the snap directly (this fails, and subsequent calls to the standalone app or the snap also fail)

I examined the logs looking for any nvidia/cuda-related information. At the beginning, both contain the same entries:

PID  access("/sys/module/nvidia/version", F_OK) = -1 ENOENT (No such file or directory)
...
PID  futex(0xb659a1d4, FUTEX_WAKE_PRIVATE, 2147483647) = 0
PID  open("/dev/shm/cuda_injection_path_shm", O_RDWR|O_NOFOLLOW|O_CLOEXEC) = -1 ENOENT (No such file or directory)
PID  open("/home/ubuntu/snap/my-snap/x1/.nv/nvidia-application-profile-globals-rc", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/home/ubuntu/snap/my-snap/x1/.nv/nvidia-application-profiles-rc", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/home/ubuntu/snap/my-snap/x1/.nv/nvidia-application-profiles-rc.d", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/etc/nvidia/nvidia-application-profiles-rc", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/etc/nvidia/nvidia-application-profiles-rc.d/", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/usr/share/nvidia/nvidia-application-profiles-21.4-rc", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  open("/usr/share/nvidia/nvidia-application-profiles-rc", O_RDONLY) = -1 ENOENT (No such file or directory)
PID  geteuid32()                       = 1000
PID  open("/tmp/nvidia-mps/control", O_WRONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)

However, later on, there’s a slight difference when accessing the board:


Running the snap after executing the standalone application

5890  stat64("/usr/bin/nvidia-modprobe", 0xbeb00e80) = -1 ENOENT (No such file or directory)
5890  open("/proc/driver/nvidia/params", O_RDONLY) = -1 ENOENT (No such file or directory)
5890  stat64("/dev/nvidiactl", 0xbeb00d48) = -1 ENOENT (No such file or directory)
5890  mknod("/dev/nvidiactl", S_IFCHR|0666, makedev(195, 255)) = -1 EACCES (Permission denied)
5890  geteuid32()                       = 1000
5890  stat64("/usr/bin/nvidia-modprobe", 0xbeb00e80) = -1 ENOENT (No such file or directory)
5890  open("/dev/nvidiactl", O_RDWR|O_LARGEFILE) = -1 ENOENT (No such file or directory)
5890  open("/dev/nvhost-as-gpu", O_RDWR) = 11
5890  close(11)                         = 0
5890  open("/dev/nvmap", O_RDWR|O_DSYNC|O_CLOEXEC) = 11
5890  open("/dev/nvhost-as-gpu", O_RDWR|O_DSYNC) = 12
5890  ioctl(12, _IOC(_IOC_READ|_IOC_WRITE, 0x41, 0x2, 0x18), 0xbeb00fc0) = 0
5890  open("/dev/nvhost-prof-gpu", O_RDWR) = 13
5890  open("/sys/devices/platform/host1x/gk20a.0/ptimer_scale_factor", O_RDONLY) = 14
5890  fstat64(14, {st_dev=makedev(0, 11), st_ino=7356, st_mode=S_IFREG|0444, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=0, st_size=4096, st_atime=1529388093 /* 2018-06-19T15:01:33.222825993+0900 */, st_atime_nsec=222825993, st_mtime=1529388093 /* 2018-06-19T15:01:33.222825993+0900 */, st_mtime_nsec=222825993, st_ctime=1529388093 /* 2018-06-19T15:01:33.222825993+0900 */, st_ctime_nsec=222825993}) = 0
5890  read(14, "2.604166\n", 4096)      = 9
5890  close(14)                         = 0
5890  sysinfo({uptime=241, loads=[28256, 19104, 8192], totalram=2034458624, freeram=1112154112, sharedram=0, bufferram=80490496, totalswap=0, freeswap=0, procs=186, totalhigh=1271922688, freehigh=485691392, mem_unit=1}) = 0
5890  ioctl(12, _IOC(_IOC_READ|_IOC_WRITE, 0x41, 0x2, 0x18), 0xbeb01030) = 0
5890  sysinfo({uptime=241, loads=[28256, 19104, 8192], totalram=2034458624, freeram=1112154112, sharedram=0, bufferram=80490496, totalswap=0, freeswap=0, procs=186, totalhigh=1271922688, freehigh=485691392, mem_unit=1}) = 0
5890  sysinfo({uptime=241, loads=[28256, 19104, 8192], totalram=2034458624, freeram=1112154112, sharedram=0, bufferram=80490496, totalswap=0, freeswap=0, procs=186, totalhigh=1271922688, freehigh=485691392, mem_unit=1}) = 0
5890  sysinfo({uptime=241, loads=[28256, 19104, 8192], totalram=2034458624, freeram=1112154112, sharedram=0, bufferram=80490496, totalswap=0, freeswap=0, procs=186, totalhigh=1271922688, freehigh=485691392, mem_unit=1}) = 0
5890  sysinfo({uptime=241, loads=[28256, 19104, 8192], totalram=2034458624, freeram=1112154112, sharedram=0, bufferram=80490496, totalswap=0, freeswap=0, procs=186, totalhigh=1271922688, freehigh=485691392, mem_unit=1}) = 0
5890  open("/sys/kernel/tegra_gpu/gpu_available_rates", O_RDONLY) = 14

From this point on, there are some more accesses to /sys/kernel/tegra_gpu. Those accesses are not seen in the next test.


Running the snap directly

1957  stat64("/usr/bin/nvidia-modprobe", 0xbee73e80) = -1 ENOENT (No such file or directory)
1957  open("/proc/driver/nvidia/params", O_RDONLY) = -1 ENOENT (No such file or directory)
1957  stat64("/dev/nvidiactl", 0xbee73d48) = -1 ENOENT (No such file or directory)
1957  mknod("/dev/nvidiactl", S_IFCHR|0666, makedev(195, 255)) = -1 EACCES (Permission denied)
1957  geteuid32()                       = 1000
1957  stat64("/usr/bin/nvidia-modprobe", 0xbee73e80) = -1 ENOENT (No such file or directory)
1957  open("/dev/nvidiactl", O_RDWR|O_LARGEFILE) = -1 ENOENT (No such file or directory)
1957  open("/dev/nvhost-as-gpu", O_RDWR) = 10
1957  close(10)                         = 0
1957  open("/dev/nvmap", O_RDWR|O_DSYNC|O_CLOEXEC) = 10
1957  open("/dev/nvhost-as-gpu", O_RDWR|O_DSYNC) = 12
1957  ioctl(12, _IOC(_IOC_READ|_IOC_WRITE, 0x41, 0x2, 0x18), 0xbee73fc0) = -1 ENOENT (No such file or directory)
1957  ioctl(12, _IOC(_IOC_READ|_IOC_WRITE, 0x41, 0x3, 0x10), 0xbee73f28) = -1 EINVAL (Invalid argument)
1957  close(10)                         = 0
1957  close(-1)                         = -1 EBADF (Bad file descriptor)
1957  close(12)                         = 0

As pointed out before, there’s no access to /sys/kernel/tegra_gpu. Also, /dev/nvhost-prof-gpu and other devices don’t appear here.

Please, let me know if you need something else.

Interesting: in the snap case, the ioctl() calls to /dev/nvhost-as-gpu fail. I looked up some random documentation on the web and found this: http://switchbrew.org/index.php?title=NV_services#.2Fdev.2Fnvhost-as-gpu. If I’m not mistaken, the ioctls are NVGPU_AS_IOCTL_ALLOC_SPACE (the first one), followed by NVGPU_AS_IOCTL_FREE_SPACE (the second one). If the description on that page is accurate, then an attempt to reserve some pages in the device address space has failed. Hard to tell why, though.
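For reference, the raw request number behind strace’s decoded _IOC(...) form can be reconstructed by hand, assuming the standard Linux asm-generic ioctl bit layout (dir<<30 | size<<16 | type<<8 | nr):

# strace prints _IOC(dir, type, nr, size); here dir = READ|WRITE = 3,
# type = 0x41 ('A'), nr = 0x2, size = 0x18
printf '0x%X\n' $(( (3 << 30) | (0x18 << 16) | (0x41 << 8) | 0x2 ))
# -> 0xC0184102, i.e. _IOWR('A', 2, 24-byte struct), matching the
# NVGPU_AS_IOCTL_ALLOC_SPACE number given on the page linked above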

Is there anything suspicious in dmesg after the snap runs?

Could you paste the full strace logs somewhere, up to the point where these ioctls happen? I see the file descriptor numbers differ by 1; maybe some extra device node is opened or fiddled with in the non-snap case.

@ogra do you guys have access to a TK1 by any chance and could take a look at what happens there? I feel this is approaching the point where further debugging will be hard without having access to the board.

Also, can you post your kernel version, and the output of snap debug sandbox-features?

I don’t think so, but I can ask around this afternoon (personally, my latest Nvidia ARM hardware is Tegra 2 based…)

I did the tests again, checking the dmesg output. There’s nothing strange if I launch the standalone app and then the snap; however, if I run the snap directly…

[  289.128931] gk20a gk20a.0: failed to get firmware
[  289.135795] gk20a gk20a.0: failed to get firmware
[  289.142534] gk20a gk20a.0: gk20a_init_pmu_setup_sw: failed to load pmu ucode!!
[  289.151806] gk20a gk20a.0: gk20a_pm_finalize_poweron: failed to init gk20a pmu

Here are the requested logs:

Kernel version: 3.10.105-gbd6a2f8

“snap debug sandbox-features” output:

confinement-options:  classic devmode
dbus:                 mediated-bus-access
kmod:                 mediated-modprobe
mount:                freezer-cgroup-v1 layouts-beta mount-namespace per-snap-persistency per-snap-profiles per-snap-updates per-snap-user-profiles stale-base-invalidation
seccomp:              bpf-argument-filtering
udev:                 device-cgroup-v1 tagging

The dmesg output kind of makes sense now. It’s possible that the driver tries to load the firmware for the Kepler GPU and fails. Now the question is whether firmware loading (either directly from the kernel or via the userspace helper) works with the mount namespaces set up by snap-confine. @zyga-snapd do you have any idea?

My rough guess is that the paths are actually resolved inside the mount namespace. Since the firmware files are obviously not there, loading will fail. When you run it outside of a snap, this works and the firmware is successfully loaded.

Do you know where the firmware is located? Whether loading works roughly depends on the location and on the set of syscalls needed to achieve it.

These are the paths tried by that version of the kernel: https://elixir.bootlin.com/linux/v3.10.105/source/drivers/base/firmware_class.c#L262

static const char * const fw_path[] = {
	fw_path_para,
	"/lib/firmware/updates/" UTS_RELEASE,
	"/lib/firmware/updates",
	"/lib/firmware/" UTS_RELEASE,
	"/lib/firmware"
};
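On the affected system, a quick loop can show which of these locations actually contain the gk20a blobs; a sketch, where the gk20a subdirectory name is an assumption based on the dmesg messages above:

for d in /lib/firmware/updates/$(uname -r) /lib/firmware/updates \
         /lib/firmware/$(uname -r) /lib/firmware; do
    [ -d "$d/gk20a" ] && echo "found: $d/gk20a"
done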

Given that this is done on a core system, you will have a mount namespace with minimal changes (no pivot_root, almost no bind mounts). I suspect this is a confinement problem more than a mount namespace problem.

@mbeneto is this on an Ubuntu Core system?

@mbeneto also please upload a strace -f of the standalone program.

@mbeneto a quick check to try after rebooting:

  • sudo /usr/lib/snapd/snap-discard-ns <your-snap>
  • sudo mount -o bind /lib/firmware /snap/core/current/lib/firmware
  • run the snap application
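Whether the bind mount is then visible from inside the snap’s namespace can be verified interactively; a sketch, with the snap and app names as placeholders:

snap run --shell <your-snap>.<app>
ls /lib/firmware    # run inside the confinement shell that opens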

I asked after the team meeting and there is no Jetson TK1 anywhere in the devices team atm, sorry…

Thanks for asking, appreciated!

Thanks for your support @mborzecki, @zyga-snapd, @ogra.

Yes, trying that after rebooting, it works properly! (Here is the strace, just in case.)

Answering the other questions:

  • This is not an Ubuntu Core system; it’s L4T with a custom kernel, as the board being used is based on the TK1 (basically the same, but with added CAN interfaces).
  • Standalone strace

@zyga-snapd I think the issue still persists. Maybe we should consider mounting the host’s /lib/firmware inside the mount namespace to cover this. What are your thoughts?

Not sure how to cover this with AppArmor properly though. Maybe @jdstrand has some suggestions.

We can try doing that, yeah. Could you make this part of the opengl interface? That would be sufficient. Just use the mount specification, please.
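For context, snapd’s per-snap mount profiles are fstab-formatted, so the entry added by the interface could look roughly like this; a sketch, with the profile path and mount options as assumptions:

# /var/lib/snapd/mount/snap.<your-snap>.fstab
/lib/firmware /lib/firmware none bind,ro 0 0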