We are kindly requesting classic confinement for daisytuner-cli:
name: daisytuner-cli
description: Daisytuner is a platform for continuous benchmarking and continuous tuning. With integrations into GitLab and GitHub, users can continuously track the performance of classical software applications and neural networks. With the daisytuner-cli, we provide several command-line tools to set up self-hosted runners for performance measurements on the users' local machines.
snapcraft: PRIVATE
upstream: PRIVATE
upstream-relation: Owner and maintainer. Verified Publisher.
supported-category: public cloud agent / HPC orchestration
reasoning: The CLI tools install a cloud runner as a system daemon. When the user registers it with the platform (token-based), the runner listens for benchmarking jobs triggered by the user's developers via GitHub/GitLab. The runner executes those jobs by building and running a Docker image with several host resources mounted into the container. These include the system's Docker installation, devices (GPUs, TPUs, etc.), drivers and toolchains (CUDA, ROCm, compilers), performance profiling tools (perf, py-spy, NVIDIA Nsight), and model-specific registers (MSRs). The system must be set up by the user on-premise, following our documentation.
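To illustrate the kind of host access a single job needs, here is a minimal, hypothetical sketch of how the runner might invoke the host's Docker installation; the image name, environment variable, mounts, and device paths are illustrative, not our exact implementation:

```python
import subprocess

def run_benchmark_job(image: str, job_id: str) -> int:
    """Sketch: launch one benchmarking job in a container using the host's Docker."""
    cmd = [
        "docker", "run", "--rm",
        "--gpus", "all",                                     # GPU passthrough (CUDA/ROCm)
        "--cap-add", "PERFMON",                              # hardware performance counters for perf
        "--device", "/dev/cpu/0/msr",                        # model-specific registers
        "-v", "/usr/bin/perf:/usr/bin/perf:ro",              # host profiling tools
        "-v", "/var/run/docker.sock:/var/run/docker.sock",   # host Docker installation
        "-e", f"DAISY_JOB_ID={job_id}",                      # hypothetical job identifier
        image,
    ]
    # All of these paths and devices live outside the snap's sandbox, which is
    # why the strictly confined interfaces are not sufficient when Docker and
    # the drivers are installed on the host (e.g. via apt).
    return subprocess.run(cmd, check=False).returncode
```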
I understand that strict confinement is generally preferred over classic.
I have tried the existing interfaces to make the snap work under strict confinement.
This looks like you could get away with simply using the docker interface instead of classic confinement, given your tool seems to only orchestrate local Docker containers (or do I misunderstand the description of the tool)?
It seems like the plug only works when Docker is also installed as a snap. I am afraid that most users prefer the apt version (as in the official Docker docs), and many features such as GPU support don't seem to work with the snap version.
We would love to ship via snap, but forcing users to install Docker as a snap is not possible. Consider an NVIDIA Jetson with NVIDIA's container runtime + Docker.
Is there anything we can do to get this done? We just talked to several beta users, and they have no control over where Docker is installed because they use modules on a shared system.