Previously, with core20 and earlier bases, we have been able to build snaps as a non-root user. This is the case with snapcraft 7.x and earlier (at least 6.x too, IIRC). However, trying to build a core22 snap with snapcraft 7 as a non-root user fails on apt operations, like this:
2022-11-23 20:32:47.615 Executing: ['apt-get', 'update']
2022-11-23 20:32:48.214 :: Reading package lists...
2022-11-23 20:32:48.223 :: E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
2022-11-23 20:32:48.223 :: E: Unable to lock directory /var/lib/apt/lists/
2022-11-23 20:32:48.224 Failed to refresh package list: failed to run apt update.
2022-11-23 20:32:48.228 Traceback (most recent call last):
2022-11-23 20:32:48.228 File "/snap/snapcraft/8567/lib/python3.8/site-packages/craft_parts/packages/deb.py", line 403, in refresh_packages_list
2022-11-23 20:32:48.228 process_run(cmd)
2022-11-23 20:32:48.228 File "/snap/snapcraft/8567/lib/python3.8/site-packages/craft_parts/packages/deb.py", line 746, in process_run
2022-11-23 20:32:48.228 os_utils.process_run(command, logger.debug, **kwargs)
2022-11-23 20:32:48.228 File "/snap/snapcraft/8567/lib/python3.8/site-packages/craft_parts/utils/os_utils.py", line 370, in process_run
2022-11-23 20:32:48.228 raise subprocess.CalledProcessError(ret, command)
2022-11-23 20:32:48.228 subprocess.CalledProcessError: Command '['apt-get', 'update']' returned non-zero exit status 100.
Looking at the code, the difference seems to be in how apt update is handled in the “legacy” native snapcraft code vs craft_parts.
The previous code runs:
sudo --preserve-env apt-get
whereas the craft_parts code just runs:
apt-get
This is a bit limiting if you are building with --destructive-mode as a normal user. If you have to sudo the entire snapcraft command, you end up with root-owned files in an otherwise non-root working directory [ somewhere in a user’s home directory, for instance ].
Is there any reason for the difference in the craft_parts implementation?
Would a PR that added back the default sudo be accepted?
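To make the difference concrete, restoring the legacy behaviour could look roughly like the sketch below (the function name is mine, and this is not the actual craft_parts API; the UID is passed in explicitly just to keep the helper easy to demonstrate):

```shell
#!/bin/sh
# Sketch: choose the apt-get invocation based on the effective UID,
# mirroring what the legacy snapcraft code did. Prints the command
# rather than running it, so the result is easy to inspect.
build_apt_cmd() {
  uid="$1"; shift
  if [ "$uid" -ne 0 ]; then
    printf 'sudo --preserve-env apt-get %s' "$*"
  else
    printf 'apt-get %s' "$*"
  fi
}

# Example: run apt-get update with sudo only when not root:
# eval "$(build_apt_cmd "$(id -u)" update)"
```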
In the most common and supported case, where the build runs inside a managed instance, there will be no changes for the user (since snapcraft runs as root inside the container). It’s expected that if you run in destructive mode there will be a number of details you’ll need to take care of manually, such as providing a suitable environment to run on.
Destructive mode is provided for very specific cases such as CI or packing inside pre-built container environments (e.g. docker), where operations are scripted and sudo is unnecessary. It’s not intended for general interactive use.
I understand that. In fact, I am using docker. But I am using docker as a non-root user.
It may help if I explain the entire use case a bit further.
We use snapcraft in docker in our CI environment. We don’t have much choice there. Sometimes it’s useful to run the same container locally, either because you are debugging something and want a quicker dev cycle than a CI pipeline, or because you want to be able to get a shell in the container. In those cases, it is easier to run the snapcraft process in the container using the local user’s UID/GID, because the build artifacts then all have the correct ownership [ parts, stage, prime, and the snap itself ].
It’s also good practice to not run things as root, where possible.
The above workflow is working fine for core20 snaps, because the snapcraft code adds sudo for apt operations that require elevated privileges. Unfortunately, the same workflow does not work with core22 snaps.
Given that this has worked for previous core version snaps, from my end user perspective it’s a breaking change in this instance.
Would it be reasonable to include sudo in the code where it may be required, specifically for package operations that need elevated privileges? Or are there concerns around that, such as compatibility/portability?
Ah, I see. But in this case, how do you deal with sudo, supposing that your CI workflow runs non-interactively?
It’s unfortunate that your workflow was established on an unsupported case, but we can try to find a way to mitigate this problem. If package installation is the only issue, maybe wrapping apt in a script that calls it with sudo could help? Or, since it’s called with sudo anyway, presumably without user interaction, could it be made setuid inside your container?
In CI, we do run it as root. Pretty much everything runs as root in those cases; it seems to be the norm, probably because it’s all composable, throw-away infrastructure.
It’s just that when running on a local machine there are more considerations, such as file ownership and security. In that case we pass some extra args to the container with the preferred UID/GID, and the container sets up a matching local user and runs snapcraft as that user.
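For reference, the local invocation looks roughly like the sketch below (the image name, the BUILD_UID/BUILD_GID variable names, and the entrypoint contract are all specific to our setup, not anything snapcraft provides):

```shell
#!/bin/sh
# Sketch: assemble the docker command line that forwards the invoking
# user's UID/GID into the container, so parts/, stage/, prime/ and the
# .snap come out owned by the caller. Printed rather than executed so
# the assembled command is easy to inspect.
build_docker_cmd() {
  printf 'docker run --rm -e BUILD_UID=%s -e BUILD_GID=%s -v %s:/build -w /build my-ci/snapcraft snapcraft --destructive-mode' \
    "$(id -u)" "$(id -g)" "$PWD"
}

# The container entrypoint then creates a matching local user from
# BUILD_UID/BUILD_GID and drops privileges before running snapcraft.
```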
That is an interesting possibility; it would make it fairly transparent from the end user’s point of view. It would have to be done “properly” with dpkg-divert, because there’s always a chance apt itself gets updated at container runtime or some such. I will have a think about that.
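For the record, the divert-and-wrap idea might look something like this at container-image build time (a sketch under assumptions: the .distrib suffix is the dpkg-divert default for local diversions with --rename, and the wrapper body is mine):

```shell
#!/bin/sh
# Sketch: divert the real apt-get aside and install a wrapper that
# re-invokes it under sudo. Using dpkg-divert instead of a plain mv
# means the diversion survives a later upgrade of the apt package.
install_apt_sudo_wrapper() {
  bindir="${1:-/usr/bin}"
  if [ "$(id -u)" -eq 0 ]; then
    # Records the diversion and renames apt-get to apt-get.distrib.
    dpkg-divert --local --rename --add "$bindir/apt-get"
  fi
  cat > "$bindir/apt-get" <<'EOF'
#!/bin/sh
exec sudo --preserve-env /usr/bin/apt-get.distrib "$@"
EOF
  chmod 0755 "$bindir/apt-get"
}
```

This would run as root while the container image is being built; the non-root build user then calls the wrapper transparently.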
Can I ask again what the reason is for not wanting sudo in the apt calls in the craft_parts code, like there was in the original snapcraft code? It would be nice to understand the reasoning.
Sudo usage was a leftover from the pre-containerization era and was considered redundant with build-instance usage, since snapcraft already runs as root in the managed environment. Additionally, we tried to avoid any interaction during lifecycle processing to prevent blocking, and the CLI library behind today’s UI reflects that by not implementing support for user input. If you just put sudo there today, it wouldn’t work as expected.