Issues with snap disabled

I have an urgent issue with snap/lxd on my server.
After a reboot almost none of my snaps are available.
If I run “lxc list” I get:
cannot perform operation: mount --rbind /dev /tmp/snap.rootfs_iqH9Cy//dev: No such file or directory
If I run “snap list --all” I get:
Name                 Version    Rev    Tracking       Publisher     Notes
canonical-livepatch  9.6.1      98     latest/stable  canonical✓    disabled
canonical-livepatch  9.6.2      99     latest/stable  canonical✓    -
core                 16-2.50.1  11167  latest/stable  canonical✓    core,disabled
core                 16-2.51    11187  latest/stable  canonical✓    core
core18               20210309   1997   latest/stable  canonical✓    base,disabled
core18               20210507   2066   latest/stable  canonical✓    base
lxd                  4.0.5      19647  4.0/stable/…   canonical✓    disabled
lxd                  4.0.6      20326  4.0/stable/…   canonical✓    -
thelounge            4.1.0      200    latest/stable  snapcrafters  disabled
thelounge            4.2.0      280    latest/stable  snapcrafters  -

How can I recover from this without reinstalling everything?

did you try to just call snap enable ... for the disabled snaps ?

Why would you want to enable the older revisions?
Surely you want to fix the mount problem, not the fact that previous releases are disabled - which is perfectly normal?
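For context, snapd deliberately keeps the previously installed revision of each snap on disk in a disabled state so you can roll back; those are the "disabled" rows in snap list --all. A dry-run sketch of how you would inspect and roll back a snap (the run wrapper only prints the commands, so this is safe to paste and review first):

```shell
# run() only echoes each command; swap 'echo "+ $*"' for "$@" to execute for real.
run() { echo "+ $*"; }

run snap list --all lxd   # shows the current revision plus the disabled previous one
run snap revert lxd       # would roll lxd back to the previous (disabled) revision
```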


I did not see that the disabled snaps were the old versions. Then that is not the issue. Can you help me with the mount issue?
I get a similar issue when I try to install any snaps.

hah, thanks for pointing that out, i totally missed the --all switch above (and a normal snap list wouldn't show any disabled ones)

what is the full output of snap version ? (do you use any non standard kernel on your machine ?)

output of “snap version”:
snap 2.51.1
snapd 2.51.1
series 16
ubuntu 20.04
kernel 5.4.0-74-generic

No, standard kernel.

PS: I tried to create the dir under /tmp/snap.rootfs*, but it seems the name is generated at random on each run.

yeah, that dir is nothing you would create manually … do you have enough (non zero) disk space free ?
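one quick way to check is to read df directly; this sketch assumes the default layout where / backs both /var/lib/snapd and /tmp:

```shell
# Print the free space on / in kilobytes; snapd needs non-zero space here
# to set up its mount namespace under /tmp/snap.rootfs_*.
avail_kb=$(df --output=avail / | tail -n 1 | tr -d ' ')
if [ "$avail_kb" -gt 0 ]; then
    echo "ok: ${avail_kb} KB free on /"
else
    echo "problem: / is full"
fi
```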

I know. I just tested it to see if it would help 🙂
I have space available on the drive. Output of df -h:
Filesystem   Size  Used  Avail  Use%  Mounted on
udev          12G     0    12G    0%  /dev
tmpfs        2.4G  3.1M   2.4G    1%  /run
/dev/sda1    220G   74G   135G   36%  /
tmpfs         12G  4.0K    12G    1%  /dev/shm
tmpfs        5.0M     0   5.0M    0%  /run/lock
tmpfs         12G     0    12G    0%  /sys/fs/cgroup
/dev/loop0   9.2M  9.2M      0  100%  /snap/canonical-livepatch/98
/dev/loop1    56M   56M      0  100%  /snap/core18/1997
/dev/loop2    71M   71M      0  100%  /snap/lxd/19647
/dev/loop3    68M   68M      0  100%  /snap/lxd/20326
/dev/md0     2.7T  2.2T   421G   84%  /Volumes/Media
/dev/sdd2    489G   70G   395G   15%  /Volumes/Timeshift
Media2       3.6T  572G   3.0T   16%  /Volumes/Media2
/dev/loop6   9.2M  9.2M      0  100%  /snap/canonical-livepatch/99
/dev/loop7   100M  100M      0  100%  /snap/core/11187
/dev/loop9    56M   56M      0  100%  /snap/core18/2066
tmpfs        2.4G     0   2.4G    0%  /run/user/1000
/dev/loop10  100M  100M      0  100%  /snap/core/11316

yeah, that looks like plenty of space …

do you see anything interesting if you run journalctl -f in one terminal and the lxc list in a second terminal when the error occurs ?

I get this:
Jun 23 15:56:27 aa-srv3 systemd[2468]: Started snap.lxd.lxc.24915465-9651-44cb-a75d-a7354c6da5fb.scope.
Jun 23 15:56:27 aa-srv3 systemd[2468]: snap.lxd.lxc.24915465-9651-44cb-a75d-a7354c6da5fb.scope: Succeeded.

I guess that’s normal?

When I run “snap install thelounge” (just an example) I get this in the log:

Is it a bad idea to apt purge snap and reinstall?

hmm, that log looks like you have some weird systemd things going on there with the systemd-sysv-generator, not really sure what to make of this …

if you uninstall snapd to get to a pristine state, make sure to use apt remove --purge snapd, that should remove all snaps and all of their data alongside …
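the full purge-and-reinstall path discussed here, written as a dry run so nothing executes until you swap the run body (the 4.0/stable channel is taken from the snap list output earlier in the thread):

```shell
# Dry run: run() only prints each command. Replace 'echo "+ $*"' with "$@"
# to execute for real, and review each step before you do.
run() { echo "+ $*"; }

run sudo apt remove --purge snapd               # removes snapd plus all snaps and their data
run sudo apt install snapd                      # fresh snapd
run sudo snap install lxd --channel=4.0/stable  # channel taken from the earlier snap list
```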

I ended up removing (purging) snapd and reinstalling lxd. I am in the process of manually reinstalling my containers. If this were to happen again, is there any way of manually copying my containers and importing them after reinstalling lxd?

i'm not sure where exactly lxd stores its containers but snaps come with a builtin backup mechanism via snap save ... (see the other snapshot options in snap --help) and i would expect you to be able to save your container setup this way …
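a dry-run sketch of that flow (snap save, snap saved and snap restore are the built-in snapd snapshot commands; note the snapshot archives land under /var/lib/snapd/snapshots, so you would have to copy them somewhere else before purging snapd):

```shell
# Dry run: run() prints instead of executing; swap to "$@" to run for real.
run() { echo "+ $*"; }

run snap save lxd        # write a snapshot set under /var/lib/snapd/snapshots
run snap saved           # list snapshot sets; note the set id
run snap restore 1 lxd   # after reinstalling: restore set 1 for lxd (1 is a placeholder id)
```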

That’s the thing… If snapd gets destroyed again I will have no way of recovering my lxc containers if there is no way of copying them before I --purge snapd and lxc… Is it not possible to copy them? Or am I thinking wrong?

did you try creating a snapshot using the builtin snap backup features ?

Sorry for the delay. I did not try the snap snapshots as they would not help me in this situation. If snap is not working I would not be able to recover without removing snapd anyway. I will be looking into snapshots for other reasons. Thank you for the help!
