Seed new snap into the image?

We have a Yocto-based image with a read-only rootfs that also contains a few snaps. We use the snap prepare-image command to include those snaps in the image.
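For context, the build step looks roughly like this (a sketch assuming a classic-style image; the model assertion, snap file, and output directory names are made up):

```sh
# Assemble the seed for the image rootfs; --snap can be repeated for
# each snap that should be preinstalled on first boot.
snap prepare-image \
    --classic \
    --arch arm64 \
    --snap ./our-app_1.0_arm64.snap \
    our-model.assertion ./rootfs-staging

# The seed (snaps, assertions, seed.yaml) ends up under the output
# directory's var/lib/snapd/seed/ and gets copied into the rootfs.
```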

On first boot those “seeded” snaps get installed and our device bootstrap completes.

Now we want to add a new default snap to the image via an OTA update, but that doesn’t seem to work. The new snap gets added to the image and can be seen in /var/lib/snapd/seed/snaps/, yet it doesn’t get installed automatically.

Is there a way I could “force” snapd to look into the seed and check whether it needs to install a new snap?

I have found that if I delete /var/lib/snapd/state.json before boot (it’s on an overlayfs, so the actual file lives somewhere else), new snaps get “seeded” as expected.
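Concretely, something like this in an early-boot script, before snapd starts (the backing path behind the overlay is a placeholder; it depends on your overlayfs layout):

```sh
# UNSUPPORTED hack: remove snapd's state so it re-runs seeding on boot.
# /overlay/upper is a stand-in for wherever the overlayfs upperdir
# actually keeps the file in your image.
rm -f /overlay/upper/var/lib/snapd/state.json

# On the next boot snapd finds no state.json and seeds everything it
# finds under /var/lib/snapd/seed/snaps/ again.
```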

It would be great if a “reference image” of this kind, with snapd and all, were available somewhere for the community!


I’ll get to that eventually. I think the natural place for something like that would be https://github.com/morphis/meta-snappy

Also, wouldn’t you just want to POST to the snapd API to add the assertion(s) and/or to sideload the snap?

It seems like you are overloading seeding in a way it was not intended to be used. If you already have an agent on the box, using the snapd API would seem preferable.
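Something along these lines, using the documented /v2/assertions and /v2/snaps endpoints over snapd’s unix socket (the file names are placeholders):

```sh
# 1. Add the snap-declaration/snap-revision assertions first.
curl --unix-socket /run/snapd.socket \
     -X POST \
     -H "Content-Type: application/x.ubuntu.assertion" \
     --data-binary @our-app.assert \
     http://localhost/v2/assertions

# 2. Then install the snap file itself (multipart upload).
curl --unix-socket /run/snapd.socket \
     -X POST \
     -F "snap=@our-app_1.0_arm64.snap" \
     http://localhost/v2/snaps
```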

Great reference, thanks!

In this case the snap is already in the image through an OTA update. I am expecting snapd to check whether a new snap has appeared in /var/lib/snapd/seed/snaps/ and install it automatically, as it did on first boot.

I don’t have an agent/daemon yet, but it’s coming eventually. So in the absence of a definitive solution on the snapd side, I’ll indeed have to figure something out.

Modifying the /var/lib/snapd/seed directory is not something we support; seeding is by definition a process that happens once in the lifetime of a device, unless you are on UC20 and use recovery systems via snap reboot --install etc.
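For reference, on UC20 that flow is roughly (the system label here is just an example):

```sh
snap recovery                    # list the recovery systems on the device
snap reboot --install 20210215   # reboot into (re)install mode for that system
```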


This makes sense for the traditional use cases of snapd in classic environments. However, in an A/B partition scheme, OEMs may want to add a new “default” app to their OS via an OTA update. This also means that even after a “factory reset” the device will have that new snap by default, because it was shipped in the rootfs seed directory.

So in the current scenario, I believe I’d have to find a way to manually install the snaps that won’t get installed, since the one-time seeding has already happened.

How destructive is it to delete /var/lib/snapd/state.json to retrigger “seeding”? Can it potentially result in snapd breakage?

Seeding is a one-time thing. Things can get confusing when on-disk state related to snaps is left over. Since snapd is built to handle unexpected reboots, it shouldn’t complain about leftover mount units, service units, or files. If the snaps on the device were subsequently refreshed and you had newer revisions installed, re-running seeding will bring back the old revisions from the seed. Interface connections are kept in the state, so those will be lost and you’ll have to recreate them. If those connections generated additional files, things may be a bit shaky, but again, care is taken to handle unexpected reboots, so pre-existing files shouldn’t cause trouble. I think the store registration will go away too (the macaroon is stored in the state, AFAIR). Anyway, YMMV, but it’s definitely unsupported.

TBH, can’t you extend your OTA solution to just install the new snap, and separately update the seed in case a factory reset is executed at some later time? Something along the lines of the sketch below.
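An untested sketch of such a post-OTA hook; paths and naming are illustrative, and in your setup the seed itself is already updated by the rootfs OTA anyway:

```sh
#!/bin/sh
# Post-OTA hook: install any seed snap that isn't installed yet.
set -e

SEED=/var/lib/snapd/seed

# Acknowledge the assertions shipped with the seed so the snap files
# are trusted (file naming in the assertions dir may vary).
for a in "$SEED"/assertions/*; do
    snap ack "$a"
done

# Seed snap files are named <snap>_<revision>.snap; install the ones
# whose snap name is not yet present on the system.
for s in "$SEED"/snaps/*.snap; do
    name=$(basename "$s" | cut -d_ -f1)
    if ! snap list "$name" >/dev/null 2>&1; then
        snap install "$s"
    fi
done
```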