Hi team,
We’re wrapping up the release of the gemma3 snap, an inference snap that installs a silicon-optimized Gemma 3 engine for local inference. It requires the same kind of access as the qwen-vl and deepseek-r1 snaps, and the software stack is identical to those snaps.
- name: gemma3
- description: An inference snap installing a silicon-optimized Gemma 3 engine
- snapcraft: https://github.com/canonical/gemma3-snap/blob/main/snap/snapcraft.yaml
- upstream: https://github.com/canonical/gemma3-snap
- upstream-relation: -
- interfaces:
- hardware-observe:
- request-type: auto-connection
- reasoning: To detect the host hardware (CPU, accelerator, etc.) and install the right engine for AI inference
- home (with read: all):
- request-type: installation, connection
- reasoning: The engine, which runs inside the snap as a root daemon, needs access to other users’ home directories to load local models based on user configuration. We expect the user to manually connect home if they need to sideload a model, but if the reviewers think that is unusual in snaps, we could go with an auto-connection instead (a sketch of the plug declaration follows this list).
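
For reference, here is a minimal sketch of how the plugs could be declared in snapcraft.yaml. The app name gemma3-engine and the command path are illustrative assumptions, not the actual values; the real ones are in the repository linked above.

```yaml
# Minimal sketch only; the app name "gemma3-engine" and the command path
# are illustrative, not the actual values from the repository.
plugs:
  home-read-all:
    interface: home
    read: all              # allows reading other users' home directories

apps:
  gemma3-engine:
    command: bin/engine
    daemon: simple
    plugs:
      - hardware-observe   # detect CPU/accelerator to pick the right engine
      - home-read-all      # load sideloaded models from users' home directories
```

With a declaration along these lines, a user who wants to sideload a model could connect the plug manually with `snap connect gemma3:home-read-all` (the plug name is taken from the sketch, so it is an assumption), while the store declaration would only be needed for the auto-connection path.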
There is an upload of the snap with home read: all (revision 1), which was rejected by the store review.
Thank you