Developer sprint Sep 17th, 2018


Snapshots

  • Daemon and client commands are still missing
  • Commands will be as discussed (save, forget, etc)
  • Needs an iteration to handle revisions better and epochs
  • Need to snapshot on refresh, remove, etc
  • Currently gzip level 9 but might be too slow for small systems
  • Perhaps a week of coding for commands working, then reviews
  • Documentation while first chunk of work is in review
  • Another week or two for handling revisions and epochs
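The gzip-level concern above is easy to experiment with. A minimal sketch using Python's stdlib gzip (illustrative only, not snapd's actual snapshot code) compares sizes and timings across levels:

```python
import gzip
import time

def compress_at(data, level):
    """Return (compressed size, seconds) for one gzip level."""
    start = time.perf_counter()
    blob = gzip.compress(data, compresslevel=level)
    return len(blob), time.perf_counter() - start

# Stand-in payload; real snapshots would be snap user data archives.
data = b"some snapshot payload " * 100_000
for level in (1, 6, 9):
    size, secs = compress_at(data, level)
    print(f"level {level}: {size} bytes in {secs:.3f}s")
```

On small systems the time column is what matters; dropping from level 9 to 6 usually costs little in size.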


Hotplug support

  • Any interface can be made hotplug aware with a few lines of code
  • Serial port is being done as first example
  • Integration with udev monitor and enumeration is in master
  • Almost everything else implemented in a sketch branch
  • Missing support for snapd restarts, which requires enumerating devices again
  • Must write more tests as well
  • We might write a tiny kernel serial driver for tests
  • Probable branch proposals coming soon:
    • Restructure what’s in master slightly
    • Integration with interface manager that reacts to add/remove events
    • Handling of snapd restarts with enumeration
    • Spread end-to-end tests
  • Documentation needs to be written
  • Interface-specific documentation must say when hotplug is supported
  • The “interface” command can tell whether that’s the case too
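The restart handling called out above can be modeled in a few lines. This toy sketch (all names hypothetical, not real snapd code) shows why re-enumerating present devices after a restart recovers the state that was lost with the process:

```python
class HotplugRegistry:
    """Toy model of the hotplug design: react to udev add/remove events,
    and recover after a snapd restart by enumerating currently present
    devices again. Names are illustrative, not snapd's."""

    def __init__(self):
        self.slots = {}  # device path -> slot name

    def on_event(self, action, devpath):
        if action == "add":
            self.slots[devpath] = f"serial-{len(self.slots)}"
        elif action == "remove":
            self.slots.pop(devpath, None)

    def restart(self, enumerate_devices):
        # In-memory state is gone after a restart; rebuild it from the
        # devices present right now (the missing enumeration support).
        self.slots = {}
        for devpath in enumerate_devices():
            self.on_event("add", devpath)

reg = HotplugRegistry()
reg.on_event("add", "/dev/ttyUSB0")
reg.on_event("add", "/dev/ttyUSB1")
reg.on_event("remove", "/dev/ttyUSB0")
# Simulated restart: only ttyUSB1 is still plugged in.
reg.restart(lambda: ["/dev/ttyUSB1"])
print(sorted(reg.slots))
```

The real implementation sits on top of the udev monitor and enumeration already in master; the sketch only captures the bookkeeping.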

APT integration

  • APT has a bunch of hooks
  • If an installation fails because the package is not available, a snap is suggested
  • Not yet integrated into the search hook
  • Display the first N (5… 10?) entries from find results, so it doesn’t get too busy
  • Store is already ordering results based on relevance
  • Might be presented along the lines of:
$ apt search ...
<apt results>
<empty line>
Related snaps matching "aws-cli":
 * aws-cli - Universal Command Line Interface for Amazon Web Services
See 'snap info <snap name>' for available versions.
  • Improve “snap info” output with another field at the end:
examples: |
  snap install aws-cli
  snap install aws-cli --channel=edge
  • Only show the first line if stable exists.
  • Only show the second line if non-stable exists, and then show it with a proper channel.
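The suggested output above could be produced by a small formatter along these lines (function name and shapes are hypothetical; since the store already orders by relevance, we just keep the first N results):

```python
def format_snap_suggestions(query, results, limit=5):
    """Render the 'Related snaps' block, capped at `limit` entries so the
    apt output doesn't get too busy. `results` is a list of
    (name, summary) pairs, assumed pre-sorted by relevance."""
    lines = [f'Related snaps matching "{query}":']
    for name, summary in results[:limit]:
        lines.append(f" * {name} - {summary}")
    lines.append("See 'snap info <snap name>' for available versions.")
    return "\n".join(lines)

print(format_snap_suggestions(
    "aws-cli",
    [("aws-cli", "Universal Command Line Interface for Amazon Web Services")],
))
```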

Parallel installs

  • What works today in master:
    • Installation from file using --name
    • Installation from store
    • Refreshing of snap instances
    • Isolation across instances
    • Overname in snap-confine (reviewed, about to land)
    • Spread tests
  • AppArmor support is still missing, installation depends on --devmode for now
  • Classic snaps also missing
  • Support for refresh-control and validation under review
  • Aliasing support still needs evaluation
  • Snapshot integration still missing (should allow restore onto any instance of same snap)
  • Documentation missing
  • Two or three weeks until it’s all working
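Instances in parallel installs are named `snapname_instancekey` (the `--name` flag above). A hedged sketch of splitting and validating that form; the validation rules here are approximations for illustration, not snapd's exact ones:

```python
import re

# Approximate rules: lowercase dashed snap names, short alnum instance keys.
SNAP_NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
INSTANCE_KEY_RE = re.compile(r"^[a-z0-9]{1,10}$")

def parse_instance_name(instance_name):
    """Split 'snapname_key' into (snap name, instance key).
    A bare snap name has an empty key."""
    snap, _, key = instance_name.partition("_")
    if not SNAP_NAME_RE.match(snap):
        raise ValueError(f"invalid snap name: {snap!r}")
    if key and not INSTANCE_KEY_RE.match(key):
        raise ValueError(f"invalid instance key: {key!r}")
    return snap, key

print(parse_instance_name("hello_foo"))  # ('hello', 'foo')
print(parse_instance_name("hello"))      # ('hello', '')
```

Keeping the key separate is what lets snapshot restore (once integrated) target any instance of the same snap.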


  • State integration is complete and integrated
  • Missing command line interface and documentation
  • Refactoring of client for single-point prints is up for review
  • PR with last bits should be up next week
  • Documentation required for covering user-oriented UI

Health checks

  • Hook named “check-health” (or similar) is called by snapd to check for health
  • Hook needs to be idempotent so snapd can call it at any time
  • Hook might be called every 5 minutes to begin with; we can always increase the interval later (but we can’t easily reduce it without risking breaking applications that expect having more time between calls)
  • Have special status “unknown” if the snap hasn’t bothered to define it
  • If the health-check hook is called and it doesn’t update the status, it’s set to “unknown”
  • Hook can call snapctl set-status --code=[<error code>] <status> [<message>] (or set-health?)
  • Message is a free form sentence, hopefully capitalized and readable
  • Message is forbidden for active status
  • Reserve “snapd-*” error code namespace for snapd
  • Error code is a custom string [a-z]+(-?[a-z0-9]){3,} (dashes in the middle only)
  • Reuse statuses from juju (maybe not all of them)
    • active / maintenance / waiting / blocked / error
  • Still need to consider whether to use “active”, due to conflict between “active daemon” and “health=active”
  • Client reports to server the status in two situations:
    • When the health checks are run during a change, the pre/post status is immediately reported afterwards
    • When the health checks are run “at rest”, the pre/post status is aggregated and reported on the next exchange
  • Should also report a green→green situation so the server can differentiate between “bricked” and “all good”
  • When do we actually revert a refresh automatically:
    • Status going from active→error across a refresh
    • Status going from active→blocked across refresh, requesting manual refresh instead? Discuss this further, probably not for first release.
  • Reverts should never be triggered at rest, because we don’t know what caused the error
  • Health hook name as check-health might be more indicative of desired behavior than update-status
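The error-code format and the auto-revert rule above can be pinned down in a few lines. A sketch (function names are hypothetical; the regex is the one from the notes):

```python
import re

STATUSES = {"active", "maintenance", "waiting", "blocked", "error", "unknown"}

# Error codes: lowercase, at least four characters, dashes in the middle only.
ERROR_CODE_RE = re.compile(r"^[a-z]+(-?[a-z0-9]){3,}$")

def should_auto_revert(before, after, during_refresh):
    """Decide whether a refresh is reverted automatically. Per the notes:
    only active -> error across a refresh triggers a revert; never at
    rest, because then we don't know what caused the error."""
    return during_refresh and before == "active" and after == "error"

print(ERROR_CODE_RE.match("low-memory") is not None)         # True
print(should_auto_revert("active", "error", True))           # True
print(should_auto_revert("active", "error", False))          # False
```

The active -> blocked case from the notes is deliberately left out, matching the "probably not for first release" call.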

Avoid issues seen in juju’s status hook:

  • Frequent status updates cause overly frequent writes
  • Do not require a message for the active status

Nickname in interfaces

  • Revert to return the real snap name in the response
  • Do not return nicknames for now - probably YAGNI
  • Keep translating requests so it maps appropriately
  • Keep translating state from “core” snap into “snapd”
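The translation rules above amount to a tiny one-way mapping. A sketch (function name is hypothetical):

```python
# Requests naming "core" keep working; responses and state always carry
# the real snap name ("snapd"). Illustrative only, not snapd's code.
NICKNAMES = {"core": "snapd"}

def resolve_request_name(name):
    """Map a nickname used in an API request to the real snap name."""
    return NICKNAMES.get(name, name)

print(resolve_request_name("core"))     # snapd
print(resolve_request_name("aws-cli"))  # aws-cli
```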

Support for cups as a snap

  • Goal is to make cups, installed as a snap, available for applications to print to
  • The “control” in cups-control is there because snaps can run as root, and that gives connected snaps full administrative capabilities via the cups socket
  • We might have a “cups” interface that bind-mounts /etc/cups/client.conf when connected
  • Classic applications would not see that file, though, and wouldn’t be able to print into the snapped cups
  • The cups snap can also try to allocate port 631 by default anyway, and make itself available for printing for the whole system
  • Another more complex alternative:
    • Cups is able to chain two cups installations together; easiest is probably to allow the cups snap to cups-control the system cups, and make itself available for printing automatically
    • Apparently some of that already works today with the snap, but it might be automated further to make the process less painful for users
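For the bind-mounted client.conf idea above, the file would point clients at the snapped cups socket. A hypothetical fragment (the socket path is an assumption, not the cups snap's actual layout; CUPS does accept a domain socket path for ServerName):

```
# /etc/cups/client.conf, bind-mounted into connected snaps (hypothetical path)
ServerName /var/snap/cups/common/run/cups.sock
```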

Ongoing maintenance in Debian

  • Current status
    • 2.30 released
    • Dependencies packaged: juju/ratelimit, kr/pretty, kr/text
    • Dependencies to drop: golang-seccomp
    • Dependencies to update:
      • check.v1 (diffing improvements, not fundamental)
      • yaml.v2 (a few fixes, not fundamental)
      • bson (change about random, not fundamental)
  • Re-exec working fine
  • Enable AppArmor when possible (newer releases)
  • Double-check that the copyright is okay
  • Package is about to get updated to 2.35

How to move out of Go 1.6

  • Main driver is using things which are not in 1.6
  • Ideal release is 1.10 because it’s in Bionic
  • Juju embeds Go suite because it was more practical for oldest supported distributions
  • Issue is introducing a new version into past distributions increases support burden
  • EL7 has Go 1.9.4, so a limiter
  • It’s a shame we cannot use snaps to distribute snapd in the first place! :)
  • Best strategy might be to draw a line and keep a bootstrap release for packaging that works with 1.6
  • Distributions that do not yet re-exec would be left behind, though
  • Should only happen after the snapd split is completed
  • Core devices could always pick the latest one
  • Need to finish technical issues preventing global re-exec first

Supporting snaps for UA customers

  • Unlike debs and rpms, publishers push the built snap, so there’s no strict source package
  • That said, snapcraft provides a manifest of how it built the snap, and packs it inside the snap, so that may be used for informational purposes
  • Open source snaps may be rebuilt using the snapcraft.yaml and manifest that was generated
  • Snaps published by the “canonical” account may use branches to publish hotfixes
  • Third-party publishers need to publish their own hotfixes, and in some cases contracts may exist so that Canonical can publish hotfixes as well.
  • There’s a line of responsibility that should not be crossed without an agreement in place. Canonical shall only publish a modified snap into a channel of a third-party snap when there’s a contract in place for that.
  • Alternatively, snaps may be locally installed using --dangerous, but that may behave differently with connections/etc since the store information (snap-declaration) will be missing.
  • Alternatively², snaps may be published under a different name altogether.

Dynamic snap-update-ns for user mounts

  • Three parts to this:
    • Change snap-confine to repeat the same process for non-user mount namespaces, effectively saving the snap namespace and then the per-user mount namespace on top of that
    • Before that’s all in, there should be a way to convey experimental flags into snap-confine
    • Relatively simple change to snap-discard-ns and snap-update-ns to glob uids in the directory with mount namespaces (/run/snapd/ns)
  • After that we can enable the content interface to use user data!
  • Realistic timeframe: 1 week to land the C code, and 1 more week for the Go part
  • We need to get agreement on the experimental flag first
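The snap-discard-ns globbing change above is conceptually small. A sketch against a hypothetical layout (shared namespace saved as `<snap>.mnt`, per-user ones as `<snap>.<uid>.mnt`; snapd's real naming may differ):

```python
import glob
import os
import tempfile

def discard_namespaces(ns_dir, snap):
    """Remove a snap's saved mount namespace files, including per-user
    ones, by globbing the directory (/run/snapd/ns in real life)."""
    removed = []
    patterns = [f"{snap}.mnt", f"{snap}.*.mnt"]
    for pattern in patterns:
        for path in glob.glob(os.path.join(ns_dir, pattern)):
            os.unlink(path)
            removed.append(os.path.basename(path))
    return sorted(removed)

# Demo against a throwaway directory standing in for /run/snapd/ns.
with tempfile.TemporaryDirectory() as ns:
    for name in ("foo.mnt", "foo.1000.mnt", "foo.1001.mnt", "bar.mnt"):
        open(os.path.join(ns, name), "w").close()
    print(discard_namespaces(ns, "foo"))
```

Other snaps' namespaces (bar.mnt above) are untouched, which is the property the real change has to preserve.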

Root namespace for classic snaps

  • Allow classic snaps to use:
    • Layouts
    • Parallel instances
    • Some otherwise problematic interface
  • Mounts would stop propagating, which is a breaking change
  • Perhaps do the per-instance unsharing and stop propagation of events in /snap, but still consume them; that’d keep parallel instances happy and things working as usual otherwise
  • That’s related, but not identical, to that other mode where the snap is unconfined but runs in a mount setup that matches that of a strict snap; the snap would need to opt in and pass review per classic rules.


Entitlements

  • Basic idea: allow entire snaps or specific features of snaps to be turned on or off depending on specific conditions
  • Think in-app purchases, except entitlements won’t necessarily involve payments
  • Design is half-baked and needs to be finished
  • Possibly specific to user, model, device, brand, etc
  • Installation of the whole snap might also be restricted, and in that case the blob shouldn’t be made available
  • We need a better name for “entitlements” that is shorter and simpler
  • Entitlements might have two kinds:
    • Store conveys to snap via assertion everything that might work offline
    • Sometimes a token will be required for speaking to a third party and proving right of access
  • In theory we might also convey the first kind via an assertion, but that’d mean refreshing the whole assertion which is more expensive than refreshing a single token
  • Assertion might be refreshed when a new question is asked, perhaps with a threshold for caching up to N minutes so we’re not trashing the server
  • Entitlements (not the assertion) must have an “until” field so it’s not valid past that point, must be refreshed before that
  • Should we design so that we have different commands for “give me” vs. “do I have it”?
    • snapctl use hats=10
    • snapctl used hats?
    • snapctl use autopilot # Binary, same as =1
    • snapctl used autopilot?
  • These are not the real commands… these need to be defined still
  • Next steps
    • Get some actual design going
    • Get some code working in snapd for experimentation
    • Discuss more at next sprint
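The caching and expiry rules above (threshold of N minutes so we don't thrash the server, plus a hard "until" timestamp) can be sketched as a small client. Every name here is illustrative; none of this is a real snapd API:

```python
import time

class EntitlementClient:
    """Sketch of entitlement lookup: answers are cached up to `threshold`
    seconds, and every entitlement carries an 'until' timestamp past
    which it must be refreshed. `fetch` stands in for the store
    round-trip and returns (value, until)."""

    def __init__(self, fetch, threshold=300, clock=time.time):
        self.fetch = fetch
        self.threshold = threshold
        self.clock = clock
        self.cache = {}  # name -> (value, until, fetched_at)

    def has(self, name):
        now = self.clock()
        entry = self.cache.get(name)
        stale = entry is None or now - entry[2] > self.threshold
        if stale or now >= entry[1]:
            value, until = self.fetch(name)
            entry = self.cache[name] = (value, until, now)
        # Past 'until' the entitlement is no longer valid.
        return entry[0] if now < entry[1] else 0

calls = []
def fake_fetch(name):
    calls.append(name)
    return 10, time.time() + 60  # value, 'until' one minute out

client = EntitlementClient(fake_fetch, threshold=300)
print(client.has("hats"), client.has("hats"), len(calls))  # 10 10 1
```

The second `has` is served from cache, which is the point of the threshold.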

Snap renames

  • Use cases
    • Trademark violations - upstreams with trademarks requesting their mark not to be used
    • Renaming a good-name-someone to simply good-name
    • Upstream project name has changed
  • To allow automated refreshes over renames, we might leverage the alias feature so that the original commands remain working
  • The new warnings feature allows conveying to the administrator that the rename has happened
  • To support reverts, we’ll need to mount the new name over the old name so that the snap sees itself over its original location. We might make this more general by always forcing the snap to be mounted using the name in the snap.yaml file. That feature would reuse the “overname” logic implemented for parallel installs.
  • Snapshots also need to do something about renames, so that they can be associated with the correct snap
  • The store might get a lookaside table that has a list of renamed snaps, so it can still accept snaps published with the old name. This would only work for as long as the entry in that table exists, and if another snap takes over the old name the lookaside table is cleaned and the old name is associated with the proper snap.
  • Daemons that are normally kept alive across refreshes are restarted for that one refresh.
  • We might start by doing just the easiest parts of the problem, requesting a manual refresh, and then tackling more of the problem later.
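The lookaside table described above boils down to a name mapping with one eviction rule. A toy model (class and method names are hypothetical, not store code):

```python
class RenameTable:
    """Store-side lookaside table for renamed snaps: uploads under an old
    name keep resolving to the renamed snap until the entry is dropped or
    another snap takes over the old name."""

    def __init__(self):
        self.renames = {}  # old name -> new name

    def record_rename(self, old, new):
        self.renames[old] = new

    def resolve(self, name):
        return self.renames.get(name, name)

    def register_new_snap(self, name):
        # A new snap claiming an old name cleans the lookaside entry, so
        # the old name is associated with the proper snap again.
        self.renames.pop(name, None)

table = RenameTable()
table.record_rename("good-name-someone", "good-name")
print(table.resolve("good-name-someone"))  # good-name
table.register_new_snap("good-name-someone")
print(table.resolve("good-name-someone"))  # good-name-someone
```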

Snapcraft builds where VMs aren’t supported

  • New snapcraft will build in VMs by default
  • But not all environments support VMs :)
  • Point of the exercise was to make it clear how to build a snap, and make the results consistent. Whatever we do, we should avoid breaking these properties.
  • Launchpad can only build Ubuntu for now
  • Best path might be to have a VM image and a LXD container that are equivalent, and to allow Snapcraft to recognize both
  • Cloud image is not as stable as we’d like for being a build environment
  • We need a new image that is a VM image, a LXD container image, and a tarball, all with the same content (or very similar) that can be used by Snapcraft and Launchpad
  • When snapcraft detects that it is inside a destructible environment (e.g. a Travis build) it can set itself up as necessary
  • snapcraft [--lxd | --multipass]
  • Snapcraft needs a registry of names associated with images
  • Images must be almost exactly the same across lxd and multipass

New documentation web site

  • Site is live on staging location, and overall looks great
  • Left bar is overlapping, some color issues, but just a matter of polishing
  • Plan is to have it into
  • Ideally the site should keep the cache if it cannot reload
  • Current reload timeout is 5 minutes
  • We’ll add a section to the documentation outline with a map of high-profile link names to URLs
  • After snapcraft docs are out, next step is implementing the importer for MaaS
  • Snapshotting of documentation for different major versions will work by copying the content to a separate category and adding e.g. a “2.42” suffix to the topic title, so it doesn’t conflict with the latest version

Better errors on different architectures

  • The store and snapd itself were already improved to return nice error messages when the snap is only available on a different architecture
  • The find command won’t return snaps that are only available on different architectures, though. This may be the best approach, so snapd won’t suggest snaps that can’t possibly be installed locally.
  • It’s all good right now then.

Please implement 301 redirects on the old document pages to their respective new locations. Each page will need to be routed to its counterpart, not to a single generic front page. This will capture the SEO juice of the old URLs and forward anyone to the correct page when clicking out-of-date links.

For example, the old “interfaces” page needs to redirect to the new “interfaces” page, not to the front page of the new docs site!
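The per-page mapping being requested amounts to something like this sketch (the paths are made up for illustration; the real old/new URL pairs would come from the docs migration):

```python
# Hypothetical mapping of old documentation URLs to their new
# counterparts; each old page 301s to its own replacement, never to
# the front page.
REDIRECTS = {
    "/docs/core/interfaces": "/docs/interfaces",
    "/docs/build-snaps": "/docs/snapcraft",
}

def redirect_for(path):
    """Return (status, location) for a request to an old docs path."""
    target = REDIRECTS.get(path)
    return (301, target) if target else (404, None)

print(redirect_for("/docs/core/interfaces"))  # (301, '/docs/interfaces')
```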


Yep, we’re going to do that.

