Week 22 of 2017 in snapcraft

This week the snapcraft team had a mini sprint to discuss direction and strategic goals for snapcraft. Most of the conversations revolved around how to make snapcraft easier for developers: easier to iterate on, easier to ramp up on, with easier onboarding and easy cross-compilation support.

As you can see, the theme was making things easy. To get this going we had many long back-and-forth conversations, which will be documented once the notes are polished for a wider audience (there is no sense in reading confusing notes now :slight_smile: ).

The other thing we did was take advantage of being at the London offices, where the design team resides, to get design input on most of the user-facing parts of the developer experience. On one side we went over our CLI as a whole, end to end, to get their input; from our side we gave a primer on all store interactions so that the dashboard, CLI and other possible future incarnations of snap usage have a consistent view across the board.

So onto the details…

Fire fighting setuptools

Unrelated to the sprint, we were hit by the release of setuptools 36.0.0, which basically broke the world, including our python plugin. Our idea of following the upstream world with regards to plugins, so as not to live in a potential echo chamber, backfired on us here (as it did on everyone else). We plan to make these things pinnable in the upcoming work, so that when this happens again there is a way to stick with what works for your project.
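Once pinning lands, sticking to a known-good setuptools could look something like this requirements-style constraint; the exact mechanism is still to be designed, so this is only a sketch:

```
# Hypothetical pin avoiding the breaking 36.0.0 release
setuptools<36.0.0
```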

click vs click

The python-click and python-click-package conflict has been resolved and we are currently seeking an Ubuntu core-dev to validate our work. More on this can be found at https://bileto.ubuntu.com/#/ticket/2790 and in the SRU bug.

Scripts rethought

In retrospect, the introduction of version-script has raised many reconsiderations around scripts themselves and what they can do.

One consideration is that knowing when the version-script runs is not easy to grasp at first sight. A rename to something closer to the lifecycle step where it runs makes sense; pre-meta is being used as a working name to describe this.

The previous paragraph also applies to prepare, build and install: while they solve many issues today, better across-the-board names would probably have been pre-&lt;lifecycle-step&gt;, &lt;lifecycle-step&gt; (replacing the step) and post-&lt;lifecycle-step&gt;, making install a proper plugin lifecycle step as well.

Looking back at the version-script concept, which echoes the value of version, redefining it into callable functions such as set-version and set-grade, together with the concept of being able to script anything, gives us powerful control over when things run according to the needs of each project.
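As a sketch, today's keyword and the scripted direction under discussion might look like the following; pre-meta and the set-version/set-grade calls are working names from the conversation, not a final API:

```yaml
# Today: version-script echoes the version value
version: git
version-script: git describe --tags --always

# Discussed direction (working names only): a pre-meta scriptlet
# calling functions instead of echoing a value on stdout
pre-meta: |
  set-version "$(git describe --tags --always)"
  set-grade stable
```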

Fallback grammar

When the stage-packages grammar was introduced it was only implemented for stage-packages entries, but nothing stops us from making it a global attribute of every keyword in a part. This would allow, in some cases, architecture-specific source or build-packages entries.
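For illustration, here is the grammar as it exists for stage-packages today, followed by what the same grammar extended to source might look like; the source form is hypothetical and the package and URL names are made up:

```yaml
parts:
  my-part:
    # Existing grammar: architecture-dependent stage-packages
    stage-packages:
      - on amd64:
        - libfoo-extras
      - else:
        - libfoo
    # Hypothetical extension: the same grammar applied to source
    source:
      - on armhf: https://example.com/project-armhf.tar.gz
      - else: https://example.com/project.tar.gz
```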


Store

While we did a review of the snapcraft CLI with the design team, we also had a focused design review of the store interaction, the goal being a consistent model across all possible means of interacting with the store. The feedback from design here did not impact snapcraft, aside from some minor details which would first need API support on the store side, such as being able to colour expired channel branches in red.

We went on a tangent to discuss all the metadata we have; a proposal is in preparation on how to deal with this, specifically things like description, summary and icon. If interested, please wait for the forum thread discussing this specifically.

Cross compilation

We did a recap of cross compilation. The gist of it is that the plugin side is rather easy, as long as the language supports it, by adding the architecture-specific bits. It gets tricky where it always does: creating the sysroot-like environment in an automated fashion and providing enough hand holding for people to get the "just works" feeling.

We reviewed the current go plugin work and the upcoming rust one, as well as the more complex handling of build-packages and stage-packages, with the corresponding sources.list management, when cross compiling.
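The sources.list management mentioned here is about fetching foreign-architecture stage-packages: on an amd64 host targeting armhf, the entries would look roughly like this (the mirrors shown are the standard Ubuntu ones for each architecture family):

```
deb [arch=amd64] http://archive.ubuntu.com/ubuntu xenial main universe
deb [arch=armhf] http://ports.ubuntu.com/ubuntu-ports xenial main universe
```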


Recording

This is a subject of much debate; the story is easily conflated between recording for analysis and recording for recreating builds. We want a design that works really well for the former and leaves room to evolve toward the latter when it comes to it, but the former is what we want to cater for.

We thought of naming the resulting file that holds these artifacts recording.yaml, since it might hold information not strictly relevant to a common snapcraft.yaml. This requires further discussion, and the key points are being worked on so a final call can be made by the time of the next sprint at the end of June.

During the conversation we sketched out what an API for plugins to work with recording might look like, providing back the assets brought into the part. This is particularly useful for language package managers which have their own mechanisms, such as .lock files or other specific recording.
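Nothing is final here, but a recording.yaml could plausibly capture the resolved build inputs alongside the original snapcraft.yaml data. All field names below are illustrative only; no such format exists yet:

```yaml
# Illustrative sketch of a recording.yaml; every key is hypothetical
snapcraft-version: "2.31"
parts:
  my-part:
    build-packages:
      - gcc=4:5.3.1-1ubuntu1
    assets:
      # What a language package manager might report back through the API
      lock-file: requirements.txt
```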

Better developer story

In order to have a path to rebuilds, we are drawing a distinction: remote sources are commonly dependencies, or what one would use while wearing a packager hat (imaging, in our world), while local sources are your actual project, the one you want to iterate on.
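In snapcraft.yaml terms the distinction might look like this; the part names and URL are made up:

```yaml
parts:
  some-library:
    # Remote source: a dependency, handled with your packager hat on
    source: https://github.com/example/some-library.git
    plugin: autotools
  my-app:
    # Local source: your actual project, the one you iterate on
    source: .
    plugin: python
    after: [some-library]
```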

We discussed the offline story, that is, not having to connect to the internet, with a clear path to iterate without accidentally wiping the assets you need from an online source. The fact that most plugins, given their underpinning technology, assume you are online will not make this easy, but we have plans to get there.

The scenario of a part with remote sources failing to build will go down a path of continuing the build where it left off, instead of re-pulling as is done today.

We will internally implement proper state for prepare-pull and prepare-build. While these will still not be user-callable, we will propose better names: fetch-stage-packages and unpack-stage-packages.


Build containers

We discussed what the expected architecture to build for is when using --remote; the resulting view is that it should be the architecture of the remote. We went over the tasks required in snapcraft to get this going.

We have a proposal for how folks will be prompted about whether they want to use persistent build containers instead of their host.

We did an overview of the cache strategy and came to the agreement that we only want to do this for local remotes.

For the case of using a remote remote, we would need openssh-sftp-server to keep file ownership and permissions sane. We will treat this requirement by prompting to install it, just as we do for lxd.

A way to clean up stale containers is needed; a cleanup strategy involving lxd's last_used_at information is being worked on. How to expose this to the user is something we still need to propose: it will most likely be a new argument to clean, or a new command entirely.

To keep containers sane, we need to select a proper archive mirror depending on the location of the snapcraft user.


Configuration

We are at a point where it would be wise to introduce configuration to snapcraft, both globally and per project. Things like selecting the default lxd remote to use, or wanting persistent containers assigned to a project by default, are worth having.

This will also be useful, specifically in the case of containers, for having default proxies set up, whether you need one at all to talk to the outside world or want one for a particular package repo. You don't want to be doing this every time you start a new project.
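A per-project configuration along these lines could look like the following sketch; none of these keys exist today and all names are illustrative:

```yaml
# Hypothetical snapcraft configuration sketch; every key is made up
lxd-remote: my-build-host
persistent-containers: true
http-proxy: http://proxy.internal:3128
package-repo-proxy: http://apt-cacher.internal:3142
```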



go

We discussed parts support for go. For this we need to extend GOPATH to include stage paths. The documentation for the plugin needs to be extended as well. There are reports that appending to GOPATH doesn't work that well; we need to test this and, if it doesn't, figure out something local.
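As a sketch, a go part depending on another go part is the scenario that needs the stage directory on GOPATH for the dependency's packages to resolve; the part names and URL below are made up:

```yaml
parts:
  my-go-lib:
    plugin: go
    source: https://github.com/example/my-go-lib.git
  my-go-app:
    plugin: go
    source: .
    # Building after my-go-lib only helps if the go plugin also puts
    # the stage directory on GOPATH, which is the work described above
    after: [my-go-lib]
```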

Support for build-snaps would solve discoverability for using the latest go. This is pending on buildd support, at a minimum.


python

We discussed how to easily rebuild old python projects and track changes. Vendoring was the first proposal, with patches applied through the prepare script to keep the delta. A better counter-proposal was made: vendor the source trees into project-owned repositories and have requirements.txt point to those. This is a much better new-world view on how to do this.
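With that approach, requirements.txt points at the project-owned mirrors instead of PyPI, using pip's VCS requirement syntax; the organisation and repository names below are illustrative:

```
# requirements.txt pinned to project-owned vendored repositories
git+https://github.com/myorg/vendored-requests.git@v2.18.1#egg=requests
git+https://github.com/myorg/vendored-flask.git@v0.12.2#egg=flask
```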

We also raised the priority to bring back the .pyc files.

Closing thoughts

The past week was one of many discussions and reevaluations, so there wasn't much in the way of PRs. More on each topic will be followed up in specific existing forum posts or will make its way into new ones. These topics aren't final either, as we want to communicate them to a larger audience and work on the feedback given.

Thanks to Evan (ev) and Michał (Saviq) for their participation and valuable feedback during the sprint.

I can see how this would work for the primary project that the snap targets (in OpenStack this would be 'glance' or 'keystone', for example); however, I think that's only part of the build-reproducibility challenge. Typically the project will depend on tens to hundreds of other Python modules; would the discussed approach also include vendoring those into the snap source tree?

I had ideas about using pypi-mirror to maintain a point-in-time index of pypi for a particular series of OpenStack, which could be used across all OpenStack snaps of a specific release.

Having a good solution for this is important where the snaps in the store need to be maintained for much longer than the standard upstream support period (which for OpenStack is only 12 months).