Asset recording for a built snap

This one adds the recording of build packages:

https://github.com/snapcore/snapcraft/pull/1295

But actually, I found a bug in the pull tracking: it doesn’t save the dependencies of build packages. So what I said before about saving all the dependencies currently works only for stage packages: https://bugs.launchpad.net/snapcraft/+bug/1688151

I’m trying to fix that bug.

Consider recording the dependencies of build-essential or the equivalent for snap builds as well as those packages explicitly listed in stage-packages and build-packages.

If you haven’t already, you should probably look at dpkg-genbuildinfo as prior art here; as well as build-dependencies, it also records things like a subset of interesting environment variables. Quite a lot of work has gone into that from reproducible-builds folks already.
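
For concreteness, here is a minimal sketch of that idea, assuming python-apt is available (this is an illustration, not snapcraft code): it lists the direct dependencies of build-essential, and walking them recursively would give the full closure a recorder could store alongside the explicitly listed build-packages.

```python
# Illustration only (assumes python-apt): print the direct dependencies of
# build-essential; recursing over them would give the full closure.
import apt

cache = apt.Cache()
candidate = cache['build-essential'].candidate
print('build-essential', candidate.version)
for dependency in candidate.dependencies:
    for alternative in dependency.or_dependencies:
        print('  depends on:', alternative.name, alternative.relation, alternative.version)
```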

And this should fix that bug:

https://github.com/snapcore/snapcraft/pull/1299

It comes with an integration test to check that undeclared build dependencies are recorded.

Thanks @cjwatson!
I found it here: https://alioth.debian.org/scm/browser.php?group_id=30261
We’ll be checking it.

Here is the addition of global build packages to the recorded yaml:

https://github.com/snapcore/snapcraft/pull/1306

Another small fix, because the keys we were recording during pull had the wrong names:

https://github.com/snapcore/snapcraft/pull/1312

And this is the last piece, which finishes copying the pull state to prime/snap/snapcraft.yaml:

https://github.com/snapcore/snapcraft/pull/1317

Next: cleanups, refactors, and recording fancier information during pull…

During the 2.30 release and afterwards, I will be testing builds from this recorded snapcraft.yaml to identify non-reproducible builds and missing pieces of information. I would appreciate it if everybody could look at their prime/snap/snapcraft.yaml too, and let us know if they see anything weird.
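
If you want a quick way to compare the two files, here is a rough sketch (assuming PyYAML; not part of snapcraft) that prints the top-level keys where the recorded file differs from your original:

```python
# Rough helper to spot "anything weird": print the top-level keys where
# prime/snap/snapcraft.yaml differs from the source snapcraft.yaml.
import yaml

with open('snapcraft.yaml') as f:
    original = yaml.safe_load(f)
with open('prime/snap/snapcraft.yaml') as f:
    recorded = yaml.safe_load(f)

for key in sorted(set(original) | set(recorded)):
    if original.get(key) != recorded.get(key):
        print('differs:', key)
        print('  original:', original.get(key))
        print('  recorded:', recorded.get(key))
```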

By pure luck, or extreme foresight, one of our test snaps caught a problem in the state tracking of build packages. It is not something new, but it got worse because we now collect all the dependencies.

Here are the fixes:

https://github.com/snapcore/snapcraft/pull/1322

https://github.com/snapcore/snapcraft/pull/1323

I introduced a regression, because in some cases this will try to install packages that are no longer in the archive.

Here is a quick fix:

https://github.com/snapcore/snapcraft/pull/1333

However, this got us thinking and discussing a lot about build-packages, because the way we handle their state is not nice. The biggest problem is that we first install them using apt-get, and only later save the packages, their versions and their dependencies. It would be a lot better if we could save the packages right after they were installed. This, however, requires a bigger refactor, which I started today.
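
To illustrate the direction of that refactor, here is a minimal sketch of the “save right after install” idea (the package names are just examples, and the real code goes through snapcraft’s own apt handling rather than plain subprocess calls):

```python
# Sketch only: install build packages with apt-get and immediately record the
# exact versions dpkg reports, instead of reconstructing them later.
import subprocess

def install_and_record(package_names):
    subprocess.check_call(['sudo', 'apt-get', 'install', '--yes'] + package_names)
    recorded = {}
    for name in package_names:
        version = subprocess.check_output(
            ['dpkg-query', '--showformat=${Version}', '--show', name],
            universal_newlines=True)
        recorded[name] = version.strip()
    return recorded

# Example packages, not a real part's build-packages.
print(install_and_record(['gcc', 'make']))
```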

We also agreed on a couple of simplifications:

  • We will only save the build-packages installed by snapcraft. If a dependency was preinstalled on the system, it will not be saved in the state. This means that the recording will be fully accurate only in cleanbuild (see the sketch after this list).
  • We will save all the build-packages before the parts are pulled. This means that build-packages are no longer saved per part; they will all be saved as global packages. We are still discussing where to save those global build packages.
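
A small sketch of the first simplification, assuming python-apt (the package names are only examples): anything already installed is skipped, so only the packages snapcraft itself installs end up in the state.

```python
# Sketch only: keep just the build-packages that are not already installed,
# since preinstalled packages will not be recorded in the state.
import apt

def packages_snapcraft_will_install(requested):
    cache = apt.Cache()
    return [name for name in requested if not cache[name].is_installed]

# Example package names.
print(packages_snapcraft_will_install(['gcc', 'libssl-dev']))
```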

Here is a prerequisite of the refactor, to make the tests that depend on the cache use a better fake:

https://github.com/snapcore/snapcraft/pull/1334

This is the branch that moves all the saved build-packages assets to global build-packages:

https://github.com/snapcore/snapcraft/pull/1340

This simplifies the problem a lot. But as explained in the PR, there are two important things to note:

  • If a build package is already installed, it won’t be saved as an asset. So in the end, the recorded annotated snapcraft.yaml will be accurate only in cleanbuild, and it requires that we also record the image used during cleanbuild.
  • The recorded annotated snapcraft.yaml will be slightly different from the original, with the build packages for all the parts listed as global packages. As we install all the build packages before any part anyway, this is equivalent.

These points make me feel like the feature may not be very useful. If the data is imprecise and varies wildly depending on unclear external conditions, it’s better not to have the data than to have something arbitrary every time. If it is promising a list of assets, that list must be reasonably precise to be useful.

Maybe we should aggregate this with the full list of packages installed at the time of the build; that seems better than recording the hash of the image used, and could lead to an eventual reproducible build.

I do think it is useful to know which version of libboost you installed, since a different version would have generated a different list of binaries; so yes, it may be different, but it would allow for CVE tracking. That said, the aggregation of packages installed at build time could bring in the full picture.

That said, the things you mention are part of the reason I haven’t pushed harder for this; my mind is not entirely set on it.

I would like some insight into how this is currently being done for the core snap (not necessarily from you), to get some pointers. We are also waiting on comments from the security team on this; I might try to nudge them.

Also, there are two things that get conflated here: recording what was used to build in order to analyze it, and recording it in order to reproduce a build. I am much more interested in the former, with a design that can potentially lead us to the latter, but the latter is not the focus here.

What makes you feel like the data will be imprecise?

The hard part here is that there are no external conditions: your whole machine, its state and its configuration directly affect the result of the build. The only solution for this is to control the full environment, which is what we do in cleanbuild. In cleanbuild, the saved states will always be the same, even if the build is run on a different machine. In a normal build, the saved states will always be the same if the build is run on the same machine. If you do a normal build, then modify your machine and build again, the results will be different; but they will be exactly what snapcraft did during the build, without trying to guess anything.

With the previous implementation we were just doing a static analysis of the apt cache, which worked fine in cleanbuild but could differ from what actually happened in a normal build. Say, for example, that you have the make package installed, but you are actually using a modified version that is installed in your $HOME and accessible through $PATH. If we accept that we can’t control everything that happens on a developer machine, then we can simplify the problem a lot by just saving all the actions that snapcraft executes.
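
To make the make example concrete, here is a small sketch (nothing snapcraft-specific) of why a static analysis of the apt cache can disagree with what the build actually ran:

```python
# Sketch only: the make a build actually runs is whatever resolves first on
# $PATH; `dpkg -S` tells us whether any installed package owns that file.
import shutil
import subprocess

actual_make = shutil.which('make')
print('make resolved from $PATH:', actual_make)

if actual_make:
    try:
        owner = subprocess.check_output(
            ['dpkg', '-S', actual_make], universal_newlines=True)
        print('owned by:', owner.strip())
    except subprocess.CalledProcessError:
        print('not owned by any package; a static apt analysis would miss this')
```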

How’s this topic going over there?

Sorry for not answering this back then. The point about the data being imprecise comes from what you described above: if the actions that snapcraft executes are based on the state of the local machine, then it may (and will) execute different actions depending on what happened to be installed on that machine, which doesn’t seem very useful.

The fact that it’s a cleanbuild improves the situation a bit because the image is a shared resource, but it’s still a similar feeling: in six months that image will most likely be different, and we’ll have no idea what was actually used.

Ideally the manifest will provide a more realistic picture of what affected the build, instead of simply hinting at snapcraft actions taken.

The plan is to record the fingerprint of the image (in the case of cleanbuild), which does not change.
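
For reference, here is a rough sketch of one way to grab such a fingerprint from LXD (the image alias below is only an example, and snapcraft’s actual cleanbuild code does this through its own LXD handling):

```python
# Sketch only: read the "Fingerprint:" line that `lxc image info` prints for
# the image cleanbuild would use.
import subprocess

def image_fingerprint(image):
    info = subprocess.check_output(
        ['lxc', 'image', 'info', image], universal_newlines=True)
    for line in info.splitlines():
        if line.strip().startswith('Fingerprint:'):
            return line.split(':', 1)[1].strip()
    return None

# Example image alias.
print(image_fingerprint('ubuntu:16.04'))
```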

We will probably treat a missing image fingerprint the same way we would treat a package that is not pinned to a version in the case of a rebuild (which we really aren’t going to focus on this cycle).

Although it’s fine and nice to have an image digest in the manifest, this is not a solution for the points made above, and the reason was already hinted at. We can look at it from this perspective: we might also create a digest of your machine at the time you built the snap. Okay, now we know that the environment was the same across several builds, but what was in the actual environment when the snap was built? That’s what the manifest is supposed to answer.

And again, yes, the fact that it’s a cleanbuild improves the situation a bit because the image is a shared resource, but it’s still a similar feeling: in six months that image will most likely be different, your digest may not even be around anymore, and we’ll have no idea what was actually used. Even if the image is still around, it would be much better to have data in the manifest that properly hints at what was used, instead of a digest.

The image fingerprint is just a piece of information. We are planning to record lots more. I am opening posts with the topic “record” in the category snapcraft, to discuss each piece of information with the security team and anybody else who wants to participate.

So far, we have PRs or released commits for: all the info from the source snapcraft.yaml, build-packages, stage-packages, python-packages, the Python requirements file, the Python constraints file, node-packages, the yarn.lock file, uname, and all the packages installed on the machine.
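
As a flavour of the kind of data involved, here is a small sketch collecting two of those pieces, uname and the list of installed packages (the dictionary layout is just an example, not the actual manifest format):

```python
# Sketch only: gather uname and the installed-package list into a
# manifest-like dictionary; the structure is illustrative, not snapcraft's.
import platform
import subprocess

manifest = {}
manifest['uname'] = ' '.join(platform.uname())
installed = subprocess.check_output(
    ['dpkg-query', '--showformat=${Package}=${Version}\\n', '--show'],
    universal_newlines=True)
manifest['installed-packages'] = installed.splitlines()

print(manifest['uname'])
print(len(manifest['installed-packages']), 'installed packages recorded')
```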

At a conference last week during a talk I gave about snapcraft, there was a question from the audience regarding this topic. Is there a plan of record for when reproducible builds will be supported / possible?

The feature we are writing right now is for auditing, not for reproducibility. Reproducible builds are not on our roadmap for this cycle.

Having said that, if your build system doesn’t introduce any noise, and your sources are tagged with hashes and versions in the snapcraft.yaml, then building the snap twice should result in the same snap.
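
If you want to check that on your own snap, here is a quick sketch (the file paths are only examples) comparing the SHA-256 of two builds:

```python
# Sketch only: identical digests mean the two builds are bit-for-bit the same.
import hashlib

def sha256(path):
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Example paths to two builds of the same snap.
print(sha256('first/my-snap_1.0_amd64.snap') == sha256('second/my-snap_1.0_amd64.snap'))
```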

Of course, we are not focusing on this and we are not testing for it; there are many things we can do in the future to be able to verify a build bit by bit.
