Automated testing of newly-built snaps

Hello all,

Launchpad offers some very nice facilities for setting up automated builds of snap packages. However, before uploading to the store, it would be nice to have a testing stage where the newly-built package can be tested on a variety of target systems. This seems particularly important for compiler snaps, which can run into quite finicky problems on different distros due to issues around libc, position-independent code, etc.

Are there any currently suggested ways to do this? And are there any planned enhancements to either Launchpad or to build.snapcraft.io (OK, Launchpad underneath, I guess…) to handle this?

Thanks & best wishes,

 -- Joe

Do you mean like testing on a CI system such as CircleCI or Travis CI? Or something else?

@joseph.wakeling Do release channels not help with this?

The edge channel is the one intended for CI to be hooked up to. I know you’re using channels and tracks, so I’m interested to know why channels aren’t sufficient for your use case?

Yes, although I’m not thinking of any particular system right now. The challenge here is to test across the diversity of Linux distros that the snap package is expected to work on.

There’s no reason in principle I can’t work from the edge channel. What I’m interested in more than anything else is advice on best practices for doing so (i.e. what other people are doing, how it works, how they handle the many-different-distros case, etc.).

There is a little bit of me that says, well, I’d like to eliminate problems that can be caught by automated testing before something gets into the store, because why expose users to these potential problems? But that’s not a big deal in the grand scheme of things. edge is named edge for a reason, after all :wink:

I am however interested in learning more about what’s planned (if anything) for the store and/or build.snapcraft.io in terms of integrating (and providing) test services.

In our case, every commit to master results in a push and release to edge. When we are ready to move the snap forward so more people can get it, we create an annotated git tag so that the version script in our snapcraft.yaml does its magic to produce a nice version string. For that revision in the store we run our acceptance tests, and if they pass we release the corresponding revisions to beta.
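Concretely, the promotion step looks something like this (the snap name and revision number are placeholders, not from our actual project):

```sh
# Create an annotated tag so the version script can derive a nice version string
git tag -a v1.2.3 -m "release 1.2.3"
git push origin v1.2.3

# Once the build for the tagged commit has landed in edge, look up its store revision
snapcraft status my-snap

# ...and promote the revision that passed the acceptance tests to beta
snapcraft release my-snap 42 beta
```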

Once in beta, we do exploratory and manual testing to determine whether the snap is ready to be released to the next channel risk level.

For automated tests we use a combination of spread tests, boards, and OpenStack instances. This is the part that becomes a per-project thing, but a combination of webhooks and compute is all you need to get going.
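As a rough sketch, a spread.yaml for that kind of matrix might look like the following (the project name, systems, and paths here are made up, and each target system needs snapd available):

```yaml
project: my-snap-tests             # hypothetical project name

backends:
    qemu:                          # any spread backend works: qemu, lxd, linode, ...
        systems:
            - ubuntu-16.04-64
            - debian-9-64
            - fedora-26-64

# Where the project tree gets copied on each test system
path: /home/test/my-snap-tests

suites:
    tests/:
        summary: Install the snap from edge and exercise it
        prepare: |
            snap install my-snap --edge
        restore: |
            snap remove my-snap
```

Each directory under tests/ then carries a task.yaml with a summary and an execute section holding the actual test script.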


Yeah, with build.snapcraft.io it should be easy to get a script running on some machine that performs tests every time the local version of the snap installed from the edge channel changes. All the rest is already sorted.
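For instance, something as simple as this loop on the test machine would do (the snap name and test entry point are placeholders):

```sh
#!/bin/sh
# Watch the edge channel and rerun the test suite whenever the revision changes.
SNAP=my-snap                  # placeholder snap name
last_rev=""
while true; do
    # Pull in whatever is currently in edge (the install handles the first run).
    snap install "$SNAP" --edge 2>/dev/null || snap refresh "$SNAP" --edge
    rev=$(snap list "$SNAP" | awk 'NR==2 {print $3}')
    if [ "$rev" != "$last_rev" ]; then
        last_rev="$rev"
        ./run-tests.sh "$SNAP" "$rev"   # placeholder test entry point
    fi
    sleep 300
done
```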

We use spread widely for our full system tests, but this is really just a mechanism to control the running of such a script on a fresh machine (remote or local VM) that is discarded shortly afterwards. Since this is just about installing pre-built software and running it, any form of scripting should do.
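Running it locally is just a matter of installing the binary and invoking it from the project root; roughly (assuming a Go toolchain, and do check the README in case the instructions have changed):

```sh
# Fetch the spread binary
go get -u github.com/snapcore/spread/cmd/spread

# Run everything defined in spread.yaml...
spread

# ...or narrow down to a single backend and system
spread qemu:ubuntu-16.04-64
```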

Thanks for raising this, Joseph. We ultimately want to support two avenues:

  1. Building snaps as part of pull requests (for review by humans)
  2. Webhooks when snaps land in edge (for review by automation)

Reviewing a results dashboard and making the final call on promotion to stable is still left up to a human reviewer.

@evan thanks: this is pretty much the kind of thing I was looking for, and it’s good to see it on the roadmap. Although I would suggest that for PRs, too, it would be important to auto-test the built snap (I’ve added a comment to this effect on the first GitHub issue).

@sergiusens @niemeyer thanks for pointing me to spread; I’d not heard of it before and it looks like an interesting tool. Is this something where I’d have to set up my own test server, or is it integrated into Launchpad to the point where I could just define a spread.yaml and have things start working?

I ask because although the spread README is very detailed in describing how to define tests, AFAICS it doesn’t cover how to get them running automatically as part of a build-and-release pipeline :wink:

Hello all – I wonder if I could check in on the status of this. Has there been any progress? (The GitHub issue does not appear to have been updated.)