Hi guys! We’re using a custom store, and to build our images we whitelist the “core” and “pi2-kernel” snaps there. Everything was working fine until this morning, when `ubuntu-image` started to fail with the following error:
+ ubuntu-image snap -w /tmp/tmp.mCYVPZmjup --image-file-list=/build/.ubuntu-images.txt -O /build/ --channel=candidate pi3.model
error: received an unexpected http response code (404) when trying to download https://api.snapcraft.io/api/v1/snaps/download/99T7MUlRhtI3U0QFgl5mXXESAiSwt776_4916.snap
I downloaded that snap manually using wget (without any customizations, just did `wget <URL>`) and added it to the build as an extra local snap. The build copied it and then failed on the next snap:

Copying "99T7MUlRhtI3U0QFgl5mXXESAiSwt776_4916.snap" (core)
error: received an unexpected http response code (404) when trying to download https://api.snapcraft.io/api/v1/snaps/download/m0rvvnmRdDgexonz1XSP9a9U4K7vUbiy_56.snap
After adding all the whitelisted snaps that way, we finally built the image. Note that the snaps located in our custom store were downloaded successfully.
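For anyone hitting the same 404s, the workaround amounts to fetching each failing revision from the URL in the error message and handing the local file to the build. A minimal sketch; the download URL pattern (`<snap-id>_<revision>.snap`) is inferred from the error above, and the `--extra-snaps` option name is an assumption to verify against your `ubuntu-image snap --help`:

```shell
# Hedged sketch of the manual workaround (snap-id and revision taken
# from the 404 error message above).
SNAP_ID="99T7MUlRhtI3U0QFgl5mXXESAiSwt776"
REVISION=4916
URL="https://api.snapcraft.io/api/v1/snaps/download/${SNAP_ID}_${REVISION}.snap"
echo "$URL"

# Fetch it with plain wget (this is what worked here):
#   wget "$URL"
# ...then pass the local file to the build; the exact option name depends
# on your ubuntu-image version (check `ubuntu-image snap --help`):
#   ubuntu-image snap --extra-snaps "./${SNAP_ID}_${REVISION}.snap" pi3.model
```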
Could you confirm the architecture for which you are building the image? I see that core revision 4916 was built for armhf, and core revision 4915 was built for arm64. Given that you are using the pi3 model, maybe the wrong architecture of core is being downloaded?
Hi @natalia, we use armhf arch.
Thanks for the response @renat2017.
Could you please enable debug logs for snapd, retry, and send us the logs? (Be careful with potential Authorization headers; those would need obfuscating if they contain any sensitive information.)
Another question: when was the last time (before this one) that your image was built successfully? With this info I can search for any relevant change to the store authorization process that could explain this.
> Could you please enable debug logs for snapd and retry
Could you please tell me how that is done? We don’t use snapd during the build process; everything is done using the `ubuntu-image` command. I know that it runs the `snap` command under the hood to download snaps. I tried to pass a `-d` flag to `ubuntu-image`, but it didn’t give anything useful.
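One way to get more verbose output, assuming `ubuntu-image` invokes the `snap` command and passes the environment through, is to export snapd's debug environment variables before the build. `SNAPD_DEBUG` and `SNAPD_DEBUG_HTTP` are honoured by the snap client; a sketch under that assumption:

```shell
# Sketch: enable snap client debug output for the build (assumes
# ubuntu-image runs `snap` under the hood with this environment).
export SNAPD_DEBUG=1        # general debug logging
export SNAPD_DEBUG_HTTP=7   # dump store HTTP traffic; scrub any
                            # Authorization headers before sharing logs!

# Then rerun the failing build and capture stderr, e.g.:
#   ubuntu-image snap -w /tmp/workdir --channel=candidate pi3.model 2> build-debug.log
```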
> when was the last time (before this one) that your image was built successfully?
Today, July 18th, ~9:30 AM UTC.
This is surprising. I can see why `pi2-kernel` would be failing for you, and I have some ideas on how best to fix that, but I can’t see why other snaps would work: as far as I can see they should all be failing in the context of `ubuntu-image` for this store.

Unless: are you possibly using the `UBUNTU_STORE_AUTH_DATA_FILENAME` environment variable to pass user authentication to `ubuntu-image`? Otherwise, could you tell us more? An example of a snap name that does work in the context of `ubuntu-image` would be helpful.
> are you possibly using the UBUNTU_STORE_AUTH_DATA_FILENAME environment variable to pass user authentication to ubuntu-image?
Yes, we set both.
> An example of a snap name that does work in the context of ubuntu-image
Thanks. I’m about six hours past my end-of-day at this point, but I’ll dig into this first thing in the morning.
Thanks! Yeah, it’s 2 am in my TZ now, feeling a little bit sleepy too.
Sorry to ask for more details again, but I’m having some difficulty identifying the relevant successful requests in our logs and distinguishing them from other requests from devices for the same snap, as there’s just too much to wade through in production store logs. Could you help me narrow down the field a bit? Ideally, what I’d like is for you to reproduce the failed download of `pi2-kernel` and the successful download of `screenly-client`, all from within the context of `ubuntu-image`, and then give me UTC timestamps for each of those downloads. (Timestamps to the nearest minute or so will be fine.) Once I have that, I should be able to work through things request-by-request and see what’s happening.
- `core` download failed at Thu, 19 Jul 2018 16:35:27 +0300
- `pi2-kernel` download failed at Thu, 19 Jul 2018 16:36:46 +0300
- our custom snap downloads succeeded at Thu, 19 Jul 2018 16:40:01 +0300 (this step took ~30-40 seconds)

Notice the time zone: UTC+3.
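For anyone correlating these with store logs, the UTC+3 timestamps above can be normalised to UTC with GNU date, whose `-d` parser accepts the RFC 2822-style strings as given:

```shell
# Normalise a reported UTC+3 timestamp to UTC for store-log correlation
# (GNU date).
date -u -d 'Thu, 19 Jul 2018 16:35:27 +0300' '+%H:%M:%S UTC'
# → 13:35:27 UTC
```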
Thanks a lot. We’ve discussed this internally and we now understand the (rather subtle) bug in our ACL checking that led to this. I’ve flipped the switch back to the old rules until we get the new implementation fixed, so you should be able to build your images again.
@cjwatson, @natalia, thanks for the help, it works ok now.
Would you mind trying this again? I’ve fixed the bug you ran into last week and flipped the switch back to the new ACL checks we’re putting in place, so I’d like to make sure that your image builds still work. Thanks.
@cjwatson, it still works fine, thanks.