Can no longer build any snaps

I am trying to build my snap and the build no longer works. This snap is already in the Snap Store, and I’m just trying to update it. But the build process itself seems to have changed somehow, and I can no longer build the snap on my machine.

So, to make sure the problem is the build process and not my snap itself, I tried snapping the basic hello-world snapcraft app and, sure enough, that fails too.

Mac

(1) On my Mac with both snapcraft and multipass installed, I get some errors. This did work fine a few months ago, but it stopped working and I switched to using my Ubuntu VM. But since that too has now stopped working, I figured I would give the Mac another try. I’ve uninstalled and reinstalled both snapcraft and multipass to no avail.

hello$ snapcraft clean
Failed to get information for snap 'core18': could not connect to 'http+unix://%2Frun%2Fsnapd.socket/v2/snaps/core18'

And then…

hello$ sudo snapcraft
Password:
Launching a VM.
error: no changes of type "auto-refresh" found                                  
snap "snapd" has no updates available
snap "core18" has no updates available
snap "snapcraft" has no updates available
mount failed: source "/Users/myuser/Documents/snappath/hello" is not readable
An error occurred with the instance when trying to mount with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.

(2) On my Mac I tried building remotely, which I have never done before. Got errors.

hello$ snapcraft remote-build
All data sent to remote builders will be publicly available. Are you sure you want to continue? [y/N]: y
snapcraft remote-build is experimental and is subject to change - use with caution.
Sorry, an error occurred in Snapcraft:
name '_source_handler' is not defined
Traceback (most recent call last):
  File "/usr/local/bin/snapcraft", line 8, in <module>
    sys.exit(run())
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/cli/remote.py", line 157, in remote_build
    _start_build(
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/cli/remote.py", line 188, in _start_build
    repo_dir = wt.prepare_repository()
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/internal/remote_build/_worktree.py", line 322, in prepare_repository
    part_config = self._process_part_sources(part_name, part_config)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/internal/remote_build/_worktree.py", line 241, in _process_part_sources
    part_config["source"] = self._pull_source(part_name, source)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/internal/remote_build/_worktree.py", line 163, in _pull_source
    source_handler = self._get_part_source_handler(part_name, source, download_dir)
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/internal/remote_build/_worktree.py", line 92, in _get_part_source_handler
    handler_class = snapcraft.internal.sources.get_source_handler(
  File "/usr/local/Cellar/snapcraft/3.11_1/libexec/lib/python3.8/site-packages/snapcraft/internal/sources/__init__.py", line 170, in get_source_handler
    return _source_handler[source_type]
NameError: name '_source_handler' is not defined
We would appreciate it if you created a bug report at
https://launchpad.net/snapcraft/+filebug with the above text included.
You can find the traceback in file '/var/folders/4v/y1kp9phs1sq2x2t302t_33s00000gn/T/tmp2_ay4fac/trace.txt'.

Ubuntu

(3) On an Ubuntu VM running on my Mac in Parallels, trying to use multipass, I get some errors. This has never worked. I assume a Parallels VM cannot launch another VM? I don’t know if that can be changed on my VM somehow. Any suggestions?

hello$ snapcraft
Launching a VM.
launch failed: CPU does not support KVM extensions.                             
An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.

(4) On my Ubuntu VM running on my Mac in Parallels, trying to use LXD, I get some errors. Now this used to work – in fact, today is the first time I can’t make it work.

Initializing LXD, I used the defaults except for the size of the new loop device. I noticed the default size (12GB) is different from what it used to be (wasn’t it 15GB before?), so perhaps something here has changed and is now causing it to fail for me? Should I configure something differently when initializing this? I really don’t know what any of these settings mean.

Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty disk or partition? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=12GB]: 30GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

Here are the errors I am now getting.

hello$ snapcraft clean --use-lxd
An error occurred when trying to communicate with the 'LXD' provider: cannot connect to the LXD socket ('/var/snap/lxd/common/lxd/unix.socket')..

And…

snapcraft --use-lxd
An error occurred when trying to communicate with the 'LXD' provider: cannot connect to the LXD socket ('/var/snap/lxd/common/lxd/unix.socket')..

I wonder if this is related to this question?

(5) On my Ubuntu VM running on my Mac in Parallels, I also tried destructive-mode and got errors.

hello$ snapcraft --destructive-mode
Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]                          
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]                           
Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]       
Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 DEP-11 Metadata [306 kB]     
Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 DEP-11 Metadata [279 kB]      
Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 DEP-11 Metadata [2,468 B] 
Get:8 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 DEP-11 Metadata [7,972 B]
Get:9 http://security.ubuntu.com/ubuntu bionic-security/main amd64 DEP-11 Metadata [43.7 kB]        
Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 DEP-11 Metadata [49.2 kB]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 DEP-11 Metadata [2,464 B]
Fetched 942 kB in 1s (831 kB/s)     
Reading package lists... Done
Installing build dependencies: autoconf automake autopoint autotools-dev libltdl-dev libsigsegv2 libtool m4
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.15.0-99 linux-headers-4.15.0-99-generic linux-image-4.15.0-99-generic linux-modules-4.15.0-99-generic linux-modules-extra-4.15.0-99-generic
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc libtool-doc gfortran | fortran95-compiler gcj-jdk m4-doc
The following NEW packages will be installed:
  autoconf automake autopoint autotools-dev libltdl-dev libsigsegv2 libtool m4
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,864 kB of archives.
After this operation, 6,650 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libsigsegv2 amd64 2.12-1 [14.7 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 m4 amd64 1.4.18-1 [197 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 autoconf all 2.69-11 [322 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 autotools-dev all 20180224.1 [39.6 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 automake all 1:1.15.1-3ubuntu2 [509 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 autopoint all 0.19.8.1-6ubuntu0.3 [426 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl-dev amd64 2.4.6-2 [162 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libtool all 2.4.6-2 [194 kB]
Fetched 1,864 kB in 1s (2,610 kB/s)
Selecting previously unselected package libsigsegv2:amd64.
(Reading database ... 203155 files and directories currently installed.)
Preparing to unpack .../0-libsigsegv2_2.12-1_amd64.deb ...
Unpacking libsigsegv2:amd64 (2.12-1) ...
Selecting previously unselected package m4.
Preparing to unpack .../1-m4_1.4.18-1_amd64.deb ...
Unpacking m4 (1.4.18-1) ...
Selecting previously unselected package autoconf.
Preparing to unpack .../2-autoconf_2.69-11_all.deb ...
Unpacking autoconf (2.69-11) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../3-autotools-dev_20180224.1_all.deb ...
Unpacking autotools-dev (20180224.1) ...
Selecting previously unselected package automake.
Preparing to unpack .../4-automake_1%3a1.15.1-3ubuntu2_all.deb ...
Unpacking automake (1:1.15.1-3ubuntu2) ...
Selecting previously unselected package autopoint.
Preparing to unpack .../5-autopoint_0.19.8.1-6ubuntu0.3_all.deb ...
Unpacking autopoint (0.19.8.1-6ubuntu0.3) ...
Selecting previously unselected package libltdl-dev:amd64.
Preparing to unpack .../6-libltdl-dev_2.4.6-2_amd64.deb ...
Unpacking libltdl-dev:amd64 (2.4.6-2) ...
Selecting previously unselected package libtool.
Preparing to unpack .../7-libtool_2.4.6-2_all.deb ...
Unpacking libtool (2.4.6-2) ...
Setting up libltdl-dev:amd64 (2.4.6-2) ...
Setting up libsigsegv2:amd64 (2.12-1) ...
Setting up m4 (1.4.18-1) ...
Setting up autotools-dev (20180224.1) ...
Setting up autopoint (0.19.8.1-6ubuntu0.3) ...
Setting up libtool (2.4.6-2) ...
Setting up autoconf (2.69-11) ...
Setting up automake (1:1.15.1-3ubuntu2) ...
update-alternatives: using /usr/bin/automake-1.15 to provide /usr/bin/automake (automake) in auto mode
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
autoconf set to automatically installed.
automake set to automatically installed.
autopoint set to automatically installed.
autotools-dev set to automatically installed.
libltdl-dev set to automatically installed.
libsigsegv2 set to automatically installed.
libtool set to automatically installed.
m4 set to automatically installed.
Pulling gnu-hello 
Sorry, an error occurred in Snapcraft:
[Errno 1] Operation not permitted

I tried sudo, but that didn’t help.

hello$ sudo snapcraft --destructive-mode
Running with 'sudo' may cause permission errors and is discouraged. Use 'sudo' when cleaning.
Pulling gnu-hello 
Sorry, an error occurred in Snapcraft:
[Errno 1] Operation not permitted

The long and the short of it is, I cannot build snaps! I used to be able to, but now I can’t! Can someone please help!?!

Thank you!!!

For (1) Maybe @Saviq can help?

For (2): macOS and remote-build: the Homebrew formula needs to be updated; I think this was fixed around snapcraft 4.0.0.

For (3): You need to turn on nested virtualization to use multipass inside a VM, if Parallels supports it.
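To check from inside the Ubuntu guest whether hardware virtualization is actually being passed through, you can probe for KVM. This is only a sketch: the exact Parallels setting name varies by version, and nested virtualization may require a Pro/Business license.

```shell
#!/bin/sh
# Probe for KVM support inside the guest. multipass needs /dev/kvm,
# which only appears when the outer hypervisor (Parallels here)
# exposes nested virtualization to the VM.
kvm_available() {
  [ -e /dev/kvm ] && grep -qE 'vmx|svm' /proc/cpuinfo
}

if kvm_available; then
  echo "KVM available - multipass can launch VMs here"
else
  echo "KVM not available - enable nested virtualization in the"
  echo "Parallels VM settings (or use --use-lxd / --destructive-mode)"
fi
```

If the probe reports no KVM even after enabling the setting, a full shutdown and restart of the VM (not just a reboot from inside the guest) is usually needed for the change to take effect.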

For (4): When initializing LXD, I always use lxd init --auto to accept the defaults, but I doubt any of your settings are wrong. Is your user in the lxd group?
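One subtlety here: being listed in /etc/group is not enough; the group has to be active in the running session for the LXD socket to be accessible. A minimal sketch to check the current shell (the group_active helper name is my own):

```shell
#!/bin/sh
# A group added with usermod does not take effect until a new login
# session: the LXD socket check uses the groups of the *running* shell.
group_active() {
  id -nG | tr ' ' '\n' | grep -qx "$1"
}

if group_active lxd; then
  echo "lxd is active in this session"
else
  echo "lxd is not active here - log out and back in, or run: newgrp lxd"
fi
```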

For (5) Can you retry with --debug and/or --enable-developer-debug and see if it gives you anything else?

For (2), are you saying I need to update something on my Mac? Or that something out there in the www-ether is broken and needs to be updated by someone else?

For (4), when I run groups I do not see lxd, but when I run members lxd, I am listed. I’m not sure what that means. Either way, I added myself:
sudo usermod -a -G lxd myusername

…but nothing changed, even after restarting terminal. Still getting this same error:
An error occurred when trying to communicate with the 'LXD' provider: cannot connect to the LXD socket ('/var/snap/lxd/common/lxd/unix.socket')..

For (5), I got more data with --enable-developer-debug. Here it is:

hello$ snapcraft --destructive-mode --enable-developer-debug
Starting snapcraft 4.0.4 from /snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/cli.
snapcraft is running in a docker or podman (OCI) container
Parts dir /path/hello/parts
Stage dir /path/hello/stage
Prime dir /path/hello/prime
ignoring or passing through unknown parts=OrderedDict([('gnu-hello', OrderedDict([('source', 'http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz'), ('plugin', 'autotools')]))])
Parts dir /path/hello/parts
Stage dir /path/hello/stage
Prime dir /path/hello/prime
ignoring or passing through unknown parts=OrderedDict([('gnu-hello', OrderedDict([('source', 'http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz'), ('plugin', 'autotools')]))])
ignoring or passing through unknown parts={'gnu-hello': {'source': 'http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz', 'plugin': 'autotools', 'stage': [], 'prime': []}}
Loading plugin module 'autotools' with sys.path ['/path/hello/snap/plugins', '/snap/snapcraft/4862/bin', '/snap/snapcraft/4862/usr/lib/python36.zip', '/snap/snapcraft/4862/usr/lib/python3.6', '/snap/snapcraft/4862/usr/lib/python3.6/lib-dynload', '/snap/snapcraft/4862/usr/lib/python3/dist-packages', '/snap/snapcraft/4862/lib/python3.6/site-packages']
Setting up part 'gnu-hello' with plugin 'autotools' and properties {'source': 'http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz', 'plugin': 'autotools', 'stage': [], 'prime': []}.
Requested build-packages: ['autoconf', 'automake', 'autopoint', 'libtool', 'make']
Marking 'make' (and its dependencies) to be fetched
package: <Package: name:'make' architecture='amd64' id:458>
Marking 'autopoint' (and its dependencies) to be fetched
package: <Package: name:'autopoint' architecture='amd64' id:389>
Marking 'autoconf' (and its dependencies) to be fetched
package: <Package: name:'autoconf' architecture='amd64' id:359>
Marking 'libtool' (and its dependencies) to be fetched
package: <Package: name:'libtool' architecture='amd64' id:368>
Marking 'automake' (and its dependencies) to be fetched
package: <Package: name:'automake' architecture='amd64' id:363>
snapcraft is running in a docker or podman (OCI) container
Preparing to pull gnu-hello 
Pulling gnu-hello 
snapcraft is running as a snap True, SNAP_NAME set to 'snapcraft'
Sorry, an error occurred in Snapcraft:
[Errno 1] Operation not permitted
Traceback (most recent call last):
  File "/snap/snapcraft/4862/bin/snapcraft", line 11, in <module>
    load_entry_point('snapcraft==4.0.4', 'console_scripts', 'snapcraft')()
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 1236, in invoke
    return Command.invoke(self, ctx)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/cli/_runner.py", line 102, in run
    snap_command.invoke(ctx)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/cli/_command.py", line 88, in invoke
    return super().invoke(ctx)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/cli/lifecycle.py", line 273, in snap
    _execute(steps.PRIME, parts=tuple(), pack_project=True, output=output, **kwargs)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/cli/lifecycle.py", line 78, in _execute
    lifecycle.execute(step, project_config, parts)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/lifecycle/_runner.py", line 137, in execute
    executor.run(step, part_names)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/lifecycle/_runner.py", line 191, in run
    self._handle_step(part_names, part, step, current_step, cli_config)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/lifecycle/_runner.py", line 205, in _handle_step
    getattr(self, "_run_{}".format(current_step.name))(part)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/lifecycle/_runner.py", line 247, in _run_pull
    self._run_step(step=steps.PULL, part=part, progress="Pulling")
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/lifecycle/_runner.py", line 325, in _run_step
    getattr(part, step.name)()
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 477, in pull
    self._do_runner_step(steps.PULL)
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 268, in _do_runner_step
    return getattr(self._runner, "{}".format(step.name))()
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 80, in pull
    steps.PULL,
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 126, in _run_scriptlet
    call_fifo = _NonBlockingRWFifo(os.path.join(tempdir, "function_call"))
  File "/snap/snapcraft/4862/lib/python3.6/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 235, in __init__
    os.mkfifo(path)
PermissionError: [Errno 1] Operation not permitted
snapcraft is running as a snap True, SNAP_NAME set to 'snapcraft'
We would appreciate it if you anonymously reported this issue.
No other data than the traceback and the version of snapcraft in use will be sent.
Would you like to send this error data? (Yes/No/Always/View) [no]: Yes
Configuring Raven for host: <raven.conf.remote.RemoteConfig object at 0x7fb44def9f60>
Sending message of length 4017 to https://sentry.io/api/277754/store/
Thank you, sent.
You can find the traceback in file '/tmp/tmp01pc5695/trace.txt'.

Restarting the terminal is not enough; either restart your session or run newgrp lxd in that terminal before running snapcraft again.

Is hello (or the working directory) shared/mounted from your Mac?

I needed to restart the VM. That enabled the group membership to take effect. So now LXD (4) works for me! Thank you guys for your help!

But, to answer your question @sergiusens, yes. The working directory is shared from my Mac, where it resides in my Documents folder. The path in my Ubuntu VM to get to the files looks like this (if this provides clarification):
/media/psf/AllFiles/Users/myusername/Documents/mypath/hello/

Does that make a difference for (5)?

Glad the reboot worked for you. For future reference, you should log out of your desktop session after changing your logged-in user’s groups. For a specific shell instance, you can adjust the current group with newgrp <group-name>.

I needed to restart the VM. That enabled the group membership to take effect. So now LXD (4) works for me! Thank you guys for your help!

Great!

But, to answer your question @sergiusens, yes. The working directory is shared from my Mac, where it resides in my Documents folder. The path in my Ubuntu VM to get to the files looks like this (if this provides clarification):
/media/psf/AllFiles/Users/myusername/Documents/mypath/hello/

Does that make a difference for (5)?

Yes it does: in --destructive-mode we run mkfifo in a subdirectory of the project, and that may well be the cause of the error, if the shared file system does not support FIFOs.
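Since snapcraft creates a FIFO with mkfifo during the pull step, you can probe whether a given directory supports FIFOs at all. A sketch: run it once in the Parallels-shared project directory and once somewhere on the VM's own disk (such as /tmp) to compare.

```shell
#!/bin/sh
# Try to create (and immediately remove) a FIFO in the current
# directory. Shared/remote file systems often refuse mkfifo with
# EPERM - the same "Operation not permitted" that snapcraft
# --destructive-mode reports.
probe="./.fifo_probe_$$"
if mkfifo "$probe" 2>/dev/null; then
  echo "FIFO support: yes"
  rm -f "$probe"
else
  echo "FIFO support: no - build from a directory on the VM's own disk"
fi
```

If the shared folder fails the probe, copying the project to a native directory inside the VM before running snapcraft should work around the error.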
