Snapd causing lengthy boot time?

I don’t boot my system very often, but when I do, it takes longer than I’d like.

alan@hal:~$ uprecords 
     #               Uptime | System                                     Boot up
     1    32 days, 23:23:16 | Linux 4.15.0-45-generic   Wed Feb 13 12:14:29 2019
     2    21 days, 01:23:50 | Linux 4.15.0-43-generic   Wed Jan 23 10:42:03 2019
     3    14 days, 15:28:32 | Linux 4.15.0-43-generic   Tue Jan  8 17:43:12 2019
     4    13 days, 08:28:58 | Linux 4.15.0-42-generic   Wed Dec 19 09:41:51 2018
     5    11 days, 09:07:33 | Linux 4.15.0-39-generic   Mon Dec  3 00:08:59 2018
     6     6 days, 23:26:56 | Linux 4.15.0-43-generic   Tue Jan  1 18:15:21 2019
     7     5 days, 00:17:56 | Linux 4.15.0-42-generic   Fri Dec 14 09:22:39 2018
->   8     0 days, 00:11:49 | Linux 4.15.0-45-generic   Mon Mar 18 11:48:04 2019
1up in     5 days, 00:06:08 | at                        Sat Mar 23 12:06:00 2019
no1 in    32 days, 23:11:28 | at                        Sat Apr 20 12:11:20 2019
    up   105 days, 09:48:50 | since                     Mon Dec  3 00:08:59 2018
  down     0 days, 02:02:04 | since                     Mon Dec  3 00:08:59 2018
   %up               99.920 | since                     Mon Dec  3 00:08:59 2018

I consider this to be a “powerful” computer…

… with a “fast” storage medium for the boot/root (and /var/disk) …

Yet it takes a long time to boot, and a significant chunk of that is apparently down to snapd, if I am reading the output of systemd-analyze blame correctly. Here is the full output; a snippet is below.

alan@hal:~$ systemd-analyze blame
         20.504s snapd.service
          3.439s NetworkManager-wait-online.service
          3.406s plymouth-quit-wait.service
          2.787s snap.lxd.activate.service
          2.747s dev-nvme0n1p3.device
          2.518s snap-alexa-x1.mount
          2.516s snap-review\x2dtools-753.mount
          2.513s snap-marge-14.mount
          2.512s snap-terminal\x2ddescent-x2.mount
          2.501s snap-isometric\x2dgame-x5.mount
          2.498s snap-planetlander-2.mount
          2.495s snap-core18-782.mount
          2.494s snap-gnome\x2dlogs-37.mount
          2.492s snap-mrrescue-64.mount
          2.488s snap-vault-1674.mount
          2.475s snap-hyperfine-x1.mount
          2.468s snap-bitwarden-16.mount
          2.466s snap-legit-x1.mount
          2.462s snap-manta-x1.mount
          2.459s snap-hextris-14.mount
          2.446s snap-qemu\x2dvirgil-40.mount
          2.439s snap-auryo-139.mount
          2.436s snap-eclipse-39.mount

Yes, I have a lot of snaps installed…

alan@hal:~$ snap list | wc -l

Is there any way we can make this faster? I imagine someone with a slightly less powerful computer would feel this more than I do, especially if they boot more often than my once every couple of weeks or so.


One way to make it faster would be to not mount snaps on startup and instead delay that until they are really needed.


And only mount a snap when it’s really needed, rather than mounting all of them up front.
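For illustration, systemd already has a mechanism for this kind of deferral: an automount unit postpones the real mount until the path is first accessed. A minimal sketch for a single snap, with a hypothetical unit name and path (snapd generates its own mount units, so this is illustrative only):

```ini
# Hypothetical /etc/systemd/system/snap-core18-782.automount
# Defers mounting /snap/core18/782 until something first touches the path.
[Unit]
Description=Automount for snap core18, revision 782

[Automount]
Where=/snap/core18/782

[Install]
WantedBy=multi-user.target
```

The matching snap-core18-782.mount unit would then no longer be wanted by multi-user.target directly; it would only be pulled in by the automount on first access.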

Could you share the output of systemd-analyze plot?

blame talks about “how long the unit took to initialize”, and it’s not clear to me how that interacts with a notify unit (like snapd.service) waiting for other things without actually holding up the boot itself.

Had to reboot (I moved the PC to another desk as part of an office re-jig), so this isn’t the same boot as the previous one, but it should be similar. The only difference is that I didn’t have the network attached for this boot, in case that matters.


Just last week I played a bit with tracing to figure out why the mount units take so long to start, and we discussed it briefly during the standup. The observation was that it took particularly long to run a very specific ioctl() on the loopback device.

The ioctl in question is:

ioctl(4, LOOP_SET_STATUS64, 
      {lo_device=makedev(0, 0), lo_inode=0, lo_rdevice=makedev(0, 0), 
       lo_offset=0, lo_sizelimit=0, lo_number=0, lo_encrypt_type=LO_CRYPT_NONE, 
       lo_encrypt_key_size=0, lo_flags=LO_FLAGS_AUTOCLEAR, 
       lo_crypt_name="", lo_encrypt_key="", lo_init=[0, 0]}) = 0 <0.014573>

The call operates on a file descriptor for an open /dev/loop* device and sets its status after a backing file has been assigned to the device (that association is made by the earlier LOOP_SET_FD ioctl, the first ioctl visible in the trace below).

One does not really need to mount snaps to reproduce this; just running losetup -f on a snap file is enough. Take a look at this trace:

$ sudo perf trace losetup -f /var/lib/snapd/snaps/gedit_master_138.snap                     
         ? (         ): losetup/26609  ... [continued]: execve())                                           = 0                                                              
     1.099 ( 0.006 ms): losetup/26609 openat(dfd: CWD, filename: 0x48b974c0, flags: RDWR|CLOEXEC)           = 3                                                              
     1.108 ( 0.012 ms): losetup/26609 openat(dfd: CWD, filename: 0x76415cc0, flags: RDWR|CLOEXEC)           = 4
     1.122 ( 0.155 ms): losetup/26609 ioctl(fd: 4</dev/loop13>, cmd: (NONE, 0x4c, 0, 0), arg: 0x3)          = 0
     1.281 (53.719 ms): losetup/26609 ioctl(fd: 4</dev/loop13>, cmd: (NONE, 0x4c, 0x4, 0), arg: 0x7ffc76415d60) = 0
    55.012 ( 0.002 ms): losetup/26609 close(fd: 3)                                                          = 0
    55.016 ( 0.006 ms): losetup/26609 stat(filename: 0x76416d58, statbuf: 0x7ffc76415e80)                   = 0
    55.024 ( 0.002 ms): losetup/26609 close(fd: 4</dev/loop13>)                                             = 0
    55.040 ( 0.002 ms): losetup/26609 close(fd: 1)                                                          = 0
    55.045 ( 0.002 ms): losetup/26609 close(fd: 2)                                                          = 0
    55.059 (         ): losetup/26609 exit_group()  
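For reference, the cmd values perf prints as (dir, type, nr, size) tuples can be decoded by hand: the loop-device ioctls are plain numbers of the form (type << 8) | nr, with type 0x4c. A small sketch (constant values as defined in linux/loop.h):

```python
# Decode the ioctl cmd tuples shown by perf in the trace above.
# Loop ioctls are plain numbers: (type << 8) | nr, with type 0x4c.
def loop_ioctl(nr):
    return (0x4C << 8) | nr

LOOP_SET_FD       = loop_ioctl(0x00)  # (NONE, 0x4c, 0, 0): attach backing file
LOOP_SET_STATUS64 = loop_ioctl(0x04)  # (NONE, 0x4c, 0x4, 0): the slow ~54 ms call

print(hex(LOOP_SET_FD), hex(LOOP_SET_STATUS64))  # 0x4c00 0x4c04
```

So the fast first ioctl is LOOP_SET_FD and the 53.719 ms one is LOOP_SET_STATUS64, matching the strace output quoted earlier.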

The problem is not snap specific, but rather inherent to loopback devices. The same thing is visible when using an empty data file (of the dd if=/dev/zero of=data kind).

From very naive testing, the average time it takes to set up a device increases with the number of devices being set up in parallel, suggesting some resource contention.

Tracing this further in the kernel with kprobes set up inside the loop driver I got this:

$ sudo perf trace -e 'syscalls:sys_enter_ioctl,syscalls:sys_exit_ioctl,probe:*,block:*' losetup -f gnome-3-28-1804_19.snap
     0.000 losetup/1206 syscalls:sys_enter_ioctl:fd: 3</home/guest/gnome-3-28-1804_19.snap>, cmd: (NONE, 0x4c, 0x82, 0), arg: 0x80002
     0.021 losetup/1206 probe:loop_control_ioctl:(ffffffffc0395340)
     0.033 losetup/1206 probe:loop_lookup:(ffffffffc0394840)
     0.044 losetup/1206 syscalls:sys_exit_ioctl:0x0
     0.084 losetup/1206 probe:lo_open:(ffffffffc03948f0)
     0.094 losetup/1206 syscalls:sys_enter_ioctl:fd: 4</dev/loop0>, cmd: (NONE, 0x4c, 0, 0), arg: 0x3
     0.101 losetup/1206 probe:lo_ioctl:(ffffffffc0396330)
     0.108 losetup/1206 probe:loop_validate_file:(ffffffffc0393010)
     0.316 losetup/1206 syscalls:sys_exit_ioctl:0x0
     0.324 losetup/1206 syscalls:sys_enter_ioctl:fd: 4</dev/loop0>, cmd: (NONE, 0x4c, 0x4, 0), arg: 0x7ffd6fd975c0
     0.331 losetup/1206 probe:lo_ioctl:(ffffffffc0396330)
     0.338 losetup/1206 probe:loop_set_status64:(ffffffffc0393f70)
     0.345 losetup/1206 probe:loop_set_status_1:(ffffffffc0393850)
     0.351 losetup/1206 probe:loop_set_status:(ffffffffc0393850)
     0.359 losetup/1206 probe:blk_mq_freeze_queue:(ffffffffba447180)
     0.365 losetup/1206 probe:blk_freeze_queue_start:(ffffffffba445c50)
     0.374 losetup/1206 probe:blk_mq_freeze_queue_wait:(ffffffffba444c60)
     0.380 losetup/1206 probe:init_wait_entry:(ffffffffba0fcb50)
     <---- notice how long it takes here ----->
     0.387 losetup/1206 probe:prepare_to_wait_event:(ffffffffba0fcfb0)
     6.530 losetup/1206 probe:prepare_to_wait_event:(ffffffffba0fcfb0)

     6.547 losetup/1206 probe:finish_wait:(ffffffffba0fd150)
     6.559 losetup/1206 probe:blk_mq_freeze_queue_wait__return:(ffffffffba444c60 <- ffffffffc039390f)
     6.562 losetup/1206 probe:blk_mq_freeze_queue__return:(ffffffffba447180 <- ffffffffc039390f)
     6.581 losetup/1206 probe:loop_release_xfer_1:(ffffffffc0393240)
     6.592 losetup/1206 probe:loop_release_xfer:(ffffffffc0393240)
     6.602 losetup/1206 probe:blk_queue_flag_set:(ffffffffba4356d0)
     6.611 losetup/1206 probe:blk_mq_unfreeze_queue:(ffffffffba445760)
     6.622 losetup/1206 probe:loop_set_status__return:(ffffffffc0393850 <- ffffffffc0393fb0)
     6.629 losetup/1206 syscalls:sys_exit_ioctl:0x0
     6.836 losetup/1206 probe:lo_release:(ffffffffc03945e0)

This is where the time is spent: clearly, the blk queue is not empty/unused at that point.

I did try to investigate this further, but I was not able to determine why there might be any blk requests waiting on the device (which was just created, BTW). It is probably worth spending a bit more time on this, either adding some printk() calls in strategic places or writing a more elaborate BPF probe attached at loop_set_status() or blk_mq_freeze_queue().
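A sketch of such a probe with bpftrace (assuming bpftrace is available and the kernel exports these symbols; must be run as root), histogramming the time losetup spends inside blk_mq_freeze_queue():

```
// Hypothetical probe: per-call latency of blk_mq_freeze_queue
// when invoked from losetup, as microsecond histogram.
kprobe:blk_mq_freeze_queue /comm == "losetup"/ {
    @start[tid] = nsecs;
}
kretprobe:blk_mq_freeze_queue /@start[tid]/ {
    @freeze_us = hist((nsecs - @start[tid]) / 1000);
    delete(@start[tid]);
}
```

Running this while setting up several loop devices in parallel should show whether the freeze wait grows with concurrency, as the naive timing above suggested.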


The best way to do something faster is … not to do it.


If you look at the boot graph, the mount times are not related to the snapd startup slowness.

@popey, would you happen to have debug enabled (or, can you enable it and reboot and show us the new boot graph), and can we see the debug logs?



I didn’t have debug enabled previously. Unfortunately (for our debugging), for other unrelated reasons (shiny!) I have since upgraded that machine to 18.10. So many things have changed; boot speed isn’t one of them, sorry about that. I have put everything here. firstboot is the set of files from the first boot after upgrading; I am discounting that because there may be post-upgrade “stuff” that slows things down. So I rebooted, and that’s reboot, which may be cleaner. Finally I enabled debug, which is debug, and that’s the most recent boot. In each case the systemd-analyze blame output is the .txt file and the journalctl output is the .log file.


As part of the bigger “snapd measurements” story I’ve been working on recently, I’ll soon be adding code to collect timing measurements of critical snapd activities; this will hopefully give some answers. I’ll report back when I have something (and, when it’s ready, I’ll probably ask you, @popey, to run snapd from edge in order to collect some probes with your impressive set of snaps).