Snapd causing lengthy boot time?

Just last week I played a bit with tracing to figure out why it takes so long to start the mount units, and we discussed it briefly during the standup. The observation was that it took particularly long to run a very specific ioctl() on the loopback device.

The ioctl in question is:

ioctl(4, LOOP_SET_STATUS64, 
      {lo_device=makedev(0, 0), lo_inode=0, lo_rdevice=makedev(0, 0), 
       lo_offset=0, lo_sizelimit=0, lo_number=0, lo_encrypt_type=LO_CRYPT_NONE, 
       lo_encrypt_key_size=0, lo_flags=LO_FLAGS_AUTOCLEAR, 
       lo_file_name="/var/lib/snapd/snaps/gnome-logs_45.snap", 
       lo_crypt_name="", lo_encrypt_key="", lo_init=[0, 0]}) = 0 <0.014573>

The call takes a file descriptor referring to an open /dev/loop* device and associates it with the backing file assigned to the device.
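For reference, this is roughly the sequence losetup goes through to get there. A minimal sketch only (needs root, error handling mostly skipped, and the snap path is just an example taken from the strace above):

/* loopattach.c - rough sketch of what losetup -f <file> boils down to:
 * grab a free loop device, attach a backing file to it, then set the
 * device status (the LOOP_SET_STATUS64 call that shows up as slow). */
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *backing = argc > 1 ? argv[1]
                                   : "/var/lib/snapd/snaps/gnome-logs_45.snap";
    char dev[32];

    /* ask loop-control for a free device number, e.g. 13 -> /dev/loop13 */
    int ctl = open("/dev/loop-control", O_RDWR);
    int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
    snprintf(dev, sizeof(dev), "/dev/loop%d", nr);

    int backing_fd = open(backing, O_RDWR);
    int loop_fd = open(dev, O_RDWR);

    /* attach the backing file to the loop device */
    if (ioctl(loop_fd, LOOP_SET_FD, backing_fd) < 0) {
        perror("LOOP_SET_FD");
        return 1;
    }

    /* the slow part: set the device status, recording the file name and
     * the autoclear flag, mirroring the strace output above */
    struct loop_info64 info;
    memset(&info, 0, sizeof(info));
    strncpy((char *)info.lo_file_name, backing, LO_NAME_SIZE - 1);
    info.lo_flags = LO_FLAGS_AUTOCLEAR;
    if (ioctl(loop_fd, LOOP_SET_STATUS64, &info) < 0) {
        perror("LOOP_SET_STATUS64");
        return 1;
    }

    printf("attached %s to %s\n", backing, dev);
    return 0;
}

Running that under strace or perf trace should show the same pair of loop ioctls as in the losetup traces below.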

One does not really need to mount snaps; just running losetup -f .. is enough. Take a look at this trace:

$ sudo perf trace losetup -f /var/lib/snapd/snaps/gedit_master_138.snap                     
         ? (         ): losetup/26609  ... [continued]: execve())                                           = 0                                                              
     ....
     1.099 ( 0.006 ms): losetup/26609 openat(dfd: CWD, filename: 0x48b974c0, flags: RDWR|CLOEXEC)           = 3                                                              
     1.108 ( 0.012 ms): losetup/26609 openat(dfd: CWD, filename: 0x76415cc0, flags: RDWR|CLOEXEC)           = 4
     1.122 ( 0.155 ms): losetup/26609 ioctl(fd: 4</dev/loop13>, cmd: (NONE, 0x4c, 0, 0), arg: 0x3)          = 0
     1.281 (53.719 ms): losetup/26609 ioctl(fd: 4</dev/loop13>, cmd: (NONE, 0x4c, 0x4, 0), arg: 0x7ffc76415d60) = 0
    55.012 ( 0.002 ms): losetup/26609 close(fd: 3)                                                          = 0
    55.016 ( 0.006 ms): losetup/26609 stat(filename: 0x76416d58, statbuf: 0x7ffc76415e80)                   = 0
    55.024 ( 0.002 ms): losetup/26609 close(fd: 4</dev/loop13>)                                             = 0
    55.040 ( 0.002 ms): losetup/26609 close(fd: 1)                                                          = 0
    55.045 ( 0.002 ms): losetup/26609 close(fd: 2)                                                          = 0
    55.059 (         ): losetup/26609 exit_group()  

The problem is not snap specific, but rather inherent to loopback devices. The same thing is visible when using an empty data file (the dd if=/dev/zero of=data kind).

From very naive testing, the average time it takes to set up a device increases with the number of devices being set up in parallel (suggesting some resource contention).
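To make that concrete, here is the kind of naive test I mean: fork a few workers, give each one its own sparse backing file and its own loop device, and time just the LOOP_SET_STATUS64 call. A sketch only (needs root; paths, sizes and the default worker count are arbitrary, and the race between LOOP_CTL_GET_FREE and LOOP_SET_FD across workers is simply ignored):

/* looptime.c - naive contention test: N workers each attach a loop
 * device to a private sparse file and report how long the
 * LOOP_SET_STATUS64 ioctl alone took. */
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void worker(int id)
{
    char path[64], dev[32];
    struct timespec t0, t1;

    /* private sparse backing file, 64 MiB */
    snprintf(path, sizeof(path), "/tmp/looptest-%d.img", id);
    int backing_fd = open(path, O_RDWR | O_CREAT, 0600);
    if (ftruncate(backing_fd, 64 << 20) < 0) {
        perror("ftruncate");
        return;
    }

    int ctl = open("/dev/loop-control", O_RDWR);
    int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
    snprintf(dev, sizeof(dev), "/dev/loop%d", nr);
    int loop_fd = open(dev, O_RDWR);

    if (ioctl(loop_fd, LOOP_SET_FD, backing_fd) < 0) {
        perror("LOOP_SET_FD");
        return;
    }

    struct loop_info64 info;
    memset(&info, 0, sizeof(info));
    strncpy((char *)info.lo_file_name, path, LO_NAME_SIZE - 1);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    ioctl(loop_fd, LOOP_SET_STATUS64, &info);   /* the slow call */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("worker %2d: %s LOOP_SET_STATUS64 took %.3f ms\n", id, dev, ms);

    ioctl(loop_fd, LOOP_CLR_FD, 0);             /* detach again */
    unlink(path);
}

int main(int argc, char **argv)
{
    int n = argc > 1 ? atoi(argv[1]) : 8;       /* workers run in parallel */

    for (int i = 0; i < n; i++)
        if (fork() == 0) {
            worker(i);
            _exit(0);
        }
    while (wait(NULL) > 0)
        ;
    return 0;
}

With a higher worker count, the per-call times reported for LOOP_SET_STATUS64 should creep up, which is the contention mentioned above.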

Tracing this further in the kernel, with kprobes set up inside the loop driver, I got this:

$ sudo perf trace -e 'syscalls:sys_enter_ioctl,syscalls:sys_exit_ioctl,probe:*,block:*' losetup -f gnome-3-28-1804_19.snap
     0.000 losetup/1206 syscalls:sys_enter_ioctl:fd: 3</home/guest/gnome-3-28-1804_19.snap>, cmd: (NONE, 0x4c, 0x82, 0), arg: 0x80002
     0.021 losetup/1206 probe:loop_control_ioctl:(ffffffffc0395340)
     0.033 losetup/1206 probe:loop_lookup:(ffffffffc0394840)
     0.044 losetup/1206 syscalls:sys_exit_ioctl:0x0
     0.084 losetup/1206 probe:lo_open:(ffffffffc03948f0)
     0.094 losetup/1206 syscalls:sys_enter_ioctl:fd: 4</dev/loop0>, cmd: (NONE, 0x4c, 0, 0), arg: 0x3
     0.101 losetup/1206 probe:lo_ioctl:(ffffffffc0396330)
     0.108 losetup/1206 probe:loop_validate_file:(ffffffffc0393010)
     0.316 losetup/1206 syscalls:sys_exit_ioctl:0x0
     0.324 losetup/1206 syscalls:sys_enter_ioctl:fd: 4</dev/loop0>, cmd: (NONE, 0x4c, 0x4, 0), arg: 0x7ffd6fd975c0
     0.331 losetup/1206 probe:lo_ioctl:(ffffffffc0396330)
     0.338 losetup/1206 probe:loop_set_status64:(ffffffffc0393f70)
     0.345 losetup/1206 probe:loop_set_status_1:(ffffffffc0393850)
     0.351 losetup/1206 probe:loop_set_status:(ffffffffc0393850)
     0.359 losetup/1206 probe:blk_mq_freeze_queue:(ffffffffba447180)
     0.365 losetup/1206 probe:blk_freeze_queue_start:(ffffffffba445c50)
     0.374 losetup/1206 probe:blk_mq_freeze_queue_wait:(ffffffffba444c60)
     0.380 losetup/1206 probe:init_wait_entry:(ffffffffba0fcb50)
      
     <---- notice how long it takes here ----->
     0.387 losetup/1206 probe:prepare_to_wait_event:(ffffffffba0fcfb0)
     6.530 losetup/1206 probe:prepare_to_wait_event:(ffffffffba0fcfb0)

     6.547 losetup/1206 probe:finish_wait:(ffffffffba0fd150)
     6.559 losetup/1206 probe:blk_mq_freeze_queue_wait__return:(ffffffffba444c60 <- ffffffffc039390f)
     6.562 losetup/1206 probe:blk_mq_freeze_queue__return:(ffffffffba447180 <- ffffffffc039390f)
     6.581 losetup/1206 probe:loop_release_xfer_1:(ffffffffc0393240)
     6.592 losetup/1206 probe:loop_release_xfer:(ffffffffc0393240)
     6.602 losetup/1206 probe:blk_queue_flag_set:(ffffffffba4356d0)
     6.611 losetup/1206 probe:blk_mq_unfreeze_queue:(ffffffffba445760)
     6.622 losetup/1206 probe:loop_set_status__return:(ffffffffc0393850 <- ffffffffc0393fb0)
     6.629 losetup/1206 syscalls:sys_exit_ioctl:0x0
     6.836 losetup/1206 probe:lo_release:(ffffffffc03945e0)

This is the place where the time is spent:
https://elixir.bootlin.com/linux/v5.0/source/drivers/block/loop.c#L1200

Judging by the probes, the wait happens inside blk_mq_freeze_queue_wait(), which only returns once every outstanding reference to the request queue has been dropped. Clearly, the blk queue is not empty/unused at that point.

I did try to investigate this further, but I was not able to determine why there might be any blk requests waiting on the device (which was just created, BTW). It is probably worth spending a bit more time on this and either adding some printk() in strategic places, or writing a more elaborate BPF probe attached at loop_set_status() or blk_mq_freeze_queue().
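Something along these lines could be a starting point for the BPF side: a kprobe/kretprobe pair that times every blk_mq_freeze_queue() call. Only a sketch (it assumes a reasonably recent libbpf for the BTF-style map definition; compile with clang -O2 -g -target bpf, load with bpftool or a small libbpf loader, and the latencies end up in the trace pipe):

/* freeze_lat.bpf.c - sketch: time blk_mq_freeze_queue() per task and
 * dump the latency to /sys/kernel/debug/tracing/trace_pipe. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);    /* tid */
    __type(value, __u64);  /* entry timestamp, ns */
} start SEC(".maps");

SEC("kprobe/blk_mq_freeze_queue")
int freeze_enter(void *ctx)
{
    /* the register context is not needed, only the timestamp */
    __u32 tid = (__u32)bpf_get_current_pid_tgid();
    __u64 ts = bpf_ktime_get_ns();

    bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
    return 0;
}

SEC("kretprobe/blk_mq_freeze_queue")
int freeze_exit(void *ctx)
{
    __u32 tid = (__u32)bpf_get_current_pid_tgid();
    __u64 *tsp = bpf_map_lookup_elem(&start, &tid);

    if (!tsp)
        return 0;

    __u64 delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
    bpf_printk("blk_mq_freeze_queue took %llu us\n", delta_us);
    bpf_map_delete_elem(&start, &tid);
    return 0;
}

The same pair of probes attached at loop_set_status() would tell whether all the time really goes into the queue freeze, or whether something else in that function contributes as well.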
