Ubuntu Core VMware install fails - SCSI not recognized, NVMe so-so, SATA stable

Hi,
I got a bit lost in troubleshooting as well, since I could not reproduce the issues consistently.

The main problem is that SCSI drives are not recognised by Ubuntu Core, although it boots partly and the Ubuntu Live Image sees the entire disk.

I’m now up and running using a SATA disk

I suspect part of the problem is that you were using the image specifically tailored for an Intel NUC. You might have better experience by using the image from the KVM page (https://ubuntu.com/download/kvm) which is designed to run on a virtual machine and is more generic in nature, although not specifically a VMware machine so there might still be incompatibilities.

The image URL is http://cdimage.ubuntu.com/ubuntu-core/18/stable/current/ubuntu-core-18-amd64.img.xz

I tried it with both images

The question is why it does not detect /dev/sda3, labelled writable, although the Ubuntu Live Image sees it. Moreover, /dev/sda1 and /dev/sda2, labelled system-boot, are detected as the system starts booting.

It could be that we are missing kernel modules in the initramfs that would allow mounting/using this device from the initramfs, but these kernel modules are loaded in the live image. Do you know precisely what kernel module is necessary for your writable partition device?

first of all thx to all for helping me out, didn’t expect that on such a tricky topic :slight_smile:

nope, but Core echoes the drivers used, according to

(screenshots of boot messages)

I don’t know if Ubuntu does the same thing. It may tell us what driver is used. I should be able to figure out what driver is used by the Kernel running the Live Image, shouldn’t I? Could anyone tell me how to figure out this?
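One generic way to do this (a sketch, not specific to this VM; the device names are whatever your system exposes) is to walk sysfs from the live image: every block device with a bound driver exposes it as a symlink.

```shell
#!/bin/sh
# For every block device, resolve the driver symlink that sysfs exposes.
# Loop/ram devices have no driver link and are skipped.
for dev in /sys/block/*; do
    [ -e "$dev/device/driver" ] || continue
    printf '%s -> %s\n' "$(basename "$dev")" \
        "$(basename "$(readlink -f "$dev/device/driver")")"
done
```

On a VMware SCSI disk you would expect this to name the controller driver that has claimed the device.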

looks like SATA does not work flawlessly
I’m now getting
(screenshot of boot error)

running df gives
(screenshot of df output)
all the /dev/loop... devices are full

this is an SD card driver; it’s unlikely you are using it to boot in a VM …

These are drivers for USB input devices … nothing to do with booting

i’m pretty sure @ijohnson is right here … the initramfs in the kernel snap in Ubuntu Core uses a fixed set of controller modules, most likely one you need is missing …

why would you think SATA does not work here ? also, this does not look like you are booting Ubuntu Core … nothing in Ubuntu Core should ever use any of the union filesystems to boot … if this message shows up during boot you should check what image you are actually booting there (also, the info you provide piecemeal here is kind of sparse, so a lot of guesswork is needed on our side)

one would hope so :wink: given they are readonly squashfs files …
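For what it’s worth, those 100% figures are expected: each snap is a read-only squashfs image loop-mounted by snapd, and df always reports a read-only filesystem as full. A quick generic sketch to hide them and see the real writable filesystem:

```shell
# Show mounted filesystems, hiding the always-100% squashfs loop mounts:
df -h | grep -v '^/dev/loop'
```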

These are examples; when I boot the Ubuntu Live Image it should show the driver

Ubuntu Core 18
(screenshot)

google led me to
https://askubuntu.com/questions/1068588/boot-problem-after-update-missing-lowerdir
so I guessed the drive was full

I’m now following
https://stackoverflow.com/questions/17878843/determine-linux-driver-that-owns-a-disk

dmesg | grep sda

(screenshot of dmesg output)

sd 20:0:0:0: [sda] Attached SCSI disk

(screenshot)

cd /sys/block/sda/

(screenshot of directory listing)

in addition, I checked

/sys/class/scsi_host/host20/proc_name

guessing that the 20 in .../scsi_host is the same as the one in /sys/block/sda/
(screenshot)
so /sys/class/scsi_host/host20/proc_name reads mptspi:
(screenshot)

does this help?

https://serverfault.com/questions/888996/does-scsi-hba-ata-piix-or-mptspi-correspond-to-vms-virtual-disk-in-vmware
should yield the same result

reading through

LSI Logic SAS and Parallel SCSI drivers are not supported

The LSI Logic SAS driver ( mptsas ) and LSI Logic Parallel driver ( mptspi ) for SCSI are no longer supported. As a consequence, the drivers can be used for installing RHEL 8 as a guest operating system on a VMWare hypervisor to a SCSI disk, but the created VM will not be supported by Red Hat.

This leads me to think that Ubuntu Core is missing the LSI Logic Parallel ( mptspi ) SCSI driver.
Is this correct?

the mptspi module is really old and definitely included in the kernel snap:

$ find /lib/modules/5.4.0-45-generic/ -name '*mptspi*'
/lib/modules/5.4.0-45-generic/kernel/drivers/message/fusion/mptspi.ko

but to boot from such a device the drivers need to be included in the initramfs. Ubuntu Core uses a pre-generated initrd, so the drivers need to be added there at generation time (I think @ijohnson said that above already)
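As a sketch of how to check what a given initrd actually contains: on classic Ubuntu, lsinitramfs (from initramfs-tools-core) lists its contents, so you can grep for the suspect module. The initrd path below is the classic-Ubuntu convention, an assumption; on Ubuntu Core the initrd ships inside the kernel snap instead.

```shell
#!/bin/sh
# Check whether the initrd this machine boots contains the mptspi module.
INITRD="/boot/initrd.img-$(uname -r)"   # classic-Ubuntu path, an assumption
if command -v lsinitramfs >/dev/null 2>&1 && [ -f "$INITRD" ]; then
    lsinitramfs "$INITRD" | grep mptspi || echo "mptspi NOT in this initramfs"
else
    echo "lsinitramfs or $INITRD not available on this machine"
fi
```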

how do I get this done, file a bug report?

exactly

since this is for uc18 / uc16 please file at bugs.launchpad.net/initramfs-tools, probably needs to go to the uc16/uc18 specific package but I can’t remember the name of that off the top of my head

done

I’ve finally been able to get some testing done (I didn’t have vmware available before).

  1. When creating the Virtual Machine you need to choose “Custom” rather than “Typical”.
  2. Accept the default of LSI Logic for the SCSI controller (we won’t be using this).
  3. Finally, on the create or choose a disk image page, select use an existing image and set its interface to “SATA”.

This process successfully boots for me using VMWare Workstation 15.5.

VMware prompted me to upgrade the format of my VMDK image the first time I started the VM, but that is likely because the qemu-img utility, which I used to convert the .img to .vmdk, created an older format.
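The conversion step mentioned above can be sketched like this. A scratch image stands in for the real Ubuntu Core image so the commands run anywhere; the compat6 option is an assumption about avoiding VMware’s format-upgrade prompt, not something verified in this thread.

```shell
#!/bin/sh
# Scratch raw image standing in for the real ubuntu-core-18-amd64.img:
dd if=/dev/zero of=/tmp/core.img bs=1M count=4 2>/dev/null
if command -v qemu-img >/dev/null 2>&1; then
    # -O vmdk selects the output format; -o compat6 requests a newer VMDK
    # variant (assumption: this avoids the upgrade prompt on first boot).
    qemu-img convert -O vmdk -o compat6 /tmp/core.img /tmp/core.vmdk
    qemu-img info /tmp/core.vmdk
else
    echo "qemu-img not installed (it ships in the qemu-utils package)"
fi
```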

If anyone else hits this issue, which actually will manifest slightly differently these days with a message like this:

the-tool[375]: error: Failed to make path /dev/disk/by-partuuid/9696c9f0-4343-4d23-b345-b2e819c84b2 absolute: No such file or directory 
[FAILED] Failed to start the-tool.service.

A more fool-proof way to diagnose which kernel module/driver is needed is to boot a working image (like the Ubuntu classic live image mentioned here) and then follow the instructions from https://unix.stackexchange.com/a/125272, essentially asking the kernel directly which module was loaded for the disk. Then file a bug against either initramfs-tools for Ubuntu Core 16 / Ubuntu Core 18, or ubuntu-core-initramfs for Ubuntu Core 20.
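A minimal sketch of that "ask the kernel directly" step, run from the working image (sda is an assumption; substitute the disk you actually boot from):

```shell
#!/bin/sh
DEV=sda   # assumed device name; adjust to your boot disk
if [ -e "/sys/block/$DEV/device/driver" ]; then
    # The driver symlink names the bound driver; its module subdirectory
    # (if present) names the kernel module that the initramfs must carry.
    drv=$(basename "$(readlink -f "/sys/block/$DEV/device/driver")")
    if [ -e "/sys/block/$DEV/device/driver/module" ]; then
        mod=$(basename "$(readlink -f "/sys/block/$DEV/device/driver/module")")
    else
        mod="(built-in)"
    fi
    echo "driver: $drv  module: $mod"
else
    echo "/dev/$DEV not present on this system"
fi
```

The module name this prints is what belongs in the bug report.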