Ubuntu Core VMWare install fails - SCSI not recognized, NVMe so-so, SATA stable

I was able to install Ubuntu Core in Hyper-V, see https://askubuntu.com/questions/1209234/ubuntu-core-as-hyper-v-virtual-machine-image/1210966?noredirect=1#comment2132773_1210966

Currently, I have to work with VMWare, so I thought let's do the same thing again, or just convert the disk to vmdk.
It turns out it is not as easy as it may sound.
I'm on VMWare Player 15.5.6 with Ubuntu 20 as the live image.
I worked with dawson-uc18-m7-20190122-10.img.xz and ubuntu-core-18-amd64.img.xz

Converting didn't work; it always failed with findfs: unable to resolve 'LABEL=writable'. So I decided to do a reinstall using standard settings. It failed again, throwing the same error.



Occasionally, the boot dropped to an (initramfs)/BusyBox prompt.
So I started doing some research.

I read

Eventually it just clicked and I started searching for the drive. As only a few commands are available in the initramfs, I used cat /proc/scsi/scsi and cat /proc/partitions (all the other commands from linux - How do I find out what hard disks are in the system? - Unix & Linux Stack Exchange won't work there). They told me there is nothing besides the CDROM, but then I wondered how it could even boot?

I started experimenting: I added a SATA and an NVMe drive, both of which were found by the system.



I gave up on the standard SCSI drive; the only lead left was https://kb.vmware.com/s/article/1006621, but everything there was set correctly.
I then tried to install Ubuntu Core on NVMe, as I expected better performance compared to SATA,

based on

I cannot remember if I got that after the very first flash and install attempt,
but once again I felt somehow crushed. So again I asked Google for help and found [Ubuntu 18.04] snapd.service fails on boot, based on the "fully seeded" screenshot below.
Once again it just clicked. The comment that it might be related to missing disk space made me think it could be caused by VMWare's dynamic disk allocation, since the disk space is not pre-allocated.
So I resized the writable partition in the disk manager of the Ubuntu live image to around 3.5 GB, which resolved the problem.

Installing on NVMe first resulted in the following:
Ubuntu Core started to bootstrap but got stuck at Stopped/Started Getty,
somewhat like in Ubuntu Core 18 on Raspberry Pi 3 doesn't bootstrap and https://superuser.com/questions/1553890/initializing-serial-getty-in-ubuntu-core-18
After the forced restart I saw
and was asked to configure, but with an error echoed before.
The configuration failed after entering my Ubuntu email.

It could be that the setup worked once due to a redownload of the install images. Honestly, I cannot remember what made it work.
After hours and hours, I was able to boot into Ubuntu Core. I got the impression that some errors were echoed during start-up, and because of the resize trick I did not trust the install.
I tried to refresh the snap packages, but it always failed with EOF.

In the end, I repeated the install using a SATA drive, which gives me the impression that no errors are echoed during start-up, although snap refresh still fails with EOF; that could be due to my currently very slow internet connection.
I was only able to refresh snapd.

When I restarted after the flash, I always got something like


at least I'm now on the latest stable:

Name       Version         Rev   Tracking       Publisher   Notes
core18     20200707        1880  latest/stable  canonical✓  base
pc         18-2            36    18/stable      canonical✓  gadget
pc-kernel  4.15.0-112.113  568   18/stable      canonical✓  kernel
snapd      2.45.2          8542  latest/stable  canonical✓

The EOF was probably due to the weak internet connection.

Hi, I'm sorry you've been having trouble using Ubuntu Core with VMWare, but I had a bit of trouble figuring out from your post what your specific issues are; at the end it seems you can't download snaps due to network issues?

I got lost as well in troubleshooting as I could not reproduce the issues that often.

The main problem is that SCSI drives are not recognised by Ubuntu Core, although it partly boots and the Ubuntu live image sees the entire disk.

I’m now up to date using a SATA disk

I suspect part of the problem is that you were using the image specifically tailored for an Intel NUC. You might have better experience by using the image from the KVM page (https://ubuntu.com/download/kvm) which is designed to run on a virtual machine and is more generic in nature, although not specifically a VMware machine so there might still be incompatibilities.

The image URL is http://cdimage.ubuntu.com/ubuntu-core/18/stable/current/ubuntu-core-18-amd64.img.xz

I tried it with both.

The question is why it does not detect /dev/sda3, labelled writable, although it is seen by the Ubuntu live image. Moreover, /dev/sda1 and /dev/sda2, labelled system-boot, are detected while the system is initiating boot.
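For context, findfs: unable to resolve 'LABEL=writable' just means nothing visible to the initramfs carries that label, usually because the controller driver for the disk never loaded. The lookup can be sketched like this (the /dev/disk/by-label path is the usual udev location on a full system and may not exist inside a minimal initramfs; the directory is a parameter here only so the helper can be tried against a fixture):

```shell
#!/bin/sh
# find_label_dev: resolve a filesystem label to its device node by
# following the by-label symlink, similar to what `findfs LABEL=...`
# does. Takes the by-label directory as an argument for testability.
find_label_dev() {
    byl_dir="$1"
    label="$2"
    if [ -L "$byl_dir/$label" ]; then
        readlink -f "$byl_dir/$label"
    else
        echo "cannot resolve LABEL=$label" >&2
        return 1
    fi
}

# On a real system you would call:
#   find_label_dev /dev/disk/by-label writable
```

From the live image, `ls -l /dev/disk/by-label` or `sudo blkid` will show whether the writable label is visible at all.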

It could be that we are missing kernel modules in the initramfs that would allow mounting/using this device from the initramfs, but these kernel modules are loaded in the live image. Do you know precisely what kernel module is necessary for your writable partition device?

first of all, thanks to all for helping me out; I didn't expect that on such a tricky topic :slight_smile:

nope, but Core echoes the drivers used, according to


I don't know if the Ubuntu live image does the same thing; it may tell us what driver is used. I should be able to figure out which driver is used by the kernel running the live image, shouldn't I? Could anyone tell me how to figure this out?

looks like SATA does not work flawlessly either
I'm now getting

running df gives
all the /dev/loop... entries show as 100% full
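For what it's worth, full /dev/loop entries are normal on Ubuntu Core: every snap is a read-only squashfs image mounted via a loop device, so df always reports 100% use for them. To check real free space you can filter them out; a small sketch that reads df-style output on stdin (so it can be tested against a canned sample):

```shell
#!/bin/sh
# real_disk_usage: drop /dev/loop lines from df output, since
# read-only squashfs loop mounts always report 100% use and say
# nothing about actual free disk space.
real_disk_usage() {
    awk 'NR == 1 || $1 !~ /^\/dev\/loop/'
}

# On a real system: df -h | real_disk_usage
```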

this is an SD card driver, unlikely you are using it to boot in a VM …

These are drivers for USB input devices … nothing to do with booting

i'm pretty sure @ijohnson is right here … the initramfs in the kernel snap in Ubuntu Core uses a fixed set of controller modules, most likely the one you need is missing …


why would you think SATA does not work here ? also, this does not look like you are booting Ubuntu Core … nothing in Ubuntu Core should ever use any of the union filesystems to boot … if this message shows up during boot you should check what image you are actually booting there (also, the info you provide here is kind of piecemeal and sparse, so a lot of guesswork is needed on our side)

one would hope so :wink: given they are readonly squashfs files …


These are examples; when I boot the Ubuntu live image it should show the driver

Ubuntu Core 18

google led me to
so I guessed a full drive

I'm following

dmesg | grep sda


sd 20:0:0:0: [sda] Attached SCSI disk


cd /sys/block/sda/


in addition, I checked


guessing that the 20 in .../scsi_host is the same as in /sys/block/sda/,
so /sys/class/scsi_host/host20/proc_name reads mptspi:
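The lookup above can be condensed into one helper: read proc_name for the SCSI host that dmesg reported for sda (host20 in my case; that the host number always matches is my assumption). The sysfs root is a parameter only so the helper can be exercised against a fixture tree:

```shell
#!/bin/sh
# scsi_driver_for: print the low-level driver name (proc_name) for
# a given SCSI host, e.g. host20 -> mptspi on a VMware "LSI Logic
# Parallel" controller.
scsi_driver_for() {
    sysroot="$1"
    host="$2"
    cat "$sysroot/class/scsi_host/$host/proc_name"
}

# On a real system: scsi_driver_for /sys host20
```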

does this help?

should give the same result

reading through

LSI Logic SAS and Parallel SCSI drivers are not supported

The LSI Logic SAS driver ( mptsas ) and LSI Logic Parallel driver ( mptspi ) for SCSI are no longer supported. As a consequence, the drivers can be used for installing RHEL 8 as a guest operating system on a VMWare hypervisor to a SCSI disk, but the created VM will not be supported by Red Hat.

This leads me to think that Ubuntu Core is missing the LSI Logic Parallel driver ( mptspi ) for SCSI.
Is this correct?

the mptspi module is really old and definitely included in the kernel snap:

$ find /lib/modules/5.4.0-45-generic/ -name '*mptspi*'

but to boot from such a device the drivers need to be included in the initramfs; Ubuntu Core uses a pre-generated initrd, so the drivers need to be added there at generation time (i think @ijohnson said that above already)
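one way to check that theory (an assumption on my side: that lsinitramfs from initramfs-tools can unpack the initrd you are booting) is to list the initrd contents and grep for the module; the check itself is trivial enough to factor out:

```shell
#!/bin/sh
# module_in_list: report whether a module name shows up in a file
# listing (e.g. the output of `lsinitramfs <initrd>` saved to a
# file), to verify the initramfs actually bundles a driver.
module_in_list() {
    if grep -q "$2" "$1"; then
        echo present
    else
        echo missing
    fi
}

# On a classic Ubuntu system (paths are assumptions):
#   lsinitramfs /boot/initrd.img-$(uname -r) > /tmp/initrd-files.txt
#   module_in_list /tmp/initrd-files.txt mptspi.ko
```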

how do I get this done? file a bug report?



since this is for uc18 / uc16 please file at bugs.launchpad.net/initramfs-tools, probably needs to go to the uc16/uc18 specific package but I can’t remember the name of that off the top of my head