Armhf 32-bit binaries on aarch64 OS - within snap?


#1

I have armhf 32-bit binaries that run fine on a 32-bit OS (Raspbian) via a snap.

I just discovered (and verified) that it’s possible to run those same armhf binaries on an aarch64 (arm64) OS (Armbian), as long as all the libs are installed on the OS.

  1. Does snap/snapcraft support running this?

I can verify the 32-bit binaries run on the 64-bit OS by running the commands outside the snap / AppArmor. When I run them from within `snap run --shell` I got this error: “No such file or directory”

This was the same error I got running the armhf binaries with missing libraries… so it may be a missing dependency.

  1. Is it possible to get more detailed info from Snap / AppArmor regarding attempted library loads so I can narrow down which dependency is missing?
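One way to surface that (an assumption about the logging setup, not something snap prints itself): AppArmor denials land in the kernel log, so filtering `journalctl -k` or `dmesg` output shows which file opens were blocked, with the full path in `name=`. The `apparmor_denials` helper below is hypothetical:

```shell
# Hypothetical helper: filter AppArmor denial messages out of kernel logs.
# Denied opens of shared libraries appear here with the full path in name=.
apparmor_denials() {
    grep 'apparmor="DENIED"'
}

# Typical usage (run outside the snap, right after reproducing the failure):
#   journalctl -k | apparmor_denials
#   dmesg | apparmor_denials
```

Note that a missing ELF interpreter produces a plain ENOENT from the kernel rather than an AppArmor denial, so an empty denial log is itself a useful data point.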

#2

This is the exact problem I am having:

but replace i386 with armhf and x86_64 with aarch64 (arm64)

so my question is more or less: is this a supported feature of snap?
should this work if I include all of the required /lib and /usr/lib within my snap?


#3

it is supported for x86, the amd64 core snap ships libc6:i386 and sets up the architecture …

it has not been a priority for arm64 yet though, so it has not been implemented in the arm64 core snap…

the code responsible for setting up the x86 part is at:

if you want to make a PR to add the same setup to the arm64 core snap i guess that would be appreciated :slight_smile:

(build instructions to roll your own local test core snap when making such changes are at https://github.com/snapcore/core )
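For reference, a hypothetical sketch of what the arm64 analogue might look like, modelled on the commands the i386 hook prints in the build trace further down this thread (the function name and exact package set are my assumptions, not the actual code from github.com/snapcore/core):

```shell
# Hypothetical arm64 analogue of the core snap's i386 multiarch hook.
# Mirrors the commands visible in the x86 hook's trace; not the real code.
enable_armhf_multiarch() {
    if [ "$(dpkg --print-architecture)" = "arm64" ]; then
        echo "I: Enabling armhf multiarch support on arm64"
        dpkg --add-architecture armhf   # let apt see armhf packages
        apt-get -y update
        apt-get -y install libc6:armhf  # 32-bit loader + libc
    fi
}
```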


#4

I will definitely attempt it!

it looks fairly simple.

but there has to be more to it?

my current snap does not run armhf executables within strict confinement, even though my base system has the interpreter /lib/ld-linux-armhf.so.3

Are there any changes required for apparmor / confinement restricting access to the interpreter lib?

for example: my snap has procps:armhf installed, and as long as I install libc6:armhf on my base system:

/snap/mysnap/current/bin/ps (works!)

snap run --shell mysnap
/snap/mysnap/current/bin/ps (No such file or directory)


#5

the first call uses the armhf libc directly on your rootfs,
the second one uses the core snap as its rootfs instead (like all snaps when executed under “snap run” or from /snap/bin) and that does not have the armhf libc available …
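That matches the symptom: execve returns ENOENT (“No such file or directory”) when the binary’s ELF *interpreter* is missing, even though the binary itself is clearly present. A quick way to check (assumes binutils’ readelf is available for finding the interpreter; `check_interp` is a hypothetical helper):

```shell
# Hypothetical helper: report whether a given ELF interpreter path exists
# in the current root; a missing interpreter makes execve fail with ENOENT.
check_interp() {
    if [ -e "$1" ]; then
        echo "interpreter $1 present"
    else
        echo "interpreter $1 MISSING (execve fails with: No such file or directory)"
    fi
}

# Find the interpreter first (assumes readelf from binutils):
#   readelf -l /snap/mysnap/current/bin/ps | grep 'program interpreter'
# then check it both outside and inside `snap run --shell mysnap`:
#   check_interp /lib/ld-linux-armhf.so.3
```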


#6

ok, I get it now… I’ll test the script modification locally and see if it works.


#7

Either my build setup is incorrect, or the build is broken (I suspect my setup is incorrect)

I have an x64 VM running Ubuntu 16.04.3 LTS, and it fails in the same place as on my physical arm64 machine. The error output below is from the unpatched build on x64 (the error occurs a line after libc6 gets installed, in the file you pointed me to).

The build steps say to use a clean chroot or container… I am not familiar with setting up a ‘clean room’ in this context, and attempting to research it was a bit confusing. I thought a “clean chroot” could just be swapped for a vanilla OS install (i.e. a fresh VM).

Can you recommend any links / docs for a ‘clean chroot’ ?

> I: Checking if we are amd64 and libc6:i386 should be installed
> + dpkg --print-architecture
> + [ amd64 = amd64 ]
> + echo I: Enabling i386 multiarch support on amd64
> I: Enabling i386 multiarch support on amd64
> + dpkg --add-architecture i386
> + apt-get -y update
> Hit:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
> Get:2 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [384 kB]
> Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease
> Hit:4 http://ppa.launchpad.net/snappy-dev/image/ubuntu xenial InRelease
> Hit:5 http://archive.ubuntu.com/ubuntu xenial-updates InRelease
> Get:6 http://security.ubuntu.com/ubuntu xenial-security/restricted i386 Packages [7224 B]
> Get:7 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [160 kB]
> Hit:8 http://ppa.launchpad.net/snappy-dev/edge/ubuntu xenial InRelease
> Get:9 http://security.ubuntu.com/ubuntu xenial-security/multiverse i386 Packages [3376 B]
> Get:10 http://archive.ubuntu.com/ubuntu xenial/main i386 Packages [1196 kB]
> Get:11 http://ppa.launchpad.net/snappy-dev/image/ubuntu xenial/main i386 Packages [5936 B]
> Get:12 http://ppa.launchpad.net/snappy-dev/edge/ubuntu xenial/main i386 Packages [564 B]
> Get:13 http://archive.ubuntu.com/ubuntu xenial/restricted i386 Packages [8684 B]
> Get:14 http://archive.ubuntu.com/ubuntu xenial/universe i386 Packages [7512 kB]
> Get:15 http://archive.ubuntu.com/ubuntu xenial/multiverse i386 Packages [140 kB]
> Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [653 kB]
> Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/restricted i386 Packages [7600 B]
> Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [533 kB]
> Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse i386 Packages [15.3 kB]
> Fetched 10.6 MB in 14s (727 kB/s)
> Reading package lists...
> + echo I: Installing libc6:i386 in amd64 image
> I: Installing libc6:i386 in amd64 image
> + apt-get -y install libc6:i386
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   gcc-6-base:i386 libgcc1:i386
> Suggested packages:
>   glibc-doc:i386 locales:i386
> The following NEW packages will be installed:
>   gcc-6-base:i386 libc6:i386 libgcc1:i386
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 2330 kB of archives.
> After this operation, 10.0 MB of additional disk space will be used.
> Get:1 http://security.ubuntu.com/ubuntu xenial-security/main i386 libc6 i386 2.23-0ubuntu9 [2269 kB]
> Get:2 http://archive.ubuntu.com/ubuntu xenial/main i386 gcc-6-base i386 6.0.1-0ubuntu1 [14.3 kB]
> Get:3 http://archive.ubuntu.com/ubuntu xenial/main i386 libgcc1 i386 1:6.0.1-0ubuntu1 [46.8 kB]
> Fetched 2330 kB in 2s (806 kB/s)
>                                 Selecting previously unselected package gcc-6-base:i386.
> (Reading database ... 12410 files and directories currently installed.)
> Preparing to unpack .../gcc-6-base_6.0.1-0ubuntu1_i386.deb ...
> Unpacking gcc-6-base:i386 (6.0.1-0ubuntu1) ...
> Selecting previously unselected package libgcc1:i386.
> Preparing to unpack .../libgcc1_1%3a6.0.1-0ubuntu1_i386.deb ...
> Unpacking libgcc1:i386 (1:6.0.1-0ubuntu1) ...
> Selecting previously unselected package libc6:i386.
> Preparing to unpack .../libc6_2.23-0ubuntu9_i386.deb ...
> Unpacking libc6:i386 (2.23-0ubuntu9) ...
> Processing triggers for libc-bin (2.23-0ubuntu9) ...
> Setting up gcc-6-base:i386 (6.0.1-0ubuntu1) ...
> Setting up libgcc1:i386 (1:6.0.1-0ubuntu1) ...
> Setting up libc6:i386 (2.23-0ubuntu9) ...
> Processing triggers for libc-bin (2.23-0ubuntu9) ...
> + echo I: Removing /var/lib/apt/lists/*
> I: Removing /var/lib/apt/lists/*
> + find /var/lib/apt/lists/ -print0 -type f
> + xargs -0 rm -f
> rm: cannot remove '/var/lib/apt/lists/': Is a directory
> rm: cannot remove '/var/lib/apt/lists/partial': Is a directory
> E: config/hooks/12-add-foreign-libc6.chroot failed (exit non-zero). You should check for errors.
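For what it’s worth, the failure looks like an argument-ordering bug in that hook’s cleanup line: with `-print0` placed before `-type f`, find prints every entry (directories included) before the type test applies, so xargs hands directories like `partial` to `rm -f`. The intended command was presumably something like the sketch below (the `clean_apt_lists` name is mine):

```shell
# Hypothetical fix: -type f must precede -print0 so only regular files
# are printed; directories like /var/lib/apt/lists/partial are left alone.
clean_apt_lists() {
    find "$1" -type f -print0 | xargs -0 rm -f
}

# In the hook this would be: clean_apt_lists /var/lib/apt/lists
```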

#8

please “bzr pull” again, there was a change recently that introduced that error, it was corrected today …


#9

The standard way to get a clean build environment using snapcraft is to use the lxd integration. If you don’t have lxd installed, it can be installed via snap:

sudo snap install lxd
sudo adduser <your-user> lxd

You’ll need to log out and back in for the lxd group assignment to take effect in your session.

Finally you need to initialise lxd:

sudo lxd init

It has sane defaults so it’s ok to just press enter at each option unless you know you need a different setting.

Now you can use either:

snapcraft cleanbuild

OR

SNAPCRAFT_CONTAINER_BUILDS=1 snapcraft

The first option creates a new container for every build, while the second option creates one container which persists and is reused for subsequent builds run while the SNAPCRAFT_CONTAINER_BUILDS=1 environment variable is visible. If you want, you can export the SNAPCRAFT_CONTAINER_BUILDS variable into future sessions automatically by editing your $HOME/.bash_profile and adding the line:

export SNAPCRAFT_CONTAINER_BUILDS=1

If you have done that and restarted your session, you can use snapcraft without specifying the variable each time. While the export is set in your profile, each of the following commands will work as-is using a persistent lxd container:

snapcraft pull
snapcraft build
snapcraft stage
snapcraft prime
snapcraft clean -s pull
snapcraft

Note that snapcraft clean with no arguments will delete the container used for the snap package build. Snapcraft uses a different container for each package name specified in the snapcraft.yaml, so you can work on multiple packages without trampling the persistent container used for each.


#10

@ogra - I was successful in building the core snap, but I can’t seem to install it since I already have core installed:

removing core doesn’t seem to be possible via the normal commands:

snap remove core
error: cannot remove "core": snap "core" is not removable

I have no other snaps installed:

snap list
Name  Version  Rev   Developer  Notes
core  16-2.30  3751  canonical  core

@daniel thanks for that info… I will try that for the next build I do.


#11

i fear you need to build the image yourself with ubuntu-image, use the --extra-snaps option and point to the local core snap … on an image built like this you will be able to install core at runtime as well (using “snap install --dangerous …” )… “normal” images simply do not allow the core, kernel or gadget to be replaced by a local snap.


#12

I think I’m in luck, since I’m using a custom image builder for Armbian which builds the Ubuntu image along with my kernel modifications (for AppArmor)

so some good news: I installed the custom-built core with libc6:armhf, and now my armhf executables are at least attempting to run within the snap on arm64/aarch64… I get library load errors instead of the “no such file or directory”

within my snapcraft.yaml, I set LD_LIBRARY_PATH to $SNAP/usr/lib/, and on physical armhf hardware the libs were loading fine using that env variable. The folder definitely has all the libs, since everything ran fine on the armhf hardware.

It now seems to be library loading / search path related, though I thought LD_LIBRARY_PATH would take care of that.

any thoughts?


#13

LD_LIBRARY_PATH should be set to more than just $SNAP/usr/lib. You probably want to also include $SNAP/lib, $SNAP/lib/arm-linux-gnueabihf, and $SNAP/usr/lib/arm-linux-gnueabihf. These are guesses based on my experience with the x86_64 directory structure. Investigate the prime folder of your snap package build to see what will be bundled, and create your path to match.

It’s worth remembering that snapcraft sets some defaults in the command-foo.wrapper file that serves as the entrypoint into the snap, so you likely want to extend LD_LIBRARY_PATH rather than replace it outright.
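When the path looks right but loads still fail, the dynamic loader can report its search directly. LD_DEBUG=libs is a glibc feature; the `trace_libs` wrapper is just a hypothetical convenience:

```shell
# Hypothetical wrapper: have glibc's dynamic loader trace its library
# search (which directories it tries, which libraries it fails to find).
trace_libs() {
    LD_DEBUG=libs "$@"
}

# Inside `snap run --shell mysnap`, trace output goes to stderr:
#   trace_libs $SNAP/bin/ps 2>&1 | grep -E 'find library|trying file'
```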


#14

to be ‘exact’, I’m setting this in the yaml:

environment:
  LD_LIBRARY_PATH: $LD_LIBRARY_PATH:$SNAP/usr/lib/arm-linux-gnueabihf/:$SNAP/lib/arm-linux-gnueabihf/

#15

hmm hold on… my local yaml and ‘published’ snap are not in sync… that ‘could’ be the issue


#16

yep… that was the issue! so I guess I should put in a pull request to get my change into master?

disclaimer: the modification was a simple copy-and-paste job, since I don’t usually write shell scripts. There is some minor duplication of logic which could be extracted, refactored, etc., but I don’t know what the coding-standards guidelines are for submissions. I opted for the ‘least likely to break existing functionality’ approach.


#17

just submit a PR and we’ll work it out on the way :wink:


#18

done! https://github.com/snapcore/core/pull/73


#19

Hey, was this ever released? I don’t think I saw anything in the release notes, but it looks like it was merged.


#20

This is in the candidate channel now. It will get released to stable together with the new 2.31 snapd release.