Request for classic confinement: Spack

Hello there Store team -

I am requesting classic confinement for Spack, a flexible, non-invasive package manager and developer tool for HPC engineers, designed for supercomputers. Spack is functionally similar to other developer tools and workspace managers such as conda and Homebrew. You can view my current work on the Spack snap on GitHub to audit the source code: https://github.com/canonical/spack-snap. Please let me know if you have any questions about Spack or the Spack snap package 😄

Why I need classic confinement for Spack

Spack as a Research Software Engineer’s (RSE) developer tool

Classic confinement is required because Spack is a tool for local, non-root-driven configuration of development environments and software stacks for HPC workloads. Research Software Engineers use Spack to create environments for developing and running HPC workloads; these environments then provide the dependencies to the workload at job runtime. There is no fixed or standard location where these environments must live on the system. Spack users expect to be able to put packages, modules, package recipes, cached builds, etc., wherever they need them to be. For example, Spack software stacks can be binned into three distinct categories:

The home, personal-files, and system-files interfaces are not sufficient for Spack because Spack users will often use Spack outside of their home directory due to limited storage quotas, and it would be too onerous to use the personal-files and system-files interfaces to map to each possible directory that a Spack user might try to use. Spack user requirements are heterogeneous, so they would expect the Spack snap to have the same flexibility as if they cloned Spack directly from GitHub.
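As an illustration of that flexibility, a user can point their user-scope configuration at any writable location. The following is a sketch of a hypothetical ~/.spack/config.yaml; the scratch paths are invented examples, while the config keys (`install_tree`, `source_cache`) are standard Spack configuration options:

```yaml
# Hypothetical ~/.spack/config.yaml relocating Spack's writable data
# out of the home directory and onto a scratch filesystem.
config:
  install_tree:
    root: /scratch/alice/spack/opt      # where installed packages go
  source_cache: /scratch/alice/spack/cache   # where source archives are cached
```

A strictly confined snap would need a `personal-files`/`system-files` entry for every such path in advance, which is not feasible when the paths are user-chosen.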

Spack also requires arbitrary access to the host system because it supports multiple compiler backends and expects to have access to those compilers. For example, Spack by default can use the GNU Compiler Collection provided by gcc, g++, and gfortran under /usr/bin, but a Spack user might also want to use the Clang or Intel compiler backends. They may also want an older version of gcc that contains features they need for an old code base. This old compiler could be installed under the user’s home directory, or it could live in their group’s shared storage space underneath an arbitrary path. Spack also supports multiple build systems out of the box, such as:

  • Python
  • Perl
  • Lua
  • R
  • Maven
  • Racket
  • Octave

Advanced Spack users can also extend Spack to support build systems not supported by default. It would be too onerous to bind all of these resources into a strictly confined snap, or to bundle them inside as stage-packages, so Spack should be classically confined so that it can easily access these resources on the host while still providing the flexibility that Spack users expect.
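To make the compiler flexibility described above concrete, here is a sketch of how a user could register an older gcc installed under an arbitrary prefix via Spack's compilers.yaml. The version, paths, OS, and target below are hypothetical examples; the file layout follows Spack's documented compilers.yaml schema:

```yaml
# Hypothetical compilers.yaml entry for a privately installed gcc.
compilers:
- compiler:
    spec: gcc@8.5.0
    paths:
      cc: /home/alice/opt/gcc-8.5/bin/gcc
      cxx: /home/alice/opt/gcc-8.5/bin/g++
      f77: /home/alice/opt/gcc-8.5/bin/gfortran
      fc: /home/alice/opt/gcc-8.5/bin/gfortran
    operating_system: ubuntu22.04
    target: x86_64
    modules: []
```

Under strict confinement, Spack could not execute a compiler under /home/alice/opt (or an arbitrary group share) even if the user registered it this way.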

Breaking Spack’s expected API to work in strict confinement

Attempts have been made to strictly confine Spack as a snap, but there have been several challenges due to the overall structure of Spack and the assumptions it makes about the user’s system. When Spack is installed using git, by default it writes everything to the location where it was cloned. For example, if you clone Spack into /var/lib, Spack will:

  • Install all packages in /var/lib/spack/opt/spack.
  • Store all package license files in /var/lib/spack/etc/spack/licenses.
  • Cache package source archives in /var/lib/spack/var/spack/cache.
  • Store LMOD module files in /var/lib/spack/share/spack/lmod.
  • Store TCL module files in /var/lib/spack/share/spack/modules.

This works when the user installs Spack using git, as they can then have multiple installations of Spack on the same system, and these standalone instances will never conflict. However, the user doesn’t really need multiple installations of Spack: they can either use Spack environments to create separate workspaces, or override the default configuration in ~/.spack. This makes a snap delivery mechanism for Spack viable, but due to Spack’s assumed behavior, Spack will not work initially inside of a snap. This is because, similar to the /var/lib example above, Spack will:
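For reference, the "separate workspaces" alternative mentioned above is expressed through Spack environments, each described by its own spack.yaml. The specs below are arbitrary examples; the structure follows Spack's documented environment format:

```yaml
# Hypothetical spack.yaml describing one self-contained workspace.
spack:
  specs:
  - hdf5+mpi
  - openmpi
  view: true
```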

  • Try to install all packages in /snap/spack/rev-#/opt/spack.
  • Try to store all package license files in /snap/spack/rev-#/etc/spack/licenses.
  • Try to cache package source archives in /snap/spack/rev-#/var/spack/cache.
  • Try to store LMOD module files in /snap/spack/rev-#/share/spack/lmod.
  • Try to store TCL module files in /snap/spack/rev-#/share/spack/modules.

Spack will fail to function because /snap/spack/* is immutable. Fixing this in strict confinement requires breaking Spack’s expected API, because snapcraft does not know about snap runtime variables when the Spack snap is built. It was proposed to override where Spack installs packages by updating the default configuration in etc/spack/defaults/config.yaml; however, since snapcraft does not know the snap runtime environment variables, the configuration has to be set after the user installs the Spack snap. Given that /snap/spack/* is immutable, a configure hook cannot modify etc/spack/defaults/config.yaml, so the system configuration scope needs to be used instead. Since the configure hook cannot run outside the confinement of the snap, the system configuration must be written to $SNAP_COMMON:

#!/bin/sh -e
# snap `configure` hook: copy the packaged defaults into the
# writable system configuration directory.

cp -ra "$SNAP/etc/spack/defaults/." "$SNAP_COMMON"
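The effect of that copy can be exercised outside of snapd with stand-in directories. This is only a sketch: the temporary directories below stand in for the real snap mount points, and the seeded config.yaml content is a placeholder:

```shell
#!/bin/sh -e
# Simulate the configure hook's copy with throwaway directories.
SNAP="$(mktemp -d)"          # stand-in for the read-only snap mount
SNAP_COMMON="$(mktemp -d)"   # stand-in for the writable common directory

# Seed a fake packaged default configuration.
mkdir -p "$SNAP/etc/spack/defaults"
printf 'config: {}\n' > "$SNAP/etc/spack/defaults/config.yaml"

# The hook's copy: replicate the defaults into the writable area.
cp -ra "$SNAP/etc/spack/defaults/." "$SNAP_COMMON"

ls "$SNAP_COMMON"   # config.yaml
```

Note the `/.` suffix on the source path: it copies the directory's contents into $SNAP_COMMON rather than creating a nested `defaults` directory.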

By default, Spack will look in /etc/spack for the system configuration, not $SNAP_COMMON. Spack can be directed to look in $SNAP_COMMON by setting the SPACK_SYSTEM_CONFIG_PATH environment variable inside the spack app definition in the snapcraft.yaml file:

# Assume full snapcraft.yaml definition.
apps:
  spack:
    command: bin/spack
    environment:
      SPACK_SYSTEM_CONFIG_PATH: $SNAP_COMMON

This is undesirable because it overrides where the Spack system-level configuration file is stored, and that should generally be configurable by the Spack user, not by the snap. The system-level Spack configuration file is expected to be located at /etc/spack because this directory is commonly exported as an NFS share and mounted across all compute machines within the HPC cluster to ensure a consistent Spack configuration cluster-wide. If the user needs to override where the system-level configuration file is located, that should be their choice; the Spack snap should not set that override for them.

Therefore, the Spack snap should be classically confined so that the default configuration can be patched to not write to /snap/spack/rev-#, while still letting the Spack user set a site-level configuration should they choose to.

Thanks @nuccitheboss for the detailed explanation. You have done all the right things, trying extensively to make strict confinement work; much appreciated.

To me, Spack qualifies for classic confinement under multiple of the supported categories:

  • tools for local, non-root user driven configuration of/switching to development workspaces/environments
  • HPC or orchestration agents/software […]
  • IDEs (given the use of multiple compiler back-ends)

I therefore support the granting of classic for Spack.

I have vetted the publisher. This is now live.


Awesome! Thank you @dclane for the review and vetting!