Are the various snap cores shared amongst snaps?

Let’s say the snap apps snapfoo & snapbar both define core20 as their base in their snapcraft.yaml, and a user has installed both applications via snap. Do those apps then share the same core20 rootfs at runtime?

the read-only squashfs of core20 is indeed shared in such cases; the sandbox and writable parts are unique per snap though.
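One way to see the sharing is from the mount table: each installed core20 revision is loop-mounted once at /snap/core20/&lt;revision&gt;, and every snap based on it reuses that mount. Here is a quick Go sketch (just an illustration, not snapd code) that prints those entries:

// core20mounts.go: list the squashfs mounts backing core20.
// Each installed revision shows up once, no matter how many
// snaps use it as their base.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// e.g. "/dev/loop3 /snap/core20/1081 squashfs ro,nodev,relatime 0 0"
		if strings.Contains(line, "/snap/core20/") && strings.Contains(line, "squashfs") {
			fmt.Println(line)
		}
	}
}

The writable, per-snap parts live elsewhere: system data under /var/snap/&lt;name&gt; and user data under ~/snap/&lt;name&gt;.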


Hi @oogra,

Thanks for responding.

Sounds like there’s isolation through Linux namespaces for individual snaps? Do you know if that includes an isolated network namespace per snap?

The wiki section on network namespaces sums up the interfaces/resources I’m looking to create/isolate. Ideally they’d be defined in the build process of core and created by systemd (or another init system).

Network namespace

Each namespace will have a private set of IP addresses, its own routing table, socket listing, connection tracking table, firewall, and other network-related resources.


My first thought was to overlay the /etc network configs used by netplan/systemd/NetworkManager onto core as a precursor to building my app, thus giving applications a build- and runtime guarantee of these virtualized networks’ existence.

As an example, imagine overlaying the core20 squashfs with

# /etc/netplan/00-snapnet.yaml

network:
    version: 2
    renderer: networkd
    ethernets:
        enp3s0:
            addresses:
                - 10.10.0.0/8
            gateway4: 10.10.0.1

And

# /etc/hosts
10.10.*.*       *.my-snap.internal
10.10.0.1       indexer.my-snap.internal

Then I’d proceed to have snap build, mount the squashfs & init systemd. After that, the application’s build process would start in its own isolated snap environment with access to these resources/interfaces.

If the squashfs were still shared, though, this would likely not be possible (or secure)?

it would probably be easier if you explained why you want to set up a local per-snap network? by default a snap cannot access the network unless you allow it with the network plug … is the above in the context of an ubuntu core device or in the context of an app/service snap?

It’s in the context of a snap application using an isolated (offline) virtualized network and a single gateway to userspace, which looks like it could be created through the use of the network plug.

while the layout might work for this, you will need a bit more than just the network plug.

you will need a virtual network device of some kind. to set that up you will need access to the kernel’s network stack and permission to execute the required syscalls, which means network-setup-control … but now you are also granting your app access to the host’s network configuration.
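for illustration, a rough sketch of the plugs such an app would declare in its snapcraft.yaml (my-snap and my-app are made-up names; network and network-setup-control are the actual interface names):

name: my-snap          # hypothetical snap name
base: core20

apps:
  my-app:              # hypothetical app name
    command: bin/my-app
    plugs:
      - network
      - network-setup-control

note that network-setup-control is not auto-connected, so it would need a manual snap connect or a store declaration.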

so while i guess you could technically achieve such a setup, you are also opening an attack surface into the host that your app could abuse …

i am not sure how deeply (if at all) any network namespacing is integrated in snapd currently. i have moved the topic into the snapd category so someone from the snapd team can comment :slight_smile:

you will need a virtual network device of some kind. to set that up you will need access to the kernel’s network stack and permission to execute the required syscalls

Yes, this is my question :slight_smile: I’d like the snap’s init system to create them, rather than my application at run-time.

If you’re curious, you can achieve this with a minimal amount of Go code. Here’s an example of chrooting, mounting (including /proc) & entering a fresh focal-server image as uid 0, pid 1 (the init process), all from an unprivileged namespace, in Go.

Try it for yourself; I posted a repository & instructions.

$ make enter

.env/kota -k run -c bash
2021/06/22 15:52:44     +main(uid: 1002, gid: 1003, ppid:25718, pid: 26075)
2021/06/22 15:52:44     ^->+run(uid: 1002, gid: 1003, ppid:25718, pid: 26075)
2021/06/22 15:52:44     +main(uid: 0, gid: 0, ppid:0, pid: 1)
2021/06/22 15:52:44     ^->+child(uid: 0, gid: 0, ppid:0, pid: 1)
2021/06/22 15:52:44 +>nilOr(uid: 0, gid: 0, ppid:0, pid: 1)

root@hostname:/

There’s some wrapper code and a Makefile, but the leg-work is done by these twenty-odd lines of Go code.

// Nest recursively re-executes the current process via
// /proc/self/exe, telling the child invocation to run
// the "child" step.
func (k Session) Nest(c *exec.Cmd) *exec.Cmd {
	k.ToExec = "child"
	k.Depth--
	return Contain(exec.Command("/proc/self/exe", k.Marshall()...))
}

// Chmount changes root to the rootfs specified in the
// session and mounts /proc inside the new root.
func (k Session) Chmount() error {
	if err := syscall.Chroot(k.FS.RootfsPath); err != nil {
		return err
	}
	if err := os.Chdir("/"); err != nil {
		return err
	}
	return syscall.Mount("proc", "proc", "proc", 0, "")
}

// Contain wraps any *exec.Cmd in its own set of Linux
// namespaces (UTS, user, mount, PID and network), mapping
// the current user and group to root inside the new
// user namespace.
func Contain(cmd *exec.Cmd) *exec.Cmd {
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWUSER |
			syscall.CLONE_NEWNS | syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNET,
		UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
		GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
	}
	return LinkIO(cmd)
}

// LinkIO attaches the command to the parent's stdio.
func LinkIO(cmd *exec.Cmd) *exec.Cmd {
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd
}

Make sure you run make clean afterwards to remove the filesystem if you try it out.
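One more detail relevant to the networking side: a namespace created with CLONE_NEWNET starts out empty apart from a loopback device, and even that is down until something brings it up. Here’s a minimal, self-contained sketch (separate from the repository above, but using the same re-exec technique) you can run as an unprivileged user to see it:

// netnsprobe.go: show that a fresh CLONE_NEWNET namespace
// contains only a loopback device, initially down.
package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// Inside the new network namespace: list what the kernel provides.
		ifaces, err := net.Interfaces()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, i := range ifaces {
			fmt.Printf("%s flags=[%s]\n", i.Name, i.Flags)
		}
		return
	}

	// Re-execute ourselves inside new user + network namespaces,
	// same trick as Nest/Contain above.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags:  syscall.CLONE_NEWUSER | syscall.CLONE_NEWNET,
		UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
		GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
	}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The child should print only lo, without the up flag. Any veth pair or bridge connecting the namespace to the outside has to be created from the host side, which is where an interface like network-setup-control would come in.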

Only a mount namespace is set up for snaps. This is done in snap-confine when the snap application is started.

I don’t think there are any plans to use network namespacing, which is probably a good thing: you know that snapd will not interfere with whatever you set up yourself.

You can try it out for yourself:

$ snap install hello-world
$ snap run --shell hello-world
# inside the snap shell: ip l shows the same interfaces as the host
$ ip l
# back on the host: enter a fresh network namespace first
$ unshare -n /bin/bash
$ snap run --shell hello-world
# inside the snap shell again: now ip l shows only the loopback device
$ ip l