Use personal-files interface for dynocsv app

The original post below requested classic confinement, but over the course of the discussion and some trial and error, I found that I actually need the personal-files interface instead. So I renamed the post as suggested by @alexmurray, and the actual request starts here: use the personal-files interface for the dynocsv app.

=== below is the original post requesting classic confinement ===

Hi, I am trying to build a snap distribution for one of my open-source tools.

It needs access to the user’s $HOME/.aws to use the right credentials and settings for the AWS profile specified by the --profile option.

I’ve tried strict confinement and set up the personal-files interface/plug as well as home, but when running in strict mode the app’s home directory is expanded into something like /home/alex/snap/dynocsv/x1, and obviously there is no .aws there to read the config and credentials from.

Of course, I would prefer to use strict confinement, but from what I found in various sources it is by design that a snap has its own $HOME which is different from the actual user’s $HOME. And I am using the AWS SDK for Go, which relies on the existence of $HOME/.aws with the proper config and credentials in it.

Within the app I could check for the snap-specific env variables as described at https://snapcraft.io/docs/environment-variables and reset the home to the actual user’s home directory (I have to check whether that is possible), but then the question is whether it would even be possible to read $HOME/.aws with the home plug’s read: all attribute. I’ve also tried the personal-files plug, i.e.

personal-files:
    read:
        - $HOME/.aws/config
        - $HOME/.aws/credentials

but I understand that would also require me to ask for a grant to use this plug, and would require the user to explicitly connect it?

Also, what are the implications of classic confinement apart from the user needing --classic during install? Can I release upgrades to the app without further reviews, or will every change and release require manual approval as well?

Also, I have read the following: Keep in mind that making it classic would mean that you can’t use the snap on a core device. What does that actually mean?

I think I managed to work around the need for classic confinement by redefining HOME when the app is running inside the snap runtime (although it would be great if snapd provided this env variable in addition to the existing ones): https://github.com/zshamrock/dynocsv/commit/ddda9e6a126584895b62e1a1cdfbede145340957#diff-de4572d7c0118f08948ab376e48525a3R46.

Another way to coerce HOME is to define it in the snapcraft.yaml

apps:
  foo:
    command: foo
    environment:
      HOME: $SNAP_USER_COMMON

Or similar.

Yes, but I want this to point to the actual user’s home directory, and there is no such env variable per https://snapcraft.io/docs/environment-variables. It would be good if they provided one, like ORIGINAL_HOME or similar, as has been suggested in various topics.

It seems like you have this working with strict confinement using the personal-files interface to provide read access to $HOME/.aws/config and $HOME/.aws/credentials? Could you please update this post to request access to this (rather than the current classic confinement request)?

Hi @alexmurray, I think this request can be closed for now, as it indeed looks like I can work around it using strict confinement by resetting HOME directly from the app (although it would be really great if snap provided an env variable pointing to the original user’s home directory, rather than me manually extracting it from the modified HOME value), and I don’t use the personal-files interface either.

This is what my current snapcraft.yaml looks like:

Although it is a little suspicious why and how it works, as I don’t even set read: all on the home plug here. Maybe snap somehow caches previously granted connections? I actually released it to the stable channel yesterday, but have yet to manually check on another machine whether it indeed works outside of my development machine. Is there a way to verify it using only the development machine, or is that not really reliable due to various snap-produced artifacts/caches/side effects?

Here is what snap connections reports on my development machine:

$ snap connections dynocsv
Interface        Plug                     Slot      Notes
desktop          dynocsv:desktop          :desktop  -
home             dynocsv:home             :home     -
network          dynocsv:network          :network  -
removable-media  dynocsv:removable-media  -         -

The home interface provides read/write access to the user’s actual $HOME but not to any sensitive files (so no hidden / dot files etc). The personal-files interface is then primarily designed to allow snaps to access specific configuration files etc of other, non-snap applications - so in your case, read access to $HOME/.aws/config etc makes a lot of sense.

Likely what is happening is that the snap has now created and stored its own $HOME/.aws/config, but where $HOME is the snap-specific ~/snap/dynocsv/<revision>/ (and so I suspect you should see an appropriate config / credential in ~/snap/dynocsv/current/.aws/config etc).

If your snap does not rely on accessing a pre-existing credential / config then I suspect you may not even need the home plug - if however you are expecting to use a pre-existing credential / config then you should be using personal-files as outlined above to provide read access to $HOME/.aws/config etc. In that case you will need to change this forum request into a request for granting connection to this via a snap store assertion.

To test your snap, probably the easiest thing is to create a separate user on your machine, or use a virtual machine.

Thank you, @alexmurray.

I tested the app a few hours ago on a dedicated AWS EC2 instance, where snapd had not even been installed before.

So I installed my app from the stable channel, i.e. sudo snap install dynocsv, and was then able to successfully run it: the app read the config and credentials files from the actual user’s $HOME directory and accessed the AWS service. Which again is strange to me, as I only specified the home plug without the read: all property.

To clarify a little: the app depends on reading the user’s existing AWS profile data, located in the actual user’s $HOME/.aws directory. The app doesn’t ship any predefined credentials, so there is no .aws directory in the snap-specific home directory (see below).

So I am wondering: maybe the home plug only restricts access to dot files, but not dot directories and anything inside them? Otherwise, why does it work?

Here is the output of snap connections from the EC2 machine:

$ snap connections
Interface  Plug             Slot      Notes
home       dynocsv:home     :home     -
network    dynocsv:network  :network  -

and the content of the current snap directory:

~/snap/dynocsv/current$ ls -al
total 8
drwxr-xr-x 2 admin admin 4096 Oct  8 09:00 .
drwxr-xr-x 4 admin admin 4096 Oct  8 09:00 ..

This is also the output of tree from /snap, i.e.

/snap/dynocsv$ tree
.
├── 4
│   ├── bin
│   │   └── dynocsv
│   ├── command-dynocsv.wrapper
│   ├── meta
│   │   ├── gui
│   │   └── snap.yaml
│   └── snap
│       └── command-chain
│           └── snapcraft-runner
└── current -> 4

And the content of the above snap.yaml:

name: dynocsv
version: 1.0.0
summary: Exports DynamoDB table into CSV
description: 'Exports DynamoDB table into CSV, additionally can filter out the specific
  columns and limit number of items to be exported.

'
base: core18
architectures:
- amd64
confinement: strict
grade: stable
license: MIT
apps:
  dynocsv:
    command: snap/command-chain/snapcraft-runner $SNAP/command-dynocsv.wrapper
    plugs:
    - network
    - removable-media
    - home

are you sure the EC2 kernel in use even offers full strict confinement?

you can check with:

snap debug confinement

and get more details with:

snap debug sandbox-features

and in context snap version might also be interesting (to see the kernel version snapd thinks you have)

Thank you, @ogra. I’ve installed the app on a Debian machine, and it reports partial confinement (the same as on my development machine, which is also Debian).

So maybe it makes sense to test it on Ubuntu instead?

Here is the output of the commands:

$ snap debug confinement
partial
admin@build-server:/snap/dynocsv$ snap debug sandbox-features
confinement-options:  classic devmode
dbus:                 mediated-bus-access
kmod:                 mediated-modprobe
mount:                freezer-cgroup-v1 layouts mount-namespace per-snap-persistency per-snap-profiles per-snap-updates per-snap-user-profiles stale-base-invalidation                                              
seccomp:              bpf-actlog bpf-argument-filtering
udev:                 device-cgroup-v1 tagging
admin@build-server:/snap/dynocsv$ snap version
snap    2.41
snapd   2.41
series  16
debian  9
kernel  4.9.0-8-amd64

So, judging from the confinement-options, it looks like it doesn’t support strict confinement, and that explains why it works on this EC2 instance and on my development machine?

yeah … try an Ubuntu EC2 image instead (i think that uses a proper kernel), what you want to see is:

$ snap debug confinement
strict

and

$ snap debug sandbox-features
apparmor:             kernel:caps kernel:dbus kernel:domain kernel:file kernel:mount kernel:namespaces kernel:network kernel:policy kernel:ptrace kernel:rlimit kernel:signal parser:unsafe policy:default support-level:full
confinement-options:  devmode strict
dbus:                 mediated-bus-access
kmod:                 mediated-modprobe
mount:                freezer-cgroup-v1 layouts mount-namespace per-snap-persistency per-snap-profiles per-snap-updates per-snap-user-profiles stale-base-invalidation
seccomp:              bpf-actlog bpf-argument-filtering kernel:allow kernel:errno kernel:kill kernel:log kernel:trace kernel:trap
udev:                 device-cgroup-v1 tagging

exactly …


… specifically, the missing pieces in your kernel are the AppArmor capabilities. The running kernel needs AppArmor, and it needs to have had a legacy patch applied, because snapd is still using an old API.

If you want to build your own kernel, the extra patches needed over and above the AppArmor in upstream Linux are at https://gitlab.com/apparmor/apparmor/tree/master/kernel-patches/v4.17.

Very interesting. On my computer I actually use Debian 10 (and Debian 9 on EC2), and according to the documentation AppArmor is enabled by default on Debian 10, which indeed looks to be the case, as this is what snap debug sandbox-features reports on my development machine:

$ snap debug sandbox-features
apparmor:             kernel:caps kernel:domain kernel:file kernel:mount kernel:namespaces kernel:network_v8 kernel:policy kernel:ptrace kernel:query kernel:rlimit kernel:signal parser:unsafe policy:downgraded support-level:partial
confinement-options:  classic devmode
dbus:                 mediated-bus-access
kmod:                 mediated-modprobe
mount:                freezer-cgroup-v1 layouts mount-namespace per-snap-persistency per-snap-profiles per-snap-updates per-snap-user-profiles stale-base-invalidation                                              
seccomp:              bpf-actlog bpf-argument-filtering kernel:allow kernel:errno kernel:kill_process kernel:kill_thread kernel:log kernel:trace kernel:trap                                                        
udev:                 device-cgroup-v1 tagging

Even so, it is still only a partial implementation of confinement, and only devmode and classic are supported. Is there anything else required on Debian 10, apart from the AppArmor module being enabled, to have support for strict confinement? The patch mentioned by @lucyllewy?

@ogra, so if I also compare the output for the apparmor line, the difference is:
policy:default support-level:full (yours) vs policy:downgraded support-level:partial (mine).

Additionally, although I don’t think it matters, you have kernel:dbus which I don’t, and I have kernel:query which you don’t have.

Although the major difference is the support-level.

Just a minor tidbit on this specific attribute: the read: all attribute doesn’t actually change which files in $HOME are readable; it simply allows the root user to read non-root users’ home directories/files.
Without this attribute (i.e. by default), the root user in a snap (say, running as a service, or a snap app run with sudo) would only be able to access /root/... and not /home/user1/....
I don’t think your snap needs this, FWIW.


@alexmurray @ogra I have finally tested the app on a proper Ubuntu system with full strict confinement support. And indeed, in strict mode the app can’t access the user’s personal files.

So I have now updated the snapcraft.yaml to use the personal-files interface. However, I need to be granted use of this interface, and ideally also auto-connection on install. Is it possible to grant me access to this interface/connection and enable auto-connect?

@akazlou it would be useful to explicitly list in this request what access via personal-files is being requested - so for other @reviewers I have copied the relevant part of the dynocsv snap.yaml below:

plugs:
  aws-profile:
    interface: personal-files
    read:
    - $HOME/.aws/config
    - $HOME/.aws/credentials

In this case, since dynocsv is clearly not the owner of these paths, I am glad you have used a name which is generally representative of the contents (ie. aws-profile) - however I would feel a bit more comfortable if this was named so that users were more explicitly aware that dynocsv has access to their AWS credentials - and so if this were named aws-config-credentials I think that might be more appropriate.

Also as dynocsv is not the clear owner of these paths I am inclined to not auto-connect this by default - as such I vote +1 for access to this, provided it is renamed aws-config-credentials but -1 for auto-connect.


Thank you, @alexmurray. I have just pushed the change and renamed the plug to aws-config-credentials as you have suggested: https://github.com/zshamrock/dynocsv/commit/b84b783c6018ee7c5b75f5805aece9b40035fa8f.

I am fine with not having auto-connect. I will then document in the usage instructions that the user has to connect this plug via snap connect dynocsv:aws-config-credentials.

How should I proceed with the pending https://dashboard.snapcraft.io/snaps/dynocsv/revisions/6/ review (it still has the original aws-profile name, though)? Should I reject it, wait here for approval instead, and then publish the final version once again?

@alexmurray - note, the typical naming recommendation for ~/.something/stuff is dot-something, so in this case dot-aws. However, I don’t strictly object to aws-config-credentials, and since the publisher already updated the snap, this is fine.

+1 for use of personal-files as @alexmurray suggested, -1 for auto-connection.

2 votes for use of personal-files, 0 against. Granting use of personal-files with the aws-config-credentials interface reference.

0 votes for auto-connection of personal-files, 2 against. Not granting auto-connection.

This is now live. Please note that a corresponding change needs to be made to the review-tools to pass automated review. This is done but not yet in production. Until that is in production, the snap will need to be manually approved.
