IPFS classic confinement request

go-ipfs is published at: Install ipfs on Linux | Snap Store

Users report multiple issues related to Snap confinement:

IPFS issues related to Snap come up fairly consistently with new users: a containerized format does not mesh well with IPFS’s capabilities and use-cases, and the errors produced by Snap installs are so vague that troubleshooting is difficult. This could be partially solved by recommending snap install --classic ipfs

https://github.com/ipfs/go-ipfs/issues/8688

Some examples (new user, so can’t post real links…):

https://github.com/ipfs/go-ipfs/issues/7872
https://github.com/ipfs/go-ipfs/issues/8553
https://github.com/ipfs/go-ipfs/issues/8580
https://github.com/ipfs/go-ipfs/issues/3788

In general, a user should be able to read any file on the system to do ipfs add myfile.txt (a similar use case to ipfs-cluster, which was already approved here) and to keep the same config and repo across upgrades.

To fix go-ipfs for Snap users we want to switch https://github.com/ipfs/go-ipfs/blob/master/snap/snapcraft.yaml to classic, but my understanding is that we need a :+1: from this manual review process first.
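For clarity, the change in question is essentially a one-line edit (sketch only; the other fields in the file are unchanged and elided here):

```yaml
# snap/snapcraft.yaml -- only the confinement key changes
name: ipfs
# ... other fields unchanged ...
confinement: classic  # currently strict (the snapcraft default)
```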

Happy to provide any details needed.

Can you please provide more details about the errors seen when ipfs is run under strict confinement? What operations require classic confinement?

Also can you please explain if ipfs fits within one of the existing categories for classic confinement as per the Process for reviewing classic confinement snaps?

Once again, Snapcraft already granted classic confinement to ipfs-cluster, which uses the ipfs daemons / APIs / requirements underneath, so my understanding is that approving this request should be just a formality.

See prior discussion in: Classic Confinement Request for the ipfs-cluster Snap

Examples of errors include:

  • inability to add files from the host filesystem to IPFS without copying them (ipfs add --nocopy requires direct FS access) – this means users with big files must either duplicate the required storage space or defeat confinement via sudo mount (see below, and the sketch after this list)
  • the CLI client and ecosystem tools are not compatible with the confinement model. Security-sensitive operations that users can’t do over HTTP RPC (key rotation, reading/changing remote pinning service access tokens) require direct filesystem access, which is impossible without Snap classic.
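A minimal sketch of the first failure mode (hypothetical path; the exact denial message varies by snapd and go-ipfs version):

```
# Under strict confinement, paths outside the snap's permitted areas
# (e.g. an external archive mount) are not readable at all:
$ ipfs add --nocopy /mnt/archive/dataset.tar
Error: open /mnt/archive/dataset.tar: permission denied
```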

The ipfs daemon falls under the following categories:

  • public cloud agents (IPFS Swarm https://docs.ipfs.io/concepts/glossary/#swarm)
  • programming languages (IPLD Data Model – https://ipld.io/glossary/#data-model)
  • running workloads on systems without traditional users where the systems are otherwise managed outside of the agent (self-organized relay system – libp2p Relay V2 (https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md))

Things have changed a bit since ipfs-cluster was approved for classic confinement; however, in this case I agree with the reasoning presented that ipfs cannot operate correctly without classic confinement, due to the expectation users have around arbitrary file access and the mounting of various paths within the snap.

Unfortunately I don’t agree with the reasoning that this is a public cloud agent (i.e. this snap is not used to manage VMs on a public cloud), nor a programming language (i.e. it doesn’t need to access, say, libraries or header files etc. on the host system), nor that it is for running workloads on systems without traditional users – I can’t see anything in the circuit-v2 documentation which indicates it can be used for running workloads / commands etc.; it would appear to be for reserving resources.

As such whilst the snap would appear to require classic confinement to run correctly, it does not appear to fit within one of the existing categories for classic confinement. @pedronis @niemeyer as snapd architects would you be able to comment on whether an additional category for classic confinement would be able to be introduced to support this use-case?

Folks, it’s been over a month; our users keep hitting the same walls with unnecessary snap confinement and report those issues to our issue tracker instead of this forum (example).

Are we able to move forward with this?

Once again, you already lifted confinement for ipfs-cluster, and the same rationale applies here:

looks like ipfs-cluster was granted classic because it was considered a “backup-tool”. since then the system-backup interface has been introduced and backup tools are now supposed to use this for read-only access to all of the host file system.

since you also say your users need to mount things, there is the brand new mount-control interface in snapd 2.54+ that allows app access to mounting things …

Ping @lidel, have you had a chance to consider @ogra’s suggestion above re: using system-backup and perhaps mount-control?

@lidel - ping, this request cannot proceed without the requested information

The mount-control interface allows the mounting and unmounting of both transient (non-persistent) and persistent filesystem mount points. (src)

This does not seem to help our use case: the ipfs binary needs access to existing mount points to do its job.

Is the suggestion that the ipfs snap use it with the where attribute set to the global root /? That feels like an antipattern.
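To illustrate: a hypothetical plug declaration (attribute names follow the snapd mount-control docs; the patterns are mine, not something we propose to ship) would have to be broad enough to cover arbitrary user paths, which amounts to re-granting whole-filesystem access and which snapd may not even permit:

```yaml
# Hypothetical mount-control plug -- illustrative only.
plugs:
  ipfs-mounts:
    interface: mount-control
    mount:
      - what: /dev/**   # any source device
        where: /**      # any target path: the "global root" antipattern
        options: [rw]
```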

system-backup provides read-only access to the system […] The system-backup interface provides read-only access to files under the following location: /var/lib/snapd/hostfs (src)

My understanding is that this is not a solution: read-only access is not enough, since commands like ipfs get --output=/writable/path need write access, and requiring users to prefix every path with /var/lib/snapd/hostfs is bad UX.
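An illustrative session (placeholder CID, approximate error messages) showing both problems:

```
# What users naturally type -- the host path is not visible under
# strict confinement with system-backup alone:
$ ipfs get <cid> --output=/home/user/Downloads/file
Error: open /home/user/Downloads/file: permission denied

# What they would have to type instead -- and hostfs is read-only,
# so writing the output there fails anyway:
$ ipfs get <cid> --output=/var/lib/snapd/hostfs/home/user/Downloads/file
Error: open /var/lib/snapd/hostfs/home/user/Downloads/file: read-only file system
```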


It is highly unlikely that go-ipfs maintainers will be able to spend time figuring out all the incantations here and paying the ongoing maintenance costs related to Snap. ipfs-cluster already gave up.

Classic confinement provides an inexpensive way of fixing the UX for go-ipfs users; if it is not granted, we will be forced to follow ipfs-cluster and remove Snap support from the official IPFS docs.

This is no longer relevant: Snap support was removed from Kubo (the new name for go-ipfs).

That is a shame but thank you for letting us know - I will remove this request from our queue.