Use of home and network plugs

the install hook does not run as the user, so $SNAP_USER_DATA (if set at all at this point) would point to /root/snap// …

Of course! I overlooked that sudo part before the install as well :frowning:

Ok, suppose I run the script above when initializing the app (if there is no configuration file, copy one). The question is: where can the app copy the bootstrap configuration file from?

have a look at line 31 in

this snap ships the default config in the toplevel of the package … if you have some subdir for your initial defaults you’d add that to your source path in the copy command …

cp $SNAP/mydefaultconfigpath/myconfig.conf $SNAP_USER_DATA/
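
In practice that copy usually lives in a small launcher script that the app's command points at, so it runs as the user instead of inside the install hook. A minimal sketch, assuming the defaults end up at the top level of $SNAP, the file is called .torxakis.yaml, and the real binary is txsserver:

#!/bin/sh
# Copy the shipped default configuration into the user's data dir on first run only.
if [ ! -e "$SNAP_USER_DATA/.torxakis.yaml" ]; then
    cp "$SNAP/.torxakis.yaml" "$SNAP_USER_DATA/"
fi
# Hand over to the real binary with the original arguments.
exec "$SNAP/txsserver" "$@"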

have a look at line 31 in

I see. For the case at hand, the bootstrap configuration (.torxakis.yaml) lives in the root directory of the source repository. But currently I’m not dumping the whole directory (I’m packaging a pre-built app, since Haskell is not a supported language):

parts:  
  torxakis-bin:
    plugin: dump
    source: .stack-work/install/x86_64-linux-nopie/lts-9.7/8.0.2/bin/

Should I put the configuration in a data directory and add:

  torxakis-data:
    plugin: dump
    source: data/

Yeah, something along those lines should work fine.
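
If you would rather not leave the defaults at the top level of $SNAP, the dump plugin's organize keyword can move them into a subdirectory; a sketch, where the defaults/ target is just an assumed name:

  torxakis-data:
    plugin: dump
    source: data/
    organize:
      .torxakis.yaml: defaults/.torxakis.yaml

The copy command would then read from $SNAP/defaults/.torxakis.yaml instead.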

Ok, so the snap seems to be taking some shape. Any idea on how I can automatically alias the commands?

Ok, apparently the installed snap cannot access the configuration file when it is somewhere inside the user’s home directory. The application I’m packaging looks for the configuration file in the current working directory, and if it is not found there it looks in the user’s home directory (which will be $SNAP_USER_DATA in this case). However, when the configuration file is in the current working directory (outside $SNAP_USER_DATA) the file is found, but for some reason it cannot be read:

➜  ~ torxakis.txsserver 9000
Found configuration file at `.torxakis.yaml`.
txsserver: InvalidYaml (Just (YamlException "Yaml file not found: .torxakis.yaml"))
CallStack (from HasCallStack):
  error, called at src/TxsServerConfig.hs:152:23 in main:TxsServerConfig
➜  ~ 

Don’t snaps get read access when using the home plug?

Once the snap is somewhere in the snap store, simply make a request in this forum (separate from this post) by following Process for aliases, auto-connections and tracks.
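
Until such a request is granted, users of the snap can set up the shortcut themselves with a manual alias; this is the stock snap alias command, nothing the snap has to ship:

sudo snap alias torxakis.txsserver txsserver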

Oh, right, the configuration file is a hidden file. However I don’t understand how my application can see it (but not read it…).

The snap won’t have access to $HOME/.* content by default, because there’s a lot of sensitive content in those files or directories. We’re still discussing introducing an interface that would allow accessing well-defined locations there, though.
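
For reference, the two interfaces from the title are declared per app in snapcraft.yaml; a sketch, assuming the app entry is named txsserver and the binary sits at the top of $SNAP:

apps:
  txsserver:
    command: txsserver
    plugs:
      - home     # read/write of non-hidden files in the user's home
      - network  # plain network access; a listening server may additionally need network-bind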

I’m waiting to get an e-mail alias so that I can register my organization and send a request to publish the snap. In the meantime, another question: if I re-build the binary I’m packaging, snap does not seem to notice there’s a new version of it and it reuses the old binary, forcing me to do a:

snapcraft clean torxakis-bin 

Is there a way to avoid this step? What is the rationale behind re-using the first binary snap saw?

Any ideas on this … ?

I just made the request: Request aliases for torxakis

The rationale is not re-doing time and resource consuming tasks that were already done, and the solution is caching. This sort of behavior is pretty standard in the building and packaging area.

I know @sergiusens has had some improvements to clean-up behavior queued up for quite a while. Not sure if any of that has landed yet.

The rationale is not re-doing time and resource consuming tasks that were already done, and the solution is caching. This sort of behavior is pretty standard in the building and packaging area.

In this case I think the behavior of snap might be wrong, because there is a new binary built (hence invalidating the “cache”), but the old one is used. Or am I missing the point here?

@dnadales If the binary update is in the local directory, then that’s a bug we want to see fixed. If the binary is in the network, then the expected behavior of caching it locally is to not re-download it again until requested or the local data is lost.

Which case is it?

As a side note, “snap” is either a package, or the snap tool. Neither of them is involved here. The behavior conversation is all around snapcraft.

Which case is it?

The binary is in a local folder:

parts:  
  torxakis-bin:
    # See 'snapcraft plugins'
    plugin: dump
    source: .stack-work/install/x86_64-linux-nopie/lts-9.7/8.0.2/bin/
    stage-packages: []

As a side note, “snap” is either a package, or the snap tool. Neither of them is involved here. The behavior conversation is all around snapcraft.

You’re right. In this case I’m referring to snapcraft.

@dnadales I’m on the fence about this case. We don’t want to scan the whole tree just to find out that a single text file has changed since the last build. It sounds like the proper behavior here would be for “snapcraft pull partname” to redo the job.

Note you can already do that today. It’s just a bit of a cryptic command, but we’re working to make that nicer:

$ snapcraft clean -s pull torxakis-bin

We don’t want to scan the whole tree just to find out that a single text file has changed since the last build.

Well, when the tree contains a binary that needs to be packaged and the binary has changed, I would expect the latest version to be put into the snap that gets built. But if the goal of snapcraft is to build from source, and I’m abusing the dump plugin because stack is not supported as a build tool, then there’s not much I can complain about.

Note you can already do that today. It’s just a bit of a cryptic command, but we’re working to make that nicer:

Currently I run:

snapcraft clean torxakis-bin

every time I build my project (which is a PITA if I forget to do it, because I would be releasing an outdated package). Would that command also do the job, or is it doing unnecessary work as well?

The problem is that there’s no right answer in this case that will make everyone happy. Think about the other case: you have a project that takes 2h to build, and then you touch the source code while preparing the next build. Many people would get pretty frustrated if the whole build was invalidated just because that one file was touched.

An explicit “snapcraft build partname” seems like a fair compromise.

It does the job, because the “pull” is today the first phase anyway. So cleaning it all is equivalent to cleaning the pull.
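
So a typical edit-and-rebuild cycle for this setup would look roughly like the following; the stack invocation is only a placeholder for however the binary is actually rebuilt:

stack build
snapcraft clean -s pull torxakis-bin   # re-pull just this part so the fresh binary is picked up
snapcraft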