Kontena Lens needs access to users' hidden config files and cloud provider binaries via kubeconfig files (not directly). At first I thought personal-files would be enough, but I was wrong (see the previous discussion: Personal-files request for kontena-lens).
The requirements are understood as per the previous thread. @advocacy can you please perform the required vetting?
While I think it is correct to discuss this as a candidate for classic (thanks @alexmurray), it is still not clear to me why it must be classic. In terms of ~/.config/*, while it might be inconvenient to enumerate all the directories for the clouds kontena supports, that is not typically a justification for classic confinement in and of itself.
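For illustration only, here is a rough sketch of what such an enumeration might look like with the personal-files interface; the plug name and the directory list are made up for the example and would need to match the providers kontena actually supports:

  plugs:
    kube-configs:            # hypothetical plug name
      interface: personal-files
      read:
        - $HOME/.kube
        - $HOME/.aws
        - $HOME/.config/gcloud
        - $HOME/.azure
  apps:
    kontena-lens:
      plugs: [kube-configs, home, removable-media]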
I read the other topic for personal-files; can you comment on this statement: "The problem is that kubeconfig files can execute arbitrary binary with arbitrary cloud credentials to acquire authentication token/certificates. If kontena-lens cannot access those then it does not work at all from user perspective."
If we take out "arbitrary cloud credentials", since it is possible to enumerate the locations of cloud providers (see above), can you detail typical use cases for executing an "arbitrary binary"?
How is kontena driven to use these binaries? I.e., is it something like remote management, or is it something that the user/admin of the system drives?
Classic confinement for fluxctl seems like a similar request. Can you comment on whether it is authentication helpers that you need, or something else?
Fluxctl and kontena-lens have exactly the same problem (as does any app that needs to use kubeconfig). Both need to read kubeconfig files, which may contain authentication exec plugins. An exec plugin is configured as an executable path, args and environment variables… it could even be a user-created shell script that is executed during the authentication phase. In short: we cannot know which executables users have configured or where those executables read their own config files.
Auth exec is probably the biggest issue, but there are also other issues with kubeconfig and strict confinement… for example, kubeconfig may reference certificate files (paths) that can be basically anywhere the user has access to.
@pedronis - can you weigh in on this? We're now being told that things that need kubeconfig must have classic confinement because the certificates can be anywhere, the authentication helpers can be anything and the authentication helpers may use anything.
@jakolehm, @dholbach - normally, the fact that a thing can use files from anywhere does not by itself justify classic confinement (I'm not saying that it isn't justified here; just setting the context for the conversation). It still isn't clear to me why a) the most widely used authentication helpers can't be enumerated along with their default paths for personal-files/system-files (perhaps aided via snapcraft parts) and b) home plus removable-media can't be used for non-default cases. Where are authentication helpers typically installed?
Rather than talking in generalities, can you provide a couple/few concrete, non-contrived examples of where personal-files, home and removable-media aren't sufficient?
Yes, it would be great to have examples to understand the config access needs and plugin use needs for this kind of k8s add-on.
Either system-/personal-files or even a dedicated interface for k8s add-ons would be tenable. But without more information about the needed accesses, it's hard to decide which direction to go other than just falling back to classic.
@jdstrand I think the topic Classic confinement for fluxctl has similar issues as well?
A few examples of kubeconfigs (just the interesting parts):
static client certificates (the path can point anywhere):
- name: minikube
  user:
    client-certificate: /home/jari/.minikube/client.crt
    client-key: /home/jari/.minikube/client.key
authentication via an executable (can be anything, anywhere in the local filesystem):
- name: jari@eks-kontena-nvme.eu-north-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks-kontena-nvme
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: kontenalabs
authentication via auth-provider (Google uses this):
- name: gke_svc-vodka_europe-north1_jakolehm-europe-north1
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Yes, which is why I pulled @dholbach into this conversation.
Thanks for the additional information.
While the path could be anywhere, the user is in a position to configure the paths, so personal-files, home and removable-media are as sufficient for this as for any other strict mode snap AFAICT, and files outside those paths can be handled via documentation. Please confirm that it is the user of the host who is writing the kubeconfig in question.
This does require that aws-iam-authenticator (and therefore all the popular auth helpers) be packaged in the snap, which is where I was saying snapcraft parts could help (see the sketch below). In practice, how many popular/standard auth helpers are there? 5, 10, 50? Do the auth helpers change often and/or are they specific to particular versions of k8s or the providers (e.g., is there just one aws-iam-authenticator that can handle all versions of aws auth, or are there several/many where the developer has to pick the right one for his/her specific deployment)?
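As a rough sketch only, assuming the upstream repository builds with snapcraft's standard go plugin, such a part might look something like this (the tag shown is a hypothetical pin, not a recommendation):

  parts:
    aws-iam-authenticator:
      plugin: go
      source: https://github.com/kubernetes-sigs/aws-iam-authenticator.git
      # hypothetical pin; a real snap would track whichever release matches
      # the Kubernetes versions it supports
      source-tag: v0.5.9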
This one requires classic since /usr/local is not in the snap's runtime. That said, if the gcloud helper were in the snap, this wouldn't be needed (see above; do the various providers fragment their individual auth helpers, or is there one auth helper per provider (e.g., where the snapcraft part can always pull the latest for a particular provider and expect it to work everywhere for that provider))?
For GKE users, the kubeconfig is modified by the gcloud command.
(I'm using the gcloud snap, which also happens to use classic confinement.)
When running gcloud container clusters get-credentials, the resulting kubeconfig sets cmd-path to the path of the binary in use.
On my Ubuntu system, you can see that this path is set to the full path of the gcloud binary from my snap:
- name: gke_redacted_us-central1-a_test-kubeconfig-1
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /snap/google-cloud-sdk/104/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
This means that snaps like fluxctl and kontena-lens can't easily provide their own copy, as the path is difficult to anticipate.
Not only is it difficult to anticipate, there is no mechanism for your snap to access the command (not to mention that cmd-path is set to the path of the binary, not /snap/bin/gcloud, which means that gcloud is being run outside of snap run and not being tracked by the system properly; that is a bug in gcloud, though). Fixing that wouldn't allow your snap to use it either, since snaps aren't allowed to launch other snaps via /snap/bin.
Any updates? Does @advocacy have enough information to make the decision?
Vetting done. So +1 from me from that perspective.
Does that count for Classic confinement for fluxctl as well? Anything we need to do to move forward?
I'm also wondering what needs to be done to move this forward.
We're somewhat stuck. Help?
Sorry for the delay on this. This is a rather complicated request, and strict mode is an important consideration since Kubernetes on Ubuntu Core is a very interesting proposition and Ubuntu Core does not allow classic snaps. Please keep in mind that there is a tension between the curated Snap Store and authentication mechanisms that are so flexible that they are expected to be able to do anything on the system.
@pedronis and I discussed this in Vancouver and have decided that Kubernetes authentication helpers are a valid use case for classic, at least for the time being while the problem space is being understood.
After classic requests are granted, I think the next step along the path (as use cases dictate) is that applications like kontena-lens and fluxctl would provide a separate strict mode track for their snaps; these snaps would themselves ship and support specific authentication agents with opinionated configurations that work with specific k8s snaps on Ubuntu Core (also remember that Ubuntu Core systems won't have agents on the device, so they have to be installed via snaps).
Further down the line, it seems possible to use the content interface for a sort of plugin mechanism. E.g., providing snaps could expose authentication agent binaries and the like to connecting snaps that know how to consume them. For example, some sort of aws snap could provide the aws-iam-authenticator binary, or a gcloud snap the gcloud binary. These binaries would know about snaps, how to write configuration and run under another snap's security context, etc., and kontena-lens, fluxctl, etc. would become consumers of these content snaps. A nice property of this is that the experts for each snap still maintain their own snaps (i.e., the authentication agent upstreams could provide the agent snaps; kontena-lens, fluxctl, etc. just know how to consume them). While this does require a good deal of coordination between the providers and consumers, in practice this should evolve organically as use cases dictate (and of course, snapd could evolve to facilitate these use cases).
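Purely as a sketch of the shape this could take (the interface content tag, slot/plug names and paths here are invented for illustration, not an existing convention), a providing snap might expose its helper binaries over the content interface and a consuming snap would mount them into its own namespace:

  # providing snap (e.g. a hypothetical aws-iam-authenticator snap)
  slots:
    auth-helper:
      interface: content
      content: k8s-auth-helper
      read:
        - $SNAP/bin

  # consuming snap (e.g. kontena-lens or fluxctl)
  plugs:
    auth-helper:
      interface: content
      content: k8s-auth-helper
      target: $SNAP/k8s-auth-helpers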
Granted use of classic. This is live.