I’ve noticed that many other projects that interact with kubectl use classic confinement (as does kubectl itself). I also saw a request for a kubernetes-config interface back in February. I think a k8s-config interface would be a much better solution than each snap wrestling with `personal-files`. I’ve also run into a situation which can’t be solved cleanly using that interface, and I really don’t want to revert this snap to classic for one aspect of its behavior when I have everything else working in strict.
I’m working on getting doctl, in a strictly confined snap, compatible with kubectl. I’ve worked through most of the problem using `personal-files`.
The problem I’m having is that `doctl k8s cluster kubeconfig save <k8s_cluster>` is setting the command in `~/.kube/config` to the command called by the launcher, rather than the name of the `doctl` command itself.
`~/.kube/config`:

```yaml
...
users:
- name: do-sfo2-fred2-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - 12345dfa-whatever
      command: doctl.real
      env: null
```
I’m using the launcher so that I can share the doctl and kubectl config files with kubectl.
If I manually edit `~/.kube/config` to replace `doctl.real` with `doctl`, everything works fine.
An example of the errors I see:

```
hilary@doctl-snap:~$ kubectl --context do-sfo2-fred2 get nodes
Unable to connect to the server: getting credentials: exec: exec: "doctl.real": executable file not found in $PATH
```
The value of `command:` under `users: user: exec:` is set from `os.Args[0]` here.
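For illustration (a minimal sketch, not doctl’s actual code): a Go process reports whatever argv[0] its parent passed to exec, so a wrapper chain whose final exec uses the name `doctl.real` makes `os.Args[0]` report `doctl.real`. This self-contained demo re-execs itself under a fake argv[0] to show the effect:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Second pass: report the name we were exec'd as.
	if os.Args[0] == "doctl.real" {
		fmt.Println("child sees os.Args[0] =", os.Args[0])
		return
	}
	self, err := os.Executable()
	if err != nil {
		panic(err)
	}
	// First pass: mimic a wrapper whose exec passes the private
	// binary name as argv[0].
	if err := syscall.Exec(self, []string{"doctl.real"}, os.Environ()); err != nil {
		panic(err)
	}
}
```

Built and run under any name, it prints `child sees os.Args[0] = doctl.real`, which is exactly the value that ends up in the kubeconfig.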
I think the problem arises because `command-doctl.wrapper` execs the launcher.
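If that’s the cause, one workaround I can imagine is a tiny exec shim that pins argv[0] back to `doctl`. This is only a sketch under assumptions: the path to the real binary is made up, and I don’t know whether the generated wrapper can be replaced this way.

```go
package main

import (
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	// Assumed location of the real binary inside the snap.
	realBin := filepath.Join(os.Getenv("SNAP"), "bin", "doctl.real")
	// Pin argv[0] to the public command name, keeping the user's
	// arguments, so doctl records "doctl" in ~/.kube/config.
	argv := append([]string{"doctl"}, os.Args[1:]...)
	if err := syscall.Exec(realBin, argv, os.Environ()); err != nil {
		os.Stderr.WriteString("exec doctl.real failed: " + err.Error() + "\n")
		os.Exit(1)
	}
}
```

Alternatively, doctl itself could write a fixed command name instead of `os.Args[0]`, which would sidestep wrapper naming entirely.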
Is there a known solution? Am I doing something wrong? I would really appreciate some help.