[etcd] request to autoconnect removable-media


#1

I am using the kubernetes-core Juju bundle to deploy Kubernetes on my cluster (details can be found here: https://github.com/alejandroEsc/baremetal-k8s-on-lxd).

I have come to realize that, consistently, after a machine is rebooted, the etcd snap must be manually connected to the removable-media interface before etcd (the brains of Kubernetes) will come up on a master control node.

This is quite unfortunate, as etcd is a critical Kubernetes component: it serves as the cluster's key-value store.
First, is this expected behavior, or am I hitting some sort of edge case? Second, if it is expected, is there any way to make the connection happen automatically in the future?
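For reference, the manual step I have to perform after each reboot looks like this (assuming the snap is installed under the name etcd):

```shell
# Connect the etcd snap to the removable-media interface by hand.
sudo snap connect etcd:removable-media

# Verify the connection took effect: removable-media should now be
# listed with a plug/slot pair rather than as disconnected.
snap connections etcd
```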


#2

If you connect the interface, it should stay connected after reboot. @zyga may want more information on this.

Out of curiosity, what are the denials you are seeing? I’m a bit surprised etcd needs removable-media at all (unless you are actually storing data on removable drives; then it makes perfect sense).


#3

Thank you for your reply!

I am seeing errors like this:

Jun 07 05:03:07 nuc2-2-ubuntu-2CPU-8GB-1 etcd[1267]: cannot access data directory: mkdir /var/snap/etcd/current: permission denied

and similarly
cannot access data directory: /var/snap/etcd/current/.touch: permission denied

As for removable drives, I don't have enough knowledge of the privileges involved there, or of the other interface types. All I know is that this interface grants etcd permission to make changes to my ‘drive’. Please look at my linked document above if you are curious.


#4

Did you symlink /var, /var/snap, or /var/snap/etcd to something on an external drive? Can you give the output of ‘journalctl | grep audit’ at the time of the denial?
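Concretely, the checks I am asking for would be something like the following (the paths match the ones in your error messages):

```shell
# Check whether any of these paths is actually a symlink pointing at
# an external drive; 'l' at the start of the mode indicates a symlink.
ls -ld /var /var/snap /var/snap/etcd

# Collect kernel audit messages from around the time of the denial.
journalctl | grep audit
```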


#5

So, no to any links. Also, the snap runs in an LXD container that is configured to use a ZFS device for its storage, so maybe that's the issue? I will try to get the output tomorrow morning.
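For reference, the storage setup can be inspected from the LXD host like this (the container name here is just an example):

```shell
# List the storage pools LXD knows about and their drivers (zfs in my case).
lxc storage list

# Show the effective configuration of the container, including which
# storage pool backs its root disk.
lxc config show etcd-container --expanded
```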


#6

Sorry for the delay. I tried to find a machine that has this issue where I hadn't already connected the interface. Here is the output from running the command:

Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920244 631 flags.go:27] FLAG: --audit-log-batch-buffer-size=“10000”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920250 631 flags.go:27] FLAG: --audit-log-batch-max-size=“400”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920255 631 flags.go:27] FLAG: --audit-log-batch-max-wait=“30s”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920262 631 flags.go:27] FLAG: --audit-log-batch-throttle-burst=“15”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920268 631 flags.go:27] FLAG: --audit-log-batch-throttle-enable=“false”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920273 631 flags.go:27] FLAG: --audit-log-batch-throttle-qps=“10”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920283 631 flags.go:27] FLAG: --audit-log-format=“json”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920288 631 flags.go:27] FLAG: --audit-log-maxage=“0”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920293 631 flags.go:27] FLAG: --audit-log-maxbackup=“0”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920298 631 flags.go:27] FLAG: --audit-log-maxsize=“0”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920303 631 flags.go:27] FLAG: --audit-log-mode=“blocking”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920308 631 flags.go:27] FLAG: --audit-log-path=""
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920313 631 flags.go:27] FLAG: --audit-log-truncate-enabled=“false”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920318 631 flags.go:27] FLAG: --audit-log-truncate-max-batch-size=“10485760”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920327 631 flags.go:27] FLAG: --audit-log-truncate-max-event-size=“102400”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920332 631 flags.go:27] FLAG: --audit-policy-file=""
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920337 631 flags.go:27] FLAG: --audit-webhook-batch-buffer-size=“10000”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920343 631 flags.go:27] FLAG: --audit-webhook-batch-initial-backoff=“10s”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920349 631 flags.go:27] FLAG: --audit-webhook-batch-max-size=“400”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920354 631 flags.go:27] FLAG: --audit-webhook-batch-max-wait=“30s”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920359 631 flags.go:27] FLAG: --audit-webhook-batch-throttle-burst=“15”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920365 631 flags.go:27] FLAG: --audit-webhook-batch-throttle-enable=“true”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920370 631 flags.go:27] FLAG: --audit-webhook-batch-throttle-qps=“10”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920375 631 flags.go:27] FLAG: --audit-webhook-config-file=""
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920381 631 flags.go:27] FLAG: --audit-webhook-initial-backoff=“10s”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920387 631 flags.go:27] FLAG: --audit-webhook-mode=“batch”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920392 631 flags.go:27] FLAG: --audit-webhook-truncate-enabled=“false”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920397 631 flags.go:27] FLAG: --audit-webhook-truncate-max-batch-size=“10485760”
Jun 12 03:50:33 nuc-ae-b-ubuntu-2CPU-8GB-1 kube-apiserver.daemon[631]: I0612 03:50:33.920403 631 flags.go:27] FLAG: --audit-webhook-truncate-max-event-size=“102400”

I also noticed the AppArmor profiles aren't always picked up. I don't think it's related, but it's another thing worth mentioning.
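In case it helps, this is how I check whether the snap's profiles actually got loaded:

```shell
# List the AppArmor profiles currently loaded in the kernel and
# filter for the ones belonging to the etcd snap.
sudo aa-status | grep snap.etcd
```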


#7

Unfortunately, this didn't give useful information. Please make sure the interface is disconnected, then try to reproduce the issue. When you see the problem, run sudo journalctl | grep 'audit:' both inside and outside the container (I added the ':' this time to get rid of the kube messages).
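As a quick illustration of why the ':' matters, here is the filter run against two made-up sample lines; only the AppArmor audit line survives, while the kube-apiserver FLAG noise is dropped:

```shell
# Two hypothetical journal lines: an AppArmor denial and a kube-apiserver flag dump.
printf '%s\n' \
  'kernel: audit: type=1400 apparmor="DENIED" operation="mkdir" profile="snap.etcd.etcd" name="/var/snap/etcd/current/"' \
  'kube-apiserver.daemon[631]: flags.go:27] FLAG: --audit-log-path=""' \
  | grep 'audit:'
# Only the first line is printed. A plain 'audit' pattern would match
# both lines, because the FLAG line contains "audit" in its option name.
```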


#8

I would try this one: " snap connect etcd:removable-media "