Consider the case of a snapped Kubernetes control plane service. A workload running inside Kubernetes should not be able to trigger an OOM kill that takes down the Kubernetes service itself.
Can we currently set the OOMScoreAdjust property in the systemd service created by snapcraft?
I don't think that is supported ATM … but you could possibly hack around it with a command-chain wrapper that does something like:
echo -200 >/proc/$$/oom_score_adj
plus using the browser-support interface to allow access to oom_score_adj …
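A minimal sketch of such a wrapper (an assumption about how you'd wire it in, not an official recipe; lowering the score requires the browser-support interface to be connected plus sufficient privilege, so the write is allowed to fail without blocking startup):

```shell
#!/bin/sh
# Hypothetical command-chain wrapper: try to lower this process's OOM score
# before exec'ing the real command. Writing a negative value needs privilege
# (and, under confinement, the browser-support interface), so failures are
# silently ignored rather than preventing the service from starting.
echo -200 > /proc/$$/oom_score_adj 2>/dev/null || true

# Hand over to the rest of the command chain.
exec "$@"
```

In snapcraft.yaml this script would be listed in the app's command-chain so it runs before the service binary.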
We do not allow explicitly setting OOMScoreAdjust for snaps (well, except for the browser-support hack @ogra mentioned, but that is just that: a hack). However, there is now a system-wide setting,
resilience.vitality-hint, which can be set to a comma-separated list of snaps; all the services in the listed snaps have their OOMScoreAdjust set in order of decreasing importance. So setting it to:
snap set system resilience.vitality-hint=snap1,snap2,snap3
will ensure that the OOMScoreAdjust for snap3 is set to a higher value than snap2's, and snap2's is higher than snap1's. snap1 gets the lowest possible setting for any snap service, which is exactly one point above snapd itself, which always has a score of -900.
So in practice, snap1 will have a score of -899, snap2 will have a score of -898 and snap3 will have a score of -897.
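The scoring rule above can be sketched as a tiny shell loop (the snap names are the placeholders from the example; -899 is one point above snapd's fixed -900):

```shell
# Reproduce the documented assignment: the first snap in the list gets -899,
# each subsequent snap one point higher (i.e. slightly less protected).
scores=""
i=0
for s in snap1 snap2 snap3; do
    scores="${scores:+$scores }$s=$((-899 + i))"
    i=$((i + 1))
done
echo "$scores"   # snap1=-899 snap2=-898 snap3=-897
```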
To use this for Kubernetes, you would need to ensure that your control plane service is in a separate snap from the worker snap(s), and have the control plane snap come before the worker snap(s) in the
resilience.vitality-hint snap setting.
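For example (the snap names k8s-control-plane and k8s-worker are hypothetical placeholders, not real snap names):

```shell
# Control plane first, so it gets the lowest (most protected) score.
snap set system resilience.vitality-hint=k8s-control-plane,k8s-worker

# Confirm the current value.
snap get system resilience.vitality-hint
```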
Also note that a snap cannot set this for itself; it's a system-wide setting, so the system administrator would need to configure it. A snap that manages the system using snapd-control can configure it, though (with a brand store).