helm-chart: fix missing PDB config maxUnavailable, and default to 1 #1418
PDBs are tricky resources that can get things stuck in some situations unless we are careful. For example, if we have a single replica and set `minAvailable: 1`, automated k8s node upgrades will fail until someone manually deletes the pod on the node being removed. PDBs shouldn't block automated maintenance in a k8s cluster unless they are explicitly configured to do so, I'd say. But a PDB should help ensure that if you have two replicas running, both aren't removed at the same time. To do that, one can use either `minAvailable: 1` or `maxUnavailable: 1`. Using `maxUnavailable: 1` works well with both one and two replicas of a pod, while `minAvailable: 1` only works well with two replicas.
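As a concrete illustration, here is a minimal sketch of a PDB using `maxUnavailable: 1`. This is not the chart's actual template; the name and selector labels are placeholders.

```yaml
# Hypothetical example -- the metadata name and selector labels are
# placeholders, not what this chart renders.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  # With maxUnavailable: 1 the eviction API may always evict one pod,
  # so node drains can make progress even with a single replica,
  # while two replicas are never evicted at the same time.
  maxUnavailable: 1
  selector:
    matchLabels:
      app: example
```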
This PR makes us use `maxUnavailable: 1` instead of `minAvailable: 1`. It also fixes a bug that made the helm chart config `pdb.maxUnavailable` be ignored.

I suspect this PR can cause upgrade issues, and it would be great to have an upgrade test of the Helm chart as well. That is tracked in #1195.

EDIT: rebased, and we have now tested this to be upgradable without issues.
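For reference, overriding the now-respected setting from values might look like the sketch below. Only the `pdb.maxUnavailable` key is taken from this PR; the surrounding values layout is an assumption, not verified against the chart.

```yaml
# values.yaml sketch -- pdb.maxUnavailable is the key this PR fixes;
# any other structure shown here is assumed for illustration.
pdb:
  # Defaults to 1 per this PR; set explicitly to override.
  maxUnavailable: 1
```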