Cluster Autoscaler erroneously triggered during PVC binding #923
Comments
@gtie Can you specify which CA version you are using?
@MaciekPytel, updated the original description with the version (it's v.1.2.2).
No one else seeing this sequence of events?
My team at work has implemented a work-around for this issue. We've introduced a delay so that pods that are too "young" will not trigger a scale-up. The CA will only consider scaling up for unschedulable pods past a certain age. This is configurable via the command line. So far, in production, setting a delay of 2m seems to have eliminated the issue for us. I suspect the delay could be as little as 30s and this workaround would still be effective. Would this be something worth submitting as a pull request?
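For illustration, here is a minimal sketch of what such an age filter could look like. This is not the actual Cluster Autoscaler patch; the function name, the package name, and the `scaleUpDelay` parameter (fed from the proposed CLI option) are made-up for this example.

```go
package podfilter

import (
	"time"

	apiv1 "k8s.io/api/core/v1"
)

// FilterOutYoungPods drops unschedulable pods created less than scaleUpDelay
// ago, so a pod that is only briefly unschedulable while its PVC is being
// bound does not trigger the creation of a new node. A scaleUpDelay of 0
// keeps the current behavior: every unschedulable pod counts immediately.
func FilterOutYoungPods(unschedulable []*apiv1.Pod, now time.Time, scaleUpDelay time.Duration) []*apiv1.Pod {
	olderPods := make([]*apiv1.Pod, 0, len(unschedulable))
	for _, pod := range unschedulable {
		if now.Sub(pod.CreationTimestamp.Time) >= scaleUpDelay {
			olderPods = append(olderPods, pod)
		}
	}
	return olderPods
}
```

With a delay of 2m, a pod would have to stay unschedulable for two minutes before it counts toward a scale-up decision, which is plenty of time for normal PVC binding to complete.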
That sounds like a good addition to solve this issue :)
- This is intended to address the issue described in kubernetes#923
- the delay is configurable via a CLI option
- in production (on AWS) we set this to a value of 2m
- the delay could possibly be set as low as 30s and still be effective depending on your workload and environment
- the default of 0 for the CLI option results in no change to the CA's behavior from defaults.

Change-Id: I7e3f36bb48641faaf8a392cca01a12b07fb0ee35
OK, I've posted a diff and a PR.
@aleksandra-malinowska Can you take a look?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Scaling up is currently triggered by any unschedulable pod. However, a pod can be unschedulable for just a few seconds while it is waiting for a volume to be bound. Those few seconds are enough for Cluster Autoscaler to kick off the creation of a new instance.
Here is how the sequence looks in the event stream:
In my particular case, where the above bug is combined with a high concentration of PodDisruptionBudget=0 pods and a significant amount of turnover, the new extra node is often there to stay. The combination quickly leads to very low usage density and very high server costs.
The above problem is observable in Kubernetes clusters running versions 1.9 and 1.10 on AWS. Cluster Autoscaler version in use:
v.1.2.2