No node_taints for default pool since v2.34 #10490
Comments
#10307 was just merged. For now it's a half-way measure that creates the default pool as a system pool, but we're still discussing re-enabling other (soft) taints as well.

Duplicate of #9183
This has been released in version 2.47.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 2.47.0"
}
# ... other configuration ...
@tombuildsstuff This is not fixed in 2.47.0. Can you please check?
@Cyanopus To avoid confusion, it might be worth noting that this fix does not allow setting

default_node_pool {
  node_taints = ["CriticalAddonsOnly=true:PreferNoSchedule"] # <- still not supported, use solution below
}

...but introduces a new

default_node_pool {
  only_critical_addons_enabled = true # supported
}
@robinmanuelthiel Thank you, you are completely correct. I'm still on the fence about allowing other soft taints for the default node pool, since the AKS team is also on the fence about keeping that functionality :) But this at least allows for a clean separation of system and application pools.
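To make the separation described above concrete, here is a minimal sketch of a cluster using the new `only_critical_addons_enabled` flag for the system pool plus a separate user pool; resource names, VM sizes, and node counts are illustrative assumptions, not taken from the issue:

```hcl
resource "azurerm_kubernetes_cluster" "example" {   # hypothetical resource name
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  default_node_pool {
    name                         = "system"
    vm_size                      = "Standard_DS2_v2"  # assumed size
    node_count                   = 2                  # redundancy for HA
    only_critical_addons_enabled = true               # marks this as a system pool
  }

  identity {
    type = "SystemAssigned"
  }
}

# Application workloads are scheduled on a separate user pool
resource "azurerm_kubernetes_cluster_node_pool" "user" {
  name                  = "user"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS3_v2"           # assumed size
  node_count            = 3
}
```

With this layout, system pods stay on the tainted default pool and user pods land on the user pool, without setting `node_taints` on the default pool directly.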
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Steps to reproduce
Since AKS needs a default node pool and does not accept empty AKS clusters, we used the default node pool just for "system management" and additional node pools as user pools. This actually sucks, because you even have to keep the default pool redundant for high-availability purposes, which means unnecessary costs! To prevent user pods from running on the default node pool, we set node_taints on the default pool like this, so that user pods are scheduled on the user pools only:
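The original configuration snippet was not captured on this page; a minimal sketch of the setup described above, assuming the taint value quoted later in the thread (`CriticalAddonsOnly=true:PreferNoSchedule`) and hypothetical names and sizes:

```hcl
resource "azurerm_kubernetes_cluster" "example" {   # hypothetical resource name
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  # Default pool reserved for system pods via a soft taint;
  # this is the attribute that provider v2.34+ no longer accepts here.
  default_node_pool {
    name        = "system"
    vm_size     = "Standard_DS2_v2"  # assumed size
    node_count  = 2                  # kept redundant for HA
    node_taints = ["CriticalAddonsOnly=true:PreferNoSchedule"]
  }

  identity {
    type = "SystemAssigned"
  }
}
```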
With the new provider this does not work anymore:
Expected behavior
The use case is to change the VM type of the node pool without losing the whole configuration and the pods when only a default pool is used.