Update of workers_group_defaults on already deployed node_groups #1102
Comments
Hmm, after reading #997 it looks like I need to create my own LT (launch template) for all my node_groups. Am I right? I'm a bit confused.
I came up with the following code:

```hcl
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = local.settings.terraform.cluster_name
  ...

  node_groups = { for key, value in local.settings.terraform.node_groups : key => merge({
    launch_template_id      = aws_launch_template.node_group[key].id
    launch_template_version = aws_launch_template.node_group[key].default_version
  }, value) }
}

resource "aws_launch_template" "node_group" {
  for_each = local.settings.terraform.node_groups

  name          = "${local.settings.terraform.cluster_name}-${each.key}-node-group"
  instance_type = each.value["instance_type"]

  metadata_options {
    http_tokens = "required"
  }

  update_default_version = true

  lifecycle {
    create_before_destroy = true
  }
}
```

Which seems to plan what I would expect.
Except that it seems to be a big bang situation (as an example, the targeted cluster has 6 node pools with ~20 nodes):

```
# module.eks.module.node_groups.aws_eks_node_group.workers["foo"] must be replaced
+/- resource "aws_eks_node_group" "workers" {
      ~ ami_type        = "AL2_x86_64" -> (known after apply) # forces replacement
      ~ arn             = "arn:aws:eks:us-east-1:xxx:nodegroup/xxx/foo/01badec0-2006-343e-db15-5913ff334450" -> (known after apply)
        cluster_name    = "xxx"
      ~ disk_size       = 20 -> (known after apply) # forces replacement
      ~ id              = "xxx:xxx-foo" -> (known after apply)
      ~ instance_types  = [
          - "m5.8xlarge",
        ] -> (known after apply) # forces replacement
      ~ labels          = {
          - "type" = "foo"
        } -> (known after apply)
      ~ node_group_name = "xxx-foo" -> (known after apply) # forces replacement
        node_role_arn   = "arn:aws:iam::xxx:role/xxx20201102162754325600000010"
      ~ release_version = "1.17.11-20201007" -> (known after apply)
      ~ resources       = [
          - {
              - autoscaling_groups              = [
                  - {
                      - name = "eks-12badec0-2006-343e-ab15-5913ff304450"
                    },
                ]
              - remote_access_security_group_id = ""
            },
        ] -> (known after apply)
      ~ status          = "ACTIVE" -> (known after apply)
        subnet_ids      = [
            "subnet-02bf4b5cc3c56b0e7",
            "subnet-0b80242f51666c85b",
        ]
      ~ tags            = {
          - "owner"    = "me"
          - "platform" = "xxx"
        } -> (known after apply)
      ~ version         = "1.17" -> (known after apply)

      + launch_template {
          + id      = (known after apply) # forces replacement
          + name    = (known after apply)
          + version = (known after apply)
        }

        scaling_config {
            desired_size = 1
            max_size     = 1
            min_size     = 1
        }
    }
```

From https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-nodes-not-recreated-when-the-launch_configurationlaunch_template-is-recreated I had some hope that I could drain manually and AWS would do its job. Is there any trick that could avoid a big bang situation like this, or do I need to take the dangerous route?
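One way to soften the blow, instead of mutating the existing keys in place, might be a blue/green style move: add a second, differently keyed entry to the node_groups map so the LT-backed group gets created alongside the old one, then drain and drop the old key in a later apply. A rough sketch reusing the locals from the snippet above; the `foo_v2` key and the empty override map are purely illustrative, not something the module provides:

```hcl
locals {
  # Original map plus a replacement group that will pick up the custom launch template.
  node_groups_with_replacement = merge(
    local.settings.terraform.node_groups,
    {
      foo_v2 = merge(local.settings.terraform.node_groups["foo"], {
        # per-group overrides for the new group, if any
      })
    }
  )
}

# Point the module's node_groups at local.node_groups_with_replacement instead of the
# original map, apply, cordon/drain the "foo" nodes with kubectl, then remove the
# "foo" key in a follow-up change so only the new group is left.
```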
Managed to upgrade my setup by using terraform state rm, applying the new conf, then removing the old node_groups from EKS manually. It was mostly smooth, except that while removing the last node_group AWS somehow decided to remove the role ARN from the aws-auth configmap... (might be a concurrency issue on the AWS side).
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm submitting a...
What is the current behavior?
Given the current conf, I need to change the IMDSv2 settings to require a token by default. But it doesn't seem to do anything, even if I add a new node_group (so a new LT is created).
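To make the intent concrete (the actual config is elided above), the change is along these lines; `metadata_http_tokens` is the key used by the module's worker-group defaults and, as far as I can tell, is not applied to managed node_groups, which seems to be the crux here. Names and values are illustrative:

```hcl
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = local.settings.terraform.cluster_name

  # Intended to enforce IMDSv2 on workers by default. This reaches self-managed
  # worker groups, but apparently not managed node_groups, which do not get a
  # module-managed launch template here.
  workers_group_defaults = {
    metadata_http_tokens = "required"
  }

  node_groups = local.settings.terraform.node_groups
}
```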
If this is a bug, how to reproduce? Please include a code sample if relevant.
What's the expected behavior?
I was hoping the new settings would land in a new version of the LT for every node group.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info
I may be missing something obvious :p, thanks!