What's the procedure for updating existing worker nodes? The AWS docs want us to update the existing stack with this template: https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-nodegroup.yaml.
This is obviously not the same template that eksctl uses.
Should this be done using eksctl somehow, or should we be updating the CloudFormation stack that eksctl creates?
Assume I've already upgraded my cluster (e.g. from 1.10 to 1.11) and I've replaced kube-dns with CoreDNS.
As #369 has landed, we can write down manual instructions before we get to implementing #348. No changes to eksctl are needed for this: you should be able to upgrade the cluster via the AWS CLI, then use `eksctl create nodegroup` to create a replacement nodegroup, `kubectl drain` each old node (by the way, there is #370, which should be fairly easy to tackle, if desired), and finally run `eksctl delete nodegroup` for the old nodegroup. I'd spare the discussion of full automation for #348.
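For reference, the steps above could be sketched roughly as follows. This is a hedged outline, not a tested procedure: the cluster name, region, version, node name, and nodegroup name are placeholders, and exact eksctl flags may differ by version.

```shell
# 1. Upgrade the control plane via the AWS CLI (placeholder cluster name/version):
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.11

# 2. Create a replacement nodegroup with eksctl:
eksctl create nodegroup --cluster=my-cluster

# 3. Drain each node of the old nodegroup so workloads move to the new one:
kubectl drain <old-node-name> --ignore-daemonsets

# 4. Once drained, delete the old nodegroup (and its CloudFormation stack):
eksctl delete nodegroup --cluster=my-cluster --name=<old-nodegroup>
```

Draining before deleting the old nodegroup gives pods a chance to reschedule onto the new nodes rather than being killed with the stack.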