
How to upgrade existing worker node group? #357

Closed
mrichman opened this issue Dec 18, 2018 · 2 comments

Comments

@mrichman
What's the procedure for updating existing worker nodes? The AWS docs want us to update the existing stack with this template: https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-nodegroup.yaml.

This is obviously not the same template that eksctl uses.

Should this be done using eksctl somehow, or should we be updating the CloudFormation stack that eksctl creates?

Assume I've already upgraded my cluster (e.g. from 1.10 to 1.11) and I've replaced kube-dns with CoreDNS.
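For reference, a minimal sketch of double-checking those preconditions, assuming kubectl already points at the upgraded cluster and CoreDNS was installed under its standard deployment name in kube-system:

```sh
# Confirm the control plane reports the expected Kubernetes version (e.g. 1.11).
kubectl version --short

# Confirm CoreDNS is deployed in place of kube-dns.
kubectl get deployment coredns -n kube-system
```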

@errordeveloper (Contributor) commented Dec 29, 2018

Now that #369 has landed, we can write down manual instructions before we get to implementing #348. No changes to eksctl are needed for this: you should be able to show how to upgrade the cluster via the AWS CLI, then use eksctl create nodegroup to add a replacement nodegroup, kubectl drain each old node (incidentally, there is #370, which should be fairly easy to tackle, if desired), and finally eksctl delete nodegroup for the old nodegroup. I'd leave the discussion of full automation to #348.
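A rough sketch of that manual flow, where the cluster name my-cluster and the nodegroup names ng-new / ng-old are placeholders and exact flags may vary between eksctl versions:

```sh
# 1. Upgrade the control plane via the AWS CLI (skip if already done), e.g. 1.10 -> 1.11.
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.11

# 2. Create a replacement nodegroup with eksctl.
eksctl create nodegroup --cluster=my-cluster --name=ng-new

# 3. Drain each node of the old nodegroup so workloads reschedule onto the new nodes.
kubectl get nodes                                  # identify the old nodes
kubectl drain <old-node-name> --ignore-daemonsets  # repeat for every old node

# 4. Once the old nodes are drained, delete the old nodegroup.
eksctl delete nodegroup --cluster=my-cluster --name=ng-old
```

Pods using local storage may additionally need --delete-local-data when draining; #370 is about making that drain step easier.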

@errordeveloper (Contributor)
Closing in favour of #348.
