Move node config deletion out of drainNode and into Delete #11731
Conversation
/ok-to-test

Can one of the admins verify this patch?
This is a good catch! How about we add an integration subtest to the multinode test that checks the scenario described in the PR description?
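A minimal sketch of what such a multinode restart subtest might look like. This is not minikube's actual integration-test code: the helper runMinikube, the profile name, and the "host: Running" status format are assumptions for illustration.

```go
package integration

import (
	"os/exec"
	"strings"
	"testing"
)

// runMinikube is a hypothetical helper that runs the minikube binary with
// the given arguments and fails the test on a non-zero exit code.
func runMinikube(t *testing.T, args ...string) string {
	t.Helper()
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		t.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

// TestMultiNodeRestart stops and restarts a three-node cluster and verifies
// that all nodes come back, which is the scenario this PR addresses.
func TestMultiNodeRestart(t *testing.T) {
	const profile = "multinode-restart"
	defer runMinikube(t, "delete", "-p", profile)

	runMinikube(t, "start", "-p", profile, "--nodes=3")
	runMinikube(t, "stop", "-p", profile)
	runMinikube(t, "start", "-p", profile)

	status := runMinikube(t, "status", "-p", profile)
	if got := strings.Count(status, "host: Running"); got != 3 {
		t.Errorf("expected 3 running nodes after restart, got %d:\n%s", got, status)
	}
}
```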
kvm2 driver with docker runtime
Times for minikube start: 49.8s 47.5s 46.8s 47.3s 46.7s
Times for minikube (PR 11731) ingress: 35.8s 35.7s 43.2s 36.7s 42.2s

docker driver with docker runtime
Times for minikube start: 22.4s 21.7s 21.2s 21.2s 22.8s
Times for minikube ingress: 40.5s 33.0s 32.0s 37.5s 34.0s

docker driver with containerd runtime
Times for minikube start: 31.5s 43.6s 47.3s 43.8s 47.2s
Previously, drainNode would delete the node from the config, but this means the machine can no longer be properly cleaned up if the node is drained for non-deletion reasons (rejoining a cluster).
Force-pushed from 15fcba1 to ae8cfa9
Created a new integration test: it breaks on master and passes with this PR. Also rebased to get rid of old breakages.
kvm2 driver with docker runtime
Times for minikube start: 50.8s 46.2s 47.0s 46.5s 48.6s
Times for minikube ingress: 36.2s 42.3s 36.1s 36.7s 34.7s

docker driver with docker runtime
Times for minikube (PR 11731) start: 20.9s 21.4s 20.8s 21.2s 21.5s
Times for minikube ingress: 38.0s 33.5s 37.5s 33.5s 38.0s

docker driver with containerd runtime
Times for minikube start: 47.2s 43.1s 48.0s 47.2s 43.1s
kvm2 driver with docker runtime
Times for minikube ingress: 34.2s 33.7s 36.2s 34.2s 34.7s
Times for minikube start: 48.4s 47.7s 47.1s 47.5s 47.1s

docker driver with docker runtime
Times for minikube start: 22.3s 21.5s 22.3s 21.5s 22.7s
Times for minikube ingress: 34.5s 34.5s 34.0s 33.5s 35.0s

docker driver with containerd runtime
Times for minikube start: 47.5s 42.7s 43.0s 43.8s 43.0s
These are the flake rates of all failed tests on KVM_Linux_containerd.
/retest-this-please
kvm2 driver with docker runtime
Times for minikube (PR 11731) start: 46.4s 46.2s 52.6s 47.4s 45.9s
Times for minikube ingress: 34.2s 34.2s 32.7s 35.2s 33.8s

docker driver with docker runtime
Times for minikube start: 22.5s 21.3s 21.1s 21.2s 21.8s
Times for minikube ingress: 28.0s 29.0s 29.0s 30.9s 31.0s

docker driver with containerd runtime
Times for minikube start: 31.9s 42.7s 42.9s 46.9s 43.8s
These are the flake rates of all failed tests on Docker_Linux_containerd.
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: andriyDev, sharifelgamal. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Previously, drainNode would delete the node from the config, but this means the machine can no longer be properly cleaned up if the node is drained for non-deletion reasons (rejoining a cluster).
This fixes two issues: first, code using an old version of the config after calling drainNode; second, losing track of machines after drainNode has been called.
Fixes #11687.
In particular, this PR fixes what happens when restarting a multinode cluster with more than two nodes.
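The change can be pictured with a minimal Go sketch. This is not minikube's actual code: the types and helpers (ClusterConfig, Node, kubectlDrain, deleteMachine, saveConfig) are illustrative stand-ins for the real implementation.

```go
package node

import "fmt"

type Node struct{ Name string }

type ClusterConfig struct {
	Name  string
	Nodes []Node
}

// kubectlDrain stands in for cordoning and draining the node via kubectl.
func kubectlDrain(cc *ClusterConfig, name string) error {
	fmt.Printf("draining %s in cluster %s\n", name, cc.Name)
	return nil
}

// deleteMachine stands in for tearing down the node's underlying machine.
func deleteMachine(cc *ClusterConfig, name string) error {
	fmt.Printf("deleting machine for %s\n", name)
	return nil
}

// saveConfig stands in for persisting the cluster config to disk.
func saveConfig(cc *ClusterConfig) error { return nil }

// drainNode now only drains: it no longer removes the node from the config,
// so callers that drain for non-deletion reasons (e.g. rejoining a cluster)
// keep an accurate view of the cluster's machines.
func drainNode(cc *ClusterConfig, name string) error {
	return kubectlDrain(cc, name)
}

// Delete drains the node, deletes its machine, and only then removes the
// node from the config and persists it.
func Delete(cc *ClusterConfig, name string) error {
	if err := drainNode(cc, name); err != nil {
		return err
	}
	if err := deleteMachine(cc, name); err != nil {
		return err
	}
	for i, n := range cc.Nodes {
		if n.Name == name {
			cc.Nodes = append(cc.Nodes[:i], cc.Nodes[i+1:]...)
			break
		}
	}
	return saveConfig(cc)
}
```

The key point is the ordering in Delete: the node stays in the config until its machine has been deleted, so cleanup can still locate the machine, and drainNode callers that are not deleting never lose config entries.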