After upgrading k8s to a version above 1.24, PVC is blocked in the UnmountDevice stage #121134
Comments
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig node
/sig storage
We can also check if this code exists in later versions
/remove-sig node
@ndixita this bug also exists in 1.28; for example, upgrading the k8s version from 1.23 to 1.28 will also trigger this problem.
/assign @LastNight1997
What is the persistentVolumeReclaimPolicy defined in the PV configuration? Is it "Retain"?
@LastNight1997 I believe this is expected behavior. The Kubernetes 1.24 release notes state that a node has to be drained before updating the kubelet: kubernetes/CHANGELOG/CHANGELOG-1.24.md, line 2676 at commit afc302c.
Looking at the reproducer, it does not seem the node was drained. This is crucial for letting the kubelet remove the old mount paths before the update; once the update is done and pods are scheduled again, only the new paths should exist and the kubelet won't hit this error.
@RomanBednar In the case of a DaemonSet pod using a CSI PVC, the pod will not be cleaned up when we drain the node. Should we consider this case a problem?
@LastNight1997 You should be able to drain it with
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle stale, close it with /close, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/triage needs-information
Can you try this suggestion to drain the DaemonSets? #121134 (comment)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle rotten, close it with /close, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can reopen this issue with /reopen, mark it as fresh with /remove-lifecycle rotten, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this: /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened?
After upgrading k8s from 1.20 to 1.24, we deleted a pod using a CSI PVC on a node that had not been drained before the upgrade. The pod was deleted successfully, but the PVC was still mounted and attached to the node; if a new pod uses the same PVC on another node, it reports a Multi-Attach error. In the kubelet log, we found this error:
Aug 22 14:51:36 ncije4dedvch9fjdtj8s0 kubelet[2444519]: E0822 14:51:36.720426 2444519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/ebs.csi.volcengine.com^vol-k4ucb4dmcffxuv3316ea podName: nodeName:}" failed. No retries permitted until 2023-08-22 14:53:38.720408461 +0800 CST m=+12441.648359884 (durationBeforeRetry 2m2s). Error: GetDeviceMountRefs check failed for volume "pvc-230ac950-e2c6-46ee-add4-bc3138858b4a" (UniqueName: "kubernetes.io/csi/ebs.csi.volcengine.com^vol-k4ucb4dmcffxuv3316ea") on node "172.28.25.127" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount /mnt/vdb/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount /mnt/vdb/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount]
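As a diagnostic aid (my addition, not part of the original report), the references blocking the unmount can be listed directly on the node with the k8s.io/mount-utils package that the kubelet itself uses. A minimal sketch, assuming it is run as root on the affected node and that the globalmount path is the one from the error above:

```go
// Minimal diagnostic sketch (assumption: run as root on the affected node).
// It lists the mount references that the kubelet's GetDeviceMountRefs check
// would see for a CSI global mount path.
package main

import (
	"fmt"
	"os"

	mount "k8s.io/mount-utils"
)

func main() {
	// Path taken from the error message above; replace with your own.
	devicePath := "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount"

	mounter := mount.New("") // default system mounter; reads the mount table on Linux
	refs, err := mounter.GetMountRefs(devicePath)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to get mount refs: %v\n", err)
		os.Exit(1)
	}
	// Every path printed here is an "other reference" that blocks UnmountDevice.
	for _, ref := range refs {
		fmt.Println(ref)
	}
}
```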
What did you expect to happen?
The CSI PVC can be unmounted and detached successfully after the pod is deleted, and the new pod can be created successfully.
How can we reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
We determined that the cloud disk PVC cannot be successfully unmounted because it fails in the UnmountDevice stage. Further analysis of the root cause follows:
kubernetes/pkg/volume/util/operationexecutor/operation_generator.go, lines 951 to 959 at commit eafebcc
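For context, here is a simplified paraphrase of the guard around those lines (my sketch, not the exact upstream code; the helper name checkDeviceMountRefs and the substring test are my approximation of the GetDeviceMountRefs/HasMountRefs logic): before calling the CSI plugin's UnmountDevice, the operation generator collects every other mount reference of the device mount path and aborts while any remain, which is exactly the error in the kubelet log above.

```go
// Simplified paraphrase of the UnmountDevice guard in operation_generator.go
// (my approximation, not the exact upstream code).
package main

import (
	"fmt"
	"strings"
)

// checkDeviceMountRefs mimics the effect of the GetDeviceMountRefs/HasMountRefs
// check: any reference that does not belong to the device mount path itself is
// treated as an "other reference" and aborts the unmount.
func checkDeviceMountRefs(deviceMountPath string, refs []string) error {
	for _, ref := range refs {
		if !strings.Contains(ref, deviceMountPath) {
			return fmt.Errorf("the device mount path %q is still mounted by other references %v",
				deviceMountPath, refs)
		}
	}
	return nil
}

func main() {
	// Paths taken from the kubelet log above: the current global mount path and
	// the extra references that were left behind because the node was not
	// drained before the kubelet upgrade.
	deviceMountPath := "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount"
	refs := []string{
		"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount",
		"/mnt/vdb/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount",
		"/mnt/vdb/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount",
	}
	if err := checkDeviceMountRefs(deviceMountPath, refs); err != nil {
		// Surfaces in the kubelet log as "GetDeviceMountRefs check failed".
		fmt.Println("UnmountDevice blocked:", err)
	}
}
```

Because the extra references persist from before the upgrade, the check presumably keeps failing on every retry (note the durationBeforeRetry in the log) until those stale mounts are cleaned up.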
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)