After upgrading k8s to version above 1.24, PVC is blocked in the UmountDevice stage #121134

Closed
LastNight1997 opened this issue Oct 11, 2023 · 17 comments
Labels
kind/bug · lifecycle/rotten · needs-triage · sig/storage · triage/needs-information

Comments

@LastNight1997
Contributor

What happened?

After upgrading k8s from 1.20 to 1.24, we deleted a pod that used a CSI PVC on a node that had not been drained before the upgrade. The pod was deleted successfully, but the PVC was still mounted and attached to the node; if a new pod uses the same PVC on another node, it reports a Multi-Attach error. In the kubelet log, we found the error:

Aug 22 14:51:36 ncije4dedvch9fjdtj8s0 kubelet[2444519]: E0822 14:51:36.720426 2444519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/ebs.csi.volcengine.com^vol-k4ucb4dmcffxuv3316ea podName: nodeName:}" failed. No retries permitted until 2023-08-22 14:53:38.720408461 +0800 CST m=+12441.648359884 (durationBeforeRetry 2m2s). Error: GetDeviceMountRefs check failed for volume "pvc-230ac950-e2c6-46ee-add4-bc3138858b4a" (UniqueName: "kubernetes.io/csi/ebs.csi.volcengine.com^vol-k4ucb4dmcffxuv3316ea") on node "172.28.25.127" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount /mnt/vdb/kubelet/plugins/kubernetes.io/csi/pv/pvc-230ac950-e2c6-46ee-add4-bc3138858b4a/globalmount /mnt/vdb/kubelet/plugins/kubernetes.io/csi/ebs.csi.volcengine.com/493ac2db47f5eb2e95c0a7ba237e94ac3221f60c2be498cf70bd802123bc52f8/globalmount]
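The stuck attachment can also be seen from the API side. A minimal sketch (the PV name is the one from the log above; this only lists the object, it does not fix anything):

$ kubectl get volumeattachment | grep pvc-230ac950-e2c6-46ee-add4-bc3138858b4a
# the VolumeAttachment stays present because kubelet never completes
# UnmountDevice, so the volume is still reported as in use and the
# attach/detach controller will not detach it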

What did you expect to happen?

The CSI PVC should be unmounted and detached successfully after the pod is deleted, and a new pod using the same PVC should be able to start on another node.

How can we reproduce it (as minimally and precisely as possible)?

  1. Deploy a k8s cluster with a version lower than 1.24, such as 1.22 or 1.23.
  2. Run a pod that mounts a CSI PVC that needs MountDevice (i.e. the NodeStageVolume CSI call).
  3. Upgrade k8s to 1.24 (for testing, it is enough to update only the kubelet binary and restart it).
  4. Delete the pod created in step 2.
  5. UnmountDevice for the PVC fails (see the sketch below).
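For steps 4 and 5, a rough sketch of what to run and observe on the affected node (the pod name is a placeholder, and this assumes kubelet runs as a systemd unit):

$ kubectl delete pod <pod-using-the-csi-pvc>
# the pod object is removed, but kubelet keeps retrying UnmountDevice:
$ journalctl -u kubelet | grep "GetDeviceMountRefs check failed"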

Anything else we need to know?

We determined that the cloud-disk PVC cannot be cleaned up because it fails in the UnmountDevice stage. Further analysis of the root cause follows:

  1. Kubernetes 1.24 changed the global staging path of a CSI volume. For related background, please see change node staging path for csi driver to PV agnostic #107065
  2. After kubelet is upgraded, the volume's device is re-mounted at the new global staging path.
  3. If a pod using the PVC was already running on the node before kubelet was upgraded, then when that pod is deleted the PVC enters the UnmountDevice stage. Since the mount record for the old staging path still exists, the check reports "is still mounted by other references" and UnmountDevice cannot succeed:
    // GetDeviceMountRefs lists other mount points that reference the same device
    // as deviceMountPath; after the upgrade this still includes the old per-PV staging path.
    refs, err := deviceMountableVolumePlugin.GetDeviceMountRefs(deviceMountPath)
    if err != nil || util.HasMountRefs(deviceMountPath, refs) {
        if err == nil {
            err = fmt.Errorf("the device mount path %q is still mounted by other references %v", deviceMountPath, refs)
        }
        // UnmountDevice is aborted here and retried later with backoff,
        // which produces the "No retries permitted until ..." log above.
        eventErr, detailedErr := deviceToDetach.GenerateError("GetDeviceMountRefs check failed", err)
        return volumetypes.NewOperationContext(eventErr, detailedErr, migrated)
    }
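For illustration, the stale old-format staging mounts can be inspected directly in the node's mount table. A minimal sketch (the path layouts in the comments only restate what already appears in the kubelet log above):

$ mount | grep kubernetes.io/csi | grep globalmount
# both staging path formats show up for the same block device:
#   old (pre-1.24): .../plugins/kubernetes.io/csi/pv/<pv-name>/globalmount
#   new (1.24+):    .../plugins/kubernetes.io/csi/<driver>/<hash-of-volume-handle>/globalmount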

Kubernetes version

$ kubectl version
v1.24.15

Cloud provider

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@LastNight1997 added the kind/bug label Oct 11, 2023
@k8s-ci-robot added the needs-sig and needs-triage labels Oct 11, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@LastNight1997
Contributor Author

/sig node

@k8s-ci-robot added the sig/node label and removed the needs-sig label Oct 11, 2023
@ndixita
Contributor

ndixita commented Oct 11, 2023

/sig storage
1.24 is not a supported version.
Assigning to sig storage for further action.

@k8s-ci-robot added the sig/storage label Oct 11, 2023
@ndixita
Contributor

ndixita commented Oct 11, 2023

We can also check whether this code path exists in later versions.

@ndixita
Contributor

ndixita commented Oct 11, 2023

/remove-sig node

@k8s-ci-robot removed the sig/node label Oct 11, 2023
@LastNight1997
Contributor Author

@ndixita this bug also exists in 1.28; for example, upgrading k8s from 1.23 to 1.28 also triggers this problem.

@LastNight1997 changed the title from "After upgrading k8s to 1.24, PVC is blocked in the UmountDevice stage" to "After upgrading k8s to version above 1.24, PVC is blocked in the UmountDevice stage" Oct 12, 2023
@carlory
Member

carlory commented Oct 12, 2023

/assign @LastNight1997

@vidhut-singh

What persistentVolumeReclaimPolicy is defined in the PV configuration? Is it "Retain"?
Please note that with Retain the PV will not be deleted when you trigger pod deletion. Also, is this only happening during the upgrade? Have you tried manually deleting the pod on the existing k8s version, and does it get deleted?
Which CSI driver are you using here?

@RomanBednar
Contributor

@LastNight1997 I believe this is expected behavior. The Kubernetes 1.24 release notes stated that a node has to be drained before updating the kubelet:

- Changed node staging path for CSI driver to use a PV agnostic path. Nodes must be drained before updating the kubelet with this change. ([#107065](https://github.com/kubernetes/kubernetes/pull/107065), [@saikat-royc](https://github.com/saikat-royc))

Looking at the reproducer, it does not seem the node was drained - this is crucial for letting kubelet remove the old mount paths before the update. After the update is done and pods get scheduled again, only the new paths should exist and kubelet won't hit this error.

@LastNight1997
Contributor Author

@RomanBednar In the case of a DaemonSet pod using a CSI PVC, the pod will not be cleaned up when we drain the node. Should we consider this case a problem?

@RomanBednar
Contributor

@LastNight1997 You should be able to drain it with --ignore-daemonsets or --ignore-daemonsets --force - unless there's a reason this might be a bad idea.
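For reference, the suggested drain would look roughly like this (the node name is a placeholder; note that --ignore-daemonsets makes kubectl drain proceed by skipping DaemonSet pods rather than evicting them, which is why the question about their staging mounts remains open):

$ kubectl drain <node-name> --ignore-daemonsets
# if pods without a controller also block the drain:
$ kubectl drain <node-name> --ignore-daemonsets --force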

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 30, 2024
@xing-yang
Contributor

/triage needs-information

@k8s-ci-robot added the triage/needs-information label Feb 21, 2024
@xing-yang
Contributor

Can you try this suggestion to drain the daemonsets? #121134 (comment)

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot removed the lifecycle/stale label Mar 22, 2024
@k8s-ci-robot added the lifecycle/rotten label Mar 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Apr 21, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
