CSI Volume staging path contains PV name #105899
Comments
@jsafrane: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig storage
I suggest that if two volumes have the same unique volume ID (computed from the CSI driver name and volume handle), they should share the same staging path, i.e. the path should be derived from that ID rather than from the PV name. There are several existing issues about changing the global mount path, though.
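For illustration, a minimal Go sketch (not the actual kubelet code; the layout is inferred from the staging path quoted in the report below) of why a PV-name-based staging path diverges for two PVs that share a volume handle, while a path keyed on the unique volume ID would not:

```go
package main

import (
	"fmt"
	"path/filepath"
)

const kubeletRoot = "/var/lib/kubelet"

// pvNameStagingPath mirrors the layout observed in the report:
// <root>/plugins/kubernetes.io/csi/pv/<PV name>/globalmount
func pvNameStagingPath(pvName string) string {
	return filepath.Join(kubeletRoot, "plugins", "kubernetes.io", "csi", "pv", pvName, "globalmount")
}

// handleStagingPath is a hypothetical alternative keyed by the unique
// volume ID (driver name + volume handle), so both PVs map to one path.
// A real implementation would need to escape or hash the ID, since a
// volume handle is not guaranteed to be a valid path segment.
func handleStagingPath(driver, volumeHandle string) string {
	return filepath.Join(kubeletRoot, "plugins", "kubernetes.io", "csi", driver, volumeHandle, "globalmount")
}

func main() {
	// Two PVs pointing at the same backend volume.
	fmt.Println(pvNameStagingPath("pv1")) // path NodeStage created
	fmt.Println(pvNameStagingPath("pv2")) // path SetUpAt for PV2 expects; it does not exist
	// Handle-based paths are identical for both PVs:
	fmt.Println(handleStagingPath("hostpath.csi.k8s.io", "vol-42"))
	fmt.Println(handleStagingPath("hostpath.csi.k8s.io", "vol-42"))
}
```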
I think the in-tree driver can handle this case correctly because the global mount path uses the unique volume ID instead of the PV name. With CSI migration, does this break backward compatibility?
We require users to drain nodes before enabling migration or when updating to a Kubernetes version that has migration enabled by default. All volumes should therefore be unmounted.
@jingxu97 probably yes, this breaks backward compatibility, but since only migration for PD is being done, maybe it's rarer? We saw a customer case for this situation with NFS shares using the CSI Filestore driver. For PD, I think it would only come up when trying to use a PD volume ReadOnlyMany (ROX) across namespaces; I'm not sure if that's common.
Capturing some thoughts here inline to understand your suggestions, @jsafrane:
2.1. If we decide to make the change at a minor version boundary, then a node drain mandate would unstage any volumes before the kubelet is upgraded. This ensures any staging path for a CSI volume uses the new format.
2.2. If we do not want to enforce a node drain, kubelet can first look up the persisted JSON data (the reasoning is that if this file does not exist, we will not be able to retrieve the volume handle for the new path anyway), and otherwise fall back to trying the old format of the staging path (see the sketch after this list).
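A hedged sketch of option 2.2, assuming the persisted metadata is kubelet's vol_data.json file and that it records the volume handle (the exact file and field names here are assumptions for illustration, not the actual kubelet implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

const kubeletRoot = "/var/lib/kubelet"

// volData stands in for the JSON kubelet persists next to the volume;
// the field name is assumed for illustration.
type volData struct {
	VolumeHandle string `json:"volumeHandle"`
}

// resolveStagingPath prefers the persisted JSON to build the new,
// handle-based staging path, and falls back to the legacy PV-name-based
// path when the file is absent (i.e. the volume was staged by an older
// kubelet using the old layout).
func resolveStagingPath(pvName string) (string, error) {
	pvDir := filepath.Join(kubeletRoot, "plugins", "kubernetes.io", "csi", "pv", pvName)
	raw, err := os.ReadFile(filepath.Join(pvDir, "vol_data.json"))
	if os.IsNotExist(err) {
		// No JSON: we cannot learn the handle, so use the old format.
		return filepath.Join(pvDir, "globalmount"), nil
	}
	if err != nil {
		return "", err
	}
	var vd volData
	if err := json.Unmarshal(raw, &vd); err != nil {
		return "", err
	}
	// New, handle-keyed layout (hypothetical; escaping/hashing elided).
	return filepath.Join(kubeletRoot, "plugins", "kubernetes.io", "csi", vd.VolumeHandle, "globalmount"), nil
}

func main() {
	p, err := resolveStagingPath("pvc-f5fa91c3-bf91-42a9-a8a3-da516e0371f0")
	fmt.Println(p, err)
}
```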
/assign |
@saikat-royc yes, your summary is correct. We persist a JSON file; please make sure it contains everything that NodeUnstage may need.
We already do.
What happened?
When two CSI PVs point to the same volume in the storage backend, kubelet can't start pods that use these two PVs on the same node.
Both PVs have the same VolumeHandle, and kubelet is smart enough to recognize that they're the same volume (because they have the same unique volume ID). It calls NodeStage only once; however, the staging path contains the name of one of the PVs, say PV1:

/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f5fa91c3-bf91-42a9-a8a3-da516e0371f0/globalmount/
kubernetes/pkg/volume/csi/csi_attacher.go, line 661 in 3e6d122
One of the CSI volume SetUpAt calls then fails, because it uses PV2 to compute the staging path and that path does not exist.

What did you expect to happen?
Both pods should just start.
How can we reproduce it (as minimally and precisely as possible)?
Using a CSI HostPath volume: PV1 (dynamically provisioned for simplicity) and PV2 (basically PV1 with an edited name, keeping the same volumeHandle); a sketch of the pair follows below.
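A hypothetical sketch of what the two PVs could look like, expressed with the Go API types (the names, capacity, and volumeHandle are invented; the essential detail is simply that both PVs carry the identical VolumeHandle):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newHostPathPV builds a PV bound to the CSI hostpath driver
// (hostpath.csi.k8s.io). Both PVs below deliberately share the same
// volumeHandle, which is what triggers the bug.
func newHostPathPV(name, volumeHandle string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				CSI: &v1.CSIPersistentVolumeSource{
					Driver:       "hostpath.csi.k8s.io",
					VolumeHandle: volumeHandle,
				},
			},
		},
	}
}

func main() {
	pv1 := newHostPathPV("pv1", "e4b92bb2-example-handle")
	pv2 := newHostPathPV("pv2", "e4b92bb2-example-handle") // "edited PV1": new name, same handle
	fmt.Println(pv1.Name, pv1.Spec.CSI.VolumeHandle)
	fmt.Println(pv2.Name, pv2.Spec.CSI.VolumeHandle)
}
```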
PVC1, PVC2, Pod1 and Pod2 are trivial, nothing special there.
Kubernetes version