Kubernetes test "multiple PV pointing to the same storage on the same node" fails #1913
This is expected behavior. For PV2, after CSI full sync determines that this volume needs to be registered as a container volume, it registers it, and then the volume becomes available in the query volume call. Until that happens, detach will fail. The vSphere CSI driver does not support creating multiple PVs with the same volume handle. We recommend customers use a ReadWriteMany volume if there is a use case for using the same volume across many pods.
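A minimal sketch of that recommendation, assuming a StorageClass backed by vSphere CNS file volumes (the claim and class names below are placeholders, not taken from this issue):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                 # placeholder name
spec:
  accessModes:
    - ReadWriteMany                 # one shared PVC/PV for all pods instead of duplicate PVs
  resources:
    requests:
      storage: 5Gi
  storageClassName: vsphere-file-sc # hypothetical StorageClass backed by CNS file volumes
```

Each pod then mounts the same claim, so there is never more than one PV object per backing volume.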
This delete is just a de-registration of the volume as a container volume. The FCD and VMDK are not deleted from the backend.
vSphere CSI driver is broken:
- https://bugzilla.redhat.com/show_bug.cgi?id=2106736
- kubernetes-sigs/vsphere-csi-driver#1913

The test was added in 4.11, skip it in 4.11 and newer.
/close
@jsafrane: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
So should this be considered a regression from in-tree to CSI, given that the in-tree test passes? Is there a public document about this issue?
@jingxu97 The CSI driver supports CNS volumes. For the CSI driver, the volume handle in a PV points to an FCD UUID. When the PV is deleted with the Retain policy, the volume is deregistered from CNS. Since the volume is no longer a CNS volume, detach fails until full sync happens, as explained here. The in-tree volume plugin does not support CNS volumes. For the in-tree plugin, the volume handle in a PV points to a VMDK path, and when the PV is deleted with the Retain policy there is no step that deregisters anything from CNS. It is not a regression; it works as expected. The in-tree plugin and the CSI driver simply have very different architectures.
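A minimal sketch of that difference (the FCD UUID and VMDK path below are placeholders, not values from this issue): the CSI PV identifies the volume by an FCD UUID in volumeHandle, while the in-tree PV identifies it by a VMDK path.

```yaml
# CSI PV: the volume is a CNS-registered FCD, identified by its UUID.
# Deleting this PV de-registers the volume from CNS even with Retain.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-csi
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   # FCD UUID (placeholder)
    fsType: ext4
---
# In-tree PV: the volume is just a VMDK path; CNS is not involved,
# so deleting the PV never de-registers anything.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-intree
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] kubevols/example.vmdk"     # placeholder path
    fsType: ext4
```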
I think it would be good to have a document mentioning that vSphere does not support the use case "multiple PV pointing to the same storage on the same node".
I created an issue for it: #2248
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Running Kubernetes e2e tests, this test fails about 50% of the time:
What you expected to happen:
The test passes
How to reproduce it (as minimally and precisely as possible):
Run Kubernetes 1.24 CSI tests with vSphere CSI driver.
Anything else we need to know?:
The test is quite complicated; these are the individual steps, roughly:

- PV1 is created with a volumeHandle pointing to a volume in the storage backend, and a pod uses it on a node.
- PV2 is created with the same volumeHandle and used on the same node. So far so good.
- The pods are deleted, and PV2 is deleted too. PV2 has reclaimPolicy: Retain, so no deletion in the storage backend should happen. Again, so far so good.

At this time, the CSI driver is not able to detach PV1 from the node, because of this error:
I was able to see that in the step where PV2 is deleted, the syncer deletes the volume from CNS:
But PV1 still exists at this time and the volume is still attached to the node. The attacher is then not able to find and detach the volume.
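For illustration, the conflicting part of the setup boils down to two statically defined PVs that share one volumeHandle. This is a sketch with placeholder names and a placeholder UUID, not the actual test manifests:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1                          # stays behind, still attached to the node
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   # same backing volume (placeholder UUID)
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2                          # deleting this one de-registers the volume from CNS
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   # same backing volume (placeholder UUID)
    fsType: ext4
```

With Retain, deleting pv2 removes only the Kubernetes object, but the syncer also de-registers the shared backing volume from CNS, which is what breaks the detach of pv1.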
The test was added in 1.24 in this PR to test for a regression.
Environment: