
cannot inherit group id(gid) from parent directory in Azure File NFS #682

Closed
andyzhangx opened this issue May 29, 2021 · 8 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@andyzhangx
Member

andyzhangx commented May 29, 2021

What happened:
After running chown on the parent directory of an NFSv4 file share, a newly created subdirectory still gets the root group id instead of inheriting the parent directory's gid. This makes the following fsGroup e2e tests fail:

  • asking for a grpid setting in Azure File NFS (the setgid inheritance described here is sketched after the reproduction steps below)
From mount(8): grpid|bsdgroups and nogrpid|sysvgroups. These options define what group id a newly created file gets. When grpid is set, it takes the group id of the directory in which it is created; otherwise (the default) it takes the fsgid of the current process, unless the directory has the setgid bit set, in which case it takes the gid from the parent directory, and also gets the setgid bit set if it is a directory itself.
  • chown behavior on Azure File NFSv4
# chown 1000:2000 azurefile -R
# ls -lt
total 1
drwxrwsrwx 2 1000 2000 64 May 29 13:27 azurefile
# cd azurefile
# ls -lt
total 171
-rw-r--r-- 1 1000 2000 174058 May 29 15:09 outfile
drwxr-xr-x 2 1000 2000     64 May 29 15:09 test
# mkdir a
# ls -lt
total 172
-rw-r--r-- 1 1000 2000 174203 May 29 15:09 outfile
drwxr-xr-x 2 root root     64 May 29 15:09 a
drwxr-xr-x 2 1000 2000     64 May 29 15:09 test
  • chown behavior on Blob NFSv3
# chown 1000:2000 blob -R
# ls -lt
total 0
drwxrwxrwx 2 1000 2000 0 May 29 15:13 blob
# cd blob
# mkdir test
# ls -lt
total 54
-rw-r--r-- 1 1000 2000 54201 May 29 15:15 outfile
drwxr-xr-x 2 root 2000     0 May 29 15:15 test
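
For comparison, here is a minimal sketch of the setgid-bit inheritance described in the mount(8) excerpt above, run on a local ext4 filesystem (the /tmp paths are made up for illustration). On a filesystem that honors the setgid bit, a directory created by root under a setgid parent inherits the parent's gid, which is exactly what the Azure File NFSv4 and Blob NFSv3 mounts above fail to do:
# local ext4, not the NFS mount
# mkdir /tmp/parent
# chown 1000:2000 /tmp/parent
# chmod g+s /tmp/parent      # set the setgid bit (the 's' in drwxrwsrwx)
# mkdir /tmp/parent/child    # created as root
# ls -l /tmp/parent
drwxr-sr-x 2 root 2000 4096 May 29 16:00 child   # gid 2000 inherited from parent, setgid bit propagated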

https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_azurefile-csi-driver/679/pull-azurefile-csi-driver-external-e2e-nfs/1398521114681937920

[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
[Fail] External Storage [Driver: file.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [Measurement] (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:717
Ran 27 of 5978 Specs in 817.561 seconds
FAIL! -- 21 Passed | 6 Failed | 0 Pending | 5951 Skipped 
STEP: Creating Pod in namespace fsgroupchangepolicy-6813 with fsgroup 1000
May 29 06:27:52.427: INFO: Pod fsgroupchangepolicy-6813/pod-3b8ae2ba-52fe-48e3-b2ad-6233b0bc434e started successfully
STEP: Creating a sub-directory and file, and verifying their ownership is 1000
May 29 06:27:52.427: INFO: ExecWithOptions {Command:[/bin/sh -c touch /mnt/volume1/file1] Namespace:fsgroupchangepolicy-6813 PodName:pod-3b8ae2ba-52fe-48e3-b2ad-6233b0bc434e ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 06:27:52.427: INFO: >>> kubeConfig: /root/tmp676422964/kubeconfig/kubeconfig.eastus.json
May 29 06:27:53.137: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/volume1/file1] Namespace:fsgroupchangepolicy-6813 PodName:pod-3b8ae2ba-52fe-48e3-b2ad-6233b0bc434e ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 06:27:53.137: INFO: >>> kubeConfig: /root/tmp676422964/kubeconfig/kubeconfig.eastus.json
May 29 06:27:53.663: INFO: pod fsgroupchangepolicy-6813/pod-3b8ae2ba-52fe-48e3-b2ad-6233b0bc434e exec for cmd ls -l /mnt/volume1/file1, stdout: -rw-r--r--    1 root     root             0 May 29 06:27 /mnt/volume1/file1, stderr: 
May 29 06:27:53.663: INFO: stdout split: [-rw-r--r-- 1 root root 0 May 29 06:27 /mnt/volume1/file1], expected gid: 1000
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.944922   16992 csi_mounter.go:89] kubernetes.io/csi: mounter.GetPath generated [/var/lib/kubelet/pods/766e5dbc-ac7d-40aa-a5e6-3bb4ac11a555/volumes/kubernetes.io~csi/pvc-d7aeef14-3b83-4239-8fb0-d51731c0f493/mount]
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.944931   16992 volume_linux.go:140] perform recursive ownership change for /var/lib/kubelet/pods/766e5dbc-ac7d-40aa-a5e6-3bb4ac11a555/volumes/kubernetes.io~csi/pvc-d7aeef14-3b83-4239-8fb0-d51731c0f493/mount
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.944940   16992 csi_mounter.go:89] kubernetes.io/csi: mounter.GetPath generated [/var/lib/kubelet/pods/766e5dbc-ac7d-40aa-a5e6-3bb4ac11a555/volumes/kubernetes.io~csi/pvc-d7aeef14-3b83-4239-8fb0-d51731c0f493/mount]
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.960626   16992 csi_mounter.go:279] kubernetes.io/csi: mounter.SetupAt fsGroup [1000] applied successfully to mc_andy-aks12052_andy-aks12052_eastus2#f7b82a30729694c7796af2a#pvcn-d7aeef14-3b83-4239-8fb0-d51731c0f493#
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.960647   16992 csi_mounter.go:282] kubernetes.io/csi: mounter.SetUp successfully requested NodePublish [/var/lib/kubelet/pods/766e5dbc-ac7d-40aa-a5e6-3bb4ac11a555/volumes/kubernetes.io~csi/pvc-d7aeef14-3b83-4239-8fb0-d51731c0f493/mount]
May 29 09:48:11 aks-nodepool1-28334476-vmss000001 kubelet[16992]: I0529 09:48:11.960668   16992 operation_generator.go:672] MountVolume.SetUp succeeded for volume "pvc-d7aeef14-3b83-4239-8fb0-d51731c0f493" (UniqueName: "kubernetes.io/csi/file.csi.azure.com^mc_andy-aks12052_andy-aks12052_eastus2#f7b82a30729694c7796af2a#pvcn-d7aeef14-3b83-4239-8fb0-d51731c0f493#") pod "statefulset-azurefile3-0" (UID: "766e5dbc-ac7d-40aa-a5e6-3bb4ac11a555")
    spec:
      securityContext:
         fsGroup: 1000
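
For reference, a minimal sketch of the kind of pod spec the fsgroupchangepolicy e2e tests exercise, assuming a PVC named pvc-azurefile-nfs provisioned by this driver (the pod and PVC names are hypothetical). Kubelet applies the fsGroup recursively at mount time, but files created afterwards inside the share still come up with gid root because the share does not inherit the parent directory's gid:

    apiVersion: v1
    kind: Pod
    metadata:
      name: fsgroup-test                      # hypothetical name
    spec:
      securityContext:
        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch   # the e2e suite tests both Always and OnRootMismatch
      containers:
      - name: write-pod
        image: busybox
        command: ["/bin/sh", "-c", "touch /mnt/volume1/file1 && sleep 3600"]
        volumeMounts:
        - name: volume1
          mountPath: /mnt/volume1
      volumes:
      - name: volume1
        persistentVolumeClaim:
          claimName: pvc-azurefile-nfs        # hypothetical PVC backed by file.csi.azure.com (protocol: nfs)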

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@andyzhangx andyzhangx added the kind/bug Categorizes issue or PR as related to a bug. label May 29, 2021
@andyzhangx andyzhangx changed the title cannot inherit uid, gid from root directory in Azure File NFS cannot inherit group id(gid) from root directory in Azure File NFS May 29, 2021
@andyzhangx andyzhangx changed the title cannot inherit group id(gid) from root directory in Azure File NFS cannot inherit group id(gid) from parent directory in Azure File NFS May 29, 2021
@Jiawei0227
Contributor

Is this only on NFS? What about CIFS?

@andyzhangx
Member Author

Is this only on NFS? What about CIFS?

@Jiawei0227 The CIFS fsGroup setting is tracked in #708; it should be fixed in the 1.22 release cycle.

@andyzhangx
Member Author

Update: the Azure File team will fix this issue before Nov. 2021.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@andyzhangx andyzhangx reopened this Feb 20, 2022
@andyzhangx andyzhangx removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2022