
Changing ownership of s3fs volume mount point #50

Open
pres-t opened this issue Jun 18, 2020 · 0 comments
pres-t commented Jun 18, 2020

Hi,

I have an application container running as a non-root user (uid=111111, gid=111111) with an s3fs volume provisioned through a volume claim template.
The issue I'm facing is that when the volume is mounted, the mount point is owned by root, so the application cannot write to the volume.
The container spec has:

```yaml
securityContext:
  runAsUser: 11111
```

I've tried following the steps described at https://cloud.ibm.com/docs/containers?topic=containers-cs_troubleshoot_storage#cos_nonroot_access, but in my case I added an initContainer to do the job of fix-permissions.yaml. That did not resolve the problem.
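For reference, the initContainer I added looks roughly like this (the volume name, mount path, and image are illustrative; the chown mirrors what fix-permissions.yaml does):

```yaml
initContainers:
  - name: fix-permissions
    image: busybox
    # must run as root to change ownership of the mount point
    securityContext:
      runAsUser: 0
    command: ["sh", "-c", "chown -R 11111:11111 /mnt/s3fs"]
    volumeMounts:
      - name: s3fs-volume   # illustrative; must match the volume from the claim template
        mountPath: /mnt/s3fs
```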

Also, I've noticed that the flexVolume driver has options for:

- `kubernetes.io/fsGroup,omitempty`
- `kubernetes.io/mounterArgs.FsGroup,omitempty`

(based on https://github.com/IBM/ibmcloud-object-storage-plugin/blob/master/driver/driver.go#L75)
How are these options passed to the driver?
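My understanding (an assumption, not confirmed by the docs) is that the kubelet forwards the pod's `securityContext.fsGroup` to flexVolume drivers as the `kubernetes.io/fsGroup` mount option, so something like this in the pod spec might reach the driver:

```yaml
securityContext:
  runAsUser: 11111
  fsGroup: 11111   # assumed to be passed to the driver as kubernetes.io/fsGroup
```

If that's right, setting `fsGroup` here would be the supported way to hand group ownership to the driver rather than fixing permissions after the mount.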

- k8s node version: 1.16.7
- object storage plugin version: 1.8.16

Any help would be appreciated.
Thanks
