
[BUG] hostpath provisioner “permission denied” on R4E due to SELinux #643

Closed · DanielFroehlich opened this issue Mar 30, 2022 · 6 comments · Fixed by #695
Labels: kind/bug (Categorizes issue or PR as related to a bug.)
@DanielFroehlich (Contributor) commented:
What happened:

Deployed MicroShift using a RHEL for Edge ostree image.
Created a PVC and pod for the hostpath provisioner that is available from the MicroShift base installation.
The PVC does not get bound because the PV is not created. The provisioner pod log shows:

failed to provision volume with StorageClass "kubevirt-hostpath-provisioner": mkdir /var/hpvolumes/pvc-bb3312ba-ba35-4ca7-8873-b6d8334b22e7: permission denied

I tried
sudo chmod 777 /var/hpvolumes
which did not help. That hints at SELinux problems, even though the policy package is installed:
Installed Packages: microshift-selinux.noarch 4.8.0-2022_03_11_124751.el8

Trying a brute-force workaround:
sudo setenforce Permissive plus a restart of CRI-O and MicroShift, and voilà, it works.

The OpenShift docs have a step to configure SELinux for hostpath provisioning; maybe that step is missing here?
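For anyone hitting this: before dropping to permissive mode, the denial can be confirmed from the audit log and the directory label (a diagnostic sketch, assuming the default /var/hpvolumes path and that the audit tooling is installed):

    # show recent SELinux AVC denials mentioning the volume directory
    sudo ausearch -m avc -ts recent | grep hpvolumes
    # show the SELinux context of the directory itself
    ls -dZ /var/hpvolumes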

What you expected to happen:

It should have just worked out of the box.

How to reproduce it (as minimally and precisely as possible):

See above. A minimal illustrative manifest is sketched below.
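The following PVC/pod pair is a hypothetical reconstruction of the setup described above (resource names are made up; only the StorageClass name is taken from the log message):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpv-test-claim            # hypothetical name
    spec:
      storageClassName: kubevirt-hostpath-provisioner
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hpv-test-pod              # hypothetical name
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi8/ubi-minimal
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: hpv-test-claim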

Anything else we need to know?:

Environment:

  • Microshift version (use microshift version):
    MicroShift Version: 4.8.0-0.microshift-2022-03-11-124751
    Base OKD Version: 4.8.0-0.okd-2021-10-10-030117

  • Hardware configuration:
    virtualised

  • OS (e.g: cat /etc/os-release):
    NAME="Red Hat Enterprise Linux"
    VERSION="8.5 (Ootpa)"
    ID="rhel"
    ID_LIKE="fedora"
    VERSION_ID="8.5"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="Red Hat Enterprise Linux 8.5 (Ootpa)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"

  • Kernel (e.g. uname -a):
Linux localhost.localdomain 4.18.0-348.20.1.el8_5.x86_64 #1 SMP Tue Mar 8 12:56:54 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

  • Others:
    !!! RHEL/Edge rpm-ostree installation!!!

Relevant Logs

DanielFroehlich added the kind/bug (Categorizes issue or PR as related to a bug.) label on Mar 30, 2022
@DanielFroehlich (Contributor, Author) commented:
A much better and more secure workaround is:
sudo restorecon -R -v /var/hpvolumes
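restorecon only re-applies the contexts the loaded policy already defines. If the labels keep drifting, a persistent file-context rule can be recorded first (a sketch, assuming container_file_t is the intended type and the semanage tooling is installed):

    # register a persistent context rule for the whole tree
    sudo semanage fcontext -a -t container_file_t "/var/hpvolumes(/.*)?"
    # apply it to the existing files
    sudo restorecon -R -v /var/hpvolumes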

@fzdarsky (Contributor) commented:
One solution would be to deploy the following systemd unit:
https://github.com/kubevirt/hostpath-provisioner/blob/main/deploy/systemd/hostpath-provisioner.service
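For context, a minimal sketch of what such a relabel unit could look like (an approximation that assumes the provisioner uses /var/hpvolumes; the linked file is the authoritative version):

    [Unit]
    Description=Relabel hostpath-provisioner volume directory for SELinux
    Before=crio.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Assumption: /var/hpvolumes is the provisioner's host directory
    ExecStart=/usr/bin/chcon -R -t container_file_t /var/hpvolumes

    [Install]
    WantedBy=multi-user.target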

We should probably package that in our MicroShift RPM. Then again, we may switch the default provider to TopoLVM, which would make the workaround obsolete.

@mangelajo @oglok WDYT?

@DanielFroehlich (Contributor, Author) commented:

Please don't simply switch the provider. While TopoLVM is nice, it also has some drawbacks compared to the hostpath provisioner. If you do switch, please make it configurable in case customers want to stay, or provide instructions on how to configure it.

@mangelajo (Contributor) commented:

Makes sense, we should package this!

@mangelajo mangelajo self-assigned this Apr 11, 2022
@xsgordon (Member) commented:
@DanielFroehlich note I am still waiting for your write-up on this.

@DanielFroehlich (Contributor, Author) commented:
TopoLVM provides strict isolation and size enforcement for PVCs, which is usually a good thing, so TopoLVM should definitely be an option for MicroShift. But it also forces you to pre-allocate and size your PVCs up front, which can be a problem.

Assume you have two pods that both need local storage, but you don't know upfront how much storage each pod is going to need, or the demand even changes dynamically at runtime. For example, pod A needs 100G of storage in the morning but cleans up and shrinks back to 10G at lunchtime, while after lunch pod B suddenly needs 80G. If you have only 120G of total storage available and it is statically assigned to PVCs (as LVM volumes), you would need to constantly monitor and dynamically resize the PVCs. Is shrinking a PVC even possible with CSI?

With hostpath, there is no need for that. You can simply let all PVCs share the 120G of storage, and it will just work. If one pod consumes all the storage, that is a risk of course and can impact other pods. But that can be easily tackled if pods handle "disk full" situations correctly and gracefully (which they should do in any case). That is much easier to implement than a dynamic PVC resize algorithm.
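On the resize question: Kubernetes can expand a PVC when the StorageClass sets allowVolumeExpansion, but shrinking is not supported. Expansion is just a patch of the requested size (claim name hypothetical, reusing the sketch above):

    kubectl patch pvc hpv-test-claim -p '{"spec":{"resources":{"requests":{"storage":"80Gi"}}}}'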
