FCOS in OKD 4.12 is missing the GlusterFS packages, so GlusterFS PVs can no longer be mounted and the affected containers are stuck in the ContainerCreating state, with glusterfs mount errors in the event log.
I'm aware that GlusterFS support has been stale for a while now in both Kubernetes and OpenShift, but this abrupt, undocumented removal could cause issues for some users.
Kubernetes 1.25 did deprecate GlusterFS (more details here), but the release notes for OKD 4.12 don't mention anywhere that the glusterfs and glusterfs-fuse packages are no longer installed by default, so this behavior is unexpected.
This can be worked around by installing the glusterfs and glusterfs-fuse packages via the FCOS image layering mechanism described here. Example configuration for 4.12.0-0.okd-2023-03-05-022504:
FROM quay.io/openshift/okd-content@sha256:{HASH_FOR_INSTALLED_VERSION}
RUN rpm-ostree install -y glusterfs glusterfs-fuse && \
rpm-ostree cleanup -m && \
ostree container commit
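Once the layered image is built and pushed to a registry the cluster can pull from, one way to roll it out is a MachineConfig whose osImageURL points at the pushed image. A minimal sketch, assuming the worker pool; the registry path, tag, and object name below are placeholders, not part of the original report:

```yaml
# Hypothetical example: point worker nodes at the custom layered image.
# quay.io/REGISTRY/okd-glusterfs:4.12 is a placeholder for wherever the
# image built from the Containerfile above was pushed.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-glusterfs-layered
spec:
  osImageURL: quay.io/REGISTRY/okd-glusterfs:4.12
```

Applying this causes the machine-config operator to reboot worker nodes into the layered image, after which the glusterfs mounts should work again.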
This change should either be reverted, keeping GlusterFS until upstream Kubernetes removes it, or be clearly documented so users are warned before upgrading.
These RPMs were installed to pass conformance tests, so it's a coincidence that you could use the GlusterFS in-tree driver without additional packages. This was never mentioned in the documentation afaik; if it is, we'll need to remove those mentions.