Describe the feature you'd like to have.
Gluster pods should be able to float between nodes in response to failures. This includes support for more than one gluster pod per node. Note: the storage for the bricks must be able to move for this to be possible.
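A minimal sketch of what this could look like, assuming the gluster pods are managed as a StatefulSet and the brick storage lives on a network-attached StorageClass (all names, labels, and the image here are hypothetical, not part of any existing operator):

```yaml
# Hypothetical sketch: gluster server pods whose brick storage can follow them.
# The StatefulSet gives each pod a stable ordinal identity, and the
# volumeClaimTemplate binds each pod to a PVC; if that PVC is backed by
# network-attached storage, a rescheduled pod can reattach to the same
# bricks from a different node.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gluster
spec:
  serviceName: gluster              # headless Service providing per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: gluster
  template:
    metadata:
      labels:
        app: gluster
    spec:
      containers:
      - name: glusterd2
        image: gluster/glusterd2:latest   # image name is an assumption
        volumeMounts:
        - name: bricks
          mountPath: /var/lib/gluster/bricks
  volumeClaimTemplates:
  - metadata:
      name: bricks
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: network-attached  # must be detachable from the node
      resources:
        requests:
          storage: 100Gi
```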
What is the value to the end user? (why is it a priority?)
Currently, Gluster is deployed onto specific nodes, and when a node fails, loses connectivity, or is taken down for some reason, the corresponding gluster pod remains down until the node is repaired. Users would like better availability for their data by allowing the gluster pod to restart elsewhere if the back-end storage can still be accessed. This would also allow users to run multiple gluster clusters on the same set of storage nodes, lowering their minimum investment.
How will we know we have a good solution? (acceptance criteria)
- When a node that hosts a gluster pod is taken down, that pod restarts elsewhere in the cluster, properly rejoins, and heals.
- Moving gluster pods is non-disruptive to client workloads, even if all gluster pods are moved (one at a time); see the PodDisruptionBudget sketch after this list.
- Multiple gluster pods can run on a single node without interfering with each other (excepting CPU/memory/network bandwidth contention). They should be able to be part of the same or different gluster clusters.
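One way to express the "one at a time" constraint above, assuming the gluster pods carry the `app: gluster` label used in the earlier sketch, is a PodDisruptionBudget:

```yaml
# Hypothetical sketch: limit voluntary disruptions (node drains, rolling
# moves) to one gluster pod at a time, so replicated volumes keep enough
# live replicas to serve client I/O while a pod is being relocated.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: gluster-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: gluster
```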
Additional context
This requires a fixed identity for the pod that can travel with it (e.g., a DNS name), and that identity must be used properly by CSI & GD2 peers. If a stable IP is also needed, there will probably also have to be a Service per pod.
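A minimal sketch of the identity piece, assuming the StatefulSet from the first example: a headless Service gives each pod a stable DNS name that survives rescheduling, and, if a stable IP is required, an ordinary Service per pod can pin one. The port number is an assumption about what GD2 would expose:

```yaml
# Hypothetical sketch: stable per-pod identity.
# The headless Service makes each StatefulSet pod resolvable as
# <pod>.<service>.<namespace>.svc (e.g., gluster-0.gluster.default.svc),
# a name that stays the same wherever the pod lands.
apiVersion: v1
kind: Service
metadata:
  name: gluster
spec:
  clusterIP: None            # headless: DNS records only, no virtual IP
  selector:
    app: gluster
  ports:
  - name: mgmt
    port: 24007              # gluster management port; assumed for GD2
---
# If CSI/GD2 peers also need a stable IP, a normal Service per pod can
# provide one. The statefulset.kubernetes.io/pod-name label is set
# automatically on StatefulSet pods, so it can select exactly one pod.
apiVersion: v1
kind: Service
metadata:
  name: gluster-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: gluster-0
  ports:
  - name: mgmt
    port: 24007
```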
Depends on: