This repository has been archived by the owner on Mar 26, 2020. It is now read-only.
Problem statement
We could write a lot about the problems of getting users up to speed on 'bricks', when they don't need to know about them at all. For users, it is a new concept. For admins, it takes a lot of brain cycles to remember the brick order and to plan the CLI command itself. For developers, it is hard to test at scale.
With some automation this can be reduced, but think about it: that automation is yet another layer of work for complexity that could have been avoided in the first place.
Four years back, the Gluster team wrote about the need for abstracting bricks away from users in its release plan document.
Ideally, an SDS should expose just two concepts: a Volume, a storage entity which has 'capacity', and Nodes, resources which have CPU and export storage (there can be multiple export points on such a 'node').
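The two-concept model above can be sketched as data types. This is a minimal illustration with invented names, not GD2's actual types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A resource with CPU that exports storage; may have several export points."""
    name: str
    export_points: List[str] = field(default_factory=list)

@dataclass
class Volume:
    """A storage entity the user identifies only by name and capacity; bricks stay internal."""
    name: str
    capacity: int  # bytes

# The user deals only with these two entities.
vol = Volume(name="vol1", capacity=100 * 2**30)  # 100 GiB
node = Node(name="server1", export_points=["/export/sdb", "/export/sdc"])
```

Nothing in the user-facing model mentions bricks; how capacity maps onto export points is left to the implementation.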
Benefits
Makes Gluster more of an SDS for everyday users, instead of only for 'gluster' experts.
How to go about this?
Provide two major sets of operating entities, Volumes and Nodes. These exist today.
Extend the Node operations
NOTE: currently called peers; consider making that an alias and defaulting to nodes.
Make any hardware- or machine-related operations of GD2 part of this set of operations,
e.g. editing the IP address or hostname, or having more than one hostname.
Allow grouping of networks, with options to pick a different network for different volumes or different processes (self-heal only, etc.).
Allow rack awareness, etc.
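The network-grouping idea above could look something like the following sketch, where a node's interfaces are tagged with named groups and options select a group per process. All names here are assumptions for illustration, not a GD2 API:

```python
# Hypothetical per-node network groups (name -> address on that network).
node_networks = {
    "public":  "192.168.1.10",  # client I/O
    "storage": "10.0.0.10",     # brick-to-brick traffic
    "heal":    "10.0.1.10",     # self-heal traffic only
}

# Hypothetical volume options choosing a network group per purpose.
volume_options = {
    "client.network": "public",
    "self-heal.network": "heal",
}

def address_for(purpose: str) -> str:
    """Resolve the address a given process should use, falling back to 'public'."""
    group = volume_options.get(purpose + ".network", "public")
    return node_networks[group]
```

With this shape, pointing self-heal at a dedicated network is a one-line option change rather than a per-brick setting.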
Extend the Volume operations
NOTE: Currently 'bricks' are a required part of the volume operations for create, expand, and shrink.
Allow just two options for volume create:
volume create with capacity specified (call it dynamic volume provisioning). For reference, the heketi project does something similar.
volume create with nodes specified (a few or all nodes, and a config can be provided listing which hardware resources to consider from those nodes).
How Gluster internally picks the bricks should be based on logic implemented locally, which can evolve in multiple ways based on volume type, more config options, and so on. It can also consider the points mentioned in #416.
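As one example of such locally implemented logic, here is a sketch of a greedy policy for capacity-based (dynamic) provisioning: split the request into fixed-size bricks and place each on the node with the most free space. This is an invented illustration of the idea, not GD2's actual algorithm:

```python
def pick_bricks(requested: int, free_by_node: dict, brick_size: int):
    """Greedily spread fixed-size bricks across the nodes with the most free space."""
    bricks = []
    needed = -(-requested // brick_size)  # ceiling division
    free = dict(free_by_node)             # don't mutate the caller's view
    for _ in range(needed):
        node = max(free, key=free.get)    # node with the most free capacity
        if free[node] < brick_size:
            raise ValueError("not enough free capacity in the cluster")
        free[node] -= brick_size
        bricks.append((node, brick_size))
    return bricks

# 30 units requested with 10-unit bricks: three bricks, spread by free space.
layout = pick_bricks(30, {"n1": 40, "n2": 25, "n3": 15}, brick_size=10)
```

Because this decision is internal, the policy can later become volume-type-aware or rack-aware without any change to the user-facing create command.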
I would recommend the GD2 team start getting into this mindset immediately, if it hasn't already.
NOTE: currently called peers, consider making it alias, and default to nodes.
I would recommend against this. node is a very generic term used across multiple contexts, including VMs, containers etc. peer specifically means a gluster peer and is unambiguous: it'll always automatically imply a gluster peer without having to explain the context.
I would keep this open till the first release to see how many of our APIs still need the concept of a 'brick'. Hope that is fine. We can run all the commands and see what needs to be done.