Support common thinpool while auto provisioning bricks #987
Comments
I thought I had mentioned something related to this when I commented on issue #728, but I can't find it at the moment, so either I forgot to mention it or I wrote it somewhere else. Anyway, I think that every device provided to the system will need a concept of "formatting" or "layout" that can be extended over time. We'd start off with one or two basic layouts, like the current thinpool per brick, and then an alternative that has a thinpool per device. Future additions might involve VDO, different file systems, etc. I think I've mentioned this in the past, but if not: I really think we need a design doc for device management (and how it interacts with IVP) before we do any additional coding in this area. Personally, I'd like to take the approach I suggested here: #919 (comment), but as long as we do some design I think that'd be helpful.
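To make the "layout" idea concrete, here is a minimal Go sketch of what an extensible layout registry could look like. The `DeviceLayout` interface and registry are purely illustrative assumptions, not existing glusterd2 code.

```go
package device

import "fmt"

// DeviceLayout abstracts how a registered device is formatted and carved
// into bricks. New layouts (thinpool-per-device, VDO, other filesystems)
// can be added later without changing callers.
type DeviceLayout interface {
	// Name identifies the layout in device metadata,
	// e.g. "thinpool-per-brick" or "thinpool-per-device".
	Name() string
	// ProvisionBrick prepares an LV and filesystem of the given size on
	// the device and returns the mounted brick path.
	ProvisionBrick(devicePath string, sizeBytes uint64) (brickPath string, err error)
}

// layouts is the registry consulted when a device is registered or used.
var layouts = map[string]DeviceLayout{}

// registerLayout adds a layout implementation to the registry.
func registerLayout(l DeviceLayout) { layouts[l.Name()] = l }

// layoutFor looks up the layout recorded for a device at registration time.
func layoutFor(name string) (DeviceLayout, error) {
	l, ok := layouts[name]
	if !ok {
		return nil, fmt.Errorf("unknown device layout %q", name)
	}
	return l, nil
}
```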
@phlogistonjohn Good point. In addition to things like the device stack, there are parameters to those items, like the stripe size for LVM and XFS to match the underlying device geometry. This seems like a pretty big design space. Any thoughts on what we can do to move forward with a basic, yet extensible, initial design?
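As an illustration of those geometry-matched parameters, here is a small Go sketch that derives the relevant LVM and XFS options from a device's RAID geometry. The `Geometry` struct and helper names are assumptions; the `mkfs.xfs -d su=...,sw=...` and `lvcreate -i/-I` flags are the actual knobs involved.

```go
package device

import "fmt"

// Geometry describes the underlying RAID layout of a device.
type Geometry struct {
	StripeUnitKiB int // per-disk stripe unit, e.g. 256
	DataDisks     int // number of data-bearing disks, e.g. 8
}

// xfsStripeOpts returns mkfs.xfs data-section options matching the
// geometry, e.g. "-d su=256k,sw=8".
func xfsStripeOpts(g Geometry) string {
	return fmt.Sprintf("-d su=%dk,sw=%d", g.StripeUnitKiB, g.DataDisks)
}

// lvmStripeOpts returns lvcreate striping flags for the same geometry,
// e.g. "-i 8 -I 256" (stripe count and stripe size in KiB).
func lvmStripeOpts(g Geometry) string {
	return fmt.Sprintf("-i %d -I %d", g.DataDisks, g.StripeUnitKiB)
}
```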
As previously suggested, my favorite approach is to start a document in the repo and iterate on its contents through the PR process. I imagine that the initial doc would capture the minimal requirements to have feature parity with Heketi while fitting in with the gd2 approach to APIs & infrastructure. We can then add high-level additions like this one, where we need a little future-proofing, on top of that. I don't want to totally hijack this issue (although we sort of have already), but we might want to use issue #728 for this. It was more general and could serve as a starting point for the doc; it already captures some of my thoughts on the topic and a short list of the needed APIs provided by Heketi. I could start the doc, but we probably wouldn't see anything before next week if I do. If I don't hear anything else, that's what I'll assume.
@phlogistonjohn Thanks for the comments. Device management is only a subset of this requirement, so I don't think we need to merge this requirement with the other issue; this issue also talks about filters in the volume create API. Since this is a feature request, a GitHub issue of its own will help us triage better. Moving every requirement into a single GitHub issue will not help. We have many requirements like this which are related to device management in one way or another (#938, #920, #851, #728). (Previously we tried a single issue per feature, but because of the issues mentioned above we split into multiple issues; for example: Quota, Selfheal, Geo-rep, etc.) I agree that design discussions are spread across GitHub issues (#661 (comment)), commit messages, etc. I will add the design doc separately. Added a label (FW: Device Management) to identify all device-management-related issues.
In Kubernetes, snapshots are used mainly to create many new volumes by cloning the snapshot. For this use case, a single thinpool per brick is not scalable even when the snapshot reserve factor is 2. I think we need to use a common thinpool in GCS. Are there any known limitations with the common thinpool approach?
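For context on why the shared pool matters for cloning: an LVM thin snapshot can only live in the pool that holds its origin LV, so with a thinpool per brick every clone competes for that one brick-sized pool. A minimal Go sketch of clone creation follows; the function and naming are hypothetical, but `lvcreate -s` on a thin origin is the real mechanism.

```go
package device

import "os/exec"

// cloneBrick creates a writable thin snapshot ("clone") of originLV in the
// same thinpool, roughly:
//   lvcreate -s --setactivationskip n -n <clone> <vg>/<origin>
// With a common thinpool, all such clones share the device-wide pool
// instead of a small per-brick one.
func cloneBrick(vg, originLV, cloneName string) error {
	return exec.Command("lvcreate",
		"-s",                       // thin snapshot stays in the origin's pool
		"--setactivationskip", "n", // activate normally (thin snaps skip by default)
		"-n", cloneName,
		vg+"/"+originLV,
	).Run()
}
```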
@aravindavk I am not aware of any limitations. However, we do not do it that way in gluster-ansible or gdeploy.
There is significant metadata contention in LVM that causes performance problems for foreground I/O when deleting other LVs in the same thinpool. We need to support a common, shared pool of storage, but a single thinpool isn't currently performant enough.
One `thinpool` and one `lv` is created for each brick when bricks are auto-provisioned. For some use cases a common `thinpool` is more useful than creating an individual `thinpool` per brick. Per-node or per-device configuration is required to support both use cases. For example, while registering the device:

- Choose `thinpool` per brick or `common-thinpool`.
- If `common-thinpool` is chosen, create one common `thinpool` when the device is registered (consuming the full device size).
- While provisioning bricks, skip creating a per-brick `thinpool` if the device is configured to use the common `thinpool`.

Note: Once a device is configured to use a common `thinpool`, changing the behavior in between is not easy.
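To make the proposal concrete, here is a rough Go sketch of how the per-device branch might look. All names here (`Device`, `onRegister`, `provisionBrick`, `createThinpool`, `createThinLV`) are hypothetical and not glusterd2 code; only the `lvcreate` flags are real LVM options.

```go
package device

import (
	"fmt"
	"os/exec"
)

// Device records the choice made once at registration ("device add") time.
type Device struct {
	VgName            string
	SizeBytes         uint64
	UseCommonThinpool bool // fixed at registration; switching later is not easy
}

// onRegister: in common-thinpool mode, one pool spanning the device is
// created up front, as the issue proposes.
func onRegister(d *Device) error {
	if d.UseCommonThinpool {
		return createThinpool(d.VgName, "pool_common", d.SizeBytes)
	}
	return nil // per-brick mode: a pool is created lazily for each brick
}

// provisionBrick creates the brick's thin LV, skipping per-brick pool
// creation when the device is configured to use the common pool.
func provisionBrick(d *Device, brick string, size uint64) error {
	pool := "pool_common"
	if !d.UseCommonThinpool {
		pool = "pool_" + brick
		if err := createThinpool(d.VgName, pool, size); err != nil {
			return err
		}
	}
	return createThinLV(d.VgName, pool, brick, size)
}

// createThinpool wraps: lvcreate --type thin-pool -L <size>b -n <pool> <vg>
func createThinpool(vg, pool string, size uint64) error {
	return exec.Command("lvcreate", "--type", "thin-pool",
		"-L", fmt.Sprintf("%db", size), "-n", pool, vg).Run()
}

// createThinLV wraps: lvcreate --type thin -V <size>b -n <lv> --thinpool <pool> <vg>
func createThinLV(vg, pool, lv string, size uint64) error {
	return exec.Command("lvcreate", "--type", "thin",
		"-V", fmt.Sprintf("%db", size), "-n", lv, "--thinpool", pool, vg).Run()
}
```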