Extend existing reattach-pv tool to allow using existing PVs to create newly named cluster. #6118
Conversation
…uster, from previous PVs from previously named cluster.
In findReleasedPVs, adjust pv.claimRef.name depending on whether we are re-creating a missing cluster or creating a new cluster from an existing cluster's PVs. When updating a PV's claimRef, also update the name, not only the uid and resourceVersion.
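For illustration, the effect of that claimRef update can be approximated manually with kubectl. This is only a sketch: the PV name (pv1) and the claim name (elasticsearch-data-clusterB-es-default-0, following ECK's usual `elasticsearch-data-<cluster>-es-<nodeSet>-<ordinal>` PVC naming) are hypothetical examples, not values taken from this PR.

```
# Hypothetical example: rebind a Released PV of the old cluster to a claim of the new one.
# uid and resourceVersion are cleared so the PV can bind to the new PVC,
# and the claim name is also updated because the target cluster has a new name.
kubectl patch pv pv1 --type merge -p '{
  "spec": {
    "claimRef": {
      "name": "elasticsearch-data-clusterB-es-default-0",
      "uid": null,
      "resourceVersion": null
    }
  }
}'
```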
This change would allow the Released PVs of clusterA to be used to build a new clusterB.
I think an alternative would be to recreate clusterA, with the same name, but in a different namespace. The "only" manual step is to patch the PersistentVolumes to refer to the new namespace, with something along those lines:
- Get the released PVs:

  ```
  k get pv | grep Released
  NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                      STORAGECLASS   AGE
  pv1    2Gi        RWO            Retain           Released   ns1/elasticsearch-data-elasticsearch-sample-es-default-0   retain-sc      17m
  pv2    2Gi        RWO            Retain           Released   ns1/elasticsearch-data-elasticsearch-sample-es-default-2   retain-sc      17m
  pv3    2Gi        RWO            Retain           Released   ns1/elasticsearch-data-elasticsearch-sample-es-default-1   retain-sc      17m
  ```

- Patch the PVs with the new namespace (a scripted variant is sketched below, after this list):

  ```
  kubectl patch pv pv1 pv2 pv3 -p '{"spec":{"claimRef": {"namespace": "ns2"} }}'
  ```
- Ensure that the new namespace `ns2` does exist and is set in the `metadata.namespace` of the Elasticsearch manifest we want to recreate.
- Run the `reattach-pv` tool as usual.
It involves some manual steps though, so it may be a bit error prone...
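A scripted version of the patch step above, as a minimal sketch assuming the released PVs are pv1, pv2 and pv3 and the new namespace is ns2:

```
# Patch every released PV listed above so its claimRef points at the new namespace.
for pv in pv1 pv2 pv3; do
  kubectl patch pv "$pv" --type merge -p '{"spec":{"claimRef":{"namespace":"ns2"}}}'
done

# Check that the claimRef namespaces were updated.
kubectl get pv pv1 pv2 pv3 -o custom-columns=NAME:.metadata.name,CLAIM_NS:.spec.claimRef.namespace
```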
Move var to boolean, and rename more appropriately. Move regex var out of loop. Move github link to permalink. Fix typo in comment.
Use oldEsName in the regex in matchPVsWithClaim to ensure only volumes associated with the old cluster are used.
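The effect of that filter can be sanity-checked from the command line. A rough equivalent, assuming ECK's default `elasticsearch-data-<cluster>-es-<nodeSet>-<ordinal>` claim naming and an old cluster named clusterA (both hypothetical here):

```
# List Released PVs whose claimRef matches the old cluster's PVC naming pattern.
kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{" "}{.spec.claimRef.name}{"\n"}{end}' \
  | grep -E ' elasticsearch-data-clusterA-es-.+-[0-9]+$'
```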
LGTM 👍
Code LGTM. I left two comments to try to make the doc a little more clear and concise.
README changes applied @thbkrkr. Let me know if all looks well. Thanks.
The proposed change extends the existing reattach-pv tool to support creating a new cluster (clusterB) from the PVs of a previously deleted cluster (clusterA), while keeping the new cluster's name (clusterB).
Why is this needed?
If a user accidentally deletes clusterA and then immediately re-creates it, the new cluster has the same name but new PersistentVolumes, so it is effectively a new cluster. If that cluster then begins ingesting data, it will contain indexes with valid data that also need to be retained.
This change would allow the Released PVs of clusterA to be used to build a new clusterB containing clusterA's old data. clusterA can then ingest the data from clusterB, so that the whole of the customer's data ends up in one cluster.
What is missing?
Testing Done
- Create the cluster with `kubectl apply -f es.yaml`.
- Delete it with `kubectl delete es`.
- Delete the PVCs from the previous cluster.
- Create it again with the same name with `kubectl apply -f es.yaml`.
- Then create a 2nd cluster using the existing PVs and verify that indexes with the previous data exist.
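For reference, the sequence above roughly corresponds to the following commands. This is a sketch: the manifest name es.yaml, the cluster name clusterA and the label selector are assumptions, and the reattach-pv invocation is left as a placeholder since its flags are not shown in this PR.

```
kubectl apply -f es.yaml        # create clusterA
kubectl delete es clusterA      # simulate the accidental deletion
# remove the old PVCs; with a Retain reclaim policy the PVs become Released
kubectl delete pvc -l elasticsearch.k8s.elastic.co/cluster-name=clusterA
kubectl apply -f es.yaml        # re-create clusterA with the same name and new PVs
# run the reattach-pv tool here to build clusterB from the Released PVs (flags omitted)
kubectl get pv                  # verify the old PVs are now Bound to clusterB's claims
```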