Question regarding connecting cluster nodes to leader if autounseal is set to false #58

If the value autounseal is set to false, does that require me to SSH into every Vault node that is not the leader and try to link them to the leader node?

Comments
To be honest, this case was not tested well. I did a little testing when I was just starting to develop the module. Theoretically, on the first initialization it should be possible to enter the unseal keys manually.
If after testing it turns out that this is not possible, then in addition to SSH you can also reach the nodes via public access, but this is not very secure since the traffic will not be encrypted. It is also possible to use another Vault for auto-unseal (the transit seal).
May I ask why you needed to turn off autounseal?
Thanks for the insight! The main reason I was asking was that before I put Vault into production use, I wanted to test whether I could successfully do cluster backups and restores. Using autounseal, the module creates its own KMS key, and running terraform destroy removes that key along with the rest of the infrastructure. If I understand correctly, with that KMS key gone, the backed-up data could no longer be recovered. If so, do you know how I could implement using an existing AWS KMS key with the module?
Thanks for reporting!
After terraform destroy, the KMS key (created inside the module) will be marked for deletion but can still be restored for a period of time (10 days in the module configuration; it may be worth adding an option to make this value configurable). So you have a chance to restore data from Raft snapshots or EBS snapshots.
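For reference, a minimal sketch of the kind of key resource the module creates internally (the resource name and description here are assumptions, not the module's actual code); deletion_window_in_days is the standard AWS provider argument that controls the restore window mentioned above:

```hcl
# Hypothetical sketch of a module-managed unseal key; names are placeholders.
resource "aws_kms_key" "vault_unseal" {
  description = "Vault auto-unseal key"

  # How long AWS keeps the key recoverable after `terraform destroy`
  # schedules it for deletion. Valid values are 7-30 days; the module
  # reportedly uses 10, so exposing this as a variable would allow tuning it.
  deletion_window_in_days = 10
}
```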
For a cloud deployment with more than one node, auto-unseal seems like the way to go, then?
Exactly, and newer versions of Vault even support seal migration; see https://www.vaultproject.io/docs/concepts/seal#seal-migration and https://support.hashicorp.com/hc/en-us/articles/360002040848-Seal-Migration. It will require a little manual work, though. In theory it can be automated, but that is a rather difficult task and would take a lot of time. Still, it is quite interesting, and I will probably add it to the task list.
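As a rough illustration of what the migration involves (a sketch based on the linked docs, not this module's code): to move from manual Shamir unsealing to AWS KMS auto-unseal, you add a seal stanza to the Vault server config, restart the node, and replay the existing unseal keys with the -migrate flag. The region and key alias below are placeholders:

```hcl
# Sketch of a Vault server config stanza enabling AWS KMS auto-unseal.
seal "awskms" {
  region     = "us-east-1"          # placeholder region
  kms_key_id = "alias/vault-unseal" # placeholder key alias
}

# After restarting with this stanza, run `vault operator unseal -migrate`
# once per existing unseal key to complete the migration.
```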
I have added some configuration and an example in PR #62; after a quick test, it looks like everything is working as intended. Please test it from your side.
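For anyone landing here later, usage presumably looks something like the sketch below; the module source and the `kms_key_arn` input name are assumptions here, so check PR #62 for the real interface:

```hcl
# An externally managed key that survives `terraform destroy` of the cluster.
resource "aws_kms_key" "external_unseal" {
  description             = "Externally managed Vault unseal key"
  deletion_window_in_days = 30
}

# Hypothetical module call; `kms_key_arn` is an assumed input name.
module "vault" {
  source = "path/to/this/module" # placeholder source

  autounseal  = true
  kms_key_arn = aws_kms_key.external_unseal.arn
}
```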
I just tested #62, and it is working for me, thanks! It is nice to be able to use KMS keys instead of manual unsealing.
I was considering writing a Lambda script that would run on a set interval, take Raft backups, and store them in an encrypted S3 bucket. In your opinion, what is the best option for backups, EBS snapshots or Raft ones? Are there any pros or cons to either approach?
Also, if I understand correctly, the module stores the cluster data on a separate EBS volume. So if I had snapshots and needed to restore them, I would take the snapshots, create new volumes from them, detach the existing volumes from the EC2 instances, and attach the volumes created from the snapshots to the EC2 instances?
Thanks, nice to hear that!
I was thinking about automatic snapshots to an S3 bucket from the very beginning of the module's development, but the main problem is that this requires creating a system account and policy, which in turn requires the Vault to be initialized. So far there is no mechanism for pre-provisioning such a system user (with rights only to create snapshots; see the policy sketch below). Raft snapshots stored in an S3 bucket are preferable in terms of cost and space savings, but require post-configuration (as I described before). But there are never too many backups 🙂
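A minimal sketch of the snapshot-only Vault policy described above; the path is Vault's standard Raft snapshot endpoint, while the file name is arbitrary:

```hcl
# snapshot-only.hcl — grants just enough to take Raft snapshots.
# A token with only this policy can run `vault operator raft snapshot save`
# but nothing else.
path "sys/storage/raft/snapshot" {
  capabilities = ["read"]
}
```

The remaining gap, as noted above, is that this policy and a token for it can only be created after the Vault has been initialized.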
Yes, absolutely. A separate EBS volume is needed to preserve the state of the cluster and its data even through a complete re-creation (when updating the Vault version, for example).
Yes, that's right. That is one of the possible options (though I have not tested it). There are many ways to recover the data (as long as you have it). For example, you can also launch an instance, attach an external EBS volume, SSH into the instance, and copy the data over to a new cluster. Or you can create a separate single-node cluster, specify the EBS snapshot as the source of its data volume, and then take a Raft snapshot manually using the Vault UI or CLI.
By the way, this is a good candidate for a separate enhancement issue! I can investigate this method, and it may be possible to completely automate the process, since Terraform has a snapshot_id option for EBS volumes.
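A sketch of what that automation could build on, assuming nothing about the module's internals; the resource names, availability zone, snapshot ID, and the referenced instance are placeholders:

```hcl
# Restore the Vault data volume from an existing EBS snapshot.
resource "aws_ebs_volume" "vault_data" {
  availability_zone = "us-east-1a"              # placeholder AZ
  snapshot_id       = "snap-0123456789abcdef0"  # snapshot to restore from
  type              = "gp3"
}

# Attach the restored volume to a (hypothetical) Vault instance.
resource "aws_volume_attachment" "vault_data" {
  device_name = "/dev/xvdf"
  volume_id   = aws_ebs_volume.vault_data.id
  instance_id = aws_instance.vault.id # assumed instance resource
}
```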