Determine how we migrate large instances from CouchDB 1.x to CouchDB 2.0 #3257
In terms of logging people out: CouchDB signs session cookies with HMAC, so while we need to test this, I think that if we migrate the secret key then 1.x sessions will remain valid on 2.0.
Cool! If you set the same couchdb secret it looks like you can share cookies.
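A minimal sketch of that check, with hypothetical local ports and credentials. Note that the cookie HMAC also incorporates the user's salt, so the relevant `_users` docs need to match across instances for the cookie to validate:

```python
import requests

OLD = "http://localhost:5984"  # CouchDB 1.x (hypothetical address)
NEW = "http://localhost:5985"  # CouchDB 2.x with the same couch_httpd_auth secret

# Log in against the 1.x instance to obtain an AuthSession cookie.
login = requests.post(f"{OLD}/_session",
                      data={"name": "admin", "password": "pass"})
cookie = login.cookies["AuthSession"]

# Present the same cookie to the 2.x instance; with a shared secret
# (and matching user salt in _users) it should validate the HMAC.
check = requests.get(f"{NEW}/_session", cookies={"AuthSession": cookie})
print(check.json()["userCtx"])  # expect name == "admin"
```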
@browndav so what do you think about this rough plan:
The main issue here is that we would still be offline while migrations ran. Unfortunately, to solve this we would need to solve the continual-migrations problem, and forward-port any migrations that need it into that system: #2012
Oh, to be clear about where this will leave different users:
And of course, while I've tested the simplest version of this, we'd want to actually test this with a real user and our app.
This sounds right to me; thanks for putting it together. Just to make sure I'm understanding this correctly: does the safety of this approach depend on the fact that neither API nor Sentinel is running on the destination CouchDB 2.x instance (i.e. no writes other than replication are occurring)?
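For concreteness, a sketch of that replication-only write path, with hypothetical hostnames and credentials (a real migration would presumably be scripted more carefully):

```python
import requests

SRC = "http://old-couch:5984"             # CouchDB 1.x source (hypothetical)
TGT = "http://admin:pass@new-couch:5984"  # CouchDB 2.x target (hypothetical)

# Pull every database across; the replicator is the only writer on the target.
for db in requests.get(f"{SRC}/_all_dbs").json():
    requests.put(f"{TGT}/{db}")  # ensure the target db exists (412 if it does)
    requests.post(f"{TGT}/_replicate",
                  json={"source": f"{SRC}/{db}", "target": db})
```

A continuous replication (`"continuous": true`) would keep the target in sync until cut-over, shrinking the final downtime window.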
One other note (and we can build a checklist for this, so no big deal): we can swap Elastic IPs to avoid any DNS TTL issues. Not a huge win, but still decreased downtime.
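A sketch of that cut-over step with boto3, using hypothetical resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Re-point the production Elastic IP at the new CouchDB 2.x instance.
# Reassociation takes effect almost immediately, so no DNS TTL to wait out.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",  # hypothetical EIP allocation
    InstanceId="i-0123456789abcdef0",           # hypothetical new instance
    AllowReassociation=True,
)
```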
I honestly don't know. At least for API you'd need to wait until you had all the data before upgrading to the newer API and letting all the migrations run. Apart from that, there is simply no need for them to be running at this stage, so better safe than sorry.
Thanks @SCdF for putting this together! Partners will be VERY thankful if we can make this work.
TODO:
Tested this locally, by:
@browndav what should the next steps here be? Do you want to test this (I can help) in an AWS environment with multiple servers? Presumably this would also be a good point to upgrade their MedicOS? Would you like me to write up a document about how to do this, or is proving it enough?
More additional information: Fauxton shows a warning if >50% of the documents in your DB are deletes, because in large DBs this can cause performance problems. Since our migration will force clients to re-replicate, they would pull all of those deletes again. We could manually work this out for larger clients by walking their changes feed and counting how many entries are deletes (sketch below).
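A minimal sketch of that count, assuming a hypothetical database URL (a real run over a large DB would page the feed with `since`/`limit`):

```python
import requests

DB = "http://localhost:5984/medic"  # hypothetical instance and db name

# Walk the changes feed and count delete tombstones vs. total entries.
deleted = total = 0
for row in requests.get(f"{DB}/_changes").json()["results"]:
    total += 1
    if row.get("deleted"):
        deleted += 1

print(f"{deleted}/{total} changes are deletes ({deleted / max(total, 1):.0%})")
```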
Since we did a lot of deleting of training data in the early stages of the LG project, this seems like something we should consider.
CouchDB2 is not prioritised for this milestone - removing. |
NB: I think we'll have to increase the maximum request size setting; otherwise large requests fail with:

```json
{
  "error": "too_large",
  "reason": "the request entity is too large",
  "name": "too_large",
  "status": 413,
  "message": "the request entity is too large"
}
```
It turns out the CouchDB default value is still only 64MB. I've raised an issue with CouchDB to clarify, and @browndav increased our default in medic-os to 128MiB.
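For reference, a sketch of bumping that limit over the config API, assuming the setting in question is `httpd/max_http_request_size` (an assumption based on the 64MB default mentioned above; the exact key and endpoint vary across CouchDB versions, and on 2.x config is set per node):

```python
import requests

NODE = "http://admin:pass@localhost:5984"  # hypothetical admin URL

# Assumed key: max_http_request_size. On CouchDB 2.0 the config API lives
# on the node-local port (5986); later 2.x exposes /_node/{name}/_config.
requests.put(
    f"{NODE}/_config/httpd/max_http_request_size",
    json=str(128 * 1024 * 1024),  # "134217728" == 128 MiB
)
```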
You can now install, stage, and complete a staged install from the command line. Running horti daemonless now actually makes sense: it performs the given action and then stops. This change allows for easier migrations from 2.x to 3.0, because once you've replicated data over to the new instance you can run horti with:

horti --stage=3.0.0 --no-daemon

to pre-prepare the instance as much as possible for the deploy. Once you're ready to make the switch you can run:

horti --complete-install --no-daemon

And once that is done, run horti as you would normally via supervisor.d (or just run horti --complete-install and have the daemon run from there). medic/cht-core#3257
Apart from the horti PR linked above, I don't think there is any more work to do here. @browndav can you confirm, so we can close this ticket?
This looks to be in good shape. Ready for release and testing with an actual project.
We have some large instances, and it's going to be a pain to migrate them to CouchDB 2.0, primarily because it will force all long-term sessions to be logged out (i.e. all CHWs will have to log back in).
We should look into how we can get around this and, if we definitely can't, into strategies for migrating people over slowly (i.e. running both at the same time).