Two Step Snapshot Delete #54705
Closed
original-brownbear wants to merge 54 commits into elastic:master from original-brownbear:two-step-snapshot-delete
Conversation
Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request on Apr 7, 2020:
We only have very indirect coverage of master failovers during snapshot delete at the moment. This commit adds a direct test of this scenario and also an assertion that makes sure we are not leaking any snapshot completion listeners in the snapshots service in this scenario. This gives us better coverage of scenarios like elastic#54256 and makes the diff to the upcoming more consistent snapshot delete implementation in elastic#54705 smaller.
original-brownbear added a commit that referenced this pull request on Apr 8, 2020:
* Add Snapshot Resiliency Test for Master Failover during Delete
We only have very indirect coverage of master failovers during snapshot delete at the moment. This commit adds a direct test of this scenario and also an assertion that makes sure we are not leaking any snapshot completion listeners in the snapshots service in this scenario. This gives us better coverage of scenarios like #54256 and makes the diff to the upcoming more consistent snapshot delete implementation in #54705 smaller.
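The leak assertion described in this commit message can be pictured with a small, self-contained sketch. The class and method names below are hypothetical stand-ins, not the actual SnapshotsService or SnapshotResiliencyTests code; the point is only that every registered completion listener must be drained by the time the failover scenario finishes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the per-snapshot completion listeners a snapshots service keeps.
public class CompletionListenerLeakCheck {

    private final Map<String, List<Runnable>> completionListeners = new ConcurrentHashMap<>();

    void addListener(String snapshotName, Runnable listener) {
        completionListeners.computeIfAbsent(snapshotName, k -> new ArrayList<>()).add(listener);
    }

    // When a snapshot completes (or a new master re-drives it after failover),
    // the listeners registered for it must be removed and notified.
    void completeSnapshot(String snapshotName) {
        List<Runnable> listeners = completionListeners.remove(snapshotName);
        if (listeners != null) {
            listeners.forEach(Runnable::run);
        }
    }

    // The kind of assertion the commit message describes: nothing may be left behind
    // after the master-failover-during-delete scenario has played out.
    void assertNoLeakedListeners() {
        if (completionListeners.isEmpty() == false) {
            throw new AssertionError("leaked snapshot completion listeners for " + completionListeners.keySet());
        }
    }
}
```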
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request on Apr 15, 2020:
Snapshot deletes should first check the cluster state for an in-progress snapshot and try to abort it before checking the repository contents. This allows for atomically checking and aborting a snapshot in the same cluster state update, removing all possible races where a snapshot that is in-progress could not be found if it finishes between checking the repository contents and the cluster state. Also removes confusing races, where checking the cluster state off of the cluster state thread finds an in-progress snapshot that is then not found in the cluster state update to abort it. Finally, the logic to use the repository generation of the in-progress snapshot + 1 was error-prone because it would always fail the delete when the repository had a pending generation different from its safe generation when a snapshot started (leading to the snapshot finalizing at a higher generation).

These issues (particularly that last point) can easily be reproduced by running `SLMSnapshotBlockingIntegTests` in a loop with current `master` (see elastic#54766).

The snapshot resiliency test for concurrent snapshot creation and deletion was made to more aggressively start the delete operation so that the above races would become visible. Previously, the fact that deletes would never coincide with initializing snapshots resulted in a number of the above races not reproducing.

This PR is the most consistent I could get snapshot deletes without changes to the state machine. The fact that aborted deletes will not put the delete operation in the cluster state before waiting for the snapshot to abort still allows for some possible (though practically very unlikely) races. These will be fixed by a state-machine change in upcoming work in elastic#54705 (which will have a much simpler and clearer diff after this change).

Closes elastic#54766
original-brownbear added a commit that referenced this pull request on Apr 15, 2020:
Snapshot deletes should first check the cluster state for an in-progress snapshot and try to abort it before checking the repository contents. This allows for atomically checking and aborting a snapshot in the same cluster state update, removing all possible races where a snapshot that is in-progress could not be found if it finishes between checking the repository contents and the cluster state. Also removes confusing races, where checking the cluster state off of the cluster state thread finds an in-progress snapshot that is then not found in the cluster state update to abort it. Finally, the logic to use the repository generation of the in-progress snapshot + 1 was error-prone because it would always fail the delete when the repository had a pending generation different from its safe generation when a snapshot started (leading to the snapshot finalizing at a higher generation).

These issues (particularly that last point) can easily be reproduced by running `SLMSnapshotBlockingIntegTests` in a loop with current `master` (see #54766).

The snapshot resiliency test for concurrent snapshot creation and deletion was made to more aggressively start the delete operation so that the above races would become visible. Previously, the fact that deletes would never coincide with initializing snapshots resulted in a number of the above races not reproducing.

This PR is the most consistent I could get snapshot deletes without changes to the state machine. The fact that aborted deletes will not put the delete operation in the cluster state before waiting for the snapshot to abort still allows for some possible (though practically very unlikely) races. These will be fixed by a state-machine change in upcoming work in #54705 (which will have a much simpler and clearer diff after this change).

Closes #54766
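The "check the cluster state first" idea in this commit message can be illustrated with a small, self-contained sketch. The types and method below (`ClusterState`, `SnapshotEntry`, `checkAndAbort`) are simplified hypothetical stand-ins, not the real Elasticsearch classes; what matters is that finding and aborting the in-progress snapshot happens inside a single cluster state update, so the snapshot cannot finish between the two checks.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch, assuming simplified stand-in types instead of SnapshotsInProgress etc.
final class ClusterStateFirstDeleteSketch {

    enum State { STARTED, ABORTED }

    record SnapshotEntry(String name, State state, long repositoryGeneration) {}

    record ClusterState(List<SnapshotEntry> snapshotsInProgress) {}

    /**
     * One cluster state update: if the snapshot to delete is currently running, mark it
     * aborted in that same update instead of consulting the repository contents first.
     * The caller would only go on to read the repository data when no in-progress
     * snapshot was found here.
     */
    static ClusterState checkAndAbort(ClusterState current, String snapshotToDelete) {
        List<SnapshotEntry> updated = new ArrayList<>();
        boolean aborted = false;
        for (SnapshotEntry entry : current.snapshotsInProgress()) {
            if (entry.name().equals(snapshotToDelete) && entry.state() == State.STARTED) {
                // abort atomically in the same update; no race with the snapshot finishing in between
                updated.add(new SnapshotEntry(entry.name(), State.ABORTED, entry.repositoryGeneration()));
                aborted = true;
            } else {
                updated.add(entry);
            }
        }
        // unchanged state means "not in progress": fall back to reading the repository data
        return aborted ? new ClusterState(updated) : current;
    }
}
```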
Will reopen this in a cleaner format shortly
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request on Apr 20, 2020:
…ic#54866)
* Add Snapshot Resiliency Test for Master Failover during Delete
We only have very indirect coverage of master failovers during snapshot delete at the moment. This commit adds a direct test of this scenario and also an assertion that makes sure we are not leaking any snapshot completion listeners in the snapshots service in this scenario. This gives us better coverage of scenarios like elastic#54256 and makes the diff to the upcoming more consistent snapshot delete implementation in elastic#54705 smaller.
original-brownbear added a commit that referenced this pull request on Apr 20, 2020:
… (#55456)
* Add Snapshot Resiliency Test for Master Failover during Delete
We only have very indirect coverage of master failovers during snapshot delete at the moment. This commit adds a direct test of this scenario and also an assertion that makes sure we are not leaking any snapshot completion listeners in the snapshots service in this scenario. This gives us better coverage of scenarios like #54256 and makes the diff to the upcoming more consistent snapshot delete implementation in #54705 smaller.
Improves the state machine for snapshot deletion to remove race conditions around concurrent snapshot create + delete.
Prior to this change, a delete would first check the repository contents for the snapshot to delete and only then check the cluster state for an in-progress snapshot to abort.
This comes with at least two possible problems:
a. If a snapshot finished right after checking the repository contents and before checking the cluster state for an in-progress snapshot, then the delete will fail with a `404` even though it was sent after the snapshot was started.
b. If an abort finishes and the user concurrently initiates another snapshot, then the retry of the delete might happen after the new snapshot creation is put in the cluster state and will fail because a snapshot is now in progress again.
Currently, both of these are just minor annoyances that led to some test failures that required workarounds. Problem a. is definitely bad UX though. With concurrent snapshot operations incoming, these kinds of issues are more relevant since they prevent clean ordering of operations if deletes and snapshot creates are allowed to be executed in parallel.
This change adds another step to the delete process and changes the way the delete job is serialized in the cluster state.
Now, the delete always begins with a cluster state update. If that update finds an in-progress snapshot, it aborts it right away and already adds the delete job for that snapshot to the cluster state. If no in-progress snapshot is found, the delete is added to the cluster state with an unknown repository state id and an unknown `SnapshotId`; the repository data is then loaded after the placeholder delete has been added to the cluster state, and the placeholder is updated with the correct repository generation and `SnapshotId` found in the repository data. In the case of an abort, we again wait for the aborted snapshot to finish, then execute the delete.
Since there is a delete in the cluster state, no new snapshot can be started concurrently to mess up the delete operation.
Also, if no in-progress snapshot is found initially then the addition of the delete operation to the cluster state prevents another snapshot from being started concurrently and we get clean ordering of operations in this case as well.
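As a rough illustration of the two-step flow described above, the sketch below uses simplified stand-in types; `DeleteEntry`, `RepositoryData` and the `UNKNOWN_*` constants are hypothetical, not the actual cluster state classes this PR changes. The delete first enters the cluster state as a placeholder with unknown ids and is only resolved to a concrete `SnapshotId` and repository generation once the repository data has been loaded.

```java
import java.util.Map;

// Minimal sketch of "placeholder first, resolve later", under simplified assumptions.
final class DeletePlaceholderSketch {

    static final long UNKNOWN_GENERATION = -1L;
    static final String UNKNOWN_SNAPSHOT_ID = "_unknown_";

    record DeleteEntry(String snapshotName, String snapshotId, long repositoryGeneration) {}

    record RepositoryData(long generation, Map<String, String> snapshotIdsByName) {}

    // Step 1: always enqueue the delete in the cluster state first. If no in-progress
    // snapshot matched, the placeholder carries unknown ids; its mere presence prevents
    // a new snapshot from starting underneath the delete.
    static DeleteEntry placeholder(String snapshotName) {
        return new DeleteEntry(snapshotName, UNKNOWN_SNAPSHOT_ID, UNKNOWN_GENERATION);
    }

    // Step 2: once the repository data has been loaded, resolve the placeholder to the
    // concrete snapshot id and repository generation before executing the delete.
    static DeleteEntry resolve(DeleteEntry placeholder, RepositoryData repositoryData) {
        String snapshotId = repositoryData.snapshotIdsByName().get(placeholder.snapshotName());
        if (snapshotId == null) {
            throw new IllegalStateException("snapshot [" + placeholder.snapshotName() + "] not found in repository");
        }
        return new DeleteEntry(placeholder.snapshotName(), snapshotId, repositoryData.generation());
    }
}
```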
=> Only checking the cluster state contents for in-progress operations during cluster state update jobs and putting a placeholder for the delete, similar to the snapshot create `INIT` stage, resolves all of the above races.
=> The serialization change to the snapshot-in-progress entries sets up the basis for deleting multiple snapshots at once for #49800: without another cluster state serialization change, the name in the delete placeholder can be resolved to multiple snapshot ids (a rough sketch of this resolution follows the next point), which is all that is left to make #49800 behave nicely. Currently that PR is blocked on the fact that the races above make multi-delete behave strangely when patterns match both in-progress and existing snapshots.
=> The clean ordering of operations from this change gets us another big step closer to having safe concurrent repository operations.
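For the multi-delete follow-up mentioned above, resolving the name (or pattern) stored in the delete placeholder to several snapshot ids could look roughly like the following sketch. The method name and the simple `*` glob handling are illustrative assumptions, not the actual #49800 implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch: resolve the placeholder's name or '*' pattern to all matching snapshot ids.
final class MultiDeleteResolutionSketch {

    static List<String> resolveSnapshotIds(String nameOrPattern, Map<String, String> snapshotIdsByName) {
        // Simplification: only '*' is treated as a wildcard; other regex metacharacters are not escaped.
        Pattern glob = Pattern.compile(nameOrPattern.replace("*", ".*"));
        return snapshotIdsByName.entrySet().stream()
            .filter(e -> glob.matcher(e.getKey()).matches())
            .map(Map.Entry::getValue)
            .collect(Collectors.toList());
    }
}
```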
WIP: need to clean up the code a little but would like some CI runs