Two Step Snapshot Delete #54705

Conversation

original-brownbear (Member)

Improves the state machine for snapshot deletion to remove race conditions around concurrent snapshot create + delete.
Prior to this change, a delete proceeded as follows (sketched in the code below):

  1. The delete operation checks whether the repository contains the snapshot to delete.
  2. If it does, the delete operation is put into the cluster state. If it does not, the cluster state is checked for an in-progress snapshot of that name, which is then aborted.
  3. The delete either waits for the aborted snapshot to finish and then executes step 1 again, or, once the delete has been successfully added to the cluster state, physically deletes the snapshot from the repository.
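In code, the old flow looks roughly like the following sketch. All method names here are hypothetical stand-ins rather than the actual SnapshotsService API; the key point it illustrates is that the repository check and the cluster-state check are two separate, non-atomic steps:

```java
// Hypothetical sketch of the pre-change delete flow, not the real implementation.
void deleteSnapshot(String repoName, String snapshotName) {
    // Step 1: check the repository contents (a blob store read, off the cluster state thread).
    RepositoryData repoData = loadRepositoryData(repoName);
    SnapshotId snapshotId = repoData.findByName(snapshotName);
    if (snapshotId != null) {
        // Step 2: found in the repository -> put the delete operation into the cluster state.
        addDeleteToClusterState(repoName, snapshotId);
        // Step 3: physically delete the snapshot from the repository.
        deleteFromRepository(repoName, snapshotId);
    } else {
        // Step 2: not in the repository -> look for an in-progress snapshot to abort.
        // Problem a.: if the snapshot completed between step 1 and this check, it is
        // found in neither place and the delete fails with a 404.
        SnapshotsInProgress.Entry inProgress = findInProgressInClusterState(snapshotName);
        if (inProgress == null) {
            throw new SnapshotMissingException(snapshotName);
        }
        abort(inProgress);
        // Step 3: retry from step 1 once the abort completes.
        // Problem b.: a newly created snapshot can be put into the cluster state
        // before this retry runs, failing the delete again.
        onAbortCompleted(inProgress, () -> deleteSnapshot(repoName, snapshotName));
    }
}
```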

This comes with at least two possible problems:
a. If a snapshot finishes right after the repository contents are checked but before the cluster state is checked for an in-progress snapshot, the delete fails with a 404 even though it was sent after the snapshot was started.
b. If an abort finishes and the user concurrently initiates another snapshot, the retry of the delete might run after the new snapshot creation has been put into the cluster state, and will then fail because a snapshot is once again in progress.

Currently, both of these are just minor annoyances that led to some test failures requiring workarounds, though problem a. is definitely bad UX. With concurrent snapshot operations incoming, these kinds of issues become more relevant, since they prevent a clean ordering of operations once deletes and snapshot creates are allowed to execute in parallel.

This change adds another step to the delete process and changes the way the delete job is serialized in the cluster state.

Now, the delete always begins with a cluster state update (see the sketch below). If that update finds an in-progress snapshot, it aborts it right away and, in the same update, adds the delete job for that snapshot to the cluster state. If no in-progress snapshot is found, the delete is added to the cluster state as a placeholder with an unknown repository state id and an unknown SnapshotId; the repository data is then loaded, and the placeholder is updated with the correct repository generation and the SnapshotId found in the repository data.
In the abort case, we again wait for the aborted snapshot to finish and then execute the delete. Since there is a delete in the cluster state, no new snapshot can be started concurrently to interfere with the delete operation.
Likewise, if no in-progress snapshot is found initially, the delete operation added to the cluster state prevents another snapshot from being started concurrently, so we get a clean ordering of operations in this case as well.
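A sketch of the new first step, again with hypothetical method names (in reality this logic lives in a SnapshotsService cluster state update task, and the sentinel value below is illustrative only):

```java
// Hypothetical sketch of the new delete flow. Everything below runs inside a
// single cluster state update task, so it cannot race with snapshot creation.
ClusterState startDelete(ClusterState currentState, String repoName, String snapshotName) {
    SnapshotsInProgress.Entry inProgress = findInProgress(currentState, snapshotName);
    if (inProgress != null) {
        // Abort the running snapshot and add the delete job for its known
        // SnapshotId in the same cluster state update.
        return withAbortedSnapshotAndDeleteEntry(currentState, inProgress);
    }
    // No in-progress snapshot: add a placeholder delete with an unknown repository
    // state id and an unknown SnapshotId. The placeholder already blocks concurrent
    // snapshot creation; a follow-up cluster state update fills in the repository
    // generation and SnapshotId once the repository data has been loaded off the
    // cluster state thread.
    final long unknownRepoStateId = -1L; // illustrative sentinel, not the real constant
    return withDeletePlaceholder(currentState, repoName, snapshotName, unknownRepoStateId);
}
```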

=> Checking only the cluster state contents for in-progress operations, doing so during cluster state update tasks, and putting a placeholder for the delete (similar to the snapshot-create INIT stage) resolves all of the above races.
=> The serialization change to the snapshots-in-progress entries sets up the basis for deleting multiple snapshots at once for #49800: without another cluster state serialization change, we can make the name in the delete placeholder resolve to multiple snapshot ids, which is all that's left to make #49800 behave nicely. Currently that PR is blocked on the fact that the above races make multi-delete behave strangely when patterns match both in-progress and existing snapshots.
=> The clean ordering of operations from this change gets us another big step closer to having safe concurrent repository operations.

WIP: need to clean up the code a little, but would like some CI runs

original-brownbear added the WIP and :Distributed Coordination/Snapshot/Restore (Anything directly related to the `_snapshot/*` APIs) labels Apr 3, 2020
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)

original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Apr 7, 2020
We only have very indirect coverage of master failovers during snapshot deletes
at the moment. This commit adds a direct test of this scenario, as well as
an assertion that makes sure we are not leaking any snapshot completion listeners
in the snapshots service in this scenario.

This gives us better coverage of scenarios like elastic#54256 and makes the diff
to the upcoming more consistent snapshot delete implementation in elastic#54705
smaller.
original-brownbear added a commit that referenced this pull request Apr 8, 2020
* Add Snapshot Resiliency Test for Master Failover during Delete

original-brownbear marked this pull request as draft April 9, 2020 11:36
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Apr 15, 2020
Snapshot deletes should first check the cluster state for an in-progress snapshot
and try to abort it before checking the repository contents. This allows for atomically
checking for and aborting a snapshot in the same cluster state update, removing all possible
races where an in-progress snapshot could not be found because it finished between
the check of the repository contents and the check of the cluster state.
This also removes confusing races where checking the cluster state off of the cluster state
thread finds an in-progress snapshot that is then not found by the cluster state update
that tries to abort it.
Finally, the logic of using the repository generation of the in-progress snapshot + 1 was
error prone because it would always fail the delete whenever the repository had a pending
generation different from its safe generation at the time a snapshot started (leading to
the snapshot finalizing at a higher generation).

These issues (particularly that last point) can easily be reproduced by running
`SLMSnapshotBlockingIntegTests` in a loop on current `master` (see elastic#54766).

The snapshot resiliency test for concurrent snapshot creation and deletion was changed
to start the delete operation more aggressively so that the above races become visible.
Previously, the fact that deletes would never coincide with initializing snapshots meant
that a number of the above races did not reproduce.

This PR is the most consistent I could get snapshot deletes without changes to the state
machine. The fact that aborted deletes do not put the delete operation into the cluster
state before waiting for the snapshot to abort still allows for some possible (though
practically very unlikely) races. These will be fixed by a state-machine change in the
upcoming work in elastic#54705 (which will have a much simpler and clearer diff after
this change).

Closes elastic#54766
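To make the generation problem above concrete, here is a tiny worked example with illustrative numbers (the variable names are hypothetical, not taken from the Elasticsearch code):

```java
// Illustrative only: hypothetical names and numbers, not the Elasticsearch API.
long safeGeneration = 7;     // last repository generation known to be safely written
long pendingGeneration = 9;  // repository already has writes queued past the safe gen

// The old delete logic expected an in-progress snapshot to finalize at the
// safe generation + 1 ...
long generationExpectedByDelete = safeGeneration + 1;       // 8

// ... but with a pending generation ahead of the safe one, the snapshot
// finalizes at a higher generation:
long generationAfterFinalization = pendingGeneration + 1;   // 10, illustrative

// The mismatch made the delete fail whenever the safe and pending generations
// differed at the time the snapshot started.
assert generationExpectedByDelete != generationAfterFinalization;
```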
original-brownbear added a commit that referenced this pull request Apr 15, 2020
@original-brownbear (Member, Author)

Will reopen this in a cleaner format shortly

original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Apr 20, 2020
…ic#54866)

* Add Snapshot Resiliency Test for Master Failover during Delete

original-brownbear added a commit that referenced this pull request Apr 20, 2020
… (#55456)

* Add Snapshot Resiliency Test for Master Failover during Delete
