
Call Refresh API to satisfy ?refresh in Stateless #93160

Merged (12 commits) on Jan 24, 2023

Conversation

pxsalehi (Member) commented on Jan 23, 2023

This is a workaround that adds support for the IMMEDIATE and WAIT_UNTIL refresh policies in Stateless by calling the Refresh API after the write completes.

Relates ES-5292
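
For context, an illustrative sketch (index name, id and fields are made up) of a bulk request whose refresh policy the stateless path now has to honor:

// Illustrative only: a bulk request carrying a refresh policy that stateless mode
// now satisfies by running an explicit refresh before notifying the caller.
BulkRequest bulk = new BulkRequest()
    .add(new IndexRequest("my-index").id("1").source(Map.of("field", "value")));
bulk.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // or IMMEDIATE
client.bulk(bulk, ActionListener.wrap(
    response -> { /* documents are visible to search once this fires */ },
    e -> { /* handle failure */ }));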

@pxsalehi added the >non-issue and :Distributed Indexing/CRUD labels on Jan 23, 2023
DaveCTurner (Contributor) commented:

The failure is #93142; @elasticmachine please run elasticsearch-ci/part-2

pxsalehi (Member, Author) commented:

I'm writing a test for this in the Stateless module. It seems that with WAIT_UNTIL it gets stuck. I'm looking into that.
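
As a rough illustration of the hang (a hypothetical test sketch, not the actual Stateless test): a WAIT_UNTIL index call blocks until a refresh makes the document visible, which never happens without refresh-listener callbacks.

// Hypothetical sketch, not the actual test: index with WAIT_UNTIL in a stateless cluster.
// Without refresh-listener support, the get() below never returns.
IndexResponse response = client().prepareIndex("test")
    .setId("1")
    .setSource("field", "value")
    .setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL)
    .get(); // blocks until the document becomes visible to search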

ActionListener<BulkResponse> listener = outerListener;
if (DiscoveryNode.isStateless(clusterService.getSettings()) && bulkRequest.getRefreshPolicy() != WriteRequest.RefreshPolicy.NONE) {
    // Run an explicit refresh once the bulk completes, and only then notify the caller.
    listener = outerListener.delegateFailure(
        (l, r) -> client.admin().indices().prepareRefresh().execute(l.map(ignored -> r)));
    bulkRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE);
}
pxsalehi (Member, Author) commented on Jan 24, 2023:

I think for WAIT_UNTIL to work properly, we'd also need to integrate the refresh-listeners mechanism in Stateless, since currently those listeners never get called back. Since we're going with workarounds here to keep this short, I've just replaced the policy, given that we know we're calling a refresh afterwards anyway.

@pxsalehi pxsalehi marked this pull request as ready for review January 24, 2023 09:40
@pxsalehi pxsalehi requested a review from DaveCTurner January 24, 2023 09:40
@elasticsearchmachine added the Team:Distributed label on Jan 24, 2023
elasticsearchmachine (Collaborator) commented:
Pinging @elastic/es-distributed (Team:Distributed)

DaveCTurner (Contributor) left a review:
LGTM apart from the spurious @AwaitsFix

pxsalehi (Member, Author) commented:
Thanks, David!

pxsalehi added a commit that referenced this pull request Jan 31, 2023
…93383)

In #93160, we never set the forced_refresh flag in the response. With
this change, the bulk response now correctly reflects what happened. It
also unblocks a bunch of YAML tests for Stateless.

Relates ES-5292
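
A minimal sketch of what setting that flag could look like, building on the delegating listener above (an assumption for illustration, not necessarily the exact change in #93383):

// Sketch: after the explicit refresh succeeds, mark each successful item as
// force-refreshed so the bulk response's forced_refresh flag is accurate.
listener = outerListener.delegateFailure((l, bulkResponse) ->
    client.admin().indices().prepareRefresh().execute(l.map(ignored -> {
        for (BulkItemResponse item : bulkResponse.getItems()) {
            if (item.isFailed() == false) {
                item.getResponse().setForcedRefresh(true);
            }
        }
        return bulkResponse;
    })));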
mark-vieira pushed a commit to mark-vieira/elasticsearch that referenced this pull request Jan 31, 2023
tlrx added a commit to tlrx/elasticsearch that referenced this pull request Feb 1, 2023
tlrx added a commit that referenced this pull request Feb 1, 2023
Since we know which indices were involved in the bulk request, we can refresh only those instead of all indices, and expand to hidden indices so that they are also refreshed.

Relates #93160
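
A rough sketch of that targeted refresh (variable names are assumptions, not copied from the commit): collect the indices named by the bulk items and refresh only those, with indices options that also expand to hidden indices.

// Sketch: refresh only the indices touched by the bulk request, including hidden indices.
Set<String> indices = bulkRequest.requests().stream()
    .map(DocWriteRequest::index)
    .collect(Collectors.toSet());
RefreshRequest refreshRequest = new RefreshRequest(indices.toArray(String[]::new));
refreshRequest.indicesOptions(IndicesOptions.lenientExpandHidden());
client.admin().indices().refresh(refreshRequest, l.map(ignored -> r)); // as in the delegating listener above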
Labels: >non-issue, :Distributed Indexing/CRUD, Team:Distributed (Obsolete), v8.7.0
3 participants