This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

Previous action was not able to update IndexMetaData #116

Open
arnitolog opened this issue Nov 26, 2019 · 15 comments
Labels
enhancement (An improvement on the existing feature’s functionalities)
maintenance (improves code quality, but not the product)

Comments

@arnitolog

Hello, I noticed that several indices have the status Failed with the error: "Previous action was not able to update IndexMetaData". I think it happens after data node restarts, but I'm not sure.
Is there any way to configure an automatic retry for such errors?
My policy is below:
{ "policy": { "policy_id": "ingest_policy", "description": "Default policy", "last_updated_time": 1574686046552, "schema_version": 1, "error_notification": null, "default_state": "ingest", "states": [ { "name": "ingest", "actions": [], "transitions": [ { "state_name": "search", "conditions": { "min_index_age": "4d" } } ] }, { "name": "search", "actions": [ { "timeout": "2h", "retry": { "count": 5, "backoff": "constant", "delay": "1h" }, "force_merge": {"max_num_segments": 1 } } ], "transitions": [ { "state_name": "delete", "conditions": {"min_index_age": "30d"} } ] }, { "name": "delete", "actions": [ { "timeout": "2h", "retry": { "count": 5, "backoff": "constant", "delay": "1h" }, "delete": {} } ], "transitions": [] } ] } }

@dbbaughe
Contributor

Hi @arnitolog,

At which action or step is the error occurring?

That error is from:

if (managedIndexMetaData.stepMetaData?.stepStatus == Step.StepStatus.STARTING) {

This basically means that one of the executions attempted to "START" the step being executed, but was never able to finish it. This can happen if your data nodes restart in the middle of that execution window.

We currently don't have an automatic retry for this specific part, because we don't know whether the step finished, and if the step is non-idempotent we don't want to retry it. That's why we turn it over to the user to handle.

With that in mind, we could definitely add automatic retries for things that are idempotent/safe, to eliminate the majority of cases where this can happen (like checking conditions for transitioning, etc.).
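
For reference, the failing action and step for a managed index can be inspected with the ISM explain API (the index name below is just a placeholder):

# my-index-000001 is a placeholder index name
GET _opendistro/_ism/explain/my-index-000001

The response includes the managed index metadata: the current state, action, step, and step status.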

@arnitolog
Author

Hi @dbbaughe,
This can happen on different steps. I saw this error on the "ingest" step (which is the first one) and on the "search" step (which is the second).

It would be good to have some retry mechanism for such cases; the less manual work the better.

@dbbaughe dbbaughe added the enhancement and maintenance labels May 8, 2020
@dbbaughe
Contributor

dbbaughe commented May 8, 2020

Some improvements that have been added to help with this:

#165
#209

We have a few further ideas that we will track in:
#207

@dbbaughe dbbaughe closed this as completed May 8, 2020
@gittygoo

gittygoo commented Jun 30, 2020

This is still happening on the Open Distro 1.8.0 release.
Strangely enough, a lot of them just stay on "Running"/"Attempting to transition" as well.
(screenshot of the ISM managed indices page)

@dbbaughe
Contributor

Hey @gittygoo,

Are you using this plugin independently or with ODFE 1.8? What does your cluster setup look like?
Are the "Attempting to transition"/"Running" indices stuck even though the conditions are met? If so, what are those conditions?
Can you check whether your cluster's pending tasks are backed up: GET /_cluster/pending_tasks

Thanks

@gittygoo

gittygoo commented Jun 30, 2020

@dbbaughe it's an internal cluster with 2 nodes, using Open Distro 1.8.

The policy looks like this; it should rotate the indices daily until deletion... so yes, the conditions are met.

{
    "policy": {
        "policy_id": "default_ism_policy",
        "description": "Default policy",
        "last_updated_time": 1590706756863,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "warm",
                        "conditions": {
                            "min_index_age": "1d"
                        }
                    }
                ]
            },
            {
                "name": "warm",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "2d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "3d"
                        }
                    }
                ]
            },
            {
                "name": "delete",
                "actions": [
                    {
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ]
    }
}

Tasks are empty

{"tasks":[]}

@dbbaughe
Contributor

Hi @gittygoo,

A few things to check:

  • Can you do a GET <index>/_settings on one that should be rolled over, just so we can confirm the "index.creation_date"? (example calls below)
  • Can you also confirm your cluster is not red, as executions are skipped in that case?
  • Do you see any logs for the index in elasticsearch.log that would imply the job is actually running but just not evaluating the conditions to true? We're trying to narrow down whether the issue is in ISM or in Job Scheduler.
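
A rough sketch of the first two checks (the index name is just an example):

# replace with one of your managed indices
GET metricbeat-7.6.0-2020.06.23/_settings
# cluster status should not be red
GET _cluster/health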

Thanks

@gittygoo

So here is an example:

  • Index (metricbeat-7.6.0-2020.06.23) with creation_date set to 1592870678351 (22/06/2020 19:04:38)
  • Cluster is green
  • Can't see any related logs in the Elasticsearch log referring to this index

Anything else I should check?

@dbbaughe
Contributor

@gittygoo, you can try setting the log level to debug and see if any logs pop up. Otherwise, we can try to jumpstart the Job Scheduler and see if it starts working again. The Job Scheduler plugin will reschedule a job when either the job document is updated or the shard moves to a different node and needs to be rescheduled on the new node. So you can either manually move the .opendistro-ism-config index shards to a different node to force it, or manually update the managed_index documents in that index (probably something like changing enabled to false and back to true). Unfortunately, we don't have an API to forcefully reschedule jobs; that can be something we take as an action item to add.
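
A rough sketch of the shard-move approach, using the standard cluster reroute API (the node names are placeholders; adjust the shard number to your cluster):

# node-1 / node-2 are placeholder node names
POST _cluster/reroute
{
    "commands": [
        {
            "move": {
                "index": ".opendistro-ism-config",
                "shard": 0,
                "from_node": "node-1",
                "to_node": "node-2"
            }
        }
    ]
}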

@gittygoo

The way I connect the indices to the ISM policy is via index templates. So can I assume that removing all the current "Managed Indices" and then waiting 3 more days to see if the rotations went fine would achieve the same as your "jumpstart" idea, since the new indices would automatically be assigned that policy based on their names? If so, I will proceed to delete them and wait.

@dbbaughe
Contributor

If you remove the current policy_ids from the indices, it will delete the internal jobs (Managed Indices). Then you can try re-adding them to those indices and see if it goes through. Not sure I followed the "waiting 3 more days" part.
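
For reference, removing and re-adding a policy can be done with the ISM remove/add APIs (the index pattern and policy_id below are just examples):

# index pattern and policy_id are examples
POST _opendistro/_ism/remove/metricbeat-*

POST _opendistro/_ism/add/metricbeat-*
{
    "policy_id": "default_ism_policy"
}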

@OrangeTimes

We are experiencing the same issues

@dbbaughe
Contributor

dbbaughe commented Jul 1, 2020

Hi @OrangeTimes,

Is it the same issue as in "Previous action was not able to update IndexMetaData", or similar to gittygoo's, where the jobs don't appear to be running anymore?

Can you also give a bit more information about what your cluster setup looks like (ODFE vs Amazon ES, what version, # of nodes, etc.) and any more details about the issue you're experiencing?

@dbbaughe dbbaughe reopened this Jul 1, 2020
@OrangeTimes

@dbbaughe similar to gittygoo. Some indices are in the Active state and some in the Failed state. Our index management page looks pretty much the same.

@samling

samling commented Jul 7, 2020

Experiencing the same issue here, though possibly partly our own doing. We switched to ODFE last night and blanket-applied a policy to our existing indices, then very quickly decided to apply a different policy instead. This morning I checked the indices and about 90% of them show "Previous action was not able to update IndexMetaData", with the last action being Force Merge. I tried retrying the failed step, but that didn't work; now I'm trying to remove the policy altogether and reapply it to try and jog the index.

Edit: This didn't work either, nor did retrying the policy from a specified state. Any more suggestions to debug or jog things are appreciated, as we're now stuck with quite a lot of indices in this failed state.
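
For reference, the retry attempts mentioned above correspond to the ISM retry API, optionally with a starting state (the index name and state below are just examples):

# index name and state are examples
POST _opendistro/_ism/retry/logs-2020.07.07

POST _opendistro/_ism/retry/logs-2020.07.07
{
    "state": "warm"
}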

Here's a little more info on our setup:
ODFE v1.8.0
7 nodes (6 hot, 1 cold)
Our policy first transitions indices to the cold node in a warm state after 2 days, then to a cold state after either a week or a month, depending on the policy. During the warm phase the indices are force-merged, have replicas removed, are made read-only, and are reallocated, in that order.

Not sure if removing and attaching a different policy before the first one was complete is what broke things, but whatever the cause, I've not yet been able to fix them. Happy to provide any additional information.
