[Logs UI] "Missing indices" log threshold alert error not descriptive #119777
Pinging @elastic/infra-monitoring-ui (Team:Infra Monitoring UI)
Question for clarification: how do you create a Log threshold alert on a random index? We don't allow you to choose the index for this rule type, I don't think? It just uses whatever indices are specified in the Logs UI settings as far as I remember, which makes reproducing this seem impossible. Can you help me understand how to reproduce?
@jasonrhodes I think one would have to create the rule using valid indices and then change the source config, delete indices, or change aliases so the index name pattern doesn't match any index anymore.
I'm uncertain what the best way to handle these errors is. I see two options:
The former would cause more noise, but would make a usually undesirable configuration visible. The latter would be more tolerant of transient states in which this might be the case, at the risk of nobody noticing that the rule doesn't work as intended.
Is there some way in the alerts framework to do some kind of "alert once and stop" so we could notify of the config and then stop running that rule until something changes? |
Interesting thought. But what would be the advantage for the user over continuing to retry (and possibly fail again) as we do now? |
I thought it would reduce the noise, and possibly it could free up some resources in the cluster. If we can determine on our side that this configuration won't work then we don't need to execute it anymore until it changes. |
True, but since it requires special alerting framework support we probably want to pick a solution that we can apply in the meantime. |
Doing some old issue notification clean-up 😬 I vote for wrapping this error in a clearer error message for now. If we get SDHs that reference harmless transient states that are triggering this error, at least it will be clearer to track down why it's happening, and we can use that real user info to determine if we should potentially find a way to silence that error in those cases (or always). |
Pinging @elastic/obs-ux-logs-team (Team:obs-ux-logs) |
Pinging @elastic/obs-ux-management-team (Team:obs-ux-management) |
We don't intend to fix this at this time. We're working on a plan to help users migrate uses of this kind of rule to use the custom threshold rule instead. |
Kibana version:
Elasticsearch version:
Server OS version:
Browser version:
Browser OS version:
Original install method (e.g. download page, yum, from source, etc.):
Describe the bug:
Creating alerts that don't match any of the indices defined in the Logs UI settings results in an error like this:
Steps to reproduce:
Expected behavior:
The error should provide some information about the cause of the problem.
AC:
"No indices matching ${indices} could be found during the execution of this rule."