[Obs AI Assistant] Connector documentation #181282
Pinging @elastic/obs-knowledge-team (Team:obs-knowledge)
Adding this to the observability docs project because it sounds like someone on our team should work on these docs. @klacabane We will need more information, including links to related issues/PRs and a list of contacts, to help us get started. Thanks!
Hi @dedemorton, As an overview, the connector can be attached to an alert and configured with a message that is passed to the AI Assistant. When an alert fires, the assistant is called with an initial prompt providing contextual information about the alert (e.g., when it fired, the service impacted, the threshold breached) along with the message provided by the user when configuring the connector (see the illustrative sketch after this comment). Some technical details:
Links:
You can reach out to me or @dgieselaar for more information!
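To make the flow described in this comment concrete, here is a minimal TypeScript sketch of what a rule action targeting the Obs AI Assistant connector could conceptually carry. The field names (`connectorId`, `message`) and values are illustrative assumptions, not the actual Kibana schema.

```ts
// Hypothetical sketch only: field names are illustrative and do not reflect the real Kibana schema.
// It models the flow described above: when the rule fires, the connector passes the alert context
// plus this operator-written message to the AI Assistant.
interface ObsAIAssistantActionSketch {
  connectorId: string; // id of the Obs AI Assistant connector instance (assumed field name)
  params: {
    message: string; // free-text instructions the assistant receives alongside the alert context
  };
}

const exampleAction: ObsAIAssistantActionSketch = {
  connectorId: 'obs-ai-assistant',
  params: {
    message:
      'Summarize the alert, list the likely root causes, and send the summary to the #ops Slack channel.',
  },
};

console.log(JSON.stringify(exampleAction, null, 2));
```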
cc'ing @lcawl for awareness. She is working on other system action feature docs and may want to contribute to these docs.
@dedemorton Do you have an update on this documentation? Is there anything needed from us?
@emma-raffenne Not right now, but I'll let you know. This issue came in too late for our docs sprint 20, but it's towards the top of my list for sprint 21, which starts today.
Here's my preliminary plan for the documentation after playing around with the Obs AI Connector today:
I ran into some flaky behavior when I was playing around with the connector. I received the Slack messages and links to the conversation, but the visualizations didn't work. Eventually the messages stopped arriving, but I was also editing/deleting rules and might have broken something. Perhaps I generated too many alerts and ended up exceeding the token limit. I kept track of some questions that came up when I was testing:
Thank you @dedemorton
Thanks, @emma-raffenne - I've had a brief scan of this comment thread and I'm not seeing the reference to Alerting documentation. Can you point me to it?
Hi @dedemorton!
No, but we generate a prompt that grows with the number of alerts passed to the connector, and having several alerts processed in the same connector execution may lead to many function calls analyzing the alerts and to reaching the function call limit. If that's the case, we would not be able to call the connector. This behavior should be surfaced in the generated conversations; any chance you still have them stored?
What is significant, 5 minutes? It should take around 60 seconds if everything goes as expected, but several function calls and errors may lead to additional processing time or a failure. In any case, a conversation will be created, and looking at this conversation is the best way to troubleshoot any underlying issues.
I don't have the specifics of the rule's inner workings, but I expect any new alert to pick up the new settings/prompt. Did you experience bizarre behavior when doing so?
The more accurate, the better. The assistant is given the list of connectors with their configurations (configured name, ID, and any other configured properties). Given a large list of, say, Slack connectors, one should ideally provide an identifier unique enough for the assistant to make a good decision; in our case the connector name would be appropriate (see the sketch after this comment).
Looking at the generated conversation is the best way to track any errors that happened during the connector execution. Each function call (i.e., calling the connector) appears in the conversation timeline and has debugging information attached to it.
Could you provide details on your setup: how did you trigger the alert, and what was the configured prompt in the connector?
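To illustrate the point above about connector naming, here is a small hypothetical TypeScript sketch of the kind of connector list the assistant has to choose from. The shape, IDs, and names are assumptions for illustration, not the actual payload the assistant receives.

```ts
// Hypothetical illustration: the assistant only sees a flat list of connectors like this,
// so a descriptive, unique name is what lets it pick the right one.
interface ConnectorSummary {
  id: string;
  name: string;
  type: string;
}

const connectors: ConnectorSummary[] = [
  { id: 'a1f3', name: 'slack', type: '.slack' },            // ambiguous: which channel?
  { id: 'b7c2', name: 'slack-ops-alerts', type: '.slack' }, // descriptive: easy to select
  { id: 'c9d4', name: 'slack-oncall-sev1', type: '.slack' },
];

// A user message such as "post this to the on-call Slack channel" can only be resolved
// reliably when the names disambiguate the connectors.
const match = connectors.find((c) => c.name.includes('oncall'));
console.log(match?.name); // "slack-oncall-sev1"
```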
@klacabane Unfortunately my data got blown away when the cluster was updated. I will go through the process again after I've finished the docs and want to test them. I triggered the alert by creating a custom threshold rule that I knew would fire. The rule looked for
I don't think I expanded all the function calls so I might have missed something. I think we should definitely consider adding more guidance to help users construct rules and prompts that avoid causing them to run into limits...and also tell them what to do when they run into limits.
@klacabane I played around a bit with this today, and I am definitely exceeding limits. Maybe the rules I'm creating are too contrived (meant to generate alerts quickly, but perhaps generating too many alerts)? Today I tried using the Custom Threshold rule to test for
The weird thing is that it worked beautifully the very first time I tried it out. :-/ Now that I want to take screen captures, nothing is working. So I have a couple of asks:
Thanks in advance for your help.
Hi @dedemorton, I'm not able to reproduce this issue at the moment and am still working on it.
The latter could be the culprit. We generate a summary and get context for every alert that is passed to the connector. I suspect that in your case a high number of alerts gets passed, and as a result a large prompt is generated, which would lead to reaching the token limit early in the conversation. If that's the culprit, we should limit the number of alerts we summarize in the prompt (sketched below), but I'll need confirmation that this is the root cause. Since you're able to generate this error consistently, could you either ping me the steps you're taking and/or provide a copy of the generated conversation that leads to the token limit being reached? I'm also working against edge-lite-oblt and have no issues triggering the connector successfully. Could you try with an
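As a rough illustration of the mitigation mentioned above (capping how many alerts are summarized in the initial prompt), here is a hedged TypeScript sketch. The cap value, helper names, and alert shape are assumptions for illustration, not the connector's actual implementation.

```ts
// Hypothetical sketch of limiting how many alert summaries go into the initial prompt.
// MAX_ALERTS_IN_PROMPT and summarizeAlert are illustrative assumptions, not real Kibana code.
interface AlertLike {
  id: string;
  reason: string;
}

const MAX_ALERTS_IN_PROMPT = 10; // assumed cap to keep the prompt under the model's token limit

function summarizeAlert(alert: AlertLike): string {
  return `Alert ${alert.id}: ${alert.reason}`;
}

function buildInitialPrompt(alerts: AlertLike[]): string {
  const included = alerts.slice(0, MAX_ALERTS_IN_PROMPT);
  const omitted = alerts.length - included.length;
  const lines = included.map(summarizeAlert);
  if (omitted > 0) {
    lines.push(`(${omitted} additional alerts omitted to stay within the token budget)`);
  }
  return lines.join('\n');
}
```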
Here is the quote from Dede's comment:
I've created a rule that does not generate a lot of alerts, and I am seeing the same problem. This rule has created a single alert in the past 30 min. There are currently only 3 active alerts total, but there are a bunch of untracked alerts. Here's the API call for the rule:
Here’s the message I am seeing under Stack Management > Connectors > Logs:
Also note that there is no conversation created.
OK, so I've tried a second round of testing using the latest 8.14.0 snapshot at staging.found.no (I wanted to create a very simple environment with limited data ingested using the System integration and Elastic Agent). It works fine! I think the takeaway here is that we need to provide users with some guidance on how to avoid exceeding the token limit when they create their rules and messages for the AI Assistant connector, and also some steps to diagnose problems.
## Summary

Adds reference documentation about the Obs AI Assistant connector (requested in #181282)

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary

Adds reference documentation about the Obs AI Assistant connector (requested in elastic#181282)

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>

(cherry picked from commit 310f4ff)
# Backport

This will backport the following commits from `main` to `8.14`:

- [[DOCS] Obs AI Assistant connector (#183792)](#183792)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: DeDe Morton <dede.morton@elastic.co>
Closed by #183792 and elastic/observability-docs#3906
Summary
While the connector is in tech preview and has limited capabilities, we should create public documentation.