[ML] improve reliability of job stats in larger clusters #86305
Conversation
Pinging @elastic/ml-core (Team:ML)
Hi @benwtrent, I've created a changelog YAML for you.
@droberts195 I did some tests locally, printing out the thread names on every call. Indeed, we would be on a transport thread before this change, meaning we would spin up N requests from that thread pool. Those executions would then (eventually) occur on the search pool. In the callback, we are usually on the search pool, so I decided to fork that as well when gathering data stats (especially since in 7.17 it's still multiple searches, correct?). Project Loom cannot come soon enough...
LGTM - thanks for doing this
When gathering job stats for closed jobs, we may inadvertently execute on a transport thread. Typically this is acceptable, but when there are many jobs and many indices it has a cascading effect and may cause the cluster to enter a troubling state. This is mainly due to how slow security checks can be for search requests when the cluster has many indices. To alleviate this, gathering information about closed jobs is forked to the ML utility thread pool.

Related: elastic#82255
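The fix described above can be illustrated with a minimal, self-contained sketch (not the actual Elasticsearch code): instead of doing expensive work inline on the calling ("transport") thread, the handler submits it to a dedicated, named utility pool. The pool name `ml_utility` mirrors the ML utility thread pool mentioned in the PR, but everything here is a hypothetical stand-in for the real API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ForkToUtilityPool {
    public static void main(String[] args) throws Exception {
        // Illustrative stand-in for the ML utility thread pool.
        ExecutorService mlUtility = Executors.newFixedThreadPool(
            2, r -> new Thread(r, "ml_utility"));

        // Imagine this code runs in a handler on a transport thread:
        // rather than gathering stats inline (which could block the
        // transport thread behind slow security checks), fork the work.
        Future<String> stats = mlUtility.submit(
            () -> "stats gathered on " + Thread.currentThread().getName());

        System.out.println(stats.get());
        mlUtility.shutdown();
    }
}
```

The key point is that the transport thread only pays the cost of a `submit()` call; the expensive per-job searches happen on the utility pool, so many concurrent stats requests no longer cascade into starving the transport workers.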
💔 Backport failed
You can use sqren/backport to manually backport by running
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation