Shutdown behavior for groupbyprocessor #1465
Comments
This should also be assigned to me.
cc @nilebox, as you reviewed the original PR.
I believe the desirable shutdown behavior is to stop receiving new data, drain all in-memory data from the pipeline, flush the exporters, and exit the process. For the groupbyprocessor this would mean flushing the accumulated (even if incomplete) traces to the next consumer. I do not think there is an expectation to wait for a trace to be complete before shutdown is complete.
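A minimal, self-contained sketch of that idea (hypothetical names and a channel as a stand-in for the processor's internal queue, not the actual processor API): stop accepting new data, then drain what is already buffered, flushing even incomplete traces downstream, with a time limit so shutdown can never hang.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// groupByProcessor is a stand-in for the real processor: it buffers
// in-flight traces and flushes them on shutdown.
type groupByProcessor struct {
	queue chan string // in-flight trace IDs
}

// Shutdown drains whatever is already buffered, flushing accumulated
// (even incomplete) traces to the next consumer, but gives up after a
// time limit so the collector's shutdown cannot block on this processor.
func (p *groupByProcessor) Shutdown(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	for {
		select {
		case traceID, ok := <-p.queue:
			if !ok {
				return nil // queue closed and fully drained
			}
			// In the real processor this would forward to the next consumer.
			fmt.Printf("flushing (possibly incomplete) trace %s\n", traceID)
		case <-ctx.Done():
			return errors.New("shutdown timed out before the queue was drained")
		}
	}
}

func main() {
	p := &groupByProcessor{queue: make(chan string, 2)}
	p.queue <- "trace-1"
	p.queue <- "trace-2"
	close(p.queue) // stop accepting new data before draining

	if err := p.Shutdown(context.Background()); err != nil {
		fmt.Println(err)
	}
}
```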
I think that's in line with @nilebox's opinion. I'll work on implementing this, then.
* Drain the queue upon shutdown, with a time limit. Fixes open-telemetry#1465.
* Added metrics to the groupbyprocessor, making it easier to understand what's going on in case of problems. See open-telemetry#1811.
* Changes the in-memory storage to unlock its RLock when the method returns. Fixes open-telemetry#1811.
Link to tracking issue: open-telemetry#1465 and open-telemetry#1811
Testing: unit + manual tests
Documentation: see README.md
Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
…ring shutdown. (#1842) Fixed deadlock in groupbytrace processor.
* Drain the queue upon shutdown, with a time limit. Fixes #1465.
* Added metrics to the groupbyprocessor, making it easier to understand what's going on in case of problems. See #1811.
* Changes the in-memory storage to unlock its RLock when the method returns. Fixes #1811.
Link to tracking issue: #1465 and #1811
Testing: unit + manual tests
Documentation: see README.md
Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
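A hedged illustration (not the actual storage code from the PR) of the locking pattern the fix describes: acquire the read lock, copy the data out, and rely on `defer` so the RLock is always released when the method returns, even on early-return paths, so it cannot block a later write lock.

```go
package main

import (
	"fmt"
	"sync"
)

// memoryStorage is a simplified stand-in for the processor's in-memory storage.
type memoryStorage struct {
	mu      sync.RWMutex
	content map[string][]string // traceID -> buffered span names (illustrative)
}

// get returns a copy of the spans for a trace. The deferred RUnlock
// guarantees the read lock never outlives the method, which is what
// keeps a subsequent write lock from deadlocking.
func (s *memoryStorage) get(traceID string) []string {
	s.mu.RLock()
	defer s.mu.RUnlock()

	spans, ok := s.content[traceID]
	if !ok {
		return nil
	}
	out := make([]string, len(spans))
	copy(out, spans)
	return out
}

func main() {
	s := &memoryStorage{content: map[string][]string{"trace-1": {"span-a", "span-b"}}}
	fmt.Println(s.get("trace-1"))
}
```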
While working on #1362, there was a discussion about what the shutdown behavior of the processor should be with regard to in-flight traces.
To unblock that PR, this issue was created so that the appropriate solution can be agreed on and implemented in a follow-up PR.