Add spanmetricsprocessor readme, config, factory, tests #1917
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master    #1917      +/-   ##
==========================================
+ Coverage   89.95%   89.97%   +0.02%
==========================================
  Files         381      383       +2
  Lines       18519    18570      +51
==========================================
+ Hits        16659    16709      +50
- Misses       1390     1391       +1
  Partials      470      470
```
```yaml
# The exporter name must match the metrics_exporter name.
# The receiver is just a dummy and never used; added to pass validation requiring at least one receiver in a pipeline.
metrics/spanmetrics:
```
Processors allow the flexibility of writing to a metrics exporter without the overhead of an additional pipeline when no further processing of the metrics is required. An example is exporting span metrics directly to Prometheus.

Just wondering, how much overhead are we expecting for another pipeline? Isn't it just running `consumeTraces` twice instead of once? Or, I've never tried, but is it not allowed to have the same receiver in multiple pipelines? That seems like it would make the config really understandable.
```yaml
pipelines:
  traces:
    receivers: [jaeger]
    processors: [batch, queued_retry]
    exporters: [jaeger]
  tracemetrics:
    receivers: [jaeger]
    processors: [spanmetrics]
    exporters: [prometheus]
```
This is similar to the translator approach, but seems more intuitive: just let receivers take part in multiple pipelines without any other type of component. The collector should be able to manage wiring a receiver to multiple sinks without much trouble, and it seems like something that should be supported anyway, since users will probably try this sort of config (I haven't tried it yet; does it really not work?).

I think this is another argument to start with this current implementation though :) But just wondering if this idea has come up.
Thanks for your thoughts, @anuraaga.
> Or, I've never tried, but is it not allowed to have the same receiver in multiple pipelines?
Yup :)
> how much overhead are we expecting for another pipeline? Isn't it just running consumeTraces twice instead of once?
There's some potential performance overhead if an earlier pipeline's first processor is blocked or slow. From the OTel design doc:

> The data propagation from receiver to FanOutConnector and then to processors is via synchronous function call. This means that if one processor blocks the call, the other pipelines that are attached to this receiver will be blocked from receiving the same data, and the receiver itself will stop processing and forwarding newly received data.
I like your proposed config; it does look very intuitive.

However, pipelines don't support mixed telemetry types right now; if this were possible, I think your suggestion would make the most sense. Mixed telemetry type pipelines are something I'd like to tackle once the spanmetricsprocessor is done, because it sounds like a fairly significant change.
This is the error message I get if I use a `jaeger` receiver with a `prometheus` exporter:

```
application run finished with error: cannot setup pipelines: cannot build builtExporters: pipeline "traces" of data type "traces" has an exporter "prometheus", which does not support that data type
```
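For reference, a config of roughly this shape triggers that error (a sketch based on the discussion above, not a verbatim reproduction): a traces pipeline wired straight to a metrics-only exporter.

```yaml
service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [spanmetrics]
      exporters: [prometheus]  # metrics-only exporter in a "traces" pipeline: rejected at startup
```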
LGTM. I assume the actual implementation comes next.
Note: if the new proposal that allows pipelines of different types to be connected is accepted, this processor may need to be reworked (or become an exporter).
Please mark this processor as experimental in the README and clearly specify that it may be changed in an incompatible way in the future.
Thank you for the review @anuraaga & @tigrannajaryan 🙏
Yes, understood; I'm definitely onboard with making this as user-friendly as possible.
Okay, I'll do that.
**Description:** Adds spanmetricsprocessor logic as a follow-up to #1917.

**Link to tracking Issue:** #403

**Testing:**
- Unit and integration tested locally (by adding to `cmd/otelcontribcol/components.go`).
- Performance tested (added metrics key cache to improve performance) and added benchmark function.

**Documentation:** Add relevant documentation to exported structs and functions.

- Fix links
- Minor cleanup and consistency changes
Signed-off-by: albertteoh albert.teoh@logz.io
Description:

Adds a new processor for aggregating span data to produce Request, Error, and Duration (R.E.D) metrics.
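To illustrate the kind of aggregation involved, here is a minimal Go sketch (hypothetical struct and field names, not the processor's actual code, which operates on pdata traces) that groups spans by a dimension key and accumulates R.E.D-style counts and latency sums:

```go
package main

import "fmt"

// span is a pared-down, hypothetical span shape used for illustration.
type span struct {
	service   string
	operation string
	isError   bool
	latencyMs float64
}

// redMetric accumulates Request, Error, and Duration data for one key.
type redMetric struct {
	calls        int
	errors       int
	latencySumMs float64
}

// aggregate groups spans by (service, operation). This string key is
// the kind of value a metrics-key cache would memoize in a hot path.
func aggregate(spans []span) map[string]*redMetric {
	metrics := make(map[string]*redMetric)
	for _, s := range spans {
		key := s.service + "/" + s.operation
		m, ok := metrics[key]
		if !ok {
			m = &redMetric{}
			metrics[key] = m
		}
		m.calls++
		if s.isError {
			m.errors++
		}
		m.latencySumMs += s.latencyMs
	}
	return metrics
}

func main() {
	spans := []span{
		{"frontend", "GET /cart", false, 12.5},
		{"frontend", "GET /cart", true, 40.0},
		{"backend", "SELECT", false, 3.2},
	}
	m := aggregate(spans)["frontend/GET /cart"]
	fmt.Println(m.calls, m.errors, m.latencySumMs) // → 2 1 52.5
}
```

From such accumulators, rate, error ratio, and average or histogram duration can each be derived per dimension key.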
This is the first PR, with a few more to follow, containing the structure of the processor, including:
Note: as advised by maintainers during the Agent/Collector SIG, a workaround configuration of adding a dummy receiver and pipeline is required due to current limitations of the pipeline design and constraints.
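The workaround mentioned above would look roughly like this (a sketch pieced together from the comments in this PR; endpoint values and component names are illustrative):

```yaml
receivers:
  otlp/spanmetrics:  # dummy receiver, never used; only satisfies the "at least one receiver" validation
    protocols:
      grpc:
        endpoint: "localhost:12345"

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [spanmetrics, batch]
      exporters: [jaeger]
    # The exporter name must match the processor's configured metrics_exporter.
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
```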
Processor vs. Exporter:

This component could be implemented as either a processor or an exporter. The processor path was taken because:

- A `host:port` configuration is not required, as it is encapsulated in the configured exporter that span metrics are written to.

The advantages of an exporter approach are:
Link to tracking Issue: #403
Testing:
Documentation:
`testdata` directory.