[exporter/datadog] Semantic conventions for spans #1909
Thanks for filing this. We recently merged in work to improve Datadog span operation and span resource naming here: #1861. The information you're referring to should be available via the span. We have plans this quarter to continue to improve the translation of OpenTelemetry spans to Datadog spans, and can add to this issue when there are future PRs in this space. That being said, at this time I do not believe there are immediate plans to change the format of the span operation name. Hope that helps! |
Thanks @ericmustin for the clarification. Would you accept an optional parameter at the collector for OTel-compatible naming? |
@cemo I think that's a thoughtful suggestion, but I'm just trying to understand the use case here. Perhaps the root cause here is the length of the operation name? |
Hi @ericmustin, I am just trying to have more readable spans. I think this is more similar to the OpenTelemetry conventions. The current situation is simply unreadable on our side. |
@cemo one thought I had is that, in the interim, if the long operation names are the main pain point, there may be a workaround. If this was something you'd be open to, I could attempt to suggest a configuration option to use. |
Thank you so much for the detailed explanation @ericmustin. I will add a process to mitigate the issue; however, what I want is actually to have a similar experience to the Datadog native agent. As you can see in the previous Datadog example, HTTP traces are named with a specific part for their endpoints. |
Hi @andrewhsu, can you assign @ericmustin instead? Trace issues and feature requests are generally handled by Eric. |
Yes, I think this is a good approach and should help the broad majority of users. I'll try to update this thread when there's a PR up, I am not sure what the timeline is on that at this time. |
I also initially had questions on the instrumentation library prefix, but that part makes sense to me now for the reasons given above. Thanks for the explanation! That said, I have a case where I'm using a server framework that does not make it easy to get a route template for an endpoint, so instead we opted to set custom span names. However, because of the way the Datadog exporter processes span names, and because we have HTTP semantic attributes on those spans, our custom names do not end up as the resource name. In general, what's the reasoning behind the DD exporter forming its own resource name rather than always falling back on the span name for the resource? i.e., why isn't the logic in that function just to use the span name? |
@alanisaac appreciate the feedback. I guess there are a few things here, but generally speaking, if folks want to just use |
Thank you for the incredibly fast response! Something like |
So I spoke a bit with my team here. I think there are some open questions we have around how this config should work and how much flexibility should be built into it, and in the immediate term I'm not sure this will be prioritised in the upcoming few sprints, but I've added this feature request to our tracking internally and will try to update if/when it becomes available. |
Instrumentation library as span name

As another data point, the naming of spans based on the instrumentation library is so long that it is pretty much useless when looking at it by default. In a simple Go application using the Gorilla mux OTel contrib instrumentation and making HTTP calls, I have two extremely long instrumentation library names before I get any useful information. The sub-span is actually unreadable at a glance.

So far I've used the collector's span name remapping config to work around this:

```yaml
traces:
  span_name_remappings:
    io.opentelemetry.javaagent.spring.client: spring.client
    # instrumentation::express.server: express  # example actually breaks the config because of the double ::
    go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux: mux  # doesn't work, need to specify the span kind too
    go.opentelemetry.io_contrib_instrumentation_github.com_gorilla_mux_otelmux.server: mux
    go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux.server: mux  # doesn't work, need to figure out what DD is doing to "sanitize" it first
    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp: http  # doesn't work, need to figure out / -> _ and SPAN_KIND
    go.opentelemetry.io_contrib_instrumentation_net_http_otelhttp: http  # doesn't work, didn't include SPAN_KIND
    go.opentelemetry.io_contrib_instrumentation_net_http_otelhttp.client: http
```
Missing HTTP client paths

Furthermore, the HTTP client request spans using the net/http OTel contrib instrumentation are also kind of lacking, with just the default span name. What I want is the HTTP method and path. I've tried to override this with a span name formatter, but Datadog just ignores it wholesale (which doesn't seem correct):

```go
client := otelhttp.DefaultClient
client.Transport = otelhttp.NewTransport(
    http.DefaultTransport,
    otelhttp.WithSpanNameFormatter(func(op string, r *http.Request) string {
        return fmt.Sprintf("%s %s", r.Method, r.URL.Path)
    }),
)
```

Instead, I have to set the http.route attribute:

```go
client := otelhttp.DefaultClient
client.Transport = otelhttp.NewTransport(
    http.DefaultTransport,
    otelhttp.WithSpanOptions(trace.WithAttributes(
        // The http.route tag is necessary for Datadog to properly name spans with http.* tags (web type).
        semconv.HTTPRouteKey.String(req.URL.Path),
    )),
)
```

Funnily enough, these keys don't work:

```go
attribute.Key("resource").String("/custom-resource")
attribute.Key("operation").String("custom-operation")
```

There seems to be a related conversation about why Datadog doesn't show this here: DataDog/dd-trace-rb#277. Lastly, the span contains all the info Datadog needs. |
@tonglil Hey! Took a quick look; this sounds like reasonable feedback. That being said, I'm no longer employed at Datadog. When I left, there were some backlog items internally to make this more flexible, but as I'm no longer involved, support@datadoghq.com would be the optimal place to get actionable feedback or movement on this issue. All the best! |
Thanks for your detailed message @tonglil; I will also try to flag this internally (and thanks @ericmustin for pointing them to support!). While I am assigned to this issue, I am not involved in the APM side of Datadog, so I can't directly help with code, but I want to at least try to move this forward. I have tried to list the Datadog exporter-specific problems that you mention in a bite-sized format:
Do these descriptions make sense? I want to, if possible, split this issue into smaller and better-defined issues (either here on GitHub or in our internal tracker). In particular, (1), (2), and (4) feel like they belong to the same documentation issue, (3) may be addressed by a solution like #1909 (comment), and without further context (5) sounds like a bug. |
Thank you. |
It is very sad that such a critical issue has not been addressed so far. This issue is much worse in Java land since it has more instrumentation support. I think this could even be solved on the server side: the span has the information about the tracing library, so it could be formatted according to each library. Is this doable on the server side? Am I missing something? |
Also looking forward to this; the long operation name makes it very hard to see things in the trace UI. |
I ended up creating a custom TracerProvider which wraps the global one and sets the span name based on the instrumentation library. All integrations I saw support sending a custom TracerProvider: https://gist.github.com/RangelReale/1e50518a1e3c73eb56748192c5746163 |
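For readers who want the general shape of that approach, below is a minimal sketch of a TracerProvider wrapper that rewrites span names based on the instrumentation library name. This is not the code from the gist above: the mapping table and the target names are illustrative assumptions, and it assumes a recent opentelemetry-go release that provides the trace/embedded package.

```go
package ddspanname

import (
    "context"

    "go.opentelemetry.io/otel/trace"
    "go.opentelemetry.io/otel/trace/embedded"
)

// nameByLibrary maps instrumentation library names to short, Datadog-style
// operation names. The target names here are illustrative assumptions.
var nameByLibrary = map[string]string{
    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp":              "http.request",
    "go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux": "http.request",
}

// provider wraps another TracerProvider and hands out renaming tracers.
type provider struct {
    embedded.TracerProvider
    wrapped trace.TracerProvider
}

// NewTracerProvider returns a TracerProvider that renames spans created by
// known instrumentation libraries.
func NewTracerProvider(tp trace.TracerProvider) trace.TracerProvider {
    return &provider{wrapped: tp}
}

func (p *provider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
    return &tracer{wrapped: p.wrapped.Tracer(name, opts...), library: name}
}

// tracer rewrites the span name at creation time based on the library that
// created it, leaving everything else untouched.
type tracer struct {
    embedded.Tracer
    wrapped trace.Tracer
    library string
}

func (t *tracer) Start(ctx context.Context, name string, opts ...trace.SpanStartOption) (context.Context, trace.Span) {
    if short, ok := nameByLibrary[t.library]; ok {
        name = short
    }
    return t.wrapped.Start(ctx, name, opts...)
}
```

Set as the global provider with otel.SetTracerProvider(NewTracerProvider(sdkProvider)), every instrumentation that uses the global provider goes through the rename.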
I ended up creating a package for this code, with contrib for common libraries. https://github.com/RangelReale/opentelemetry-go-datadog-spanname |
Sorry for the lack of responses here. I am actively working on this as we speak. |
I will be away next week at KubeCon EU, but I did want to inform you that we are considering the problem as twofold here:
Ultimately, we want everyone to have a pleasing experience without the need to switch flags. Please let me know what you think. Would it be an acceptable solution to improve the Datadog span name in such a way that it is shorter and easier to work with, similar to the proposal above? |
I think the operation name is Datadog-specific; there is no good, easy way of guessing it from each of the OTel instrumentations besides having mappings. |
@gbbr hey bud, hope all is well. Just wanted to provide the Chesterton's fence for this long-standing complaint. When implementing this in the POC (many years ago) we encountered a problem mentioned earlier in this issue.
No clue if that's still how the RED (hits/latency/errors) trace-metrics are generated internally at Datadog, or if that is just a small edge case at this point that isn't worth a bad experience for the majority of users, but just a heads up not to footgun yourself by re-introducing that issue. (For anyone following along, please note that I have not worked at Datadog in a long time and am not speaking on the organization's behalf; I'm just trying to provide context so the current folks working on this have all the information available to them.) |
I cannot understand why this issue is not addressed on the DD server side. They have everything needed to format it like the spans the Datadog agent creates. Each library could be mapped to whatever the native Datadog agent would create. What is the problem with this approach? |
No worries. We'll figure out a good way to extract the right operation name. The question is whether that would be a satisfactory solution for you (and everyone else): having a shorter operation name, more in line with the Datadog SDKs.
I agree with you @cemo. That is what I tried to point out in my second bullet point above. We are working on it. In the meantime, I'll try to improve the existing code to create better operation names that aren't as long and confusing. |
@gbbr I think the best solution would be to make sure the operation/span names from the OTel libraries are the same as the equivalent ones in the dd-trace-go library. That's what I did: I went into each instrumentation in that library and copied the names that it would output. |
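Another way to apply that kind of mapping at the SDK level, without wrapping the tracer, is a SpanProcessor that renames spans in OnStart based on their instrumentation scope. The sketch below is only an illustration of the idea (not code from the package linked earlier); the mapping values are placeholders that would need to be replaced with the names the corresponding dd-trace-go integrations actually emit, and it assumes a reasonably recent opentelemetry-go SDK.

```go
package ddspanname

import (
    "context"

    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// ddNames maps instrumentation scope names to the span names the equivalent
// dd-trace-go integrations would emit. The values here are placeholders.
var ddNames = map[string]string{
    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp": "http.request",
}

// renameProcessor rewrites span names when spans start; every other
// SpanProcessor method is a no-op.
type renameProcessor struct{}

func (renameProcessor) OnStart(_ context.Context, s sdktrace.ReadWriteSpan) {
    if name, ok := ddNames[s.InstrumentationScope().Name]; ok {
        s.SetName(name)
    }
}

func (renameProcessor) OnEnd(sdktrace.ReadOnlySpan)      {}
func (renameProcessor) Shutdown(context.Context) error   { return nil }
func (renameProcessor) ForceFlush(context.Context) error { return nil }
```

Registered via sdktrace.WithSpanProcessor when constructing the TracerProvider, it applies to every span regardless of which instrumentation created it.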
Hi @gbbr, I don't know if this is related or not, but I'm also having a bit of a problem regarding the resource name. I'm currently instrumenting a PHP application using dd_trace, but when I redirect the trace to the OpenTelemetry Collector (with the intention to add several tags to the trace data), it only includes the http.method, making it harder to distinguish the traffic when creating dashboards. |
@rucciva this issue seems related but slightly different. Can we treat it separately from this one to keep things focused here? It would help me to see your code. Can you open a separate issue and give more details? Or, if you don't want to share it publicly, reach out to our support and send everything over, mentioning my name -- I'd be happy to help. |
Noted @gbbr, I'll open a ticket. Thanks for the response. |
Long thread, but this comment is crucial! I believe the intention from the beginning was to introduce a flag that sets the span name as the resource name. Until DD comes up with something better or fixes the meaning of the flag, I have added the following to the collector. |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Relevant |
Not stale |
Hoping for a nice fix for this as well |
The span_name_remappings setting does not appear to work when the instrumentation name contains ::, as Ruby instrumentation names do. With the following configuration:

```yaml
exporters:
  datadog:
    traces:
      span_name_remappings:
        "OpenTelemetry::Instrumentation::Net::HTTP.client": "http.client"
```

the collector fails to start with the following error:

```
otel-collector-1 | Error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | error decoding 'exporters': error reading configuration for "datadog": decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | error decoding '': decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | 'traces.span_name_remappings[OpenTelemetry]' expected type 'string', got unconvertible type 'map[string]interface {}', value: 'map[Instrumentation:map[Net:map[HTTP.client:http.client]]]'
otel-collector-1 | error decoding 'connectors': error reading configuration for "datadog/connector": decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | 'traces.span_name_remappings[OpenTelemetry]' expected type 'string', got unconvertible type 'map[string]interface {}', value: 'map[Instrumentation:map[Net:map[HTTP.client:http.client]]]'
otel-collector-1 | 2025/01/09 15:51:53 collector server run finished with error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | error decoding 'exporters': error reading configuration for "datadog": decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | error decoding '': decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | 'traces.span_name_remappings[OpenTelemetry]' expected type 'string', got unconvertible type 'map[string]interface {}', value: 'map[Instrumentation:map[Net:map[HTTP.client:http.client]]]'
otel-collector-1 | error decoding 'connectors': error reading configuration for "datadog/connector": decoding failed due to the following error(s):
otel-collector-1 |
otel-collector-1 | 'traces.span_name_remappings[OpenTelemetry]' expected type 'string', got unconvertible type 'map[string]interface {}', value: 'map[Instrumentation:map[Net:map[HTTP.client:http.client]]]'
```
Thanks for flagging @arielvalentin. I've looked a bit into this, and the issue happens at the stage when confmap creates the confmap.Conf from the map. The KeyDelimiter which is used is ::, so a mapping key that itself contains :: gets split into nested maps, which is what the error above shows. I'll create a task in our backlog to find a solution for this, but in the meantime you can work around it by setting the relevant attribute directly on the spans themselves. |
Describe the bug
The current way of displaying spans does not conform to the spec.
Steps to reproduce
Use opentelemetry-java-instrumentation and opentelemetry-collector-contrib
What did you expect to see?
I am expecting to see span names conforming to the spec, such as:
What did you see instead?
Additional context
See conventions:
https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/semantic_conventions/http.md