metricstransformprocessor just not working #29853

Closed
tcaty opened this issue Dec 13, 2023 · 4 comments
Labels: bug, needs triage, processor/metricstransform

Comments

tcaty commented Dec 13, 2023

Component(s)

processor/metricstransform

What happened?

Description

Hi! Today I faced this issue: I tried to use the metricstransform processor in my metrics pipeline, but it just doesn't work.

Steps to Reproduce

  1. Add metricstransform to the metrics pipeline with the configuration below

Expected Result

The target metric should contain the new label, like this:

duration_milliseconds_count{custom_label="custom_value", exported_job="client", instance="otel-collector-aggregated:8889", job="otel-collector-aggregated", service_name="client", span_kind="SPAN_KIND_CLIENT", span_name="HTTP GET", status_code="STATUS_CODE_ERROR"}

Actual Result

As you can see on the screenshot below, Prometheus pulls metrics that have no custom label from metricstransform:

duration_milliseconds_count{exported_job="client", instance="otel-collector-aggregated:8889", job="otel-collector-aggregated", service_name="client", span_kind="SPAN_KIND_CLIENT", span_name="HTTP GET", status_code="STATUS_CODE_ERROR"}

[screenshot: Prometheus showing duration_milliseconds_count without the custom label]

Collector version

v0.90.1

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
  metricstransform:
    transforms:
      - include: duration_milliseconds_count
        action: update
        operations:
          - action: add_label
            new_label: custom_label
            new_value: custom_value

exporters:
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"
      
connectors:
  spanmetrics:

service:
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - batch
      exporters:
        - otlp/tempo
        - otlp/jaeger
        - spanmetrics
    metrics:
      receivers:
        - otlp
        - spanmetrics
      processors:
        - batch
        - metricstransform
      exporters:
        - prometheus

Log output

No response

Additional context

No response

tcaty added the bug and needs triage labels on Dec 13, 2023
crobert-1 added the processor/metricstransform label on Dec 13, 2023
Pinging code owners for processor/metricstransform: @dmitryax. See Adding Labels via Comments if you do not have permissions to add labels yourself.

crobert-1 commented Dec 13, 2023

Hello @tcaty, this kind of issue is usually due to metrics being in a different format than expected, so either the label isn't added properly, or the exporter doesn't handle it properly.

One thing to note with your config is that Prometheus has different metric naming conventions than OTel. So when metrics are exported by the Prometheus exporter, they're converted to match Prometheus's format; for example, a histogram emitted as duration with unit ms shows up as duration_milliseconds_count in Prometheus. This means that when metrics are modified by the metricstransform processor, they're often in a different format (naming schema, for example) than when seen in Prometheus.

To make progress debugging this, could you add the debug exporter to your metrics pipeline with verbosity: detailed, and share the duration_milliseconds_count metric output? This will show the actual metrics, their naming, and their contents before they're converted to the Prometheus format, and will help us determine whether this is an actual bug or just a configuration issue.
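
Something like this should do it, as a minimal sketch on top of your existing config (only the relevant parts shown):

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers:
        - otlp
        - spanmetrics
      processors:
        - batch
        - metricstransform
      exporters:
        - prometheus
        - debug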

tcaty commented Dec 13, 2023

Hello @crobert-1! Thanks for your reply! You're right, in my case the issue is a different format too. Let me explain: I thought the metrics I see in the Prometheus dashboard were the same as the metrics that metricstransform processes, but they're not. According to the spanmetricsconnector docs, it generates only calls and duration metrics.

You can see that prometheusexporter is in my metrics pipeline:

service:
  pipelines:
    metrics:
      receivers:
        - otlp
        - spanmetrics
      processors:
        - batch
        - metricstransform
      exporters:
        - prometheus

Therefore metricstransform receives only calls and duration, and prometheusexporter normalizes them; that's why I see calls_total and duration_milliseconds_count in my Prometheus dashboard. I replaced duration_milliseconds_count with duration like this:

processors:
  batch:
  metricstransform:
    transforms:
      # replaced duration_milliseconds_count with duration here
      - include: duration
        action: update
        operations:
          - action: add_label
            new_label: custom_label
            new_value: custom_value

And now it works as expected!

[screenshot: Prometheus showing duration_milliseconds_count with custom_label="custom_value"]

Thanks for your reply again! metricstransform works fine; this issue can be closed.

P.S. I'll use the debug exporter to see which labels come from spanmetricsconnector.
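
If I later want the same label on calls as well, I think a single transform with a regexp match should cover both metrics, something like this (untested sketch):

processors:
  metricstransform:
    transforms:
      - include: ^(calls|duration)$
        match_type: regexp
        action: update
        operations:
          - action: add_label
            new_label: custom_label
            new_value: custom_value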

crobert-1 commented
Glad to hear you were able to find the solution! Thanks for including it as well, it's always helpful to see what ends up working. 👍
