
[target-allocator] Target allocator missing meta labels for port despite labels being found by Prometheus #998

Closed
moh-osman3 opened this issue Jul 22, 2022 · 2 comments


@moh-osman3
Contributor

Opening this issue because certain meta port labels that I want to relabel and scrape on are not showing up when I view the targets in the target allocator, even though I can see those labels in Prometheus. As a result, metrics don't seem to be coming into the collector.

Background Info

I did port-forwarding on my Prometheus service to take a look locally. At http://localhost:9090/config I can confirm that the expected config exists, but I don't seem to be getting a match on __meta_kubernetes_endpoint_port_name:

- job_name: serviceMonitor/opentelemetry/lightstep-collector-servicemonitor/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 30s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name, __meta_kubernetes_service_labelpresent_app_kubernetes_io_name]
    separator: ;
    regex: (lightstep-collector-collector-monitoring);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: monitoring
    replacement: $1
    action: keep
...
...
...
  kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    namespaces:
      own_namespace: false
      names:
      - opentelemetry
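
For reference, the same rendered config can be pulled over the Prometheus HTTP API to double-check that the port-name keep rule is active (a sketch, assuming the same port-forward to localhost:9090; the jq/grep filtering is just illustrative):

$ curl -s http://localhost:9090/api/v1/status/config \
    | jq -r '.data.yaml' \
    | grep -A 4 '__meta_kubernetes_endpoint_port_name'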

(i.e. I have a service called lightstep-collector-collector-monitoring with a port named monitoring.)
I confirmed this regex matches what is found in my cluster:

$ k describe svc lightstep-collector-collector-monitoring
Name:              lightstep-collector-collector-monitoring
Namespace:         opentelemetry
Labels:            app.kubernetes.io/component=opentelemetry-collector
                   app.kubernetes.io/instance=opentelemetry.lightstep-collector
                   app.kubernetes.io/managed-by=opentelemetry-operator
                   app.kubernetes.io/name=lightstep-collector-collector-monitoring
                   app.kubernetes.io/part-of=opentelemetry
                   app.kubernetes.io/version=8214b341cfa94db235b00c0c0800b19f71894cee
Annotations:       meta.helm.sh/release-name: lightstep
                   meta.helm.sh/release-namespace: opentelemetry
Selector:          app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=opentelemetry.lightstep-collector,app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/name=lightstep-collector-collector,app.kubernetes.io/part-of=opentelemetry,app.kubernetes.io/version=8214b341cfa94db235b00c0c0800b19f71894cee
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.27.246.138
IPs:               10.27.246.138
Port:              monitoring  8888/TCP
TargetPort:        8888/TCP
Endpoints:         10.24.0.55:8888,10.24.1.69:8888,10.24.7.98:8888
Session Affinity:  None
Events:            <none>
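
Since the endpoints role in kubernetes_sd derives __meta_kubernetes_endpoint_port_name from the named port on the Endpoints object, that can be checked directly as well (a quick sketch; this should print monitoring for the service above):

$ kubectl get endpoints lightstep-collector-collector-monitoring -n opentelemetry \
    -o jsonpath='{.subsets[*].ports[*].name}'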

The Issue:

Looking at the target allocator, I can view the jobs it discovered by running:

$ curl http://lightstep-collector-targetallocator:80/jobs
{"serviceMonitor/opentelemetry/lightstep-collector-servicemonitor/0":{"_link":"/jobs/serviceMonitor%2Fopentelemetry%2Flightstep-collector-servicemonitor%2F0/targets"},"serviceMonitor/testapp/testapp/0":{"_link":"/jobs/serviceMonitor%2Ftestapp%2Ftestapp%2F0/targets"}}

But when I look at the targets assigned to one of the StatefulSet collector pods, I am not seeing the __meta_kubernetes_endpoint_port_name label:

$ curl http://lightstep-collector-targetallocator:80/jobs/serviceMonitor%2Fopentelemetry%2Flightstep-collector-servicemonitor%2F0/targets?collector_id=lightstep-collector-collector-1
[
  {
    "targets": [
      "10.24.7.98:4317",
      "10.24.0.55:8888",
      "10.24.7.98:8888"
    ],
    "labels": {
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_component": "opentelemetry-collector",
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_instance": "opentelemetry.lightstep-collector",
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by": "opentelemetry-operator",
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_name": "lightstep-collector-collector",
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of": "opentelemetry",
      "__meta_kubernetes_endpoints_label_app_kubernetes_io_version": "8214b341cfa94db235b00c0c0800b19f71894cee",
      "__meta_kubernetes_endpoints_label_service_kubernetes_io_headless": "",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component": "true",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance": "true",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by": "true",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name": "true",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of": "true",
      "__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version": "true",
      "__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless": "true",
      "__meta_kubernetes_endpoints_name": "lightstep-collector-collector-headless",
      "__meta_kubernetes_namespace": "opentelemetry",
      "__meta_kubernetes_service_annotation_meta_helm_sh_release_name": "lightstep",
      "__meta_kubernetes_service_annotation_meta_helm_sh_release_namespace": "opentelemetry",
      "__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name": "lightstep-collector-collector-headless-tls",
      "__meta_kubernetes_service_annotationpresent_meta_helm_sh_release_name": "true",
      "__meta_kubernetes_service_annotationpresent_meta_helm_sh_release_namespace": "true",
      "__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name": "true",
      "__meta_kubernetes_service_label_app_kubernetes_io_component": "opentelemetry-collector",
      "__meta_kubernetes_service_label_app_kubernetes_io_instance": "opentelemetry.lightstep-collector",
      "__meta_kubernetes_service_label_app_kubernetes_io_managed_by": "opentelemetry-operator",
      "__meta_kubernetes_service_label_app_kubernetes_io_name": "lightstep-collector-collector",
      "__meta_kubernetes_service_label_app_kubernetes_io_part_of": "opentelemetry",
      "__meta_kubernetes_service_label_app_kubernetes_io_version": "8214b341cfa94db235b00c0c0800b19f71894cee",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_component": "true",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance": "true",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by": "true",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_name": "true",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of": "true",
      "__meta_kubernetes_service_labelpresent_app_kubernetes_io_version": "true",
      "__meta_kubernetes_service_name": "lightstep-collector-collector-headless"
    }
  }
]
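
A quick way to confirm that no port-related meta labels come back from the target allocator at all (a sketch using jq against the same endpoint and collector_id as above; it returns an empty list for the response shown):

$ curl -s 'http://lightstep-collector-targetallocator:80/jobs/serviceMonitor%2Fopentelemetry%2Flightstep-collector-servicemonitor%2F0/targets?collector_id=lightstep-collector-collector-1' \
    | jq '[.[].labels | keys[]] | map(select(test("port")))'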

But the labels are found on the Prometheus side of things at http://localhost:9090/service-discovery:

serviceMonitor/opentelemetry/lightstep-collector-servicemonitor/0
Discovered Labels:
__address__="10.24.0.61:8888"
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="lightstep-collector-collector-1"
__meta_kubernetes_endpoint_node_name="gke-dev-mohosman-default-pool-b9cc4014-z356"
__meta_kubernetes_endpoint_port_name="monitoring"
__meta_kubernetes_endpoint_port_protocol="TCP"
...
...
...
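
The same can be seen via the Prometheus API instead of the UI (a sketch, again assuming the port-forward to localhost:9090; scrapePool and discoveredLabels are fields of the /api/v1/targets response):

$ curl -s http://localhost:9090/api/v1/targets \
    | jq '.data.activeTargets[]
          | select(.scrapePool == "serviceMonitor/opentelemetry/lightstep-collector-servicemonitor/0")
          | .discoveredLabels | with_entries(select(.key | test("endpoint_port")))'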

I have also reproduced this issue with other collector services running in my cluster, such as lightstep-collector-collector and lightstep-collector-collector-headless. In all cases the __meta_kubernetes_endpoint_port_name label is dropped in the target allocator even though I can observe it in Prometheus. I'm unsure what the issue is.

@jaronoff97
Contributor

This seems to be related to this issue, which has a PR being worked on here.

@moh-osman3
Contributor Author

This issue was also resolved in ab00e8c for issue #948.
