Opening this issue because certain meta port labels that I want to relabel and scrape metrics from are not showing up when I view the targets in the target allocator (though I can see the labels in Prometheus). As a result, metrics don't seem to be coming into the collector.
Background Info
I port-forwarded my Prometheus service to take a look locally. At http://localhost:9090/config I can confirm that the expected config exists, but I don't seem to be getting a match on `__meta_kubernetes_endpoint_port_name` (i.e. I have a service called `lightstep-collector-collector-monitoring` with a port named `monitoring`). I confirmed this regex matches what is found in my cluster.
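As a local sanity check on the regex, note that Prometheus applies relabel regexes fully anchored (effectively `^(?:regex)$`). The port name and regex below are from my setup:

```shell
# Prometheus anchors relabel regexes, so test with a whole-line match.
# "monitoring" is the port name on my lightstep-collector-collector-monitoring service.
port_name="monitoring"
regex="monitoring"   # the regex from my scrape config's relabel rule
if echo "$port_name" | grep -Eqx "(${regex})"; then
  echo "regex matches port name"
else
  echo "regex does not match"
fi
```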
The Issue:
Looking at the target allocator, I can see the jobs it discovered when I query it. But when I look at the targets assigned to one of the statefulset collector pods, I am not seeing the `__meta_kubernetes_endpoint_port_name` label. The labels do show up on the Prometheus side, though, at http://localhost:9090/service-discovery.
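For context, this is roughly how I query the allocator (a sketch; the target-allocator service name, job name, and collector pod name are placeholders from my setup). The allocator serves targets in http_sd-style JSON, and with this bug the `__meta_*` labels are absent from the returned `labels` map:

```shell
# Port-forward the target allocator service first (name is an assumption):
#   kubectl port-forward svc/lightstep-collector-targetallocator 8080:80
# List discovered jobs, then the targets assigned to one collector pod:
#   curl -s http://localhost:8080/jobs
#   curl -s "http://localhost:8080/jobs/<job>/targets?collector_id=lightstep-collector-collector-0"
#
# A trimmed sample of the kind of response I get back, which is missing the
# meta port label:
sample='[{"targets":["10.8.2.7:8888"],"labels":{"job":"monitoring","__address__":"10.8.2.7:8888"}}]'
if echo "$sample" | grep -q '__meta_kubernetes_endpoint_port_name'; then
  echo "meta port label present"
else
  echo "meta port label missing"
fi
```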
I have also reproduced this issue with other collector services running in my cluster, such as `lightstep-collector-collector` and `lightstep-collector-collector-headless`. In all instances, `__meta_kubernetes_endpoint_port_name` appears to be dropped in the target allocator even though I can observe the labels in Prometheus. I'm unsure what the issue is.
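By contrast, Prometheus itself still sees the label. Besides the /service-discovery page, this can be confirmed via the targets API (a sketch; assumes the earlier port-forward to localhost:9090, and the sample payload below is trimmed from my setup):

```shell
# Live check (needs the port-forward):
#   curl -s http://localhost:9090/api/v1/targets | grep -c '__meta_kubernetes_endpoint_port_name'
#
# A trimmed sample of what the API returns, where discoveredLabels DO carry
# the meta port label:
sample='{"data":{"activeTargets":[{"discoveredLabels":{"__meta_kubernetes_endpoint_port_name":"monitoring"}}]}}'
if echo "$sample" | grep -q '__meta_kubernetes_endpoint_port_name'; then
  echo "meta port label present in Prometheus"
else
  echo "meta port label missing"
fi
```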