[kubeletstatsreceiver] had a breaking resource label change on container metrics starting in v0.52.0 #10842
Comments
@dmitryax this is fixed by updating the resource attribute in metadata.yaml, right? Which is the correct semantic convention, |
This change shouldn't be part of migration to the metrics builder. |
I don't think |
If we currently use the same value for |
We do. So the resource attribute should be updated in metadata.yaml. @jvoravong is that something you can do? |
I submitted a fix #10848 |
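The fix discussed above would change the resource attribute name declared in the receiver's metadata.yaml. A minimal sketch of what such an entry could look like (the exact schema belongs to the collector's metadata generator and may differ between versions; field names here are illustrative):

```yaml
# Hypothetical excerpt of receiver/kubeletstatsreceiver/metadata.yaml:
# declare the container-level resource attribute under the k8s.* name
# that v0.51.0 and earlier emitted, instead of container.name.
resource_attributes:
  k8s.container.name:
    description: The name of the container.
    type: string
```

The actual change landed in #10848; consult that PR for the authoritative diff.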
Describe the bug
After PR #9744 was merged, metrics for containers running in Kubernetes pods had a label change. This is a breaking change: several default dashboards at my company stopped functioning properly because of it.
The issue: in v0.51.0 and earlier, the label k8s.container.name was used on container metrics. Starting in v0.52.0, the label container.name is used on container metrics instead of k8s.container.name.
It is not trivial for users of the kubeletstats receiver to migrate monitoring content (alerts, dashboards, etc.) without advance notice.
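Until a release restores the old label, affected users could rename the attribute back inside the collector pipeline. A sketch using the contrib resource processor (the processor and its insert/delete actions exist in opentelemetry-collector-contrib; whether you want this rename, and where it sits in your pipeline, is your call):

```yaml
# Workaround sketch: copy container.name back to k8s.container.name
# on resources emitted by the kubeletstats receiver, then drop the
# new attribute so dashboards keyed on k8s.container.name keep working.
processors:
  resource:
    attributes:
      - key: k8s.container.name
        from_attribute: container.name
        action: insert   # only adds the key if it is not already present
      - key: container.name
        action: delete

service:
  pipelines:
    metrics:
      receivers: [kubeletstats]
      processors: [resource]
      exporters: [otlp]   # exporter shown here is just a placeholder
```

Note this renames the attribute on every resource flowing through the pipeline, so scope the pipeline accordingly if other receivers legitimately emit container.name.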
PR: [receiver/kubeletstats] Migrate kubeletstatsreceiver to the new Metrics Builder #9744
kubeletstatsreceiver v0.51.0:
opentelemetry-collector-contrib/receiver/kubeletstatsreceiver/internal/kubelet/resource.go
Line 40 in 0904c58
kubeletstatsreceiver v0.52.0:
opentelemetry-collector-contrib/receiver/kubeletstatsreceiver/internal/kubelet/resource.go
Line 32 in 9b58adf
Steps to reproduce
Deploy the kubeletstats receiver v0.51.0 to a Kubernetes cluster, record some container metrics with label values.
Deploy the kubeletstats receiver v0.52.0 to a Kubernetes cluster, record some container metrics with label values.
Compare the metrics labels.
Example of label difference:
kubeletstatsreceiver v0.51.0 Sample container metric:
StartTimestamp: 2022-05-27 16:51:14 +0000 UTC
Timestamp: 2022-06-08 20:09:57.352340611 +0000 UTC
Value: 0
ResourceMetrics #4
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource labels:
-> k8s.pod.uid: STRING(----)
-> k8s.pod.name: STRING(dns-controller--)
-> k8s.namespace.name: STRING(kube-system)
-> k8s.container.name: STRING(dns-controller)
-> container.id: STRING()
-> cloud.provider: STRING(aws)
-> cloud.platform: STRING(aws_ec2)
-> cloud.region: STRING(us-west-2)
-> cloud.account.id: STRING()
-> cloud.availability_zone: STRING(us-west-2a)
-> host.id: STRING(i-)
-> host.image.id: STRING(ami-)
-> host.type: STRING(m3.xlarge)
-> host.name: STRING(ip----.us-west-2.compute.internal)
-> os.type: STRING(linux)
-> k8s.node.name: STRING(ip----.us-west-2.compute.internal)
-> k8s.cluster.name: STRING(********-aws)
kubeletstatsreceiver v0.52.0 sample container metric:
StartTimestamp: 2022-05-27 16:52:46 +0000 UTC
Timestamp: 2022-06-08 20:18:30.183591147 +0000 UTC
Value: 5373952
ResourceMetrics #21
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource labels:
-> k8s.pod.uid: STRING(----)
-> k8s.pod.name: STRING(ebs-csi-node-)
-> k8s.namespace.name: STRING(kube-system)
-> container.name: STRING(ebs-plugin)
-> container.id: STRING()
-> cloud.provider: STRING(aws)
-> cloud.platform: STRING(aws_ec2)
-> cloud.region: STRING(us-west-2)
-> cloud.account.id: STRING()
-> cloud.availability_zone: STRING(us-west-2a)
-> host.id: STRING(i-)
-> host.image.id: STRING(ami-)
-> host.type: STRING(m3.xlarge)
-> host.name: STRING(ip----30.us-west-2.compute.internal)
-> os.type: STRING(linux)
-> k8s.node.name: STRING(ip----.us-west-2.compute.internal)
-> k8s.cluster.name: STRING(********-aws)
What did you expect to see?
Any of the following:
What did you see instead?
All container resource metrics now use the container.name label.
What version did you use?
Version: (v0.51.0, v0.52.0)
What config did you use?
Config: (e.g. the yaml config file)
Environment
Kubernetes, AWS