[receiver/kubeletstats] Migrate kubeletstatsreceiver to the new Metrics Builder #9744
Conversation
I submitted a small refactor to unblock this, PTAL: #9745
Please add a changelog entry as well
@TylerHelmuth #9745 is merged, please rebase
@dmitryax I removed all the resource label stuff, but the changes were a little more widespread than just metadata.go. If the PR is too big now, I can revert the change so that this PR handles only the new metrics builder (handling the resource-label-to-ResourceOption conversion with that switch statement I had before), and a second PR could deal with the resource labels.
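For illustration, here is a minimal sketch of the switch-based label-to-ResourceOption translation mentioned above. Only `metadata.WithK8sNodeName` appears elsewhere in this thread; the other helper names and label keys are assumptions for the sketch, not the exact PR code:

```go
// Hypothetical sketch of translating accumulated resource labels into
// mdatagen ResourceOptions. Only WithK8sNodeName is confirmed by this
// thread; the other helpers and label keys are assumed for illustration.
func resourceOptions(labels map[string]string) []metadata.ResourceOption {
	var opts []metadata.ResourceOption
	for key, value := range labels {
		switch key {
		case "k8s.node.name":
			opts = append(opts, metadata.WithK8sNodeName(value))
		case "k8s.pod.uid":
			opts = append(opts, metadata.WithK8sPodUID(value))
		case "k8s.pod.name":
			opts = append(opts, metadata.WithK8sPodName(value))
		case "k8s.namespace.name":
			opts = append(opts, metadata.WithK8sNamespaceName(value))
		default:
			// Unrecognized labels are skipped in this sketch.
		}
	}
	return opts
}
```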
It looks very nice! Just nits about removed newlines
Also, I tried this build, and it looks like it doesn't produce any data. Not sure what's wrong from looking at the code.
I will take a look.
Did you check if it produces data now?
Still working towards that.
So I can confirm the API calls are returning the expected data, but after the call to MetricsData, r.mb.Emit() returns an empty pmetric.Metrics. Still investigating why.
Still trying to debug this issue. Strangely, I still get no data even if I move the accumulation directly into the scrape function:

```go
func (r *kubletScraper) scrape(context.Context) (pmetric.Metrics, error) {
	summary, err := r.statsProvider.StatsSummary()
	r.logger.Info("summary retrieved", zap.Any("kubeletstats", summary))
	if err != nil {
		r.logger.Error("call to /stats/summary endpoint failed", zap.Error(err))
		return pmetric.Metrics{}, err
	}

	r.logger.Info("Preparing metrics")
	currentTime := pcommon.NewTimestampFromTime(time.Now())
	if summary.Node.Memory != nil {
		if summary.Node.Memory.AvailableBytes != nil {
			r.mb.RecordK8sNodeMemoryAvailableDataPoint(currentTime, int64(*summary.Node.Memory.AvailableBytes))
			r.logger.Info("RecordK8sNodeMemoryAvailableDataPoint")
		}
		if summary.Node.Memory.UsageBytes != nil {
			r.mb.RecordK8sNodeMemoryUsageDataPoint(currentTime, int64(*summary.Node.Memory.UsageBytes))
			r.logger.Info("RecordK8sNodeMemoryUsageDataPoint")
		}
		if summary.Node.Memory.RSSBytes != nil {
			r.mb.RecordK8sNodeMemoryRssDataPoint(currentTime, int64(*summary.Node.Memory.RSSBytes))
			r.logger.Info("RecordK8sNodeMemoryRssDataPoint")
		}
		if summary.Node.Memory.WorkingSetBytes != nil {
			r.mb.RecordK8sNodeMemoryWorkingSetDataPoint(currentTime, int64(*summary.Node.Memory.WorkingSetBytes))
			r.logger.Info("RecordK8sNodeMemoryWorkingSetDataPoint")
		}
		if summary.Node.Memory.PageFaults != nil {
			r.mb.RecordK8sNodeMemoryPageFaultsDataPoint(currentTime, int64(*summary.Node.Memory.PageFaults))
			r.logger.Info("RecordK8sNodeMemoryPageFaultsDataPoint")
		}
		if summary.Node.Memory.MajorPageFaults != nil {
			r.mb.RecordK8sNodeMemoryMajorPageFaultsDataPoint(currentTime, int64(*summary.Node.Memory.MajorPageFaults))
			r.logger.Info("RecordK8sNodeMemoryMajorPageFaultsDataPoint")
		}
	}

	// kubelet.PrepareMetricsData(r.logger, summary, metadata, r.metricGroupsToCollect, r.mb)
	data := r.mb.Emit(metadata.WithK8sNodeName(summary.Node.NodeName))
	r.logger.Info("metric prepared", zap.Any("kubeletstats", data))
	r.logger.Info("metric count", zap.Any("kubeletstats", data.MetricCount()))
	r.logger.Info("metric data point count", zap.Any("kubeletstats", data.DataPointCount()))
	r.logger.Info("resource metrics", zap.Any("kubeletstats", data.ResourceMetrics()))
	return data, nil
}
```

@dmitryax any ideas? I thought maybe the metrics were disabled in metadata.yaml, but it looks right to me. I will continue investigating tomorrow.
I know why. Looks like you missed setting the default value for the metrics settings in the config.
🤦 |
@dmitryax added default metrics settings and reverted to the metric slice accumulation that was happening before. Added a new struct to handle the different metrics builders. Deployed to GKE and confirmed data is produced.
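For reference, a minimal sketch of the two changes described above, assuming the generated `metadata` package exposes a `DefaultMetricsSettings()` function and a `MetricsBuilder` type; the config field and struct names are illustrative, not the exact PR code:

```go
// Hypothetical sketch: wire the generated defaults into the receiver's
// default config so each metric's Enabled flag is true unless overridden.
// Without this, every metric stays disabled and MetricsBuilder.Emit
// returns an empty pmetric.Metrics (the symptom debugged above).
func createDefaultConfig() *Config {
	return &Config{
		Metrics: metadata.DefaultMetricsSettings(),
	}
}

// Hypothetical sketch: one MetricsBuilder per resource kind so node, pod,
// container, and volume metrics can each be emitted under their own resource.
type metricsBuilders struct {
	NodeMetricsBuilder      *metadata.MetricsBuilder
	PodMetricsBuilder       *metadata.MetricsBuilder
	ContainerMetricsBuilder *metadata.MetricsBuilder
	OtherMetricsBuilder     *metadata.MetricsBuilder
}
```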
[receiver/kubeletstats] Migrate kubeletstatsreceiver to the new Metrics Builder (open-telemetry#9744)

* Updated to use v2 generated metrics
* Fix lint errors
* Update changelog
* Moved Node and Pod resources back to accumulator
* Replaced resource labels with ResourceOptions
* Fix lint
* Fix import spacing
* Update receiver/kubeletstatsreceiver/internal/metadata/metrics.go (Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>)
* Refactored scraper to emit using metrics builder
* Add default metrics
* Added multiple metrics builders
* Fix lint
* Regenerated v2 metrics
* Rename metric builders
* Updated to set new start time
* Update CHANGELOG.md
* Update CHANGELOG.md

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
Description:
Updates kubeletstatsreceiver to use the metrics builder. Attempted to reuse as many existing conditionals as possible to ensure all functionality is the same as before. Would've liked to remove the `labels` handling from `internal/kubelet/metadata.go` in favor of ResourceOptions, but there was a check for the persistent volume claim volume type in `internal/kubelet/resource.go` that ruined that idea. To leave that check in place, I kept `internal/kubelet/metadata.go` as is and opted to translate labels to ResourceOptions in `internal/kubelet/resource.go`.
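For illustration, a minimal sketch of how such translated options might be consumed, modeled on the `r.mb.Emit(metadata.WithK8sNodeName(...))` call shown earlier in the thread; `resourceOptions` is the hypothetical helper sketched above, not the exact PR code:

```go
// Hypothetical sketch: emit the accumulated data points for one resource,
// passing the translated ResourceOptions so the resulting ResourceMetrics
// carries the right resource attributes.
func emitForResource(mb *metadata.MetricsBuilder, labels map[string]string) pmetric.Metrics {
	return mb.Emit(resourceOptions(labels)...)
}
```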
Link to tracking Issue:
Resolves #7143
Testing:
Updated existing tests where needed. I did not add any new tests as there should be no new scenarios. All outputs should be the same as before.