Only watch metadata for ReplicaSets in K8s provider #5699

Merged
merged 1 commit on Oct 8, 2024
32 changes: 32 additions & 0 deletions changelog/fragments/1728039144-k8s-replicaset-onlymeta.yaml
@@ -0,0 +1,32 @@
# Kind can be one of:
# - breaking-change: a change to previously-documented behavior
# - deprecation: functionality that is being removed in a later release
# - bug-fix: fixes a problem in a previous version
# - enhancement: extends functionality but does not break or fix existing behavior
# - feature: new functionality
# - known-issue: problems that we are aware of in a given version
# - security: impacts on the security of a product or a user’s deployment.
# - upgrade: important information for someone upgrading from a prior version
# - other: does not fit into any of the other categories
kind: enhancement

# Change summary; a 80ish characters long description of the change.
summary: Only watch metadata for ReplicaSets in K8s provider

# Long description; in case the summary is not enough to describe the change
# this field accommodate a description without length limits.
# NOTE: This field will be rendered only for breaking-change and known-issue kinds at the moment.
#description:

# Affected component; usually one of "elastic-agent", "fleet-server", "filebeat", "metricbeat", "auditbeat", "all", etc.
component: elastic-agent

# PR URL; optional; the PR number that added the changeset.
# If not present is automatically filled by the tooling finding the PR where this changelog fragment has been added.
# NOTE: the tooling supports backports, so it's able to fill the original PR number instead of the backport PR number.
# Please provide it if you are adding a fragment for a different PR.
#pr: https://github.com/owner/repo/1234

# Issue URL; optional; the GitHub issue related to this changeset (either closes or is part of).
# If not present is automatically filled by the tooling with the issue linked to the PR number.
#issue: https://github.com/owner/repo/1234
22 changes: 17 additions & 5 deletions internal/pkg/composable/providers/kubernetes/pod.go
@@ -9,6 +9,8 @@ import (
"sync"
"time"

"k8s.io/apimachinery/pkg/runtime/schema"

"github.com/elastic/elastic-agent-autodiscover/kubernetes"
"github.com/elastic/elastic-agent-autodiscover/kubernetes/metadata"
"github.com/elastic/elastic-agent-autodiscover/utils"
@@ -104,11 +106,21 @@ func NewPodEventer(
// Deployment -> Replicaset -> Pod
// CronJob -> job -> Pod
if metaConf.Deployment {
replicaSetWatcher, err = kubernetes.NewNamedWatcher("resource_metadata_enricher_rs", client, &kubernetes.ReplicaSet{}, kubernetes.WatchOptions{
SyncTimeout: cfg.SyncPeriod,
Namespace: cfg.Namespace,
HonorReSyncs: true,
}, nil)
metadataClient, err := kubernetes.GetKubernetesMetadataClient(cfg.KubeConfig, cfg.KubeClientOptions)
if err != nil {
logger.Errorf("Error creating metadata client for %T due to error %+v", &kubernetes.Namespace{}, err)
}
// use a custom watcher here, so we can provide a transform function and limit the data we're storing
replicaSetWatcher, err = kubernetes.NewNamedMetadataWatcher(
"resource_metadata_enricher_rs",
client,
metadataClient,
schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"},
kubernetes.WatchOptions{
SyncTimeout: cfg.SyncPeriod,
Namespace: cfg.Namespace,
HonorReSyncs: true,
}, nil, metadata.RemoveUnnecessaryReplicaSetData)
Contributor
So the whole idea here is based on this function https://github.com/elastic/elastic-agent-autodiscover/pull/111/files#diff-745348e532593174e8280a273af14d0a76f379bbeb48e782d66c653e4e36d994R103 that computes only the needed metadata.

Quick question: why did we need to create a specific watcher, NewNamedMetadataWatcher, instead of retrieving the same info with the old client?

Contributor
@pkoutsovasilis Oct 4, 2024
The idea is not only the transform func; another essential bit is the use of PartialObjectMetadata, which the metadata-based client requests for this type from the API server. A relevant comment can be found here 🙂
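For context, here is a minimal sketch (plain client-go with an assumed local kubeconfig, not the provider's actual wiring) of what such a metadata-based request looks like: the API server returns PartialObjectMetadata for the ReplicaSets, so no spec or status is ever transferred or deserialized.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig; the provider builds its client config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	metaClient, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rsGVR := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"}

	// Every item is a metav1.PartialObjectMetadata, never a full ReplicaSet object.
	list, err := metaClient.Resource(rsGVR).Namespace("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range list.Items {
		fmt.Println(rs.Name, rs.OwnerReferences)
	}
}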

Contributor Author
The old client operates on whole resources. So it always fetches the entire ReplicaSet resource, and the informer gets an update whenever anything in that resource changes - for example when it scales up or down. For each such update, we need to deserialize the whole resource into memory, and then we only make use of the name and owner references. In a busy cluster, this adds up to a lot of memory churn that is completely unnecessary.

The new watcher only subscribes to changes to metadata, so it sidesteps the problem. However, to achieve this, we need a special K8s client which only fetches metadata, and a special informer that operates on PartialObjectMetadata structs. This is what the new watcher is for.
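A minimal sketch of that mechanism using plain client-go (not the elastic-agent-autodiscover wrapper; names and the kubeconfig setup here are illustrative assumptions): a metadata informer whose cache only ever holds PartialObjectMetadata, plus a transform in the spirit of metadata.RemoveUnnecessaryReplicaSetData that trims each cached object down to the name, namespace, and owner references the enricher actually uses.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig; the provider wires its own client config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	metaClient, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rsGVR := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"}
	factory := metadatainformer.NewSharedInformerFactory(metaClient, 10*time.Minute)
	informer := factory.ForResource(rsGVR).Informer()

	// Trim cached objects further: only the fields pod enrichment needs survive.
	// Error ignored for brevity; SetTransform only fails once the informer has started.
	_ = informer.SetTransform(func(obj interface{}) (interface{}, error) {
		m, ok := obj.(*metav1.PartialObjectMetadata)
		if !ok {
			return obj, nil
		}
		return &metav1.PartialObjectMetadata{
			TypeMeta: m.TypeMeta,
			ObjectMeta: metav1.ObjectMeta{
				Name:            m.Name,
				Namespace:       m.Namespace,
				OwnerReferences: m.OwnerReferences,
				ResourceVersion: m.ResourceVersion,
			},
		}, nil
	})

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			// Handlers see *metav1.PartialObjectMetadata, never a full *appsv1.ReplicaSet.
			m := obj.(*metav1.PartialObjectMetadata)
			fmt.Println("replicaset:", m.Namespace, m.Name, m.OwnerReferences)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	time.Sleep(time.Minute) // observe events for a while; a real provider ties this to its lifecycle
	close(stop)
}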

if err != nil {
logger.Errorf("Error creating watcher for %T due to error %+v", &kubernetes.Namespace{}, err)
}