Core: reduce memory footprint #982
Conversation
The previous client cache locking scheme was not thread safe and allocated more locks than are typically needed. This change replaces that approach with the KeyMutex provided by the k8s.io/utils package, so locks are now a pooled resource (a minimal sketch follows below).

Other fixes:
- Update the invalid Bitnami Helm chart repo for Postgres. We should phase out its use.
- Bump TF Helm to the latest version.
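A minimal sketch of pooled per-key locking with the KeyMutex from k8s.io/utils/keymutex. The cache type and method names below are illustrative placeholders, not VSO's actual client cache; only the keymutex calls reflect that package's API:

```go
package main

import (
	"fmt"
	"sync"

	"k8s.io/utils/keymutex"
)

type clientCache struct {
	// NewHashed(0) sizes the pool to runtime.NumCPU(); keys are hashed onto a
	// fixed set of mutexes instead of allocating one lock per key.
	locks keymutex.KeyMutex
	store sync.Map // cache key -> client (placeholder value type)
}

func newClientCache() *clientCache {
	return &clientCache{locks: keymutex.NewHashed(0)}
}

// GetOrCreate serializes expensive client construction per cache key while
// letting unrelated keys proceed concurrently (modulo hash collisions).
func (c *clientCache) GetOrCreate(key string, create func() any) any {
	c.locks.LockKey(key)
	defer c.locks.UnlockKey(key) // error intentionally ignored in this sketch

	if v, ok := c.store.Load(key); ok {
		return v
	}
	v := create()
	c.store.Store(key, v)
	return v
}

func main() {
	c := newClientCache()
	v := c.GetOrCreate("ns/vault-client-1", func() any { return "new-client" })
	fmt.Println(v)
}
```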
Previously, VSO was caching and registering full object watchers on K8s Secrets. While watching K8s Secrets is necessary for automated remediation when a destination secret is deleted from the cluster, doing so can result in OOM conditions for the operator, since each Secret's data contributes to the operator's total memory. This change does the following (sketched below):
- disables caching of K8s Secrets in the manager's client
- only watches for Secret metadata changes
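A minimal sketch of that wiring using controller-runtime (v0.15+ APIs): exclude Secrets from the manager's client cache and watch Secrets by metadata only. The no-op reconciler and the ConfigMap "primary" type are placeholders, not VSO's real resources or handlers:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
)

type noopReconciler struct{}

func (r *noopReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Real logic would re-sync the destination secret if it was deleted.
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Client: client.Options{
			Cache: &client.CacheOptions{
				// Don't cache full Secret objects in the manager's client;
				// Secret reads go directly to the API server instead.
				DisableFor: []client.Object{&corev1.Secret{}},
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// WatchesMetadata registers a metadata-only informer for Secrets, so
	// Secret data never enters the cache, while delete/update events still
	// enqueue requests keyed by the Secret's name and namespace.
	err = ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}). // placeholder primary resource
		WatchesMetadata(&corev1.Secret{}, &handler.EnqueueRequestForObject{}).
		Complete(&noopReconciler{})
	if err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```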
The doc string for WatchesMetadata makes it sound like we should use
Yeah, that might make those calls a bit lighter weight. We can probably address that in follow-on work.
Remaining tasks:
Reproduction steps (one hypothetical way to drive Secret-cache growth is sketched after these steps):
Run:
Wait for the manager to be OOM killed
Test:
Wait for VSO to come up; it should not be OOM killed
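One hypothetical way to exercise the memory issue during reproduction, assuming the operator runs with a modest memory limit: bulk-create Secrets with sizable payloads so that a full-object Secret cache grows quickly. The namespace, count, and payload size below are arbitrary and are not the commands used in this PR:

```go
package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ~512 KiB of data per Secret; 500 Secrets adds on the order of 250 MiB
	// to any component that caches full Secret objects.
	payload := bytes.Repeat([]byte("x"), 512*1024)
	for i := 0; i < 500; i++ {
		s := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{
				Name:      fmt.Sprintf("bloat-%04d", i),
				Namespace: "default",
			},
			Data: map[string][]byte{"blob": payload},
		}
		if _, err := cs.CoreV1().Secrets("default").Create(context.Background(), s, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```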