Avoid watching all secrets in the cluster #1274
Conversation
I just saw that the Lister was introduced to avoid throttling :/
Yeah, this one will require a bit of thinking. Option 2 would simplify the implementation, but it is a breaking change for users, so I'm leaning towards option 1.
Yes, I agree. I think the best is to have a single place where the code calls
@guillaumerose Nice, yeah! An LRU cache should be good enough for our use case.
I added the LRU cache. It stores up to 1024 secrets for a maximum of 5s. That way, if the listener receives a batch of notifications from the same repo, it only calls the apiserver once. I don't cache errors: if the secret is incorrect or absent, it will still issue a call to the apiserver. WDYT?
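To make the idea concrete, here is a minimal sketch of a size-bounded, TTL-bounded secret cache along these lines. The names (`secretCache`, `newSecretCache`) and the eviction details are illustrative assumptions, not the PR's actual code.

```go
// Minimal sketch of a bounded, TTL-limited LRU cache for webhook secrets.
// Names and parameters are illustrative, not taken from the PR.
package cache

import (
	"container/list"
	"sync"
	"time"
)

type entry struct {
	key     string
	value   []byte
	addedAt time.Time
}

type secretCache struct {
	mu         sync.Mutex
	maxEntries int           // e.g. 1024, as discussed in this PR
	ttl        time.Duration // e.g. 5 * time.Second
	ll         *list.List    // most recently used entries at the front
	items      map[string]*list.Element
}

func newSecretCache(maxEntries int, ttl time.Duration) *secretCache {
	return &secretCache{
		maxEntries: maxEntries,
		ttl:        ttl,
		ll:         list.New(),
		items:      make(map[string]*list.Element),
	}
}

// Get returns a cached secret only if it is present and younger than the TTL.
func (c *secretCache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	e := el.Value.(*entry)
	if time.Since(e.addedAt) > c.ttl {
		// Expired: drop it so the caller falls back to the apiserver.
		c.ll.Remove(el)
		delete(c.items, key)
		return nil, false
	}
	c.ll.MoveToFront(el)
	return e.value, true
}

// Add stores a secret, evicting the least recently used entry when full.
// Errors are never cached; only successful lookups end up here.
func (c *secretCache) Add(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		e := el.Value.(*entry)
		e.value = value
		e.addedAt = time.Now()
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key: key, value: value, addedAt: time.Now()})
	if c.ll.Len() > c.maxEntries {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}
```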
Looks good. I think 5s might be reasonable to start with, though we might make this configurable later. Did we do some testing with a bunch of concurrent requests? One question I have is how effective this will be for a bunch of requests (e.g. for the same event, multiple triggers) that arrive at the same time. Since each incoming request is processed in its own goroutine, I wonder if this will result in all (or a majority of) cache misses. See #594 (comment)
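One common way to address that simultaneous-miss concern, shown here purely as an illustration and not as what this PR implements, is to coalesce concurrent lookups for the same key with golang.org/x/sync/singleflight so that only one apiserver call is in flight per secret:

```go
// Illustration only: coalescing concurrent secret lookups so that N goroutines
// asking for the same key trigger a single apiserver call and share the result.
package interceptors

import (
	"context"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// getSecretCoalesced is a hypothetical helper; fetch stands in for the real
// lookup (cache check plus apiserver call).
func getSecretCoalesced(ctx context.Context, key string, fetch func(context.Context) ([]byte, error)) ([]byte, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		return fetch(ctx)
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```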
@dibyom I can see the cache is working as configured: for 5s the cache hit is true 4-5 times, and then it resets.
/lgtm
/test pull-tekton-triggers-integration-tests
/lgtm
🎉
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dibyom, savitaashture. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
This cache was unused since the req parameter was always nil.
This will allow us to choose between an informer and a kubernetes client. The current implementation is using an informer.
Instead of comparing a nil secret with the given secret, it now returns a meaningful error message.
Instead of using an informer, it now uses a k8s client. This avoids loading all secrets of the cluster into controller memory. Tests still use an informer for convenience.
It reduces the load on the k8s apiserver when a lot of webhooks arrive from the same project.
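For reference, a per-request lookup with a Kubernetes client (rather than an informer) could look roughly like the sketch below. The helper name `getSecretToken` and the error wrapping are made up for illustration; the PR's real code may differ.

```go
// Hypothetical sketch: fetch a single secret on demand with a Kubernetes client
// instead of keeping an informer cache of every secret in the cluster.
package interceptors

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func getSecretToken(ctx context.Context, cs kubernetes.Interface, ns, name, key string) ([]byte, error) {
	// One GET against the apiserver per webhook, instead of a cluster-wide watch.
	secret, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("error getting secret %s/%s: %w", ns, name, err)
	}
	token, ok := secret.Data[key]
	if !ok {
		return nil, fmt.Errorf("secret %s/%s has no key %q", ns, name, key)
	}
	return token, nil
}
```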
/lgtm
/test pull-tekton-triggers-integration-tests
Changes
Replace the secret informer with a secret getter.
This means that for each webhook, an API call is issued against the k8s apiserver to get the secret.
Previously, all secrets in the cluster were held in the interceptor's memory.
Before this change:
crictl stats reports 79.18MB with around 5k secrets of 2.5kB each, on a kind cluster.
After this change:
crictl stats reports 8.221MB
Proposal related to #1268
The other way to solve this would be to add a specific label to all secrets watched by the interceptor, but that would introduce a breaking change between two releases.
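For comparison, that rejected alternative would keep an informer but restrict what it lists and watches via a label selector. The label key `triggers.tekton.dev/secret` used below is hypothetical; requiring such a label on existing secrets is exactly the breaking change mentioned above.

```go
// Sketch of the label-based alternative (not what this PR does): an informer
// scoped to labeled secrets, so the rest of the cluster's secrets stay out of memory.
package interceptors

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

func newScopedSecretInformerFactory(cs kubernetes.Interface) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			// Hypothetical label: only secrets opted in for triggers are listed/watched.
			opts.LabelSelector = "triggers.tekton.dev/secret=true"
		}),
	)
}
```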
Submitter Checklist
These are the criteria that every PR should meet; please check them off as you review them:
See the contribution guide for more details.
Release Notes