Reuse events used for syncing watchers #17563
Conversation
Do we expect to send events to compacted watchers (the slowest ones), though? I think it should not. WDYT?
I expect that minRev and rev of victims are above compactedRev; if not, we can just add a protection.
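To make the discussed protection concrete, here is a minimal, hypothetical sketch (guardVictims, the watcher struct, and ErrCompacted are illustrative names, not etcd's internals) of cancelling any victim watcher whose minRev falls at or below the compacted revision instead of serving it reused events:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrCompacted mirrors the kind of error returned when a watcher needs
// revisions that have already been compacted (illustrative, not etcd's code).
var ErrCompacted = errors.New("required revision has been compacted")

type watcher struct{ minRev int64 }

// guardVictims sketches the protection discussed above: before reusing events
// for victim watchers, cancel any watcher whose minRev is at or below
// compactedRev instead of trying to serve it from the reused event slice.
func guardVictims(victims []*watcher, compactedRev int64) (serve []*watcher, cancelled map[*watcher]error) {
	cancelled = make(map[*watcher]error)
	for _, w := range victims {
		if w.minRev <= compactedRev {
			cancelled[w] = ErrCompacted
			continue
		}
		serve = append(serve, w)
	}
	return serve, cancelled
}

func main() {
	victims := []*watcher{{minRev: 2}, {minRev: 5}}
	serve, cancelled := guardVictims(victims, 3)
	fmt.Printf("serve %d watcher(s), cancel %d with ErrCompacted\n", len(serve), len(cancelled))
}
```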
Codecov Report

@@            Coverage Diff             @@
##             main   #17563      +/-   ##
==========================================
- Coverage   68.80%   68.71%   -0.09%
==========================================
  Files         420      420
  Lines       35599    35619      +20
==========================================
- Hits        24494    24476      -18
- Misses       9675     9712      +37
- Partials     1430     1431       +1

... and 22 files with indirect coverage changes.
Ready to review
/retest
{minRev: 2, maxRev: 3, expectEvents: expectEvents[0:1]},
{minRev: 2, maxRev: 4, expectEvents: expectEvents[0:2]},
These two cases are duplicated? See lines 238-239. Would it be better to sort the cases by minRev or maxRev?
Yes, it's intentional. We group multiple cases into scenarios; I added comments to make them easier to see. This is to better test the reuse, as the function call has side effects that modify evs.
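For illustration, here is a self-contained sketch of that scenario grouping. consumeEvents is a hypothetical stand-in for the real function under test; like it, it mutates evs as a side effect, which is why seemingly duplicated cases can expect different results depending on the scenario they belong to:

```go
package main

import "testing"

type event struct{ rev int64 }

// consumeEvents is a hypothetical stand-in for the function under test: it
// returns the events in [minRev, maxRev) and, as a side effect, drops every
// event below maxRev from evs, so later calls in the same scenario see a
// modified slice.
func consumeEvents(evs *[]event, minRev, maxRev int64) []event {
	var matched []event
	kept := (*evs)[:0]
	for _, ev := range *evs {
		if ev.rev >= minRev && ev.rev < maxRev {
			matched = append(matched, ev)
		}
		if ev.rev >= maxRev {
			kept = append(kept, ev)
		}
	}
	*evs = kept
	return matched
}

func TestConsumeEventsScenarios(t *testing.T) {
	type step struct {
		minRev, maxRev int64
		want           int
	}
	scenarios := [][]step{
		// Scenario 1: both steps run against the same evs slice, so the
		// second step only sees what the first one left behind.
		{{minRev: 2, maxRev: 3, want: 1}, {minRev: 2, maxRev: 4, want: 1}},
		// Scenario 2: the same {2, 4} range against a fresh evs slice
		// yields a different result, which is why the duplicated-looking
		// cases are intentional.
		{{minRev: 2, maxRev: 4, want: 2}},
	}
	for i, scenario := range scenarios {
		evs := []event{{rev: 2}, {rev: 3}, {rev: 4}}
		for j, s := range scenario {
			if got := consumeEvents(&evs, s.minRev, s.maxRev); len(got) != s.want {
				t.Errorf("scenario %d, step %d: got %d events, want %d", i, j, len(got), s.want)
			}
		}
	}
}
```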
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
LGTM
It would be wonderful if we had more performance data (e.g. from a K8s scale test) to showcase the benefit of such an improvement.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ahrtr, serathius.
I don't think K8s scale tests come anywhere close to pushing etcd watch into congestion; if they did, K8s would not work. See kubernetes/kubernetes#123448. That's why I'm using benchmarks. They are pretty good, however without an easily accessible metric they are not used. I'm thinking about doing some MVP to run a benchmark and upload the results to K8s perf-dash, however the problem is that it has a very strict schema requirement, which doesn't match etcd.
Part of #16839
Improve etcd behavior during watch congestion by reducing the memory needed.
Congestion causes watch requests to back up and the resync loop to allocate tons of memory. This patch has been shown to reduce the memory needed by 5-10 times.
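As a rough illustration of the idea (hypothetical names, not etcd's actual watcher store code): instead of every unsynced watcher reading and allocating its own event slice, events are read once from the smallest minRev and each watcher reuses a sub-slice of that shared buffer.

```go
package main

import "fmt"

type event struct{ rev int64 }

type watcher struct{ minRev int64 }

// readEventsSince stands in for reading events from the backend: each call
// allocates a fresh slice, which is what made the resync loop memory-hungry
// when it ran once per watcher.
func readEventsSince(backend []event, rev int64) []event {
	out := make([]event, 0, len(backend))
	for _, ev := range backend {
		if ev.rev >= rev {
			out = append(out, ev)
		}
	}
	return out
}

// syncPerWatcher: the naive shape, one read and one allocation per watcher.
func syncPerWatcher(backend []event, watchers []*watcher) {
	for _, w := range watchers {
		evs := readEventsSince(backend, w.minRev)
		fmt.Printf("per-watcher: watcher(minRev=%d) allocated %d events\n", w.minRev, len(evs))
	}
}

// syncShared: read once from the smallest minRev, then let every watcher
// reuse a sub-slice of the shared buffer instead of allocating its own copy.
func syncShared(backend []event, watchers []*watcher) {
	minRev := watchers[0].minRev
	for _, w := range watchers {
		if w.minRev < minRev {
			minRev = w.minRev
		}
	}
	shared := readEventsSince(backend, minRev) // single allocation
	for _, w := range watchers {
		i := 0
		for i < len(shared) && shared[i].rev < w.minRev {
			i++
		}
		fmt.Printf("shared: watcher(minRev=%d) reuses %d events\n", w.minRev, len(shared[i:]))
	}
}

func main() {
	backend := []event{{rev: 2}, {rev: 3}, {rev: 4}, {rev: 5}}
	watchers := []*watcher{{minRev: 2}, {minRev: 4}, {minRev: 5}}
	syncPerWatcher(backend, watchers)
	syncShared(backend, watchers)
}
```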
Tested using the command
go run ./tools/benchmark/main.go watch-latency --watch-per-stream 1000 --streams 1 --put-total 200 --val-size 100000
Parameters were adjusted so that the total object size is about 20 MB (200 puts × 100 KB values).