Dramatic performance decrease in latest commit (4c6c506) #13068
The columns are:
Those commits only change test files? Which tests are we running here?
#13060 is a change that removed an optimization of count-only range reads.
My test runs a mixed read/write workload with txn-put and txn-range, but I was using a limit in the txn-range request to avoid fetching too much data:

```go
opts := []v3.OpOption{v3.WithRange(mixedTxnEndKey)}
if rangeConsistency == "s" {
	opts = append(opts, v3.WithSerializable())
}
opts = append(opts, v3.WithPrefix(), v3.WithLimit(mixedTxnRangeLimit))
req.op = v3.OpGet("", opts...)
req.isWrite = false
readOpsTotal++
```
The original patch (which is reverted in #13060) only changed the count-only range query path.
Let me try to locate the patch that introduced this issue.
The performance of non-count-only scenarios is also affected (the tr.s.kvindex.Revisions function). @gyuho
In the not-count case it seems we started to return
Before:
After:
Was 3.4 returning a count for not-count queries?
In 3.4 etcd was returning (see line 171 and lines 106 to 120 at commit d19fbe5).
The proposed change was implemented only after I had updated the integration tests of the Range function to test count with limit. Only after I had tested it on the release-3.4 branch did I propose it. To verify that I didn't make any mistake, I cherry-picked the tests to the release-3.4 branch (serathius@31c9a27) and ran the integration tests: https://travis-ci.com/github/serathius/etcd/builds/227769629

@ptabor, regarding the case that you have shown: please let me know if there is anything else I can help with.
Thank you for checking. It seems that this optimization requires a coordinated effort between k8s and etcd, e.g. in k8s:
For my understanding, #11990 impacted the Kubernetes chunking API, which relies on:

```go
var remainingItemCount *int64
// getResp.Count counts in objects that do not match the pred.
// Instead of returning inaccurate count for non-empty selectors, we return nil.
// Only set remainingItemCount if the predicate is empty.
if utilfeature.DefaultFeatureGate.Enabled(features.RemainingItemCount) {
	if pred.Empty() {
		c := int64(getResp.Count - pred.Limit)
		remainingItemCount = &c
	}
}
```

@ptabor @serathius @wilsonwang371 @tangcong
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
I was testing my changes and found that the latest main branch performance is very bad.

commit 4c6c506
Merge: c15af6d d669eb0
Author: Piotr Tabor ptab@google.com
Date: Tue Jun 1 17:19:05 2021 +0200

Compared with my previous data:

DATA | 0.0078 | 32 | 16 | 275.2097:35699.6517 | 275.7598:35770.8341 | 276.6643:35888.4949 | 280.6877:36410.3105 | 282.7565:36678.4655 |

Now it is:

DATA | 0.0078 | 32 | 16 | 71.3823:9259.6178 | 71.5968:9287.4621 | 71.4798:9272.2607 | 71.4114:9263.3926 | 71.8465:9319.8179 |

We can see that under the same conditions, read throughput dropped from 275 to 71.

@ptabor @gyuho, please take a look.