
smartconnpool: Better handling for idle expiration #17757

Merged · 5 commits merged into vitessio:main from vmg/pool-idle on Feb 13, 2025

Conversation

@vmg (Collaborator) commented on Feb 12, 2025

Description

We've had a report of a performance regression when migrating from an old Vitess version to a newer version that contains the new smart connection pool implementation. The system in question has inconsistent (spiky) load patterns and quite stringent idle timeouts for connections.

Our assumption is that the regression is caused by a combination of two factors: the new connection pool hands out connections in a LIFO pattern (the connection most recently returned to the pool is the first one given back to clients on subsequent gets), and the system enforces stringent idle timeouts. The old pool, by contrast, was FIFO, so it constantly cycled through all the available connections. In systems that are not at full capacity, the LIFO pool continuously re-uses a small subset of the open connections. This improves performance in mysqld because of data locality, but it also causes the connections idling in the pool to eventually hit their max idle timeout and be closed by the background worker.

In this situation, where the delta between requests/second during normal operation and peak operation is sufficiently large, we end up with a very expensive "background stall" in the background worker that closes idle connections, because a large group of connections can expire at once. Even when the delta is not particularly big, the stringent idle timeout causes the background worker to trigger very frequently, and because of the way it is implemented, each trigger causes a micro-stall.

We believe that improving the way the background worker for idle connections works can get rid of all these stalls.

The issue with the current implementation is that it pops all the connections from each stack on each idle check. Once all the connections have been popped, the worker iterates the list and returns the ones that are not idle to the stack in reverse order (so the resulting stack has the same order as before it was popped). In retrospect, this is not a good implementation: periodically popping the whole connection stack leads to micro-stalls in busy systems with stringent timeouts, because incoming requests are very likely to collide with the background idle checks, encounter an empty stack, and fall back to the slower path through the wait queue.
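
For illustration, here is a minimal sketch of that pre-PR behavior. The type and method names (Conn, connStack, closeIdle) are hypothetical stand-ins, not the actual Vitess code, and the mutex-based stack stub glosses over the real pool's lock-free stack:

```go
package pool

import (
	"sync"
	"time"
)

// Conn stands in for a pooled connection and its idle deadline.
type Conn struct {
	idleDeadline time.Time
}

// Close stands in for the expensive teardown of a MySQL connection.
func (c *Conn) Close() {}

// connStack stands in for the pool's LIFO connection stack.
type connStack struct {
	mu    sync.Mutex
	conns []*Conn
}

func (s *connStack) Pop() (*Conn, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if len(s.conns) == 0 {
		return nil, false
	}
	conn := s.conns[len(s.conns)-1]
	s.conns = s.conns[:len(s.conns)-1]
	return conn, true
}

func (s *connStack) Push(conn *Conn) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.conns = append(s.conns, conn)
}

// closeIdle mimics the old algorithm: drain the whole stack, close the
// expired connections, and push the survivors back in reverse order so
// the stack keeps its original ordering. While the stack is drained,
// concurrent Gets observe it as empty and fall back to the wait queue.
func closeIdle(stack *connStack, now time.Time) {
	var alive []*Conn
	for {
		conn, ok := stack.Pop()
		if !ok {
			break
		}
		if now.After(conn.idleDeadline) {
			conn.Close()
		} else {
			alive = append(alive, conn)
		}
	}
	for i := len(alive) - 1; i >= 0; i-- {
		stack.Push(alive[i])
	}
}
```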

The proposed implementation in this PR fixes the issue as follows:

  1. I've implemented a new timestamp type that stores monotonic time points in just 8 bytes, replacing the 24-byte time.Time timestamps we were using to track idle expiration dates in the pooled connections. Besides the significant memory savings, the key advantage of this timestamp is that it can be updated atomically. See the implementation in timestamp.go, and the sketch after this list.
  2. I've updated the background worker so that, instead of popping all the connections in each stack, it does a best-effort read-only iteration of the stack and atomically updates the timestamps of idle connections to mark them as expired. Each connection marked this way is then closed by the background worker, but it is NOT removed from the stack.
  3. The client getter code is now aware that connections popped from the stack can carry an "idle expired" timestamp, so before returning them to the user we attempt to atomically update the timestamp back to "busy". Any connection we acquire with an already expired timestamp is simply ignored, as it is being handled by the background worker.
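
A minimal sketch of the timestamp idea follows. It is an illustration under assumptions, not the actual contents of timestamp.go: the sentinel value and the method names (update, expire, borrow) are hypothetical.

```go
package pool

import (
	"sync/atomic"
	"time"
)

// monotonicStart anchors all time points; time.Since reads Go's
// monotonic clock, so the values are immune to wall-clock adjustments.
var monotonicStart = time.Now()

func monotonicNow() int64 { return int64(time.Since(monotonicStart)) }

// expiredSentinel marks a connection that the background worker has
// claimed for closing. (Hypothetical value; the real code may differ.)
const expiredSentinel = int64(-1)

// timestamp packs a monotonic time point into 8 bytes so it can be
// read and updated atomically, replacing a 24-byte time.Time.
type timestamp struct {
	nano atomic.Int64
}

// update marks the connection as having been used just now, e.g. when
// it is returned to the pool.
func (t *timestamp) update() { t.nano.Store(monotonicNow()) }

// expire atomically claims the connection for closing if it has been
// idle for longer than timeout. Only the single caller that wins the
// compare-and-swap gets true, so a connection is never closed twice.
func (t *timestamp) expire(now int64, timeout time.Duration) bool {
	last := t.nano.Load()
	if last == expiredSentinel || now-last < int64(timeout) {
		return false
	}
	return t.nano.CompareAndSwap(last, expiredSentinel)
}

// borrow transitions an idle connection back to busy; it fails if the
// background worker has already expired the connection.
func (t *timestamp) borrow() bool {
	for {
		last := t.nano.Load()
		if last == expiredSentinel {
			return false
		}
		if t.nano.CompareAndSwap(last, monotonicNow()) {
			return true
		}
	}
}
```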

This removes all the micro and macro stalls caused by the background worker, as the idle expiration algorithm is now wait-free: the background worker does a best-effort expiry of connections without actually contending on the connection stack, and the foreground clients cooperate with the expiration by ignoring any expired connections that are popped from a stack. The expensive close operation always happens in the background. A fixed amount of overhead is added to certain Get cases, because popping a connection from a stack can now spuriously fail (when the popped connection is expired) and must be retried. Retrying the pop is an extremely cheap operation, so I believe this won't have an effect on the p99 of requests, and it will certainly be a no-op in connection pools without idle timeouts, where popped connections can never be expired.
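
Putting the pieces together, here is a sketch of how the two sides cooperate, building on the hypothetical types above. It assumes Conn carries a timeUsed timestamp and the stack exposes a read-only ForEach; getSlow is an illustrative stand-in for the dial/wait-queue path:

```go
// Assumed extensions to the earlier sketch: Conn has a timeUsed field
// of type timestamp, and connStack has a read-only ForEach method.

// closeIdleConnections is the new background pass: a read-only walk of
// the stack that claims and closes expired connections without ever
// popping them, so concurrent Gets never observe an empty stack.
func closeIdleConnections(stack *connStack, timeout time.Duration) {
	now := monotonicNow()
	stack.ForEach(func(conn *Conn) {
		if conn.timeUsed.expire(now, timeout) {
			conn.Close() // the expensive close stays in the background
		}
	})
}

// get pops from the stack; a pop can spuriously return a connection
// that the worker has already expired, in which case we skip it and
// pop again. The retry is just another cheap pop.
func get(stack *connStack, getSlow func() *Conn) *Conn {
	for {
		conn, ok := stack.Pop()
		if !ok {
			return getSlow() // empty stack: dial or enter the wait queue
		}
		if conn.timeUsed.borrow() {
			return conn // idle -> busy transition succeeded
		}
		// expired: owned by the background worker; try the next one
	}
}
```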

An improvement on top of this implementation which I've discarded is upgrading the background worker to also do best-effort eviction of stale connections in the stacks. I'm opting not to pursue this because the semantics of removing nodes from the middle of the atomic stack are non-obvious, and they will cause contention even if we get them right. I think having the clients cooperate by ignoring expired connections is by far the most elegant and safest approach.

cc @harshit-gangal @deepthi

Related Issue(s)

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

vmg added 3 commits on February 12, 2025 at 10:43 (each Signed-off-by: Vicent Marti <vmg@strn.cat>)

vitess-bot (Contributor) commented on Feb 12, 2025

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test, enhancement and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator.
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot added the NeedsBackportReason, NeedsDescriptionUpdate, NeedsIssue, and NeedsWebsiteDocsUpdate labels on Feb 12, 2025
@github-actions added this to the v22.0.0 milestone on Feb 12, 2025
@vmg added the Component: Query Serving, Type: Performance, and Component: Performance labels and removed the NeedsDescriptionUpdate, NeedsWebsiteDocsUpdate, NeedsIssue, and NeedsBackportReason labels on Feb 12, 2025
vmg added a commit (Signed-off-by: Vicent Marti <vmg@strn.cat>)

codecov bot commented on Feb 12, 2025

Codecov Report

Attention: Patch coverage is 92.94118% with 6 lines in your changes missing coverage. Please review.

Project coverage is 68.02%. Comparing base (c47f1bd) to head (d1e244d).
Report is 8 commits behind head on main.

Files with missing lines              Patch %   Lines
go/pools/smartconnpool/timestamp.go   87.87%    4 Missing ⚠️
go/pools/smartconnpool/pool.go        95.91%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #17757      +/-   ##
==========================================
+ Coverage   67.95%   68.02%   +0.06%     
==========================================
  Files        1586     1587       +1     
  Lines      255227   255579     +352     
==========================================
+ Hits       173445   173848     +403     
+ Misses      81782    81731      -51     


Resolved review threads: go/pools/smartconnpool/pool.go (outdated), go/pools/smartconnpool/stack.go
@harshit-gangal added the Benchmark me label on Feb 13, 2025
vitess-bot (Contributor) commented on Feb 13, 2025

Hello! 👋

This Pull Request is now handled by arewefastyet. The current HEAD and future commits will be benchmarked.

You can find the performance comparison on the arewefastyet website.

@dbussink added the Backport to: release-19.0, Backport to: release-20.0, and Backport to: release-21.0 labels on Feb 13, 2025
@dbussink (Contributor) commented:

Also marked it for backporting since it was a performance regression here in certain edge cases.

vmg added a commit (Signed-off-by: Vicent Marti <vmg@strn.cat>)
@harshit-gangal (Member) left a comment:

This is an awesome change! Improves idle connection handling significantly. 🚀

@harshit-gangal harshit-gangal merged commit 057bcc9 into vitessio:main Feb 13, 2025
201 checks passed
@harshit-gangal harshit-gangal deleted the vmg/pool-idle branch February 13, 2025 17:34
vitess-bot pushed a commit that referenced this pull request on Feb 13, 2025 (Signed-off-by: Vicent Marti <vmg@strn.cat>)
vitess-bot pushed a commit that referenced this pull request on Feb 13, 2025 (Signed-off-by: Vicent Marti <vmg@strn.cat>)
dbussink pushed a commit that referenced this pull request on Feb 13, 2025: …7757) (#17781) (Signed-off-by: Vicent Marti <vmg@strn.cat>, Co-authored-by: vitess-bot[bot] <108069721+vitess-bot[bot]@users.noreply.github.com>)
dbussink pushed a commit that referenced this pull request on Feb 13, 2025: …7757) (#17780) (Signed-off-by: Vicent Marti <vmg@strn.cat>, Co-authored-by: vitess-bot[bot] <108069721+vitess-bot[bot]@users.noreply.github.com>)

Labels: Backport to: release-19.0 · Backport to: release-20.0 · Backport to: release-21.0 · Benchmark me · Component: Performance · Component: Query Serving · Type: Performance