Change Mutex to RWMutex for connectivityStateManager #3682
Conversation
I'm curious what led to this PR. Does this change provide measurable improvements to any benchmarks? Were you seeing unacceptable performance before?
To answer your question of what led to this PR: I've recently come across a handful of projects that expose Stats or Status without using a read/write mutex, and I was curious whether there was an underlying reason not to use RWMutex here. #2142 was a similar enough change that I charged into this one. A measurable benchmark is a fantastic suggestion, though.
I created a small benchmark comparing Lock vs. RLock at various concurrency levels and attached results for a 32-core Linux machine in the comments of the gist. While RLock shows slightly better performance as concurrency grows, there's a small trade-off at the low end. Benchmark gist: https://gist.github.com/lalonde/e76833f075bc05dc561301995b94a7a1 (largely influenced by golang/go#17973).
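For reference, a minimal sketch of this kind of benchmark (illustrative only; the names here are mine, not the gist's actual code):

```go
package mutexbench

import (
	"sync"
	"testing"
)

var (
	mu  sync.Mutex
	rw  sync.RWMutex
	val int
)

// BenchmarkLock measures concurrent reads guarded by a plain Mutex.
func BenchmarkLock(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			_ = val
			mu.Unlock()
		}
	})
}

// BenchmarkRLock measures the same reads guarded by an RWMutex read lock.
func BenchmarkRLock(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			rw.RLock()
			_ = val
			rw.RUnlock()
		}
	})
}
```

Varying the `-cpu` flag (e.g. `go test -bench=. -cpu=1,4,16,32`) is one way to produce per-concurrency numbers like those in the gist.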
At some point in the past, I replaced one of our mutexes (not sure if it was this one) with an RWMutex and measured worse overall performance in our RPC benchmarks. If you look at your own benchmarks, lock-16-* (and lower) all perform better than their rlock equivalents. So what seems like an obvious win turns out to be worse in many cases. Also, I don't think this is a performance-sensitive path in general. But if it were, it would probably be better to use an atomic to hold the state instead of needing a lock to read.
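A hedged sketch of that atomic alternative (illustrative only, not the actual grpc-go implementation; the real manager also coordinates a notification channel, which is omitted here):

```go
package main

import (
	"fmt"
	"sync/atomic"

	"google.golang.org/grpc/connectivity"
)

// connectivityStateManager here stores the state in an atomic.Value so
// readers never take a lock at all.
type connectivityStateManager struct {
	state atomic.Value // always holds a connectivity.State
}

func (c *connectivityStateManager) updateState(s connectivity.State) {
	c.state.Store(s)
}

func (c *connectivityStateManager) getState() connectivity.State {
	if s, ok := c.state.Load().(connectivity.State); ok {
		return s
	}
	return connectivity.Idle // nothing stored yet; Idle is the zero state
}

func main() {
	m := &connectivityStateManager{}
	m.updateState(connectivity.Ready)
	fmt.Println(m.getState()) // READY
}
```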
Back to what led to this PR: people wanting to check the state before executing an RPC call might use GetState() rather than wrapping their own state and using WaitForStateChange or some other mechanism. I agree this should not be a performance-sensitive path. I've gotten the answers I set out to find; this probably wasn't the best forum. I'll close this out now.
This should not be a common thing to want to do. Typically, you should just be able to perform the RPC directly and make sure it is not wait-for-ready (not wait-for-ready is the default). The RPC should fail immediately with UNAVAILABLE if there is no connection active or in progress. Instead of polling with WaitForStateChange()/GetState() until READY, you should (typically) use a wait-for-ready RPC with an appropriate deadline.
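A minimal sketch of that pattern, using the standard gRPC health service as a stand-in for an application RPC (the helper name and 5-second deadline are my own choices):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// checkWithDeadline issues a wait-for-ready RPC instead of polling GetState().
func checkWithDeadline(conn *grpc.ClientConn) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := healthpb.NewHealthClient(conn)
	// WaitForReady(true) makes the RPC wait for a READY connection rather
	// than failing fast with UNAVAILABLE; the context deadline bounds how
	// long it will wait (expiring as DeadlineExceeded).
	resp, err := client.Check(ctx, &healthpb.HealthCheckRequest{}, grpc.WaitForReady(true))
	if err != nil {
		log.Printf("health check failed: %v", err)
		return
	}
	log.Printf("serving status: %v", resp.Status)
}
```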
No worries. Thank you for the PR and the extra benchmarking investigation. We care a lot about performance, but have found that it's almost impossible to know the effects of even a small change on the overall system without running real-world benchmarks. We've even seen wildly different results in different environments, including some changes that were obviously more optimal leading to regressions due to the Go runtime.
This change is to make ClientConn.GetState() non-blocking.
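For context, a hedged sketch of the kind of change proposed here, simplified from grpc-go's connectivityStateManager (the real struct also carries a notification channel used by WaitForStateChange, and this PR was ultimately closed rather than merged):

```go
package grpcsketch

import (
	"sync"

	"google.golang.org/grpc/connectivity"
)

type connectivityStateManager struct {
	mu    sync.RWMutex // was sync.Mutex
	state connectivity.State
}

func (csm *connectivityStateManager) updateState(state connectivity.State) {
	csm.mu.Lock() // writers still take the exclusive lock
	defer csm.mu.Unlock()
	csm.state = state
}

func (csm *connectivityStateManager) getState() connectivity.State {
	csm.mu.RLock() // readers no longer serialize against each other
	defer csm.mu.RUnlock()
	return csm.state
}
```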