[5.0] Recover keys in chain thread pool #1846
Conversation
```diff
@@ -802,7 +802,7 @@ class producer_plugin_impl : public std::enable_shared_from_this<producer_plugin
    fc::microseconds max_trx_cpu_usage = max_trx_time_ms < 0 ? fc::microseconds::maximum() : fc::milliseconds(max_trx_time_ms);

    auto future = transaction_metadata::start_recover_keys(trx,
-                                                          _thread_pool.get_executor(),
+                                                          chain.get_thread_pool(),
```
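The diff above only changes which thread pool runs key recovery; the pattern itself is "launch the expensive signature-to-public-key recovery on a worker thread and keep a future to the result." A minimal sketch of that pattern, using `std::async` in place of the boost::asio executor and made-up stand-in types (`signed_trx`, `recover_keys` are illustrative, not the actual nodeos API):

```cpp
#include <future>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for a signed transaction.
struct signed_trx {
    std::vector<std::string> signatures;
};

// Pretend key recovery: derive one "public key" per signature.
// In nodeos this is the CPU-heavy ECDSA recovery step.
std::vector<std::string> recover_keys(const signed_trx& trx) {
    std::vector<std::string> keys;
    for (const auto& sig : trx.signatures)
        keys.push_back("PUB_" + sig);
    return keys;
}

// Launch recovery asynchronously, as start_recover_keys does on the
// chosen thread pool; the caller continues and later waits on the future.
std::future<std::vector<std::string>> start_recover_keys(signed_trx trx) {
    return std::async(std::launch::async,
                      [t = std::move(trx)] { return recover_keys(t); });
}
```

The caller can queue further work and only block on `future.get()` when the recovered keys are actually needed.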
Will this change have any side effects on the controller's thread pool operation?
What in release/5.0 caused the slowdown? In release/3.2 the same producer_plugin thread pool was used and 31k TPS was achieved.
The chain thread pool is mainly used in the controller for key recovery during block validation. Having a single common pool for all key recovery is one nice thing about this change.
It is unclear what in 5.0 caused the slowdown. It was not this code path, since it worked the same way in 3.2/4.0. Making this change in 3.2/4.0 would likely provide a similar improvement in performance.
One thing that seems to be clear is that this area is a hot-spot for performance. Currently these thread-hops into boost asio queues are providing a natural back pressure on the net threads. In 6.0 I would like to see this implemented not by boost asio queues but rather by our own ring buffer or bounded queue. What I think would make a big difference is to treat trxs differently than other queued tasks. I think we should implement a memory buffer of transactions that the main thread pulls from instead of queueing them up as low priority tasks. This would relieve contention on the boost asio priority queue and would provide better visibility into queued trxs waiting to be executed. This would also allow us to re-order trxs if desired and better enforce the 3-strike rule.
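The bounded queue suggested above can be sketched as follows. This is illustrative only, not the proposed 6.0 design: producers (net threads) block in `push()` when the buffer is full, which gives the natural back pressure mentioned, while the main thread pulls transactions directly instead of draining low-priority boost::asio tasks:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <utility>

// Bounded MPMC queue: push() blocks when full (back pressure on
// producers), pop() blocks when empty. size() gives visibility into
// how many queued trxs are waiting to be executed.
template <typename T>
class bounded_queue {
public:
    explicit bounded_queue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < capacity_; });
        q_.push_back(std::move(item));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        not_full_.notify_one();
        return item;
    }

    std::size_t size() const {
        std::lock_guard<std::mutex> lk(m_);
        return q_.size();
    }

private:
    const std::size_t        capacity_;
    mutable std::mutex       m_;
    std::condition_variable  not_full_;
    std::condition_variable  not_empty_;
    std::deque<T>            q_;
};
```

Because the buffer is a plain deque under one lock, re-ordering trxs (e.g. to enforce the 3-strike rule) would be a local operation rather than something fought through a priority queue.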
For additional information: `post` is simply the `boost::post()` line, and `exec` is the complete execution of the contract, including loading the WASM and everything. Clearly these `post()` calls are killing performance.

```
post calls: 10000, total: 162933us, avg: 16us
exec calls: 10000, total: 167621us, avg: 16us
```
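Measurements in that shape (call count, total microseconds, average) can be gathered with a small accumulator like the one below. This is an illustrative sketch, not the instrumentation actually used for the numbers above:

```cpp
#include <chrono>
#include <cstdint>
#include <utility>

// Accumulates wall-clock time over repeated calls so a summary line
// like "post calls: N, total: Tus, avg: Aus" can be printed.
struct call_timer {
    std::uint64_t calls    = 0;
    std::uint64_t total_us = 0;

    template <typename F>
    void time(F&& f) {
        auto start = std::chrono::steady_clock::now();
        std::forward<F>(f)();               // the timed section, e.g. the boost::post() call
        auto end = std::chrono::steady_clock::now();
        total_us += std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        ++calls;
    }

    std::uint64_t avg_us() const { return calls ? total_us / calls : 0; }
};
```

Wrapping just the `boost::post()` call versus the full contract execution in two separate timers is what separates queueing overhead from useful work.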
Using the chain thread pool improves TPS (eosio.token transfers per second) significantly, from ~25K to ~31K. Since this is a rather simple change, it is targeted for 5.0. With this PR the sequence is net-thread => chain-thread => producer-thread => main-thread.
An improved version of this code that avoids a thread hop will target 6.0; net-thread => chain-thread => main-thread.
Removing 2 thread hops is also possible by recovering the keys on the net thread (net-thread => main-thread); however, that overwhelms the main thread. That improvement would require throttling the net threads so they do not overwhelm the main thread. The current mechanism of a limit on the number of active trxs works as a protection, but it is not ideal for performance measurement because it drops trxs instead of just delaying them.
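A throttle that delays rather than drops could look like the sketch below (hypothetical, not the current nodeos mechanism): a net thread blocks in `acquire()` until an active-trx slot frees up, instead of rejecting the transaction the way a hard limit does:

```cpp
#include <condition_variable>
#include <mutex>

// Counting throttle on active trxs. acquire() delays the caller until
// a slot is free; try_acquire() models today's drop-style limit.
class trx_throttle {
public:
    explicit trx_throttle(int max_active) : slots_(max_active) {}

    void acquire() {          // net thread: block (delay) until a slot opens
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return slots_ > 0; });
        --slots_;
    }

    bool try_acquire() {      // drop-style: fail immediately when at the limit
        std::lock_guard<std::mutex> lk(m_);
        if (slots_ == 0) return false;
        --slots_;
        return true;
    }

    void release() {          // main thread: finished a trx, free the slot
        std::lock_guard<std::mutex> lk(m_);
        ++slots_;
        cv_.notify_one();
    }

private:
    std::mutex              m_;
    std::condition_variable cv_;
    int                     slots_;
};
```

For performance measurement the blocking `acquire()` path is preferable: offered load is queued and eventually executed, so measured TPS reflects throughput rather than the drop rate.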
Issue #1690