Investigate excessive flushing in Netty #7
Comments
This sounds like a good first project for me to get to know the whole stack?
Yes and no. @louiscryan has already delved into this one some and came up empty-handed. Reducing the number of flushes possibly yielded an improvement, but not enough to really care about (<5µs latency decrease) given our current performance. Combining it with a throughput benchmark may show more benefit, though.
@ejona86 would something like an AutoFlushingWriteBatcher[1] be too generic, and would it apply "smart batching" too often?
@rschildmeijer, it looks like that one just does regular flushes based on a timer. That wouldn't work for us because it would introduce latency. Instead, we looked at something more akin to BetterWriteWithAtomic. As I mentioned, though, any performance improvement was mostly lost in the noise, so we are going to focus on other things and revisit flushing later.
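For context, a minimal sketch of what an atomic-counter write batcher along those lines could look like, assuming Netty's `Channel` API; the class name and structure here are hypothetical illustrations, not the actual `BetterWriteWithAtomic` benchmark code:

```java
import java.util.concurrent.atomic.AtomicInteger;
import io.netty.channel.Channel;

/**
 * Hypothetical sketch of the atomic-counter flushing idea: writes are
 * queued without flushing, and a flush is issued only by the caller
 * that drains the pending-write counter to zero.
 */
final class CounterFlushWriter {
  private final Channel channel;
  private final AtomicInteger pendingWrites = new AtomicInteger();

  CounterFlushWriter(Channel channel) {
    this.channel = channel;
  }

  /** Callable from any thread. */
  void write(Object msg) {
    pendingWrites.incrementAndGet();
    channel.write(msg);
    // Only flush if no other write is in flight; a concurrent writer
    // that bumped the counter will perform the flush instead.
    if (pendingWrites.decrementAndGet() == 0) {
      channel.flush();
    }
  }
}
```

The invariant is that whichever writer drains the counter to zero performs the flush, so a burst of concurrent writes is coalesced into a single flush without a timer and without adding latency to sparse traffic.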
@ejona86 it is a regular timer-based flusher, on steroids. If writes are sparse, no additional latency is added. The functionality is based on the following statement: if mean_time_between_sends < mean_time_to_send, then applying smart batching will improve throughput and reduce Netty flush calls. So consider it and benchmark it; it might be worth it in your case. Start with a non-default setting(?) Based on Todd's work: https://qconsf.com/sf2012/dl/qcon-sanfran-2012/slides/ToddMontgomery_RunningWithTheDevilMechanicalSympatheticNetworking.pdf
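For illustration, a rough sketch of that heuristic against Netty's `Channel`/`EventLoop` API; the class name and the 50µs window are assumptions for the example, not values from the slides or from any library:

```java
import java.util.concurrent.TimeUnit;
import io.netty.channel.Channel;

/**
 * Sketch of the "smart batching" heuristic described above: flush
 * immediately while writes are sparse, but defer the flush briefly
 * once writes arrive faster than they can be flushed.
 */
final class SmartBatchingFlusher {
  private static final long BATCH_WINDOW_NANOS = 50_000; // assumed 50µs window
  private final Channel channel;
  private long lastWriteNanos;
  private boolean flushScheduled;

  SmartBatchingFlusher(Channel channel) {
    this.channel = channel;
  }

  /** Must be called from the channel's event loop. */
  void write(Object msg) {
    long now = System.nanoTime();
    boolean burst = (now - lastWriteNanos) < BATCH_WINDOW_NANOS;
    lastWriteNanos = now;
    channel.write(msg);
    if (!burst) {
      channel.flush(); // sparse traffic: flush immediately, no added latency
    } else if (!flushScheduled) {
      flushScheduled = true; // burst: coalesce flushes within the window
      channel.eventLoop().schedule(() -> {
        flushScheduled = false;
        channel.flush();
      }, BATCH_WINDOW_NANOS, TimeUnit.NANOSECONDS);
    }
  }
}
```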
@ejona86 @louiscryan I think this is a dup of #331, #332, and #305. Can we close this?
I'd like to leave this open until we integrate fixes for netty/netty#3670, which currently doesn't have any other associated issue in gRPC.
The issue in Netty has now been resolved, and with the introduction of our own write queue, our flush behavior is pretty good.
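A minimal sketch of the general write-queue idea referred to here, assuming Netty's `Channel` API; this illustrates the concept of batching queued writes into a single flush, and is not grpc-java's actual `WriteQueue` implementation:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import io.netty.channel.Channel;

/**
 * Hypothetical batching write queue: messages from application
 * threads are queued and drained in batches on the event loop,
 * with a single flush per drained batch.
 */
final class BatchingWriteQueue {
  private final Channel channel;
  private final Queue<Object> queue = new ConcurrentLinkedQueue<>();
  private final AtomicBoolean drainScheduled = new AtomicBoolean();

  BatchingWriteQueue(Channel channel) {
    this.channel = channel;
  }

  /** Callable from any thread; schedules at most one drain task at a time. */
  void enqueue(Object msg) {
    queue.add(msg);
    if (drainScheduled.compareAndSet(false, true)) {
      channel.eventLoop().execute(this::drain);
    }
  }

  private void drain() {
    drainScheduled.set(false);
    Object msg;
    while ((msg = queue.poll()) != null) {
      channel.write(msg);
    }
    channel.flush(); // one flush for the whole batch
  }
}
```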
Determine ways to reduce the number of flushes we perform. For example, if a HEADERS frame is being sent, flushing immediately afterward is generally a waste, since a DATA frame typically follows. We want an approach that allows smart semi-automatic flushing, or that uses knowledge from the application layer.
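Concretely, the desired pattern looks something like the following sketch, where the frame objects are hypothetical placeholders rather than actual grpc-java or Netty types:

```java
import io.netty.channel.Channel;

final class CoalescedFlushExample {
  /**
   * When a HEADERS frame is immediately followed by a DATA frame,
   * defer the flush until both writes are queued, so both frames go
   * out in one flush (and typically one syscall) instead of two.
   */
  static void sendHeadersAndData(Channel channel, Object headersFrame, Object dataFrame) {
    channel.write(headersFrame); // no flush yet: a DATA frame typically follows
    channel.write(dataFrame);
    channel.flush();             // single flush for both frames
  }
}
```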