
Investigate excessive flushing in Netty #7

Closed
ejona86 opened this issue Jan 7, 2015 · 8 comments
@ejona86
Member

ejona86 commented Jan 7, 2015

Determine ways to reduce the number of flushes we perform. For example, if a HEADERS frame is being sent, flushing immediately afterward is generally a waste since a DATA frame typically follows. We want methods that allow smart, semi-automatic flushing, or that make use of knowledge from the application layer.
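To illustrate the idea, here is a minimal, hypothetical sketch (the `CoalescingWriter` class and its methods are invented for this example, not part of gRPC or Netty): writes are enqueued without flushing, and the transport is flushed once at the end of a logical message, so a HEADERS frame and the DATA frame that follows it share a single flush.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: coalesce a HEADERS write with the DATA write that
// usually follows it, flushing the (simulated) transport once instead of twice.
final class CoalescingWriter {
    private final Queue<String> pending = new ArrayDeque<>();
    int flushCount = 0; // how many times the simulated transport was flushed

    // Enqueue a frame without flushing.
    void write(String frame) {
        pending.add(frame);
    }

    // Flush only when the caller signals the end of a logical message.
    void flush() {
        pending.clear();
        flushCount++;
    }
}

public class Demo {
    public static void main(String[] args) {
        CoalescingWriter w = new CoalescingWriter();
        // A naive transport would flush after HEADERS and again after DATA;
        // here both frames share a single flush.
        w.write("HEADERS");
        w.write("DATA");
        w.flush();
        System.out.println(w.flushCount); // prints 1
    }
}
```

The point is purely structural: the decision to flush moves from "after every write" to "after every logical message", halving syscall-bound flushes in the HEADERS-then-DATA case.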

@buchgr
Contributor

buchgr commented Feb 11, 2015

This sounds like a good first project for me to get to know the whole stack?

@ejona86
Member Author

ejona86 commented Feb 11, 2015

Yes and no. @louiscryan has already delved into this one some and came up empty-handed. Reducing the number of flushes did appear to help, but not enough to really care about (<5µs latency decrease) given our current performance. Combining it with a throughput benchmark may show more benefit, though.

@rschildmeijer

@ejona86 would something like an AutoFlushingWriteBatcher[1] be too generic, and would it apply "smart batching" too often?

[1] https://github.com/spotify/netty-zmtp/blob/master/src/main/java/com/spotify/netty/handler/queue/AutoFlushingWriteBatcher.java

@ejona86
Member Author

ejona86 commented Feb 26, 2015

@rschildmeijer, it looks like that one just does regular flushes based on a timer. That wouldn't work for us because it would introduce latency.

Instead, we looked at something more akin to BetterWriteWithAtomic. As I mentioned though, performance may have improved, but it was mostly lost in the noise. Thus, we are going to focus on other things and then revisit the flushing.

@rschildmeijer
Copy link

@ejona86 it is a regular timer-based flusher, but on steroids. If writes are sparse, no additional latency will be added.

The functionality is based on the following statement: if mean_time_between_sends < mean_time_to_send, then applying smart batching will improve throughput and reduce netty flush calls.

So consider it and benchmark it. Might be worth it in your case. Start with a non-default setting(?)

Based on Todd Montgomery's work: https://qconsf.com/sf2012/dl/qcon-sanfran-2012/slides/ToddMontgomery_RunningWithTheDevilMechanicalSympatheticNetworking.pdf
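The criterion above can be stated directly in code. This is a hypothetical illustration (the `shouldBatch` helper is invented for this comment, not an API from netty-zmtp or gRPC): batching pays off only when messages arrive faster than a single send/flush completes, i.e. when the sender is the bottleneck.

```java
// Hypothetical illustration of the batching criterion:
// batch only when mean_time_between_sends < mean_time_to_send.
public class BatchingCriterion {
    // Returns true when smart batching should improve throughput
    // and reduce the number of flush calls.
    static boolean shouldBatch(double meanTimeBetweenSendsMicros,
                               double meanTimeToSendMicros) {
        return meanTimeBetweenSendsMicros < meanTimeToSendMicros;
    }

    public static void main(String[] args) {
        // Writes arrive every 2µs, but each flush takes 10µs: the sender
        // is the bottleneck, so batching several writes per flush helps.
        System.out.println(shouldBatch(2.0, 10.0));   // prints true
        // Sparse writes (one every 100µs, 10µs flush): flush immediately;
        // batching would only add latency.
        System.out.println(shouldBatch(100.0, 10.0)); // prints false
    }
}
```

This also captures why a well-tuned batcher adds no latency for sparse traffic: when writes are far apart, the condition is false and each write is flushed immediately.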

@ejona86 ejona86 added the netty label Mar 11, 2015
@nmittler
Member

@ejona86 @louiscryan I think this is a dup of #331, #332, and #305. Can we close this?

@ejona86
Member Author

ejona86 commented Apr 28, 2015

I'd like to leave this open until we integrate fixes for netty/netty#3670, which currently doesn't have any other associated issue in gRPC.

@louiscryan
Contributor

The issue in Netty has now been resolved, and with the introduction of our own write queue (#431) our flush behavior is pretty good.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 23, 2018