Backpressure from tee-ing and slow/pending consumer #506
Comments
Yes; data is never lost when using streams, even after you tee.
Yes.
I'm not sure how it could…
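The "data is never lost" answer above can be checked directly. A minimal sketch (assuming Node 18+, where `ReadableStream` is a global): `tee()` a stream, drain branch A to completion first, then drain branch B — both branches see every chunk.

```javascript
// Drain a ReadableStream to an array of chunks.
async function drain(stream) {
  const chunks = [];
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return chunks;
    chunks.push(value);
  }
}

async function demo() {
  const source = new ReadableStream({
    start(controller) {
      for (const chunk of ['a', 'b', 'c']) controller.enqueue(chunk);
      controller.close();
    },
  });

  const [branchA, branchB] = source.tee();
  const a = await drain(branchA); // read A to completion first...
  const b = await drain(branchB); // ...B still receives every chunk
  return { a, b };
}

demo().then(({ a, b }) => console.log(a, b)); // both: [ 'a', 'b', 'c' ]
```

The chunks B hasn't read yet sit in B's internal queue until it is drained, which is exactly the buffering discussed in this thread.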
Got it, thanks for confirming this. Do you think there should be a recommended internal buffer size for consistency across browsers/Node.js?
I don't think so… each side needs to be able to make its own decisions as to what makes the most sense in that environment. In general, highly concurrent servers are very different from desktop browsers, which are very different from mobile browsers.
OK, one last question: do browser vendors have a consensus on how much is reasonable for browsers? If so, we will just implement that to avoid likely backpressure issues.
I'm not aware of one, but maybe @yutakahirano and @wanderview could comment on what they're doing so far. But as I said, I'd imagine it will vary depending on what device you're on, and even how much memory is free on the device — if not now, then in the future as we gather more data.
@bitinn I don't know if this is still relevant, but from #271 (comment), it looks like letting the streams buffer infinitely (or until OOM) is not an unlikely approach in the browser. |
Chrome doesn't implement backpressure correctly for some kinds of teed streams.
This is for a ReadableStream tee? It feels to me that a reader on one branch of the tee should not impact the other. The rate of reading for the source feeding the original stream will be determined by the fastest reader of the tee'd branches. Any divergence in the read rates on the tee branches has to result in buffering of some kind. IMO. Note, "buffering of some kind" could mean something other than a memory buffer. If a file stream is tee'd you could essentially open a second file descriptor and let them read at their own rates. The "buffering" is then happening on disk.
Just to expand on this a bit: these are my plans for our internal "identity transform", or what we call a "pipe". Consider a pipe with a writer stream feeding a buffer and a reading stream pulling from the buffer. A tee operation essentially hangs another reading stream off the same buffer. This leads to a few different cases:
So the buffer size is MAX(target limit, ABS(A position - B position)). Anyway, sorry if this is a tangent, since it's not directly related to ReadableStream and the spec; it's just my current thinking on the tee operation and backpressure.
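The sizing rule above is easy to state in code. A sketch (`requiredBufferSize` is a hypothetical helper illustrating the formula, not any spec API): the pipe's buffer must hold at least the target limit, and must grow to cover however far apart the two tee readers have drifted.

```javascript
// Buffer needed for a tee'd pipe: MAX(target limit, |posA - posB|).
// Positions are byte offsets each reader has consumed up to.
function requiredBufferSize(targetLimit, posA, posB) {
  return Math.max(targetLimit, Math.abs(posA - posB));
}

console.log(requiredBufferSize(65536, 0, 0));       // 65536: readers in sync
console.log(requiredBufferSize(65536, 1000000, 0)); // 1000000: B lags ~1 MB behind
```

So when the readers stay close together the buffer holds steady at the target limit, but an arbitrarily slow branch makes the requirement grow without bound — which is the backpressure concern raised in this issue.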
This is likely related to #401, but I don't want to trigger everyone in the thread if it turns out to be false, so here is another issue specifically for my problem:
Does the current definition assume an infinite or significantly large internal buffer when you tee the stream? I.e., if a user decides to read just one of the branches to completion, and then reads the remaining branch, should the user expect to run into backpressure for a significantly large data size? Or should the implementation handle this internally as much as possible?
I ask this because Node.js streams' internal buffer size seems to be smaller than Chrome's. This means that if the user is using isomorphic `fetch()`, they might run into backpressure on the server side but not on the client side. Maybe there should be a recommended buffer size, or an algorithm to figure out the proper buffer size? Apologies if I missed something obvious.
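For streams constructed in userland (as opposed to those returned by `fetch()`), the buffer size need not be left to each engine's default: it can be pinned with a queuing strategy. A sketch (assuming Node 18+, where `ReadableStream` and `CountQueuingStrategy` are globals), keeping a handle to the controller only to inspect `desiredSize`:

```javascript
let captured;
const stream = new ReadableStream(
  {
    start(controller) {
      captured = controller; // kept only to inspect desiredSize below
    },
  },
  // Explicit buffer: up to 16 chunks before backpressure is signaled.
  new CountQueuingStrategy({ highWaterMark: 16 })
);

console.log(captured.desiredSize); // 16: room for 16 chunks
captured.enqueue('chunk');
console.log(captured.desiredSize); // 15: one slot consumed
```

Passing the same strategy on server and client gives consistent backpressure behavior for streams you create yourself; it does not, however, change the buffering of a stream handed to you by the platform.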