Define "tee"ing a stream #271
Comments
From the fetch API's point of view, it might be desirable that …
Yeah, this might be tricky to get right... I don't immediately see how to do it, but I want to stare at it for a while and see if maybe there's a way to make it work.
Note that tee is also used in step 1 of https://fetch.spec.whatwg.org/#concept-http-network-or-cache-fetch (same reason). I should probably abstract the "copy a request" operation.
How should clone() or tee() work when the resulting streams are read at wildly different rates? Consider the case where one branch is read to completion while the other branch is not read at all (see the sketch below).
At this point, the UA must start buffering the underlying data source for all data in between the two branches' read positions. My question is: can the UA provide back pressure on the underlying data source because the unread branch is not being consumed? This might be surprising, since it will in effect stall the branch that is being read. Can the spec clarify that the UA has the freedom to provide back pressure if one of the tee'd streams is read too slowly?
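A minimal sketch of that scenario; the names and the chunk-producing source are illustrative, not taken from the original comment:

```js
// Illustrative only: a source that keeps producing chunks on demand.
const readable = new ReadableStream({
  pull(controller) {
    controller.enqueue(new Uint8Array(1024));
  }
});

const [branch1, branch2] = readable.tee();

// branch1 is drained as fast as possible...
(async () => {
  const reader = branch1.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // ...consume value here...
  }
})();

// ...while branch2 is never read. Every chunk branch1 pulls must also be
// queued for branch2, so either that queue grows without bound (risking
// OOM) or the UA applies back pressure to the source, stalling branch1.
```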
That's our usual way of doing things... Alternatively, can we notify the streams in some way that this is happening?
Sorry, which is our usual way? OOM or stall? I agree this is the hard problem with teeing. Maybe it even needs to be an option to pick between the two behaviors? That would still leave the question of what the default is.
OOM. Platform specifications hardly ever deal with limits. |
Our underlying Gecko primitives default to providing back pressure in this particular scenario. I'd like to be able to default to back pressure to make implementation easier and more efficient. Of course, an unfortunate side effect of using back pressure here is that it makes GC observable:
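The original example is elided here; the following is an illustrative reconstruction of how back pressure could make GC observable (the names and source are assumptions, not from the comment):

```js
const readable = new ReadableStream({
  pull(controller) {
    controller.enqueue(new Uint8Array(1024));
  }
});

let [branch1, branch2] = readable.tee();

const reader = branch1.getReader();
// With back pressure tied to the slowest branch, reads on branch1 stall
// once branch2's internal queue reaches its limit, because branch2 is
// never read or cancelled.

branch2 = null;   // drop the only reference to the second branch

// Whether reads on branch1 ever resume now depends on whether the garbage
// collector has collected the second branch and released its queue; in
// other words, GC timing becomes observable to script.
```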
Or rather, I'd like the default to be weasel words to the effect that "the UA may provide back pressure to all clones if one of the peer streams is not being read".
Well, the plan was to define an actual TeeStream with a normative specification of how it operates on its single input and multiple output streams. (See old design here, but don't take it for anything serious.) Maybe the conceptual "tee a stream" could be more weasel-wordy.
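As a very rough, non-normative sketch (not the old TeeStream design referenced above, and not what the spec ultimately defines), a tee built on two output streams might look like this:

```js
// Non-normative sketch of a tee: read from a single input and enqueue each
// chunk into two output streams. Enqueuing into both branches unconditionally
// means the slower branch buffers without bound (the OOM behavior); waiting
// until both branches want data before reading would instead propagate back
// pressure to the input (the stall behavior).
function teeSketch(readable) {
  const reader = readable.getReader();
  let controller1, controller2;
  let finished = false;

  async function pump() {
    if (finished) return;
    const { done, value } = await reader.read();
    if (finished) return;
    if (done) {
      finished = true;
      controller1.close();
      controller2.close();
      return;
    }
    controller1.enqueue(value);
    controller2.enqueue(value);
  }

  const branch1 = new ReadableStream({
    start(c) { controller1 = c; },
    pull: pump
  });
  const branch2 = new ReadableStream({
    start(c) { controller2 = c; },
    pull: pump
  });

  return [branch1, branch2];
}
```

Which of those two behaviors (or a choice between them) the spec should bless is exactly the question being debated in this thread.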
Understood. I agree that pure JS stream behavior needs to be exactly defined. In the fetch Request/Response clone() case, Gecko uses an "infinite" buffer size already to match XHR's behavior of not applying any back pressure to the network. So this won't come into play for us there. I understand Blink does apply back pressure for fetch(), but I don't know how Blink's Request/Response clone() works. Back pressure on peer streams might be an issue there.
Currently Blink's fetch doesn't have a backpressure mechanism. When we implement it, I think the OOM behavior is the right way.
Domenic created PR #302 to define "teeing" clearly. Discussion happening there now. |
See yutakahirano/fetch-with-streams#14. Fetch has req.clone() and res.clone(), which are currently ill-defined. They should be defined here, in a generic fashion, and probably there should be a user-exposed API for it (one way or another). Our current experiments with this are in TeeStream.md, but those are probably very outdated.
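For context, a typical use of res.clone() that depends on teeing the body stream; the URL and cache name below are illustrative:

```js
const res = await fetch("/data.json");

// clone() has to tee the body stream so that both consumers below can read
// the body independently; how that tee behaves is what this issue asks the
// Streams spec to define.
const copy = res.clone();

const data = await res.json();           // first consumer parses the body
const cache = await caches.open("v1");
await cache.put("/data.json", copy);     // second consumer stores the copy
```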