QUIC: handle the case of both peers half-closing a stream #3343
Comments
I think this is the way to go.
I'd consider this a bug. It should be possible as a protocol designer to use "closing the read side" as an "I have read all data I was expecting". If the other side at the same time closes the write end, both are in agreement on the protocol semantics and we should continue on the happy path. Can you open an issue with

Opened quinn-rs/quinn#1487.
I thought about this some more after discussions with @marten-seemann, @mxinden and quinn-rs/quinn#1487. Maybe it's the best fix to simply not error on a

Side note: I won't be tackling this in the near future. If anybody could take ownership of this issue, it would be much appreciated!
Don't close the stream in `protocol::recv`. This is a short-term fix for #3298. The issue behind this is a general one on the QUIC transport when closing streams, as described in #3343. This PR only circumvents the issue for identify. A proper solution for our QUIC transport still needs more thought. Pull-Request: #3344.
This is likely stale now that we use
Summary
If for a QUIC stream both peers agree at the same time that no more data should be sent in one direction, one side tries to close the stream (send a `FIN`), while the other sends a `STOP_SENDING`.

Receiving a `STOP_SENDING` before / while closing a stream right now causes the `close` call to return an `Error::Stopped`.

However, in practice, in the case that all of our write data was received and acknowledged by the remote, this should not be an error. Instead, both peers simply reached the agreement that the stream is closed in this direction, and no data was lost.
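The race described above can be sketched as follows. Note these are illustrative, hypothetical types modelling the behaviour, not the actual quinn-proto or libp2p-quic API:

```rust
// Illustrative model of the simultaneous half-close race; hypothetical
// types, not the actual quinn-proto API.
#[derive(Debug, PartialEq)]
enum CloseError {
    Stopped, // models Error::Stopped
}

/// Models the current behaviour of closing the send side: if the peer's
/// STOP_SENDING arrives before our FIN is acknowledged, the close call
/// errors even though both sides agree this direction is done.
fn close(fin_acked: bool, stop_sending_received: bool) -> Result<(), CloseError> {
    if stop_sending_received && !fin_acked {
        return Err(CloseError::Stopped);
    }
    Ok(())
}

fn main() {
    // Both peers half-close at the same time: our FIN is still in flight
    // when the remote's STOP_SENDING arrives -> close errors today.
    assert_eq!(close(false, true), Err(CloseError::Stopped));
    // FIN already acknowledged: close succeeds.
    assert_eq!(close(true, false), Ok(()));
}
```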
The current behaviour is causing issues like #3298 and #3281.
Expected behaviour
If all write data has been received (and ack'ed) by the remote peer, closing a stream should not fail.
Actual behaviour
`Substream::close` always returns an error if the stream is stopped by the peer before the `FIN` frame is acknowledged.
Possible Solution
We have been discussing different solutions for this out of band in #3302.
One option would be to simply treat a stopped send-stream as successfully closed. Typically, a remote would only send a `STOP_SENDING` if it read all of our data.

However, if something unexpected happened and the remote stopped the stream (e.g. because it dropped the stream) before reading our latest data, the error would be lost. Our user would in this case wrongly assume that the remote read the data, when it actually didn't.
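This first option amounts to mapping the "stopped" error from the close path to success. A minimal sketch, using hypothetical error types rather than the real libp2p-quic code:

```rust
// Hypothetical write-side error type, for illustration only.
#[derive(Debug, PartialEq)]
enum WriteError {
    Stopped,        // peer sent STOP_SENDING
    ConnectionLost, // any other failure
}

/// Option 1: treat a STOP_SENDING received during close as a successful
/// close, on the assumption that the remote only stops a stream after
/// reading all of our data.
fn close_result(inner: Result<(), WriteError>) -> Result<(), WriteError> {
    match inner {
        // Caveat from the issue text: if the remote dropped the stream
        // before reading our latest data, that loss is silently
        // swallowed here.
        Err(WriteError::Stopped) => Ok(()),
        other => other,
    }
}

fn main() {
    assert_eq!(close_result(Err(WriteError::Stopped)), Ok(()));
    assert_eq!(
        close_result(Err(WriteError::ConnectionLost)),
        Err(WriteError::ConnectionLost)
    );
}
```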
Another option would be to check if we still have unacknowledged data on the stream, and depending on that return `Ok` / `Err`. The `quinn-proto` API currently doesn't give us this information, so we'd have to add an upstream patch.

Side note: the high-level `quinn` implementation handles this case the same way as we do, i.e. it returns an `Error` in the close call if the stream stopped. Given that we long-term consider switching to `quinn`, this is also something we should take into consideration.

I still need to think more about this. Input is appreciated.
There are easy fixes for #3298 and #3281 on the application layer for which I'll do PRs, but long-term we should fix this within `libp2p-quic`.

// cc @marten-seemann
Version
0.7.0-alpha.2
Would you like to work on fixing this bug?
Yes