
feat(server): Handle 100-continue #1232

Merged: 1 commit merged into hyperium:master on Jun 26, 2017

Conversation

@sfackler (Contributor)
cc #838

@sfackler (Contributor, Author) commented Jun 25, 2017

The 100 Continue is always sent immediately because tokio-proto tries to load the first bit of the body, but that's better than nothing!

@sfackler force-pushed the auto-continue branch 2 times, most recently from 890cead to 759e868, on June 25, 2017 at 06:11
@martell commented Jun 25, 2017

> The 100 Continue is always sent immediately because tokio-proto tries to load the first bit of the body, but that's better than nothing!

@sfackler that is the reason we closed #1040

@sfackler (Contributor, Author)

I don't agree with that rationale. Even blindly returning 100 Continue is far better than not returning one at all. Without this change, `curl http://localhost:1337/echo -F foo=bar` takes 1.01 seconds against the example echo server, since curl sits around for a second waiting for a 100 Continue that never comes.
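For context, this is roughly what that exchange looks like on the wire (header values elided; the roughly one-second pause matches curl's default wait for a 100 Continue before it gives up and sends the body anyway):

```
> POST /echo HTTP/1.1
> Host: localhost:1337
> Content-Type: multipart/form-data; boundary=...
> Content-Length: ...
> Expect: 100-continue
>
  (without this patch: no interim response arrives, so curl waits ~1s, then sends the body anyway)
  (with this patch: the server answers immediately)
< HTTP/1.1 100 Continue
> ...multipart body...
< HTTP/1.1 200 OK
```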

@sfackler force-pushed the auto-continue branch 2 times, most recently from b434e15 to a5b1646, on June 25, 2017 at 17:23
@seanmonstar (Member) left a comment

It's apparently "fine" to respond with 100 Continue, and then right after, 400 Bad Request. So I'm fine with this behavior. Of course, when tokio-rs/tokio-proto#153 has a solution, we'd want to adapt.

In this case, what's the best way to test this? The Client doesn't know how to wait for a continue either. Would the best thing be to just add a test in `tests/server.rs` that drives a raw TcpStream with timeouts?
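A minimal sketch of what such a test could look like, written against a plain blocking `TcpStream` (the address, path, body, and timeout are illustrative placeholders, and it assumes a hyper server such as the echo example is already listening; this is not the test that ultimately landed):

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::time::Duration;

#[test]
fn server_sends_100_continue() {
    // Assumes a hyper server (e.g. the echo example) is listening here.
    let mut stream = TcpStream::connect("127.0.0.1:1337").expect("connect");
    stream
        .set_read_timeout(Some(Duration::from_millis(500)))
        .expect("set read timeout");

    // Send only the headers, announcing that we expect a 100 Continue
    // before transmitting the body.
    let request = "POST /echo HTTP/1.1\r\nHost: localhost\r\nContent-Length: 3\r\nExpect: 100-continue\r\n\r\n";
    stream.write_all(request.as_bytes()).expect("write headers");

    // The server should answer with an interim 100 Continue well within the
    // read timeout; timing out here would mean the Expect header was ignored.
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).expect("read interim response");
    let interim = String::from_utf8_lossy(&buf[..n]);
    assert!(interim.starts_with("HTTP/1.1 100 Continue"), "got: {}", interim);

    // Now send the body and read the final response.
    stream.write_all(b"foo").expect("write body");
    let n = stream.read(&mut buf).expect("read final response");
    let response = String::from_utf8_lossy(&buf[..n]);
    assert!(response.starts_with("HTTP/1.1 200 OK"), "got: {}", response);
}
```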

```diff
@@ -119,6 +123,17 @@ pub fn should_keep_alive(version: HttpVersion, headers: &Headers) -> bool {
     ret
 }
 
+/// Checks if a connection is expecting a `100 Continue` before sending its body.
+#[inline]
+pub fn expecting_continue(version: HttpVersion, headers: &Headers) -> bool {
```
@seanmonstar (Member)

Could we get a test case at the bottom, like there is for `should_keep_alive`?
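For illustration, a unit test in the spirit of the existing `should_keep_alive` tests might look something like this, assuming it sits in the same module as `expecting_continue` (the import paths, the `Expect::Continue` header type, and the exact version semantics are assumptions, not the code that was eventually committed):

```rust
#[test]
fn test_expecting_continue() {
    use header::{Expect, Headers};
    use version::HttpVersion::{Http10, Http11};

    let mut headers = Headers::new();
    headers.set(Expect::Continue);

    // Only an HTTP/1.1 request carrying `Expect: 100-continue` should
    // trigger the interim response; anything else should not.
    assert!(expecting_continue(Http11, &headers));
    assert!(!expecting_continue(Http10, &headers));
    assert!(!expecting_continue(Http11, &Headers::new()));
}
```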

@sfackler (Contributor, Author)

Yeah, I was trying to figure out the right way to test this. Client support is something I'd love to have, but it's a bit more complex than the server side: you have to wait on the continue with a timeout in case the server doesn't know how to handle `Expect: 100-continue`, and resend the headers without the Expect if the server returns a 417. Manually driving the client side against a raw TcpStream seems like a decent option.
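A sketch of the client-side dance being described, written against a raw `TcpStream` rather than hyper's Client (the one-second timeout, the retry shape, and the target address are illustrative assumptions, not hyper code):

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::TcpStream;
use std::time::Duration;

/// Send a request with `Expect: 100-continue`, waiting briefly for the
/// interim response before transmitting the body. Only a sketch of the flow.
fn post_with_expect(addr: &str, body: &[u8]) -> std::io::Result<String> {
    let mut stream = TcpStream::connect(addr)?;
    stream.set_read_timeout(Some(Duration::from_secs(1)))?;

    let headers = format!(
        "POST /echo HTTP/1.1\r\nHost: localhost\r\nContent-Length: {}\r\nExpect: 100-continue\r\n\r\n",
        body.len()
    );
    stream.write_all(headers.as_bytes())?;

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        // Got an interim response within the timeout.
        Ok(n) => {
            let interim = String::from_utf8_lossy(&buf[..n]);
            if interim.starts_with("HTTP/1.1 417") {
                // Server refuses the expectation: retry without the Expect header.
                let mut retry = TcpStream::connect(addr)?;
                let headers = format!(
                    "POST /echo HTTP/1.1\r\nHost: localhost\r\nContent-Length: {}\r\n\r\n",
                    body.len()
                );
                retry.write_all(headers.as_bytes())?;
                retry.write_all(body)?;
                let n = retry.read(&mut buf)?;
                return Ok(String::from_utf8_lossy(&buf[..n]).into_owned());
            }
            // Otherwise assume we saw 100 Continue and fall through to send the body.
        }
        // No interim response: the server may not understand Expect, so send
        // the body anyway once the timeout elapses.
        Err(ref e) if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {}
        Err(e) => return Err(e),
    }

    stream.write_all(body)?;
    let n = stream.read(&mut buf)?;
    Ok(String::from_utf8_lossy(&buf[..n]).into_owned())
}

fn main() -> std::io::Result<()> {
    println!("{}", post_with_expect("127.0.0.1:1337", b"foo=bar")?);
    Ok(())
}
```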

@sfackler (Contributor, Author)

Updated with tests

@seanmonstar merged commit ecd4c86 into hyperium:master on Jun 26, 2017
@seanmonstar (Member)

Thanks!

@martell commented Jun 26, 2017

Awesome, now we have some support in tree. :)

@seanmonstar, last week you said on IRC that there might be something hyper can do about the buffering. Can we get an outline of your thoughts on that, maybe in the discussion on #838?
I will also link the different implementation attempts carl tried for the futures crate.

@seanmonstar (Member)

My thoughts are really short, probably flawed, and untested. Essentially, record some state `Polled { NotPolled, TokioBufferedPolled, UserPolled }`, updated in the `Conn::poll` method, such that on the second poll we believe the user has polled (`UserPolled`) and would then respond with a 100-continue.

This might not work at all. It's possible that the buffer in Tokio will know better, and if the user polls after the buffer has, and the Conn hasn't notified that the task is ready for reading, it will short circuit and not poll the Conn again. I have no idea.
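A very rough sketch of that state-tracking idea (the type, field, and method names here are invented for illustration; nothing like this exists in the codebase):

```rust
/// Who has polled the request body so far, tracked per connection so the
/// server can distinguish tokio-proto's internal buffering poll from the
/// user's own poll.
#[derive(Clone, Copy, PartialEq)]
enum Polled {
    NotPolled,
    TokioBufferedPolled,
    UserPolled,
}

struct Conn {
    polled: Polled,
    expects_continue: bool,
    // ... transport, parsed request head, etc.
}

impl Conn {
    /// Called whenever the request body is polled. On the second poll we
    /// assume the user (not tokio-proto's buffer) is asking for data, and
    /// only then emit the interim 100 Continue.
    fn on_body_poll(&mut self) -> Option<&'static str> {
        self.polled = match self.polled {
            Polled::NotPolled => Polled::TokioBufferedPolled,
            Polled::TokioBufferedPolled | Polled::UserPolled => Polled::UserPolled,
        };
        if self.polled == Polled::UserPolled && self.expects_continue {
            self.expects_continue = false;
            Some("HTTP/1.1 100 Continue\r\n\r\n")
        } else {
            None
        }
    }
}

fn main() {
    let mut conn = Conn { polled: Polled::NotPolled, expects_continue: true };
    assert!(conn.on_body_poll().is_none()); // tokio-proto's buffering poll
    assert!(conn.on_body_poll().is_some()); // user's poll: send 100 Continue
}
```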

@sfackler deleted the auto-continue branch on June 26, 2017
@sfackler (Contributor, Author)

Thanks!
