There is a test to reproduce the issue (currently commented out): 4a73d03

Then run with:

```
make eunit apps=couch_replicator suites=couch_replicator_small_max_request_size_target
```
This is related to issue #745, but it is broken out here to focus specifically on the 413 response and socket handling.

The hope was that updating Mochiweb to 2.17 would fix the socket-closing race condition, but that doesn't seem to be the case. The eunit run above reproduces the issue in about 10-15 runs locally on my setup (Erlang 19, master sha: cd598d8, macOS).
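To make the race concrete, here is a minimal client-side sketch (illustrative only, not CouchDB or replicator code; the module name, host, port, and sizes are made up). A client keeps streaming a large body after the server has already sent a 413 and closed; because unread data is left in the server's receive buffer, the close turns into a TCP reset, and the client observes `econnreset` before it ever reads the 413:

```erlang
-module(repro_sketch).
-export([repro/0]).

%% Illustrative sketch of the race, not CouchDB source. Stream a large
%% body at a server that replies 413 and closes mid-upload.
repro() ->
    {ok, Sock} = gen_tcp:connect("localhost", 15984,
                                 [binary, {active, false}]),
    Chunk = binary:copy(<<"x">>, 65536),
    stream(Sock, Chunk, 100).

stream(Sock, _Chunk, 0) ->
    %% By the time we try to read the response, the server-side close
    %% with unread buffered data may already have become a TCP RST:
    gen_tcp:recv(Sock, 0);
stream(Sock, Chunk, N) ->
    case gen_tcp:send(Sock, Chunk) of
        ok               -> stream(Sock, Chunk, N - 1);
        {error, _} = Err -> Err   %% e.g. {error, econnreset} mid-send
    end.
```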
nickva added a commit to cloudant/couchdb that referenced this issue on Mar 23, 2018:
Previously, when the server decided too much data was sent with the client's
request, it would immediately send a 413 response and close the socket. The
client side kept sending data even after the socket was closed with unread
data in it. When that happens, the connection is reset instead of going through
a regular close sequence. The client, specifically the replicator client,
detected the connection reset event before it had a chance to process the 413
response.
Fixes apache#1211
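The usual remedy for this class of bug is a "lingering close": send the response, half-close the write side, then read and discard whatever the client is still sending, so no unread data remains when the socket is finally closed. A minimal sketch follows (an assumed approach for illustration, not necessarily the exact change in the referenced commit; the module name and timeout are made up):

```erlang
-module(linger_sketch).
-export([send_413_then_linger/2]).

%% Sketch of a lingering close. Assumes a passive-mode
%% ({active, false}) gen_tcp socket; the 30s timeout is illustrative.
send_413_then_linger(Sock, Response413) ->
    ok = gen_tcp:send(Sock, Response413),
    ok = gen_tcp:shutdown(Sock, write),   %% FIN to the client, keep reading
    drain(Sock).

drain(Sock) ->
    case gen_tcp:recv(Sock, 0, 30000) of
        {ok, _Leftover} -> drain(Sock);   %% discard remaining request body
        {error, closed} -> ok;            %% client read the 413 and closed
        {error, _}      -> gen_tcp:close(Sock)
    end.
```

Because the server drains the in-flight body instead of closing on top of it, the kernel never sends a reset, and the client gets to read the 413 before seeing a regular close.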