
Upload tests timeout with caddy server #67

Closed · Mikle-Bond opened this issue Jan 25, 2023 · 12 comments
Labels: question (Further information is requested)

@Mikle-Bond

Hello.

I was trying to use Caddy as a reverse proxy in front of OST. The download tests work fine, but uploads behave weirdly.

[Screenshot: Speed Test by OpenSpeedTest.com, 2023-01-24 14:08]

It seems that the first few packets are sent from the browser to the server, and the rest of the transmission is not taken into account. Or fails. Or times out. I don't even know anymore. Since the graph shows the overall average, there are 5-6 spikes at the beginning when the XHR connections are initiated, and then it approaches 0 as more zero-speed measurements are taken.

I have tried every trick I could find online:

  • setting reverse_proxy > transport > compression off
  • limiting connections to HTTP 1.1
    • via reverse_proxy > transport > versions 1.1
    • via tls > alpn http/1.1
  • increasing buffer size
    • via reverse_proxy > max_buffer_size 100MiB
    • via request_body > max_size 100MiB
    • via reverse_proxy > transport > max_response_header 100MiB (just to be sure)
  • forcing buffering with reverse_proxy > buffer_requests and/or reverse_proxy > buffer_responses
  • disabling buffering with reverse_proxy > flush_interval -1
  • increasing timeouts with reverse_proxy > transport > dial_timeout 10m
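
For reference, here is a rough sketch of the kind of Caddyfile these attempts add up to (the site address and upstream are placeholders, and not all directives were active at the same time):

    # hypothetical site address and upstream; adjust to your setup
    speedtest.example.com {
        tls {
            alpn http/1.1
        }
        request_body {
            max_size 100MiB
        }
        reverse_proxy http://localhost:3000 {
            flush_interval -1
            max_buffer_size 100MiB
            transport http {
                compression off
                versions 1.1
                dial_timeout 10m
                max_response_header 100MiB
            }
        }
    }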

I've tried all of the above in different combinations, and yet upload won't work.

I know this is a Caddy-specific issue, but in another issue you mentioned that you're interested in making Caddy work with OST. The configs in #44 didn't work for me, and the author there didn't confirm that they work either.

I have also tried serving the files directly via Caddy's built-in file_server, and I tried to mimic the Nginx configs taken from here, to no avail. Exact same results.
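
For what it's worth, the file_server attempt looked roughly like this (site address and web root are placeholders):

    # hypothetical Caddyfile: serve the OpenSpeedTest static files directly
    speedtest.example.com {
        root * /srv/openspeedtest
        file_server
    }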

Interestingly, when I proxied OST via a caddy-l4 instance, without terminating TLS, everything worked fine.

I hope you will have better luck configuring Caddy. If you know of anything else I can try, please let me know.

@openspeedtest
Owner

openspeedtest commented Jan 26, 2023

@Mikle-Bond
Delete everything and try

    reverse_proxy http://localhost:3000 {
        buffer_requests
        buffer_responses
    }

https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#streaming
buffer_requests will cause the proxy to read the entire request body into a buffer before sending it upstream. This is very inefficient and should only be done if the upstream requires reading request bodies without delay (which is something the upstream application should fix).

buffer_responses will cause the entire response body to be read and buffered in memory before being proxied to the client. This should be avoided if at all possible for performance reasons, but could be useful if the backend has tighter memory constraints.
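
For context, a minimal complete site block wrapping that snippet might look like the following (the site address is a placeholder):

    speedtest.example.com {
        reverse_proxy http://localhost:3000 {
            buffer_requests
            buffer_responses
        }
    }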

@openspeedtest
Owner

    reverse_proxy http://localhost:3000 {
        buffer_requests
        buffer_responses
        flush_interval -1
        max_buffer_size 35MiB
    }

Adding flush_interval may help reduce memory usage.
As for max_buffer_size, I didn't see a difference when testing with and without it; still, I think it's better to specify a number.

@openspeedtest
Copy link
Owner

Tested 10GbE with Caddy -> https://www.youtube.com/watch?v=0pD8fen2nNg
It was also tested with SSL on a public cloud.
I think it's working fine.

@bt90

bt90 commented Jan 26, 2023

@openspeedtest out of curiosity, did you check HTTP/3?

@openspeedtest
Owner

@bt90 Yes, see #52 (comment).
It was tested using Caddy.
HTTP/2 & 3 have not yet been added to the Docker image.
I need to run more tests.
Is HTTP/3 for Nginx still in beta?

@bt90

bt90 commented Jan 26, 2023

Caddy enables HTTP/3 by default; that's why I'm asking. Note that you need an explicit binding for 443/udp if you're using their Docker container.
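
For example, with the official caddy Docker image, publishing the UDP port alongside TCP could look like this (the image tag and Caddyfile path are illustrative):

    # publish 443 over both TCP and UDP so HTTP/3 (QUIC) can reach Caddy
    docker run -d \
      -p 80:80 \
      -p 443:443 \
      -p 443:443/udp \
      -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
      caddy:2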

As for nginx, it's still in development.

@openspeedtest
Owner

Oh, I installed it without using Docker: https://caddyserver.com/docs/install#debian-ubuntu-raspbian

@bt90

bt90 commented Jan 26, 2023

That should work. Is a firewall involved?

@openspeedtest
Owner

openspeedtest commented Jan 26, 2023

@bt90
No. If UDP is blocked, I think HTTP/2 will be used.
It was a direct connection between an M1 Mac mini and my VMware ESXi server.
I also tested on a public cloud, and HTTP/3 using Caddy worked fine.

@bt90

bt90 commented Jan 26, 2023

Are you sure? I don't think that their HTTP/3 implementation is able to saturate a 10G link (yet).

quic-go/quic-go#3670

@openspeedtest
Owner

openspeedtest commented Jan 26, 2023

@bt90 I thought you asked about the HTTP/3 handshake https://youtu.be/_QQX0Ezpq8U?t=1235 through a firewall.
I tested Caddy with internal TLS on localhost; it is not touching 10Gbps.

with TLS - HTTP/3: 7000+ download and 980+ upload
without TLS - HTTP/1.1: 9400+ download and 9400+ upload

Docker Nginx HTTP/1.1 with TLS: 9400+ download and 9400+ upload

I think this was because of the consumer-grade hardware I used.
I tested NPM & Traefik Proxy on the same device using HTTP/1 and 2 last year with TLS, and found similar performance degradation when running behind a reverse proxy.
Either this is because of the hardware I used, or it is a reverse-proxy limitation by design.

@headcrushed

headcrushed commented Apr 5, 2023

To people who end up here because of unrealistic upload speeds: instead of

    reverse_proxy http://localhost:3000 {
        buffer_requests
        buffer_responses
        flush_interval -1
        max_buffer_size 35MiB
    }

use this

    reverse_proxy http://localhost:3000 {
        request_buffers 35MiB
        response_buffers 35MiB
        flush_interval -1
    }

It seems "buffer_requests", "buffer_responses", and "max_buffer_size" are deprecated.
