File mode slow, live mode fast for same route, what am I doing wrong! #835
Comments
Ugh, my calculations were wrong; it's faster than I thought.
Yeah, I've configured all that. I've been investigating this more closely, as I thought I was getting slower speeds yesterday, and I've found something quite strange: with heavy logging enabled on the client (which is OS X), it receives the data from the server roughly 3–4x faster. I've run dozens of tests back to back, alternating between linking against SRT with heavy logging enabled and disabled, and there is a clear correlation.
On the client... this might be due to the "delivery rate" calculation on the receiving side: the faster the client receives the data, the faster the server tries to send. You can use this script to plot the stats, like the charts in #807.
Thanks. SRTO_MAXBW on the server didn't seem to make a difference. Also, I tried a longer test, transmitting 100MB instead of 10MB, and I got 5 Mbps without heavy logging and 10 Mbps with heavy logging.
So here are two logs showing the difference in the server heavy-log output when heavy logging is enabled/disabled on the client:

Server log (client heavy logging enabled), transmitted at 10 Mbps:

Server log (client heavy logging disabled), transmitted at 3 Mbps:

For a start, when it's slower, the server log is massively bigger.
Right, so here is what I have found. MAXBW does indeed help with this issue; however, I think I may have uncovered a bug: if you set MAXBW on the listen socket, then even though it gets passed down/inherited to the accepted socket (getsockopt on the accepted socket returns the correct value), it is not properly taken into account, and the accepted socket in fact behaves as if it were using the default maxbw of zero. However, if you explicitly call setsockopt for maxbw on the socket after accepting it, the value is then taken into account.

Here are two logs. I've deliberately specified a low value so I can clearly see whether it is working or not.

The first is the log when maxbw is specified as 262144 only for the listen socket, and the accepted socket is supposed to inherit this value:
https://ovcollyer.synology.me:5001/d/f/506361188731330598

The second is the log when maxbw is set to the same value directly on the accepted socket immediately after it has been accepted:
https://ovcollyer.synology.me:5001/d/f/506361155636174884

You don't even need to read the logs, just look at the size difference. When we rely on the listen socket's value, it is ignored, which in my test scenario leads to tons of retransmissions as FileCC struggles to cope, hence the larger file. With the overridden value, however, it correctly limits the bandwidth and we get a shorter log due to fewer retransmissions.

So I see two issues here:
I assume you are right in your suggestion that when the client is slowed down by heavy logging, this has a similar effect of reducing the bandwidth it tries to use. It's curious, however, that enabling heavy logging seemed to change things just enough that the connection used roughly the available bandwidth, and in fact made it behave as I would expect FileCC to behave with maxbw = 0, i.e. almost exactly utilising the available bandwidth?
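For illustration, here is a minimal sketch of the workaround described above: explicitly re-applying SRTO_MAXBW on the accepted socket right after srt_accept(), rather than relying on inheritance from the listener. The function name, error handling, and the bytes-per-second value are illustrative, not taken from the reporter's code.

```cpp
#include <srt/srt.h>
#include <sys/socket.h>
#include <cstdint>

// Accept a connection and immediately re-apply the bandwidth cap on the
// accepted socket, since (per the report above) the value inherited from
// the listener does not appear to take effect.
SRTSOCKET accept_with_maxbw(SRTSOCKET listener, int64_t maxbw_bytes_per_sec)
{
    sockaddr_storage peer;
    int peerlen = sizeof(peer);
    SRTSOCKET accepted = srt_accept(listener, reinterpret_cast<sockaddr*>(&peer), &peerlen);
    if (accepted == SRT_INVALID_SOCK)
        return SRT_INVALID_SOCK;

    // Note the int64_t type: SRTO_MAXBW is a 64-bit option (bytes per second).
    if (srt_setsockopt(accepted, 0, SRTO_MAXBW, &maxbw_bytes_per_sec,
                       sizeof(maxbw_bytes_per_sec)) == SRT_ERROR)
    {
        srt_close(accepted);
        return SRT_INVALID_SOCK;
    }
    return accepted;
}
```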
Please collect CSV stats instead. Could you also check the branch in PR #807? |
OK Maxim, I'll try to figure out how to generate the stats (this is custom code within my project, not srt-live-transmit). I'll also take a look at #807.
Stats writing code in
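For custom code, a minimal sketch of per-socket CSV stats collection might look like the following, assuming the application already has a connected SRTSOCKET and an open FILE* for the CSV output (the function name and the chosen columns are arbitrary). It polls srt_bstats() and writes a few fields of SRT_TRACEBSTATS per row.

```cpp
#include <srt/srt.h>
#include <cstdio>

// Append one CSV row of statistics for the given socket. Call this
// periodically (e.g. once per second) while the transfer is running.
void append_stats_row(SRTSOCKET sock, FILE* csv)
{
    SRT_TRACEBSTATS perf;
    if (srt_bstats(sock, &perf, 0 /* don't clear interval counters */) == SRT_ERROR)
        return;

    std::fprintf(csv, "%lld,%.3f,%.3f,%.3f,%d\n",
                 (long long)perf.msTimeStamp,   // ms since connection start
                 perf.msRTT,                    // round-trip time, ms
                 perf.mbpsSendRate,             // current sending rate
                 perf.mbpsBandwidth,            // estimated link bandwidth
                 perf.pktRetransTotal);         // retransmitted packets so far
}
```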
OK, ignore my point 2 above; I was being a fool. I didn't notice maxbw is int64_t, so I was creating some weird scenario by passing in an int. I'm having a bad day, please forgive me.
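To make the type issue concrete, here is a hedged illustration of the pitfall (variable names are made up): SRTO_MAXBW takes an int64_t, so passing the address of a plain 32-bit int hands the library a buffer of the wrong size.

```cpp
#include <srt/srt.h>
#include <cstdint>

void set_maxbw_examples(SRTSOCKET sock)
{
    // Problematic: maxbw is declared as int, so only sizeof(int) bytes are
    // passed for an option that expects a 64-bit value. Depending on the SRT
    // version this is either rejected or misinterpreted.
    int maxbw_wrong = 262144;
    srt_setsockopt(sock, 0, SRTO_MAXBW, &maxbw_wrong, sizeof(maxbw_wrong));

    // Correct: declare the value as int64_t so the full 8 bytes are supplied.
    int64_t maxbw = 262144;
    srt_setsockopt(sock, 0, SRTO_MAXBW, &maxbw, sizeof(maxbw));
}
```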
Closing as abandoned.
Between the two endpoints that I am currently testing (London -> Istanbul, both FTTC connections) I get really good performance in live mode - I've tried 7 Mbps and it streams video flawlessly. I suspect I could go even higher, but I've not tried.
However, I cannot get more than about 800 kbps using file mode. I am using the buffer API, and in my test code I throw a 10MB chunk of data at the send function.
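For context, a minimal sketch of the kind of sender described here, under the assumption that "file mode with the buffer API" means SRTO_TRANSTYPE = SRTT_FILE with SRTO_MESSAGEAPI = false, and that the data block is pushed through srt_send() in a loop. The function names and the omission of connect/error handling are deliberate simplifications, not the reporter's actual code.

```cpp
#include <srt/srt.h>

// Send a large block over a connected stream-mode socket. In buffer (stream)
// mode srt_send() may accept fewer bytes than requested, so keep calling it
// until everything has been handed to the send buffer.
bool send_blob(SRTSOCKET sock, const char* data, int total)
{
    int sent = 0;
    while (sent < total)
    {
        int n = srt_send(sock, data + sent, total - sent);
        if (n == SRT_ERROR)
            return false;
        sent += n;
    }
    return true;
}

// Create a socket configured for file transfer with the buffer (stream) API.
SRTSOCKET make_file_mode_socket()
{
    SRTSOCKET sock = srt_create_socket();
    SRT_TRANSTYPE tt = SRTT_FILE;       // use FileCC congestion control
    bool message_api = false;           // buffer (stream) API, not message API
    srt_setsockopt(sock, 0, SRTO_TRANSTYPE, &tt, sizeof(tt));
    srt_setsockopt(sock, 0, SRTO_MESSAGEAPI, &message_api, sizeof(message_api));
    return sock;                        // srt_connect() and error handling omitted
}
```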
I assume I must be missing something? Here are my logs.
Server:
https://ovcollyer.synology.me:5001/d/f/506319724680847378
Client:
https://ovcollyer.synology.me:5001/d/f/506320602078912532