"Timed out waiting for socket" when attempting to upload multiple large files via Companion #3640
Comments
Hi! Actually we recently introduced that timeout to prevent memory leaks. Are you using self-hosted Companion or Transloadit-hosted? If you are using self-hosted, could you try to increase that timeout? Or maybe we should prevent Uppy from sending any POST requests in the first place, unless it can also start a socket connection at the same time.
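The timeout being discussed behaves roughly like a race between the socket becoming ready and a timer. This is a hypothetical sketch (function name and shape are assumptions, not Companion's actual code) of that pattern:

```javascript
// Hypothetical sketch, not Companion's actual implementation: a pending
// upload waits for its websocket, racing a timer; if the timer wins,
// the "Timed out waiting for socket" error fires.
function waitForSocket (socketReady, timeoutMs = 60000) {
  let timer
  const timeout = new Promise((_resolve, reject) => {
    timer = setTimeout(
      () => reject(new Error('Timed out waiting for socket connection')),
      timeoutMs,
    )
  })
  // whichever settles first wins; always clear the timer afterwards
  return Promise.race([socketReady, timeout]).finally(() => clearTimeout(timer))
}
```

Raising the timeout just stretches the timer; it doesn't change the fact that files can be queued with no socket in sight.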
Thanks for picking this up @mifi! We're currently self-hosted. I can try increasing it. EDIT: I tried upping the […]
Holding off the requests until sockets are available sounds like a better approach; third-party sources probably wouldn't like having many connections sitting idle anyway. How does Transloadit handle this use case? Do you limit file/batch sizes, file counts, etc.?
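The hold-off idea could look something like the following. This is a minimal sketch under assumed names (`SocketGate`, `startRemoteUpload` are illustrative, not Uppy's API): the POST that kicks off a remote upload only fires once a socket slot is actually free.

```javascript
// Illustrative sketch, not Uppy's actual API: gate the POST request on
// socket-slot availability instead of sending it eagerly.
class SocketGate {
  constructor (maxSockets) {
    this.maxSockets = maxSockets
    this.active = 0
    this.waiting = []
  }

  async acquire () {
    if (this.active < this.maxSockets) {
      this.active++
      return
    }
    // park until a finishing upload hands its slot over
    await new Promise((resolve) => this.waiting.push(resolve))
  }

  release () {
    const next = this.waiting.shift()
    if (next) next() // hand the slot straight to a queued upload
    else this.active--
  }
}

async function startRemoteUpload (gate, doPost) {
  await gate.acquire() // hold the POST until a socket slot is free
  try {
    return await doPost()
  } finally {
    gate.release()
  }
}
```

Handing the slot directly to the next waiter in `release()` (instead of decrementing and re-checking) avoids a brief window where more than `maxSockets` uploads could start.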
Hi @mifi, are there plans to modify this behavior? Thanks for any update!
Thanks for your research into this. I think what is happening is that once […]. More specifically:
On a side note, I find it odd that you're getting the error […]. These observations still make me think that this issue should be fixed in the Uppy client, by making Uppy only send the POST request once it can also start the socket connection.
Hi @mifi, thanks for trying to rally around getting this issue fixed. Unfortunately, we need Companion working soon for this case, or we are going to have to move off Uppy for these crossload services. Our use case is crossloading large video files, and several of them. This used to work just fine in Uppy + Companion 2.4.0, but broke in this way when we upgraded to 3.5.0 (not sure about the other 3.x releases). Rolling back to 2.x is not an option for us. One thing I noticed is that the same issue happens, with the same error, on the example demo; however, I also noticed a note in that example: […] Do you have an estimate of when this issue might get addressed? I have to assume that crossloading many large files is a core use case for Uppy. Again, if not, let us know.
@rcunning just raised this with the team; we'll take a look. It's not immediately obvious to me where the fix is, so I can't give an estimate as of now, but we're tackling it this week.
Yes, it's a note to pair with the optional […].
@Murderlon Thank you for responding. Digging into the code a little, it appears that if we limited concurrency here, it would solve the problem. Each time that […]
Hi @Murderlon, any movement on this? I did a little experimentation, and it appears that the Drive and Dropbox APIs both limit concurrency per user to 3. It seems a fix in upload.js to limit concurrency to 3 for each […]
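A per-user cap like the one proposed here could be sketched as follows. Everything below is hypothetical (this is not Companion's actual upload.js); it just shows the shape of the band-aid: at most 3 simultaneous uploads per user, with extra ones parked until a slot frees up instead of timing out.

```javascript
// Hypothetical per-user concurrency cap (not Companion's real code):
// tasks beyond the cap wait for a slot handed over by a finishing upload.
function makePerUserLimiter (max = 3) {
  const active = new Map() // userId -> running upload count
  const queued = new Map() // userId -> resolvers waiting for a slot

  return async function run (userId, task) {
    if ((active.get(userId) ?? 0) >= max) {
      // park until one of this user's uploads hands its slot over
      await new Promise((resolve) => {
        const q = queued.get(userId) ?? []
        q.push(resolve)
        queued.set(userId, q)
      })
    } else {
      active.set(userId, (active.get(userId) ?? 0) + 1)
    }
    try {
      return await task()
    } finally {
      const q = queued.get(userId) ?? []
      if (q.length > 0) q.shift()() // hand the slot to the next waiter
      else active.set(userId, active.get(userId) - 1)
    }
  }
}
```

Keying the counters per user means one user's large batch can't starve another user's uploads.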
From my understanding it's not that easy. We have exponential backoff on the client, which uses the client's […]
Just did a pairing session with @arturi and @aduh95, but we couldn't get it to work. We request a server token for the websocket connection instantly here: `packages/@uppy/tus/src/index.js`, lines 456 to 466 (commit 62b2cbd).
But in `packages/@uppy/tus/src/index.js`, lines 593 to 597 (commit 62b2cbd), […]
We can't define the socket connection inside the rate limiter, because it returns a […]. For now, the best way to deal with this is setting a very high timeout for sockets. A proper fix requires a different approach, which we still need to figure out. Simply limiting to 3 on the server, as per your suggestion, only works if the client coincidentally has the same limit, which is not a good solution.
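To make the constraint concrete: a rate-limited queue wraps a task and gives back a Promise for the task's result, so a long-lived socket opened inside the task would hold its queue slot for the socket's entire lifetime. The sketch below is illustrative only (`TinyQueue` is an assumed name, not @uppy/utils' actual `RateLimitedQueue`):

```javascript
// Minimal promise-returning rate limiter, for illustration only.
// A wrapped task occupies a slot from start until its Promise settles,
// which is why a socket's whole lifetime can't live inside it.
class TinyQueue {
  constructor (limit) {
    this.limit = limit
    this.running = 0
    this.pending = []
  }

  wrap (fn) {
    return (...args) => new Promise((resolve, reject) => {
      const start = () => {
        this.running++
        fn(...args).then(resolve, reject).finally(() => {
          this.running-- // free the slot only once the task settles
          const next = this.pending.shift()
          if (next) next()
        })
      }
      if (this.running < this.limit) start()
      else this.pending.push(start)
    })
  }
}
```

If the "task" were "open socket and keep it alive until the upload finishes", the slot would be held for the full upload, which is exactly the coupling being discussed.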
@Murderlon OK, thanks for the update... indeed, a bit more complex than I thought. I appreciate you trying to solve it the right way. We went ahead and implemented a band-aid patch to limit concurrency by […].
I tried to find a solution, but I'm struggling to understand the rate limiter code and how it works with promises.
I think setting a very high timeout for sockets is not a viable solution, as demonstrated above: the connection to Google Drive will be terminated by them after 90 seconds of no activity, and probably similarly for other providers.
This is what we tried, but it didn't work, and it makes things conceptually very hard to grasp and maintain. But there might not be another solution, so we would have to try to make it work, I suppose.
I am still facing this issue. I am on the latest version of Uppy, trying to upload 1000+ images from Google Drive to S3. When I set the limit to 20, the upload doesn't go beyond 15%, but when I set it to 3, the uploads sort of hang around 80%. I am not able to upload the entire dataset at once. Any help would be really appreciated. Example logs on the Companion server: […]
@lavisht22 what versions of Uppy and Companion are you using? |
@aduh95 I am using […].
Fixes: transloadit#3640 Co-authored-by: Merlijn Vos <merlijn@soverin.net>
Fixes: #3640 Co-authored-by: Merlijn Vos <merlijn@soverin.net>
Hi all,
I'm using Uppy/Companion to offer file uploads from remote sources, and am hitting "Timed out waiting for socket connection" reliably in the following use case:
I have Tus and AwsS3Multipart set up as mutually-exclusive destinations, with both their `limit` config set to `3`. The timeout occurs when either one is used, so it's destination-agnostic.

Debugging
It looks like Companion attempts to assign sockets (apparently limited by the `limit` config for the destination) to every file the moment they're added to the upload batch, and starts a timeout for those that miss out on the first attempt. If no socket frees up in time (default 60000 ms), the timeout error will be thrown on the server for any file still waiting on a socket. This leaves the UI in a seemingly frozen state, with no indication of a problem or offer to retry.

Probable expected behavior
Perhaps socket assignment should be attempted only when sockets free up? The current approach works for many smaller files, but easily breaks if the files are too large or if the connection is slow at any given time.
What I've tried
- Leaving `limit` unset on the destination configs would allow all files to be assigned sockets immediately, but this workaround is not desirable, since the limit was put in place to keep upload rates manageable.
- Setting `streamingUpload` to `true`, based on the similarity of the problem described in #3098 (Uploading from Google Drive hangs after a while), didn't fix the problem.

Thanks for any help!