abort() followed by start() is broken on non-file streams #275

Closed
mifi opened this issue Aug 31, 2021 · 19 comments · Fixed by #385
Comments

@mifi
Collaborator

mifi commented Aug 31, 2021

Describe the bug
With a non-file stream, when an upload is start()-ed and then, after a few seconds of progress, we call abort() followed by another start() a few seconds later, the upload grinds to a halt and tus-js-client leaks stream event listeners. This does not happen with file-based streams (fs.createReadStream).

To Reproduce

mkdir stream-issue
cd stream-issue
yarn add tus-js-client into-stream@6 throttle

create index.js:

const fs = require('fs')
const { Upload } = require('tus-js-client');
const intoStream = require('into-stream');
const Throttle = require('throttle');

/*
const { Server, FileStore } = require('tus-node-server');
const server = new Server();
server.datastore = new FileStore({
    path: '/files'
});

const host = '127.0.0.1';
const port = 1080;
server.listen({ host, port }, () => {
    console.log(`[${new Date().toLocaleTimeString()}] tus server listening at http://${host}:${port}`);
}); */


// Replace this with the URL of an assembly that is in the state "UPLOADING", for example by pausing an upload using uppy
const assemblyUrl = 'http://api2.qarq.transloadit.com/assemblies/5c990a3f19cd48e8b457d20b8bea11a1'


;(async () => {
  const buf = Buffer.alloc(3e6) // 3MB
  const uploadSize = buf.length


  // THIS CODE REPRODUCES THE PROBLEM:
  const source = intoStream(buf)
  const stream = new Throttle(500e3);
  source.pipe(stream)


  // WITH THIS CODE IT WORKS AS EXPECTED
  /*
  fs.writeFileSync('./tmpfile', buf);
  const stream = fs.createReadStream('./tmpfile')
  */


  let hadFirstProgress = false;
  const tus = new Upload(stream, {
    endpoint: 'https://api2.transloadit.com/resumable/files/',
    uploadLengthDeferred: false,
    retryDelays: [0, 1000, 3000, 5000],
    uploadSize,
    chunkSize: 50e6,
    addRequestId: true,
    metadata: {
      filename: '10625646044fe84a0be97a4fd5c42181.mp4',
      filetype: 'video/mp4',
      username: 'John',
      license: 'Creative Commons',
      name: '10625646044fe84a0be97a4fd5c42181.mp4',
      type: 'video/mp4',
      check_test: '1',
      yo: '1',
      bla: '12333',
      assembly_url: assemblyUrl,
      fieldname: 'file'
    },
    onError (error) {
      console.log(error)
    },
    onProgress (bytesUploaded, bytesTotal) {
      if (!hadFirstProgress) onFirstProgress();
      hadFirstProgress = true;
      console.log({ bytesUploaded, bytesTotal })
    },
    onSuccess (data) {
      console.log('success', data)
    },
  })

  tus.start()

  function onFirstProgress() {
    setTimeout(() => {
      console.log('aborting')
      tus.abort();
    }, 2000)

    setTimeout(() => {
      console.log('resuming')
      tus.start()
    }, 5000)
  }
})().catch(console.error)

Now create an assembly in the state UPLOADING and replace assemblyUrl with the assembly's URL.

node index.js

Observe the log output:

{ bytesUploaded: 0, bytesTotal: 3000000 }
{ bytesUploaded: 16384, bytesTotal: 3000000 }
{ bytesUploaded: 50000, bytesTotal: 3000000 }
{ bytesUploaded: 180224, bytesTotal: 3000000 }
{ bytesUploaded: 250000, bytesTotal: 3000000 }
{ bytesUploaded: 409600, bytesTotal: 3000000 }
{ bytesUploaded: 458752, bytesTotal: 3000000 }
{ bytesUploaded: 507904, bytesTotal: 3000000 }
{ bytesUploaded: 606208, bytesTotal: 3000000 }
{ bytesUploaded: 655360, bytesTotal: 3000000 }
{ bytesUploaded: 753664, bytesTotal: 3000000 }
{ bytesUploaded: 819200, bytesTotal: 3000000 }
{ bytesUploaded: 900000, bytesTotal: 3000000 }
{ bytesUploaded: 950272, bytesTotal: 3000000 }
{ bytesUploaded: 1015808, bytesTotal: 3000000 }
{ bytesUploaded: 1064960, bytesTotal: 3000000 }
{ bytesUploaded: 1130496, bytesTotal: 3000000 }
{ bytesUploaded: 1163264, bytesTotal: 3000000 }
aborting
resuming
{ bytesUploaded: 1146810, bytesTotal: 3000000 }
{ bytesUploaded: 1261568, bytesTotal: 3000000 }
{ bytesUploaded: 1261568, bytesTotal: 3000000 }
{ bytesUploaded: 1277952, bytesTotal: 3000000 }
{ bytesUploaded: 1277952, bytesTotal: 3000000 }
{ bytesUploaded: 1283616, bytesTotal: 3000000 }
{ bytesUploaded: 1283616, bytesTotal: 3000000 }
{ bytesUploaded: 1283616, bytesTotal: 3000000 }
{ bytesUploaded: 1289280, bytesTotal: 3000000 }
{ bytesUploaded: 1289280, bytesTotal: 3000000 }
{ bytesUploaded: 1294336, bytesTotal: 3000000 }
{ bytesUploaded: 1294336, bytesTotal: 3000000 }
{ bytesUploaded: 1300000, bytesTotal: 3000000 }
{ bytesUploaded: 1300000, bytesTotal: 3000000 }
{ bytesUploaded: 1321440, bytesTotal: 3000000 }
{ bytesUploaded: 1321440, bytesTotal: 3000000 }
{ bytesUploaded: 1310720, bytesTotal: 3000000 }
{ bytesUploaded: 1310720, bytesTotal: 3000000 }
{ bytesUploaded: 1316384, bytesTotal: 3000000 }
{ bytesUploaded: 1337824, bytesTotal: 3000000 }
{ bytesUploaded: 1337824, bytesTotal: 3000000 }
{ bytesUploaded: 1386976, bytesTotal: 3000000 }
{ bytesUploaded: 1386976, bytesTotal: 3000000 }
{ bytesUploaded: 1332768, bytesTotal: 3000000 }
{ bytesUploaded: 1354208, bytesTotal: 3000000 }
{ bytesUploaded: 1403360, bytesTotal: 3000000 }
{ bytesUploaded: 1403360, bytesTotal: 3000000 }
{ bytesUploaded: 1468896, bytesTotal: 3000000 }
{ bytesUploaded: 1468896, bytesTotal: 3000000 }
{ bytesUploaded: 1360720, bytesTotal: 3000000 }
{ bytesUploaded: 1409872, bytesTotal: 3000000 }
{ bytesUploaded: 1475408, bytesTotal: 3000000 }
{ bytesUploaded: 1475408, bytesTotal: 3000000 }
{ bytesUploaded: 1507968, bytesTotal: 3000000 }
{ bytesUploaded: 1507968, bytesTotal: 3000000 }
{ bytesUploaded: 1419744, bytesTotal: 3000000 }
{ bytesUploaded: 1485280, bytesTotal: 3000000 }
{ bytesUploaded: 1517840, bytesTotal: 3000000 }
{ bytesUploaded: 1517840, bytesTotal: 3000000 }
{ bytesUploaded: 1577072, bytesTotal: 3000000 }
{ bytesUploaded: 1577072, bytesTotal: 3000000 }
{ bytesUploaded: 1436128, bytesTotal: 3000000 }
{ bytesUploaded: 1501664, bytesTotal: 3000000 }
{ bytesUploaded: 1534224, bytesTotal: 3000000 }
{ bytesUploaded: 1593456, bytesTotal: 3000000 }
{ bytesUploaded: 1593456, bytesTotal: 3000000 }
{ bytesUploaded: 1708144, bytesTotal: 3000000 }
{ bytesUploaded: 1708144, bytesTotal: 3000000 }
(node:65178) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 end listeners added to [Throttle]. Use emitter.setMaxListeners() to increase limit
{ bytesUploaded: 1550608, bytesTotal: 3000000 }
{ bytesUploaded: 1609840, bytesTotal: 3000000 }
{ bytesUploaded: 1724528, bytesTotal: 3000000 }
{ bytesUploaded: 1724528, bytesTotal: 3000000 }
{ bytesUploaded: 1822832, bytesTotal: 3000000 }
{ bytesUploaded: 1822832, bytesTotal: 3000000 }
(node:65178) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 data listeners added to [Throttle]. Use emitter.setMaxListeners() to increase limit
{ bytesUploaded: 1617200, bytesTotal: 3000000 }
{ bytesUploaded: 1731888, bytesTotal: 3000000 }
{ bytesUploaded: 1830192, bytesTotal: 3000000 }
...many more progress events...
{ bytesUploaded: 2920560, bytesTotal: 3000000 }
{ bytesUploaded: 2979984, bytesTotal: 3000000 }
{ bytesUploaded: 2979984, bytesTotal: 3000000 }
{ bytesUploaded: 3000000, bytesTotal: 3000000 }
success undefined

Note: the file does get uploaded successfully; it just takes a very long time, and it leaks EventEmitter listeners.

Alternatively, easier to reproduce but different error/outcome:

  • Either replace endpoint with 'https://tusd.tusdemo.net/files/'. No assemblyUrl needed. The upload completely hangs, then fails with an HTTP 499 after a while.
  • Or replace endpoint with 'http://127.0.0.1:1080/files/' and enable/uncomment the tus-node-server code. This also causes the upload to slow to a halt (even slower progress events).

The same issue happens even without Throttle. I also tried an fs.createReadStream piped through a PassThrough stream, with the same result.

Expected behavior
It should continue uploading with normal speed after calling start() again, just like for file streams.


@mifi mifi added the bug label Aug 31, 2021
mifi added a commit to transloadit/uppy that referenced this issue Sep 2, 2021
This allows for upload to start almost immediately without having to first download the file.
And it allows for uploading bigger files, because transloadit assembly will not timeout,
as it will get upload progress events all the time.
No longer need for illusive progress.
Also fix eslint warnings and simplify logic

Still TODO: TUS pause/resume has a bug:
tus/tus-js-client#275
@Acconut
Member

Acconut commented Oct 13, 2021

@mifi I will try to look into this in the next week, FYI.

mifi added a commit to transloadit/uppy that referenced this issue Nov 1, 2021
…ad/download without saving to disk (#3159)

* rewrite to async/await

* Only fetch size (HEAD) if needed #3034

* Update packages/@uppy/companion/src/server/controllers/url.js

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>

* Change HEAD to GET in getURLMeta

and abort request immediately upon response headers received
#3034 (comment)

* fix lint

* fix lint

* cut off length of file names

or else we get
"MetadataTooLarge: Your metadata headers exceed the maximum allowed metadata size" in tus / S3

* try to fix flaky test

* remove iife and cleanup code a bit

* fix lint by reordering code

* rename Uploader to MultipartUploader

* Rewrite Uploader to use fs-capacitor #3098

This allows for upload to start almost immediately without having to first download the file.
And it allows for uploading bigger files, because transloadit assembly will not timeout,
as it will get upload progress events all the time.
No longer need for illusive progress.
Also fix eslint warnings and simplify logic

Still TODO: TUS pause/resume has a bug:
tus/tus-js-client#275

* add comment in dev Dashboard and pull out variable

* fix a bug where remote xhr upload would ignore progress events in the UI

* fix bug where s3 multipart cancel wasn't working

* fix also cancel for xhr

* Rewrite providers to use streams

This removes the need for disk space as data will be buffered in memory and backpressure will be respected
#3098 (comment)
All providers "download" methods will now return a { stream } which can be consumed by uploader.

Also:
- Remove capacitor (no longer needed)
- Change Provider/SearchProvider API to async (Breaking change for custom companion providers)
- Fix the case with unknown length streams (zoom / google drive). Need to be downloaded first
- rewrite controllers deauth-callback, thumbnail, list, logout to async
- getURLMeta: make sure size is never NaN (NaN gets converted to null in JSON.stringify when sent to client but not when used in backend)
- fix purest mock (it wasn't returning statusCode on('response'))
- add missing http mock for "request" for THUMBNAIL_URL and http://url.myendpoint.com/file (these request errors were never caught by tests previously)
- "upload functions with tus protocol" test: move filename checking to new test where size is null. Fix broken expects
- fix some lint

* Implement streamingUpload flag

COMPANION_STREAMING_UPLOAD
Default to false due to backward compatibility
If set to true, will start to upload files at the same time as dowlnoading them, by piping the streams

- Also implement progress for downloading too
- and fix progress duplication logic
- fix test that assumed file was fully downloaded after first progress event

* rearrange validation logic

* add COMPANION_STREAMING_UPLOAD to env.test.sh too

* implement maxFileSize option in companion

for both unknown length and known length downloads

* fix bug

* fix memory leak when non 200 status

streams were being kept

* fix lint

* Add backward-compatibility for companion providers

Implement a new static field "version" on providers, which when not set to 2,
will cause a compatibility layer to be added for supporting old callback style provider api

also fix some eslint and rename some vars

* document new provider API

* remove static as it doesn't work on node 10

* try to fix build issue

* degrade to node 14 in github actions

due to hitting this error: nodejs/node#40030
https://github.com/transloadit/uppy/pull/3159/checks?check_run_id=3544858518

* pull out duplicated logic into reusable function

* fix lint

* make methods private

* re-add unsplash download_location request

got lost in merge

* add try/catch

as suggested #3159 (comment)

* Only set default chunkSize if needed

for being more compliant with previous behavior when streamingUpload = false

* Improve flaky test

Trying to fix this error:

FAIL packages/@uppy/utils/src/delay.test.js
  ● delay › should reject when signal is aborted

    expect(received).toBeLessThan(expected)

    Expected: < 70
    Received:   107

      32 |     const time = Date.now() - start
      33 |     expect(time).toBeGreaterThanOrEqual(30)
    > 34 |     expect(time).toBeLessThan(70)
         |                  ^
      35 |   })
      36 | })
      37 |

      at Object.<anonymous> (packages/@uppy/utils/src/delay.test.js:34:18)

https://github.com/transloadit/uppy/runs/3984613454?check_suite_focus=true

* Apply suggestions from code review

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>

* fix review feedback & lint

* Apply suggestions from code review

Co-authored-by: Merlijn Vos <merlijn@soverin.net>

* remove unneeded ts-ignore

* Update packages/@uppy/companion/src/server/controllers/url.js

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>

* Update packages/@uppy/companion/src/server/Uploader.js

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>

* reduce nesting

* fix lint

* optimize promisify

#3159 (comment)

* Update packages/@uppy/companion/test/__tests__/uploader.js

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>

Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>
Co-authored-by: Merlijn Vos <merlijn@soverin.net>
@Acconut
Copy link
Member

Acconut commented Feb 21, 2022

Thank you for the detailed report, Mikael. I was able to reproduce the issue when uploading to tusd.tusdemo.net. After aborting and restarting the upload, a few KBs are transferred to the tus server, but then it hangs until a timeout. I am quite certain that this is caused by how tus-js-client handles streams, in particular the StreamSource and SlicingStream classes: https://github.com/tus/tus-js-client/blob/master/lib/node/sources/StreamSource.js and https://github.com/tus/tus-js-client/blob/master/lib/node/sources/SlicingStream.js. I suspect there is some buggy behavior in there, but I wasn't able to pinpoint it directly, partly because my experience with streams in Node.js does not run very deep.

I would like to describe the goal behind the StreamSource class and maybe you can help me with finding a better implementation for it. Basically, StreamSource is an implementation of the FileSource interface:

interface FileSource {
  size: number;
  slice(start: number, end: number): Promise<SliceResult>;
  close(): void;
}

interface SliceResult {
  // Platform-specific data type which must be usable by the HTTP stack as a body.
  value: any;
  done: boolean;
}

It is used by tus-js-client to turn a file, blob, buffer, etc. into chunks that can be sent in a single HTTP request. StreamSource does this for Node's readable streams. The challenge here is not only that the stream must be sliced into potentially multiple parts, but also that a certain amount of data must be buffered, so it can be used for resuming the upload if some data got lost on the way to the server.

StreamSource has two major functions:

  1. constructor(stream, chunkSize), where stream is the readable stream provided by the user, for example the Throttle instance. chunkSize defines the number of bytes that StreamSource should buffer so that data can be retransmitted.
  2. slice(start, end) returns a Buffer or another Readable stream representing the data inside the stream beginning at position start and ending at end. The return value can be anything that can be sent inside an HTTP request by Node's http module. slice is invoked for every HTTP PATCH request that is sent. There are some constraints on this method:
  • end - start <= chunkSize: chunkSize is basically the maximum number of bytes that can be requested at a single time and is also the maximum number of bytes that shall be buffered.
  • If we have two calls, first slice(startA, endA) and then slice(startB, endB), then startA <= startB. This means that tus-js-client may request the same data slice again (e.g., if an entire request must be retried). If the data has already been read from the stream, then the data should be taken from the internal buffer. However, the condition startA <= startB also means that the start position is monotonically increasing, so we always move forwards, never backwards.
  • The return value of slice will be piped into an HTTP request. If this request is aborted/destroyed, the reading of data from the stream should be paused until another call to slice is made, which can resume the flow of data from the original stream.
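
For illustration only, here is a minimal, hypothetical sketch of a source that satisfies this contract by keeping a rolling buffer from the most recent start offset onwards. The class name BufferingStreamSource and all details are mine, not the actual StreamSource implementation; error handling and tus-specific details (such as setting value.size) are omitted.

// Hypothetical sketch of the FileSource contract above, not the actual
// tus-js-client StreamSource.
class BufferingStreamSource {
  constructor (stream, chunkSize) {
    this._stream = stream
    // The caller never requests more than `chunkSize` bytes at once, so the
    // internal buffer stays bounded by roughly that amount.
    this._chunkSize = +chunkSize
    this._buf = Buffer.alloc(0) // buffered bytes, starting at absolute offset _bufPos
    this._bufPos = 0
    this._ended = false
    this.size = null // unknown for plain streams; the user supplies uploadSize

    stream.pause()
    stream.once('end', () => { this._ended = true })
  }

  async slice (start, end) {
    if (start < this._bufPos) {
      throw new Error('requested data has already been discarded')
    }

    // Pull data from the stream until the buffer covers `end` or the stream ends.
    while (!this._ended && this._bufPos + this._buf.length < end) {
      const chunk = this._stream.read()
      if (chunk === null) {
        // Wait for more data (or for the stream to end), then try again.
        await new Promise((resolve) => {
          const onReady = () => {
            this._stream.off('readable', onReady)
            this._stream.off('end', onReady)
            resolve()
          }
          this._stream.once('readable', onReady)
          this._stream.once('end', onReady)
        })
        continue
      }
      this._buf = Buffer.concat([this._buf, chunk])
    }

    // Because `start` never decreases, everything before it can be dropped,
    // while everything from `start` onwards is kept for possible retries.
    this._buf = this._buf.subarray(start - this._bufPos)
    this._bufPos = start

    const value = this._buf.subarray(0, end - start)
    const done = this._ended && value.length === 0
    return { value: done ? null : value, done }
  }

  close () {
    this._stream.destroy()
  }
}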

Now, I believe that the current implementation of StreamSource sometimes loses data, as you have experienced in your bug report. I wanted to ask if you know a good way of implementing the above interface using Node's streams that does not leak memory, event listeners, or data. I feel like I don't know enough about streams to pull this off.

Any help in that regard is appreciated. Maybe also @juliangruber has any idea about this.

@mifi
Collaborator Author

mifi commented Feb 22, 2022

Thanks for explaining the constraints/specification of this class. One of the reasons I never started working on this is that I didn't really know how it all works. I have some experience with streams, but I find working with streams and low-level byte handling really mind-bending and complicated. However, I can try to spend a few hours to see if I can come up with something that works :)

@juliangruber

The SlicingStream and StreamSource classes use some outdated or non-existent Node.js stream patterns. I think we could improve this in general by giving them an overhaul, and hopefully also fix the bug in question here. Shall I give it a stab and document the changes?

@mifi
Collaborator Author

mifi commented Feb 22, 2022

I'm having a try at it now, attempting to solve it by removing SlicingStream and instead returning a Buffer, because a stream that produces streams complicates things a lot.

@juliangruber

I think an API like this could work:

const slicer = new Slicer(chunkSize)
const seeker = new Seeker(from, to)
source.pipe(slicer).pipe(seeker)
const result = await seeker.find()
source.destroy()
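
Purely as an illustration of that idea (Slicer and Seeker are hypothetical names from the sketch above, not existing classes or npm packages), a Seeker could be a Transform stream that only passes through the bytes in the [from, to) window:

const { Transform } = require('stream')

// Hypothetical Seeker: passes through only the bytes in [from, to).
class Seeker extends Transform {
  constructor (from, to) {
    super()
    this._from = from
    this._to = to
    this._pos = 0 // absolute offset of the first byte of the next incoming chunk
  }

  _transform (chunk, _encoding, callback) {
    const chunkStart = this._pos
    const chunkEnd = chunkStart + chunk.length
    this._pos = chunkEnd

    // Overlap between this chunk and the requested [from, to) window.
    const start = Math.max(this._from, chunkStart)
    const end = Math.min(this._to, chunkEnd)
    if (start < end) {
      this.push(chunk.subarray(start - chunkStart, end - chunkStart))
    }
    callback()
  }
}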

@Acconut
Member

Acconut commented Feb 22, 2022

@juliangruber Thank you very much! Are the Slicer and Seeker classes already available in some package on npm or would we have to develop these on our own?

@juliangruber

I found some modules for this on npm, maybe some work for this use case:

@Acconut
Member

Acconut commented Feb 22, 2022

I'm having a try at it now, attempting to solve it by removing SlicingStream and instead returning a Buffer, because a stream that produces streams complicates things a lot.

Interesting approach that I haven't thought about before. My concern would be that we first have to buffer the data (e.g. 10MB if that is the chunk size) before sending it off to the server. This would cause a time delay in comparison to the current implementation, but maybe this is acceptable. I am not concerned about memory usage since we are also currently buffering up to a limit of the given chunk size.

I found some modules for this on npm, maybe some work for this use case:

Awesome, I will also have a look into these!

@mifi
Collaborator Author

mifi commented Feb 22, 2022

Interesting approach that I haven't thought about before. My concern would be that we first have to buffer the data (e.g. 10MB if that is the chunk size) before sending it off to the server. This would cause a time delay in comparison to the current implementation, but maybe this is acceptable. I am not concerned about memory usage since we are also currently buffering up to a limit of the given chunk size.

As far as I understand, we always have to buffer every chunk we read from the stream anyway, because the consumer might read it (call slice on it) again later?

Actually, when I look at the current implementation of StreamSource, I never see this._buf being written to. Maybe this is the bug? Or maybe I'm missing something.

Anyway, I tried to reimplement it without streams, but I'm getting a multitude of test failures no matter what I do, so I'm a bit stuck and not sure if I should spend more time on this. Feel free to take a stab at this, @juliangruber. FWIW, here's my failing code so far:

async function readChunk (stream) {
  return new Promise((resolve, reject) => {
    stream.once('error', reject)

    function tryReadChunk () {
      // Read directly from the stream argument
      const chunk = stream.read()

      if (chunk != null) {
        resolve(chunk)
        return
      }

      // todo must handle the end case here too
      // Try again once more data is readable
      stream.once('readable', tryReadChunk)
    }

    tryReadChunk()
  })
}

export default class StreamSource {
  constructor (stream, chunkSize) {
    this._stream = stream

    // Setting the size to null indicates that we have no calculation available
    // for how much data this stream will emit requiring the user to specify
    // it manually (see the `uploadSize` option).
    this.size = null

    this._chunkSize = +chunkSize

    stream.pause()
    this._ended = false
    stream.on('end', () => {
      this._ended = true
    })
    stream.on('error', (err) => {
      this._error = err
    })

    this._buf = Buffer.alloc(0)
    this._bufPos = 0
  }

  // See https://github.com/tus/tus-js-client/issues/275#issuecomment-1047304211
  async slice (start, end) {
    // Fail fast if the caller requests a proportion of the data which is not
    // available any more.
    if (start < this._bufPos) {
      throw new Error('cannot slice from position which we already seeked away')
    }

    if (this._error) throw this._error

    // Always attempt to drain the buffer first, even if this means that we
    // return less data than the caller requested.
    if (start < this._bufPos + this._buf.length) {
      const bufStart = start - this._bufPos
      const bufEnd = Math.min(this._buf.length, end - this._bufPos)

      const sliced = this._buf.slice(bufStart, bufEnd)

      sliced.size = sliced.length
      return { value: sliced }
    }

    // OK, we are outside the range of our stored buffer, and need to read from the stream itself

    const bytesToSkip = start - (this._bufPos + this._buf.length)

    let bytesRead = 0

    while (true) {
      if (this._ended) return { value: null, done: true }
      const receivedChunk = await readChunk(this._stream)

      bytesRead += receivedChunk.length
      if (bytesRead > bytesToSkip) {
        const bytesToSkipInChunk = bytesToSkip - (bytesRead - receivedChunk.length)
        const slicedChunk = receivedChunk.slice(bytesToSkipInChunk)
        this._buf = slicedChunk // store in case the consumer wants to read this chunk (or parts of it) again
        this._bufPos = start
        break
      }
    }

    const requestedLength = end - start

    // need to constrain the returned chunk size?
    const chunkToReturn = this._buf.slice(0, requestedLength)

    chunkToReturn.size = chunkToReturn.length
    return { value: chunkToReturn }
  }

  close () {
    this._stream.destroy()
  }
}

I also had to add this to .babelrc to make the code run:

"exclude": [
  "@babel/plugin-transform-regenerator"
]
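
For context, and assuming the project's .babelrc uses @babel/preset-env (my assumption), that exclude would sit inside the preset's options, roughly like this:

{
  "presets": [
    ["@babel/preset-env", {
      "exclude": ["@babel/plugin-transform-regenerator"]
    }]
  ]
}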

@juliangruber

Will do 👍 And thanks for sharing your code

@mifi
Collaborator Author

mifi commented Feb 24, 2022

Has anyone made any progress yet?

@juliangruber

I haven't yet had time for this. If you want to continue on this, the first thing I would change is not to read from the stream manually, but always to use .pipe() and the other high-level stream methods, as that usually makes everything safer.

@Acconut
Member

Acconut commented Mar 21, 2022

@mifi I will pick this issue up again this week!

@mifi
Collaborator Author

mifi commented Mar 21, 2022

Great! My brain almost exploded last time when I tried to fix this.

@tim-kos
Member

tim-kos commented May 30, 2022

Hey @Acconut, are you still on top of this one?

@Acconut
Member

Acconut commented Jun 6, 2022

In #385 I implemented a new stream source with which a tus upload can be start()-ed after being abort()-ed. It needs some more polishing and testing, but that should go quickly.

@Acconut
Member

Acconut commented Jun 7, 2022

@mifi In tus-js-client v3.0.0-0 (prerelease for now) I have added a fix. Can you try it out and see if that helps with your original issue?

@mifi
Collaborator Author

mifi commented Jun 27, 2022

Fantastic! I just tested, and it looks like it's working nicely with pause/resume in the Companion UI now.

mifi added a commit to transloadit/uppy that referenced this issue Jun 29, 2022
by upgrading tus-js-client
see tus/tus-js-client#275
HeavenFox pushed a commit to docsend/uppy that referenced this issue Jun 27, 2023
…ad/download without saving to disk (transloadit#3159)
