
Zero bytes file saved with attachFieldsToBody options #200

Closed
PurwadiPw opened this issue Feb 21, 2021 · 9 comments · Fixed by #268

Comments

@PurwadiPw

PurwadiPw commented Feb 21, 2021

🐛 Bug Report

The saved file is zero bytes when I use the attachFieldsToBody: true option.

In this case the fileSize limit also does not work properly: for a 90MB file it takes about 2 minutes before the "request file too large, please check multipart config" message is displayed.


This is my route handler.

const uploadMedia = async (req, res) => {
  const { RecordID, FileName, Extension } = req.body;
  let isUploaded = true;
  const response = {
    uploaded: [],
    rejected: [],
  };

  try {
    const part = await req.body.file
    // const filePath = `uploads/${FileName.value}-${RecordID.value}.${Extension.value}`;
    const filePath = `uploads/${part.filename}`;
    await pump(part.file, fs.createWriteStream(filePath));
    const fileSize = fs.statSync(filePath).size;

    if (part.file.truncated) {
      isUploaded = false;

      // delete the file
      fs.unlinkSync(filePath);
      const rejectedFileDetails = {
        filename: part.filename,
        fileSize,
        reason: `File size too big`
      }
      response.rejected.push(rejectedFileDetails);

      // delete the ContentMedia record
      await ContentMedia.destroy({
        where: {
          RecordID: RecordID.value
        }
      });
    } else {
      const uploadedFileDetails = {
        filename: part.filename,
        fileSize
      }
      response.uploaded.push(uploadedFileDetails);
    }
  } catch (error) {
    isUploaded = false;
    console.log(error);
    // throw new InternalErrorResponse();
  }

  // sending response
  if (isUploaded) {
    return new SuccessResponse('success', response);
  } else {
    return new UnprocessableEntityResponse(response);
  }
};

These are the plugin registration settings.

server.register(require('fastify-multipart'), {
  limits: {
    fileSize: nconf.get("maxFileSize"), // 52428800
  },
  attachFieldsToBody: true,
  sharedSchemaId: '#mySharedSchema',
});

This is my validation schema.

const validateUploadMedia = {
  preValidation: [
    async function (request) {
      return await request.jwtVerify();
    },
  ],
  schema: {
    consumes: ['multipart/form-data'],
    body: {
      type: 'object',
      properties: {
        RecordID: { $ref: '#mySharedSchema' },
        FileName: { $ref: '#mySharedSchema' },
        Extension: { $ref: '#mySharedSchema' },
      },
      required: ['RecordID', 'FileName', 'Extension'],
    },
  },
};

My code works fine when I use the normal register options with only the fileSize limit, but I need to send other fields to my handler.

try {
  const options = {
    limits: { 
      fileSize: nconf.get("maxFileSize"), // 52428800
    }
  };
  const parts = await req.files(options);
  for await (const part of parts) {
    const filePath = `tmp/${part.filename}.${Date.now()}`; // Create a unique filename to prevent duplicates
    await pump(part.file, fs.createWriteStream(filePath));
    const fileSize = fs.statSync(filePath).size;

    if (part.file.truncated) {
      fs.unlinkSync(filePath);
      const rejectedFileDetails = {
        filename: part.filename,
        fileSize,
        reason: `File size too big`
      }
      response.rejected.push(rejectedFileDetails);
    } else {
      const uploadedFileDetails = {
        filename: part.filename,
        fileSize
      }
      response.uploaded.push(uploadedFileDetails);
    }
  }
} catch (error) {
  // Prevent too verbose errors to be sent to the client. Ideally the max file size will be limited by the frontend.
}

Is there something wrong?

*Sorry for my bad English.

Expected behavior

The size of the saved file should match the file I submitted from the form.

Environment

  • node version: v14.15.0
  • fastify version: ^3.1.1
  • fastify-multipart version: ^4.0.0
  • os: Mac
@mcollina
Member

Can you please include a runnable example and script to reproduce this behavior? It's hard to help/diagnose these problems otherwise.

@PurwadiPw
Author

Can you please include a runnable example and script to reproduce this behavior? It's hard to help/diagnose these problems otherwise.

working on it

@philosofonusus

philosofonusus commented Mar 20, 2021

upload method:

  async upload(product: uploadProductDto) {
    try {
      console.log(product)
      const episodesFileNames = []
      const coverFileName = `${product.title.value}-${Math.floor(Math.random() * 1000)}${'.' + product.cover.filename.split('.')[1]}`
      await pipeline(product.cover.file, fs.createWriteStream(path.join(productFolder, 'covers', coverFileName)))
      await tinify.fromFile(path.join(productFolder, 'covers', coverFileName)).toFile(path.join(productFolder, 'covers', coverFileName))
      for await (let part of product.episodes) {
        const episodeFileName = `${product.title.value}-${Math.floor(Math.random() * 1000)}${'.' + part.filename.split('.')[1]}`
        await pipeline(part.file, fs.createWriteStream(path.join(productFolder, 'videos', episodeFileName)))
        episodesFileNames.push(episodeFileName)
      }
      const production = new this.productModel({
        title: product.title.value,
        description: product.description.value,
        cover: coverFileName,
        episodes: episodesFileNames,
        tags: product.tags,
        releaseDate: product.releaseDate.value,
        alternativeNames: product.alternativeNames,
        studio: product.studio.value
      })
      await production.save()
    } catch (e) {
      console.log(e)
      return new HttpException("Something went wrong", 500)
    }
  }
}

Same problem with attachFieldsToBody

@Eomm
Member

Eomm commented Mar 20, 2021

Could this be related to the default 1MB body limit?

@philosofonusus

Could this be related to the default 1MB body limit?

I have just tried changing the limit and it still doesn't work.

@philosofonusus

philosofonusus commented Mar 20, 2021

Could this be related to the default 1MB body limit?

<ref *1> {
  episodes: {
    fieldname: 'episodes',
    filename: 'kanpeki-ojou-sama-no-watakushi-1-360p-v1x.mp4',
    encoding: '7bit',
    mimetype: 'video/mp4',
    file: FileStream {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 5,
      _maxListeners: undefined,
      truncated: false,
      _read: [Function (anonymous)],
      [Symbol(kCapture)]: false
    },
    fields: [Circular *1],
    _buf: <Buffer 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 00 12 33 cb 6d 6f 6f 76 00 00 00 6c 6d 76 68 64 00 00 ... 75214763 more bytes>,
    toBuffer: [AsyncFunction: toBuffer]
  },
  alternativeNames: [
    {
      fieldname: 'alternativeNames',
      value: 'undefined',
      fieldnameTruncated: false,
      valueTruncated: false,
      fields: [Circular *1]
    },
    {
      fieldname: 'alternativeNames',
      value: '[object Object]',
      fieldnameTruncated: false,
      valueTruncated: false,
      fields: [Circular *1]
    }
  ],
  title: {
    fieldname: 'title',
    value: 'Yosuga no sora',
    fieldnameTruncated: false,
    valueTruncated: false,
    fields: [Circular *1]
  },
  description: {
    fieldname: 'description',
    value: ';kfka;kflkdassl;kfad;lk',
    fieldnameTruncated: false,
    valueTruncated: false,
    fields: [Circular *1]
  },
  cover: {
    fieldname: 'cover',
    filename: '16091008323746132425775042026537.jpg',
    encoding: '7bit',
    mimetype: 'image/jpeg',
    file: FileStream {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 5,
      _maxListeners: undefined,
      truncated: false,
      _read: [Function (anonymous)],
      [Symbol(kCapture)]: false
    },
    fields: [Circular *1],
    _buf: <Buffer ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 60 00 60 00 00 ff e1 c4 1b 45 78 69 66 00 00 4d 4d 00 2a 00 00 00 08 00 0d 01 00 00 03 00 00 00 01 0d 80 ... 4874410 more bytes>,
    toBuffer: [AsyncFunction: toBuffer]
  },
  studio: {
    fieldname: 'studio',
    value: 'kk',
    fieldnameTruncated: false,
    valueTruncated: false,
    fields: [Circular *1]
  },
  releaseDate: {
    fieldname: 'releaseDate',
    value: '1616280167094',
    fieldnameTruncated: false,
    valueTruncated: false,
    fields: [Circular *1]
  }
}

This is the console.log of the body. You can see that the file entries already contain a buffer.

@simoneb
Contributor

simoneb commented Mar 23, 2021

Here's a repro of the issue. Reading the stream won't work; using toBuffer does.

The reason can be found in these lines of code. The file stream is read with toBuffer when the request is received, meaning that the stream won't be readable anymore in the handler because it's already been read.

I believe this is done because the request body needs to be processed in full before proceeding. I also verified that indeed the size limits are not validated eagerly but only after the request body is parsed.

Now this all looks like a bug, but I guess it's somewhat intended. As you can see in the linked code, an alternative to using toBuffer would be to provide an onFile handler.

const fs = require("fs");
const pump = require("pump");

const server = require("fastify")();

server.register(require("."), {
  attachFieldsToBody: true,
});

const handler = async (req) => {
  const part = await req.body.file;
  const filePath = `uploads/${part.filename}`;

  // does not work
  await pump(part.file, fs.createWriteStream(filePath));

  // works
  // fs.writeFileSync(filePath, await part.toBuffer());

  return "ok";
};

server.post("/upload", handler);

server.listen(3000);
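
For completeness, here is a minimal sketch of the onFile alternative mentioned above: instead of letting the plugin accumulate each file in memory, a handler streams it to disk while the body is parsed. This is only a sketch, not the maintainers' code; it assumes an existing uploads/ directory, that onFile receives the file part with filename and file properties (as in the body dump earlier in this thread), and it promisifies pump so the await actually waits for the write to finish. The exact option signature may differ across fastify-multipart versions.

const fs = require("fs");
const util = require("util");
const pump = util.promisify(require("pump"));

const server = require("fastify")();

// Hypothetical sketch: write every incoming file part to disk as it is parsed,
// so nothing is buffered in memory and the handler never needs the stream.
async function onFile(part) {
  await pump(part.file, fs.createWriteStream(`uploads/${part.filename}`));
}

server.register(require("fastify-multipart"), {
  attachFieldsToBody: true,
  onFile, // called for each file part while the request body is being parsed
});

server.post("/upload", async (req) => {
  // The files were already written by onFile; req.body still holds the other fields.
  return "ok";
});

server.listen(3000);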

@frapan
Contributor

frapan commented Sep 14, 2021

I faced the same issue and proved that @simoneb was right.

As the documentation states, if you assign all fields to the body and don't define an onFile handler, the file content will be accumulated in memory. Therefore, the stream is empty because it has already been read, which causes a zero-byte file.

The only way to access the file content is through the toBuffer method or by defining a custom onFile handler.
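
To illustrate the toBuffer route inside the handler itself, here is a hedged sketch adapted from simoneb's repro above; the field name file and the uploads/ directory are simply the values used in that repro, not part of the plugin's API.

const fs = require("fs");

// Sketch: with attachFieldsToBody: true and no onFile handler, the stream has
// already been consumed during parsing, so read the accumulated contents back
// with toBuffer() instead of piping part.file.
const handler = async (req) => {
  const part = req.body.file; // "file" is the multipart field name from the repro
  const content = await part.toBuffer(); // the buffered file contents
  fs.writeFileSync(`uploads/${part.filename}`, content);
  return "ok";
};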

I think the documentation could be more explicit about this.
I'll provide a PR ASAP to improve the documentation.

@mcollina
Member

I'll provide a PR asap to improve the documentation.

Awesome, thanks!
