buffered data in readable stream does not reach writable stream #33

Closed
lpinca opened this issue Feb 2, 2018 · 3 comments

Comments

lpinca commented Feb 2, 2018

If an error occurs, all streams are destroyed. This prevents buffered data in the pipeline from reaching the destination.

Here is a test case:

const EventEmitter = require('events');
const net = require('net');
const pump = require('pump');
const { Writable } = require('stream');

// Slow writable stream: each chunk takes 200 ms to "process",
// so later chunks pile up in the internal buffer.
class FakeParser extends Writable {
  _write(chunk, enc, cb) {
    console.log(chunk);
    setTimeout(cb, 200);
  }
}

const ee = new EventEmitter();
const opts = { allowHalfOpen: true };
const server = net.createServer(opts);

server.on('connection', (socket) => {
  const parser = new FakeParser({ highWaterMark: 3 });
  pump(socket, parser, console.error);

  // After 150 ms, signal the client to destroy its socket; the
  // server's own write then fails because the peer is gone (ECONNRESET).
  setTimeout(() => {
    ee.emit('trigger econnreset');
    socket.write('foo');
  }, 150);
});

server.listen(() => {
  const socket = net.createConnection(
    Object.assign({ port: server.address().port }, opts)
  );

  ee.on('trigger econnreset', () => socket.destroy());

  socket.on('connect', () => {
    socket.write('foo');
    setTimeout(() => socket.write('bar'), 50);
    setTimeout(() => socket.write('baz'), 100);
  });
});

Expected result:

All three chunks are processed by the writable stream.

Actual result:

Only the first chunk is processed.

This is probably by design, but I'd like to confirm it.

mafintosh (Owner) commented

This is by design. You are explicitly tearing down your pipeline by destroying the socket. End it gracefully using socket.end() to flush all data before closing.
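
For illustration, a minimal sketch of the graceful variant (hypothetical, not code from the thread): ending the client socket instead of destroying it lets the written data flush and the socket emit 'end', so pump finishes the pipeline instead of tearing it down.

// In the client from the test case above, replace destroy() with end():
ee.on('trigger econnreset', () => socket.end());
// The server sees 'end' only after all written data has been delivered,
// so 'foo', 'bar', and 'baz' all reach the parser.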

lpinca (Author) commented Feb 2, 2018

In this scenario the socket is abruptly closed by the client (hence the error), so there is no way to close it gracefully, and the parser should still receive all buffered data.

It works with plain .pipe() because the destination stream is not destroyed, but it requires more work to handle all the errors in the pipeline :)
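
A rough sketch of what that extra work might look like (hypothetical code, not from the thread): .pipe() forwards data but not errors, so each stream in the chain needs its own 'error' handler.

// Plain .pipe() equivalent of the pump() call in the test case:
// the destination is not destroyed when the source errors.
socket.pipe(parser);

socket.on('error', (err) => {
  // The socket failed, but the parser keeps draining its buffer,
  // so the remaining chunks still reach _write().
  console.error('socket error:', err.message);
});

parser.on('error', (err) => {
  console.error('parser error:', err.message);
  socket.destroy(); // stop reading if the destination breaks
});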

I guess pump was not designed for this use case, thanks though!

tnguyen14 commented

I think I have a similar question to @lpinca. When piping a requestjs stream to an Express res stream with pump, res.destroy() is called when an error occurs. Is there a way to call res.end() or handle the error differently somehow, or should I just stick to .pipe() in that case?
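
A hedged sketch of the .pipe() fallback mentioned above (hypothetical names and URL, not an answer from the thread): with plain .pipe() the response is not destroyed on an upstream error, so it can still be ended manually.

const express = require('express');
const request = require('request');

const app = express();

app.get('/proxy', (req, res) => {
  const upstream = request('https://example.com/data'); // hypothetical upstream

  upstream.on('error', (err) => {
    // End the response instead of destroying it. If headers have not
    // been sent yet, report the failure; otherwise just close cleanly.
    if (!res.headersSent) res.status(502);
    res.end();
  });

  upstream.pipe(res);
});

app.listen(3000);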
