prevent app crash on error event #80

Closed

mike-marcacci wants to merge 13 commits
Conversation

mike-marcacci
Collaborator

It's possible for an Upload's stream to be ready before userland code has had a chance to attach event handlers or pipe it somewhere. If the client disconnects during this time, an error event is emitted:

new FileStreamDisconnectUploadError(
  'Request disconnected during file upload stream parsing.'
)

Because no error event listener has been assigned, the entire app will crash. While this is a great default for streams in general, it's probably not the desired behavior here. I've added a noop handler for the error event to prevent crashes.
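For illustration, here's a minimal standalone reproduction of the crash and the no-op fix (a sketch, not the middleware code itself):

const { PassThrough } = require('stream');

const stream = new PassThrough();

// The no-op handler: comment this line out and the emit below
// crashes the whole process with "Unhandled 'error' event".
stream.on('error', () => {});

stream.emit(
  'error',
  new Error('Request disconnected during file upload stream parsing.')
);
console.log('Still alive: the no-op listener swallowed the error.');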

@mike-marcacci
Collaborator Author

It looks like some defaults changed in Prettier, exposing some issues with the npm scripts, so I updated those to get CI to pass.

@jaydenseric
Owner

I've updated dependencies and fixed lint issues, so if you rebase from master the diff will be a lot cleaner :)

The purpose of the `build:prettier` script is to clean up the un-pretty Babel output in `lib`, so changing the glob to `src` doesn’t make sense. The `lint:prettier` glob includes `**/` so that it searches nested directories.
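Roughly, paraphrasing rather than quoting the repo's exact package.json, the intent is:

{
  "scripts": {
    "build:prettier": "prettier --write \"lib/**/*.js\"",
    "lint:prettier": "prettier --list-different \"**/*.js\""
  }
}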
@jaydenseric
Owner

Thanks for getting your hands dirty!

I hesitate to commit to any approach for handling these errors when we don't have tests yet; otherwise it's really hard to be sure what's going on, whether a change fixes issues or creates new ones. Here is the issue for adding such tests: #79.

Does the ordering of the file `end`, `error` and `limit` event listeners matter?

Will this silence any errors that really should cause a crash?

Does this fix #77? Which is a better approach, a no-op error listener on the file (`file.stream.on('error', () => {})`), or on the parser itself (`parser.on('error', () => {})`) as sort of suggested in that issue?

Does this fix #78 or #45?

- Improved comment.
- More efficient noop function.
@mike-marcacci
Collaborator Author

mike-marcacci commented Jun 7, 2018

Hey @jaydenseric – I rebased from master; thanks for that. I'd also made some changes to the lint scripts so they would pass locally. Let me know if you'd rather I include just the one change and a failing lint check.

As far as your concern re: tests goes, I really appreciate it and will look for a way to add them here. The challenge with writing a "real world" test is that we're essentially testing a race condition, so a correct, deterministic test won't look realistic.

Adding an error handler on the parser (`parser.on('error', () => {})`) is something else that should be done, and I'll add it to this PR. But the reason they all need a handler is that whenever an 'error' event is emitted and there isn't a handler, Node.js will crash the app. This means that any stream that isn't handled (piped, etc.) but can still emit an error event has the potential to crash the entire app, along with any live requests.

This is a really sane default for Node.js, but currently there's no way for a user to synchronously add event handlers to the streams created by apollo-upload-client. Perhaps instead of adding a no-op, we could provide a handler as an option?
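For example (a hypothetical sketch; onStreamError is made up here, not an existing option):

const { apolloUploadExpress } = require('apollo-upload-server');

const uploadMiddleware = apolloUploadExpress({
  maxFileSize: 10000000,
  // Hypothetical option: a userland handler the middleware would attach
  // to the streams it creates, instead of a no-op.
  onStreamError: error => console.error('Upload stream error:', error)
});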

I'm not sure about #78, but your suggestion of adding a handler via `parser.on('error', ...)` will fix #45, and I'll add that here shortly.

There's another bigger change I'm working on that may be related to #78, but I'll put up a PR and we can discuss that there.

@mike-marcacci
Collaborator Author

This actually appears to be necessary for tests to pass on Node.js v10.

@jaydenseric
Owner

jaydenseric commented Jun 8, 2018

@mike-marcacci you force-pushed updates that wiped all of my commits on your branch…

[Screenshot of the branch commit history, taken 2018-06-08 at 10:37 am]

@jaydenseric
Owner

I just force-pushed it back to the original state; the only loss from your side is the no-op error handler on the parser. Be sure to pull first thing, and try not to use force-push regularly 😅

It's likely we will need one of the handlers, but maybe not both? My preference would be the one on the parser itself, as it might catch more (errors to do with the parser somehow, but not with individual files).

@mike-marcacci
Collaborator Author

My apologies about the force-push; I totally blanked that you had made those changes, and do have a habit of force-pushing to PR branches with squashed or rebased changes.

Anyhow, I don't believe that errors propagate through piped streams the way you're suggesting. I had to write a small test to make sure, though:

const fs = require('fs');
const { Transform } = require('stream');

// Assumes a file named input.txt exists in the working directory.
const stream1 = fs.createReadStream('input.txt');
stream1.on('error', () => {
  console.error('Error emitted on stream1');
});

const stream2 = new Transform({
  transform(chunk, encoding, callback) {
    // Emit an error mid-stream to see where it surfaces.
    stream2.emit('error', new Error());
    setTimeout(() => {
      callback(null, chunk.toString('utf8'));
    }, 1000);
  }
});
stream2.on('error', () => {
  console.error('Error emitted on stream2');
});

const stream3 = fs.createWriteStream('output.txt');
stream3.on('error', () => {
  console.error('Error emitted on stream3');
});

stream1.pipe(stream2).pipe(stream3);

This results in:

Error emitted on stream2

Note that you won't see an error event emitted on either stream1 or stream3. Note also that if you remove the error handlers:

const fs = require('fs');
const { Transform } = require('stream');

const stream1 = fs.createReadStream('input.txt');
const stream2 = new Transform({
  transform(chunk, encoding, callback) {
    stream2.emit('error', new Error());
    setTimeout(() => {
      callback(null, chunk.toString('utf8'));
    }, 1000);
  }
});
const stream3 = fs.createWriteStream('output.txt');

stream1.pipe(stream2).pipe(stream3);

The app crashes:

events.js:167
      throw er; // Unhandled 'error' event
      ^

Error
    at Transform.transform [as _transform] (/Users/mike/Downloads/test.js:7:27)
    at Transform._read (_stream_transform.js:190:10)
    at Transform._write (_stream_transform.js:178:12)
    at doWrite (_stream_writable.js:410:12)
    at writeOrBuffer (_stream_writable.js:394:5)
    at Transform.Writable.write (_stream_writable.js:294:11)
    at ReadStream.ondata (_stream_readable.js:663:20)
    at ReadStream.emit (events.js:182:13)
    at addChunk (_stream_readable.js:283:12)
    at readableAddChunk (_stream_readable.js:264:11)
Emitted 'error' event at:
    at Transform.onerror (_stream_readable.js:687:12)
    at Transform.emit (events.js:182:13)
    at Transform.transform [as _transform] (/Users/mike/Downloads/test.js:7:13)
    at Transform._read (_stream_transform.js:190:10)
    [... lines matching original stack trace ...]
    at ReadStream.emit (events.js:182:13)

This is true even if handlers are present on stream1 and stream3, since they simply don't receive upstream or downstream errors.

Therefore, adding a handler to the parser won't help with errors emitted on individual file streams and vice versa.
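As an aside, if end-to-end error handling across a chain is what we're after, Node.js v10's stream.pipeline does wire a single callback up to every stream in the chain (a sketch for contrast; this PR doesn't use it):

const fs = require('fs');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('input.txt'),
  fs.createWriteStream('output.txt'),
  error => {
    // Called exactly once: with the error if any stream in the chain
    // fails, or with nothing once everything finishes.
    if (error) console.error('Pipeline failed:', error);
    else console.log('Pipeline succeeded.');
  }
);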

@jaydenseric
Owner

jaydenseric commented Jun 8, 2018

BTW (regarding #80 (comment)) I fixed the issue with the tests in Node.js v10 with 528216f, although the error handlers will still likely be necessary for other reasons. It's a little concerning that adding these no-op error handlers would have swept a valid bug under the carpet.

@mike-marcacci
Collaborator Author

OK, I've done a small refactor and added tests for client-aborted requests. The middleware now adds a handler to unhandled streams only just before it triggers an error event via the destroy method (sketched below), which allows Node.js to clean up the resources associated with the stream while still emitting the event.

The new test ensures that:

  1. complete upload streams are unaffected by a client abort
  2. partially complete streams are destroyed with a FileStreamDisconnectUploadError
  3. unresolved upload promises are rejected with an UploadPromiseDisconnectUploadError

Either an error in the parser or a closed connection will result in this behavior.
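Roughly, the destroy pattern looks like this (a sketch of the idea rather than the PR's exact code; the import is illustrative):

const { FileStreamDisconnectUploadError } = require('apollo-upload-server');

// Tear down a stream that userland never handled, without crashing the app.
const abortUnhandledStream = stream => {
  // Only swallow the error if nobody else is listening for it.
  if (stream.listenerCount('error') === 0) stream.on('error', () => {});

  // destroy(error) releases the stream's resources and emits 'error'.
  stream.destroy(
    new FileStreamDisconnectUploadError(
      'Request disconnected during file upload stream parsing.'
    )
  );
};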

@mike-marcacci
Collaborator Author

Closing this in favor of #81.
