Avoid emitting error if upload was not done on closing request #83

Closed
nfantone opened this issue Jun 18, 2018 · 27 comments · Fixed by #81

@nfantone commented Jun 18, 2018

I noticed that when a request ends, if at least one of the uploaded file streams was not consumed, an error is emitted. I found out the hard way that if you don't subscribe to the 'error' event on the readable stream, the emission halts the Node process and brings down the entire server.

FileStreamDisconnectUploadError: Request disconnected during file upload stream parsing.
    at IncomingMessage.request.on (/dev/project/node_modules/apollo-upload-server/lib/middleware.js:151:15)
    at IncomingMessage.emit (events.js:182:13)
    at resOnFinish (_http_server.js:564:7)
    at ServerResponse.emit (events.js:187:15)
    at onFinish (_http_outgoing.js:683:10)
    at process._tickCallback (internal/process/next_tick.js:61:11)
Emitted 'error' event at:
    at IncomingMessage.request.on (/dev/project/node_modules/apollo-upload-server/lib/middleware.js:149:32)
    at IncomingMessage.emit (events.js:182:13)
    [... lines matching original stack trace ...]
    at process._tickCallback (internal/process/next_tick.js:61:11)

This has the nasty side effect of potentially allowing any GraphQL client to send a file under an unexpected field and tear down your server. I realized this after tracing a bug back to the UI, where the file: Upload field in the mutation was inadvertently being sent as attachment. The API server was dying, without any chance of recovering or catching the error.

You can try this using the apollo-upload-examples. Just change the expected file field in the singleUpload mutation to any other name in the UI and run the upload.

Is there any way of preventing this kind of behavior?

@jaydenseric (Owner)

This is a known issue, and various attempts have been made at dealing with it correctly: #81.

Streams and async code are super tricky to get right. Part of the reason this has not been fixed sooner is that tests for certain types of errors (e.g. a client disconnect at certain points during processing) can be hard to write, and to add to the confusion, the behavior in different Node.js versions can be completely different.

Help on this is welcome; a solid fix might be only a few lines away if you know a lot about streams and errors.

@nfantone (Author)

Hi, @jaydenseric. Many thanks for your prompt answer!

I can see now that there's an ongoing debate on the subject. Appreciate you pointing me to it. At the risk of repeating some of the stuff you and your collaborators may have already covered:

  1. Why is it a requirement to emit an error on unconsumed streams? Can't we just ignore that fact? If I, as an implementor, didn't consume them, couldn't we assume that's on me? The way I see it, the benefits of not emitting outweigh the harm.

  2. Maybe some kind of schema check is in order? Only generate streams for known/declared fields in your mutation.

@nfantone (Author) commented Jun 18, 2018

Follow up on first point above:

  • Maybe devise a different way of notifying errors? Instead of .emit on the actual stream, create and export an EventEmitter that users can subscribe to in order to listen for such errors.

  • Allow users to "auto-consume" streams through a config key on middleware initialization?

The way I see it, any option is preferable to letting a server collapse.

@jaydenseric (Owner)

Best place to start is to PR a new test that demonstrates the issue, then we can look at specific fixes.

In the situation where a client sends a multipart request with a file for an argument that is meant to be some other type such as a string, I would expect the upload middleware to throw an error that can either be caught in middleware or be allowed through to send a bad request HTTP error response back to the client. Without a server crash!

@nfantone (Author) commented Jun 18, 2018

In the situation where a client sends a multipart request with a file for an argument that is meant to be some other type such as a string

Or any other argument that's not an Upload. Even non-existing ones.

Best place to start is to PR a new test that demonstrates the issue, then we can look at specific fixes.

I understand where you're coming from, but I think the problem is pretty well "demonstrated" at this point. And writing a test covering all cases might not be trivial (a hunch, to be honest). As per my question above, I'm wondering if this isn't a self-imposed issue: why are we emitting errors in the first place, anyway? What's the benefit?

@jaydenseric (Owner)

The purpose of the test is not to explain the issue, but to give us a tool to work out a solution. Once the test passes, we know the problem is solved. At the moment you are the best person to write that test, as you understand the particular issue as a user and are motivated to find a solution. Otherwise I will have to write it, which eats into solution time 😓

The FileStreamDisconnectUploadError has to exist, because if a user presses their browser back button halfway through a legitimate file upload, the file stream in the resolver needs an event to handle for cleanup, mutation errors, etc.

@nfantone (Author) commented Jun 18, 2018

The FileStreamDisconnectUploadError has to exist, because if a user presses their browser back button halfway through a legitimate file upload, the file stream in the resolver needs an event to handle for cleanup, mutation errors, etc.

Agreed. I'm not debating the general usefulness of errors. But, honestly, that's just not what happens in the current implementation. Right now, the error:

  1. Can only be handled if there are no other problems throughout the life span of the request.
  2. Otherwise, can't be caught or handled.
  3. Crashes the process irrevocably.
  4. Prevents any other kind of processing prior to the stream consumption at any point in the middleware chain (like user authorization, mimetype checking, request validation, etc.).

My main point here is: why do we choose this over just ignoring the error/consuming the stream/any other solution? Is it per spec? Is it some kind of architectural design approach? I fail to see how the FileStreamDisconnectUploadError has to exist in its current form. Do users pressing "back" halfway through an upload represent a tangible enough scenario to make the error necessary? I'd argue that authorization checks that do not take down the entire process are far more important and frequent.

At the moment you are the best person to write that test, as you understand the particular issue as a user and are motivated to find a solution.

Gotcha. I'll try to come up with something and will let you know.

@jaydenseric (Owner) commented Jun 18, 2018

I'll try to come up with something and will let you know.

❤️

Certain errors causing server crashes are a known issue: #45. We still want meaningful errors, just not server crashes. I think we can fix the server crash (it's being worked on).

Because the upload middleware is ignorant of the GraphQL schema (the graphql middleware comes after it), it might be hard or undesirable to prevent errors when people send valid multipart GraphQL requests with invalid use of fields according to the schema. I don't think our middleware can check whether error listeners from the resolvers are attached before emitting errors, as the error might happen before any resolver code has run?

@jaydenseric (Owner) commented Jun 18, 2018

I don't think our middleware can check whether error listeners from the resolvers are attached before emitting errors, as the error might happen before any resolver code has run?

Hmmm, maybe we can. It's probably safe to assume that if there are no listeners yet, nothing has happened that needs handling/cleanup.

cc @mike-marcacci
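A sketch of that check (illustrative only, not the library's actual code):

```javascript
// Only notify if a resolver has already attached an "error" handler;
// otherwise assume nothing needs cleanup and tear down quietly.
function notifyDisconnect(stream, error) {
  if (stream.listenerCount("error") > 0) {
    stream.emit("error", error);
    return "emitted";
  }
  stream.destroy(); // no error argument, so no "error" event is emitted
  return "destroyed";
}
```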

@mike-marcacci (Collaborator)

Ah yes, this is actually the approach I took with #80. Working on that test exposed some other issues, and I decided to roll it into #81, where it's addressed a bit more elegantly.

Something else to note is that .destroy(err) is preferable to .emit("error", err), as it actually cleans up the stream's resources. For this reason I opted to add a handler if none existed and still call destroy with the error.

Also keep in mind that the test in #81 is far better, as the fixtures in #80 can fit in a single TCP packet, which causes the test to pass when it shouldn't. The improved test is currently skipped, as tap keeps calling it a failure even though all sub-tests pass, and the code behaves correctly in manual real-world tests.
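The approach described, adding a handler when none exists and then destroying with the error, might be sketched like this (illustrative, not the actual #81 code):

```javascript
// Guarantee an "error" handler exists, then destroy with the error so
// the stream's underlying resources are actually released.
function destroyWithError(stream, error) {
  if (stream.listenerCount("error") === 0) {
    stream.on("error", () => {}); // prevents an uncaught-exception crash
  }
  stream.destroy(error);
}
```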

@mike-marcacci (Collaborator)

Also note that the strategy discussed above and used in #80 does not fix the case where busboy encounters a parsing error and emits an error on an unhandled stream.

This is why I opted to add a (configurable) handler up-front in #81, to catch any of these cases.

@nfantone (Author)

This is why I opted to add a (configurable) handler up-front

👍 I believe that's aligned with my suggestion above of allowing some other form of error subscription/emission. Good one.

Something else to note is that .destroy(err) is preferable to .emit("error", err)

Some things to consider here (if we haven't already):

  • .destroy is a rather new API (recently added in Node 8.0.0).

EDIT: Saw the "engines" requirement. Pay no heed.

  • .destroy(err) emits 'error' - which is what's causing the crashes for me. However, it doesn't if you don't pass err.

@jaydenseric (Owner)

I believe that's aligned with my suggestion above of allowing some other form of error subscription/emission.

I don't think we should add any sort of onError middleware option; once bugs are fixed under the hood, the current API should be sufficient. By design, we want as many errors as possible to be handled in the resolver, either via Upload scalar promise rejection or an error event emitted on the resolved file stream. All other errors should be thrown in the upload middleware, so standard Koa/Express middleware error handling can occur.
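A sketch of that division of labor in a resolver (names like `singleUpload` are illustrative, and the shape of the resolved upload object is hedged, since it has differed between versions):

```javascript
const resolvers = {
  Mutation: {
    async singleUpload(parent, { file }) {
      // 1. Upload scalar promise rejection surfaces as a normal
      //    GraphQL error for this field.
      const upload = await file;
      const stream = upload.createReadStream
        ? upload.createReadStream()
        : upload.stream; // older releases exposed `stream` directly

      // 2. Mid-transfer failures arrive as an "error" event on the
      //    file stream, where cleanup can happen.
      stream.on("error", () => {
        // delete partial writes, release resources, etc.
      });

      // ...pipe `stream` to storage here...
      return { filename: upload.filename };
    },
  },
};
```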

@nfantone (Author) commented Jun 18, 2018

By design, we want as many errors as possible to be handled in the resolver, either via Upload scalar promise rejection, or an error event emitted on the resolved file stream.

@jaydenseric The scenario we are dealing with here prevents those cases from actually occurring:

  • The Upload scalar promise only comes into play when the resolver executes. Errors may be thrown even before that happens, somewhere below/above (Koa vs. Express) it in the middleware chain.
  • As per the above, an error handler cannot be attached to the file stream until after the upload promise resolves.

I don't see how your proposed design is a general fit for the middleware. Are you thinking about something else?

@jaydenseric (Owner)

We already have a mechanism for middleware errors outside of resolvers, see the test for MapBeforeOperationsUploadError for example.

@mike-marcacci (Collaborator)

So, I think we have some nomenclature to clarify here. Koa will catch any errors that are thrown, or promises that are rejected, as part of the middleware chain. But that's not the only situation we have here, since an emitted error exists outside this chain unless something inside it is already listening for it and throws/rejects in response.

The issue is that an error can happen before a resolver has attached handlers, and when an EventEmitter emits an unhandled "error" event, Node terminates the process (crashes the app), and Koa can't do anything to prevent it.

Also, these streams are created by busboy as part of the "file" event, which is inaccessible to the user. Therefore, apollo-upload-server is really the only place we can confidently attach a handler to prevent premature exits.

@nfantone would you mind looking over #81 if you get a chance? It has the kind of changes you’re looking for and I’m already using it in production, but it’s a pretty large change and could use some more eyes.

@jaydenseric (Owner)

Again, the issue is not the current API; it's that the API does not do what it is supposed to do under the hood. Yes, the right parser error events need to be listened for, caught, and then thrown out of the middleware: #77. I just don't have days and weeks up my sleeve right now to work on it; I am spread very thin at the moment.

@mike-marcacci we could split out the tests in #81, which make up about 2/3rds of the diff, into a fast-tracked PR. The tests could be merged a lot quicker and might be helpful if others take a crack at solutions.

@nfantone (Author) commented Jun 18, 2018

@mike-marcacci Took a peek at your PR. There are obviously a lot of changes going on there that are really beyond my capacity to fully follow or even grasp without going deep - but they seem OK to me and, again, in line with what we have been proposing in this thread.

However, your pattern seems to match what @jaydenseric mentioned should not be part of the middleware's API - namely, passing an ad-hoc error handler as an option. Am I missing something here? While I, with my limited knowledge, don't see any other way forward, is this the approach we should be pursuing?

Also, @jaydenseric, about your comment on handling errors outside the resolver: attaching a global handler to a Koa app won't cut it. Unhandled errors emitted from streams don't end up there, and even if we managed to make it so, global handlers most probably won't have access to the I/O resources that should be cleaned up/closed - so their usefulness is kinda limited for the scenario we are discussing here.

@mike-marcacci better summed up the idea by saying:

Therefore, the apollo-upload-server is really the only place we can confidently attach a handler to prevent premature exits.

@twelve17 commented Oct 17, 2019

I seem to be running into this, but have been unable to reproduce it locally using either Chrome Dev Tools network throttling or macOS Xcode's "Network Link Conditioner" tool with the "Very Bad Network" setting.

From what I can see in the source, there are calls to an exit function that I thought was exiting the Node process, but then I realized it's a local function which does not actually exit. However, it does seem like the server is indeed exiting by way of an error being emitted, at least that is what it looks like:

Error: Parse Error
    at socketOnEnd (_http_server.js:455:20)
    at Socket.emit (events.js:203:15)
    at Socket.EventEmitter.emit (domain.js:448:20)
    at endReadableNT (_stream_readable.js:1129:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
internal/streams/legacy.js:57
      throw er; // Unhandled stream error in pipe.
      ^
BadRequestError: Request disconnected during file upload stream parsing.
    at IncomingMessage.request.once (/home/app/wb-graphql-service/node_modules/graphql-upload/lib/processRequest.js:300:35)
    at Object.onceWrapper (events.js:286:20)
    at IncomingMessage.emit (events.js:198:13)
    at IncomingMessage.EventEmitter.emit (domain.js:448:20)
    at abortIncoming (_http_server.js:449:9)
    at socketOnClose (_http_server.js:441:3)
    at Socket.emit (events.js:203:15)
    at Socket.EventEmitter.emit (domain.js:448:20)
    at TCP._handle.close (net.js:606:12)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start:production: `node build/index.js`
npm ERR! Exit status 1

This is under [email protected], by way of [email protected] -> [email protected].

@mike-marcacci (Collaborator) commented Oct 17, 2019

@twelve17 are you handling errors on the stream created with createReadStream()? Node will intentionally crash the process if an "error" event occurs and there is no listener to handle it.

The scenario discussed in this issue is long fixed, and extremely thoroughly tested.

@twelve17

are you handling errors on the stream created with createReadStream()

@mike-marcacci I am not! I had perused the other GitHub issues related to this topic, and concur that it's one that has been discussed at length and that the issue was probably on my side; I just couldn't pinpoint, based on the exception, where I should be looking for errors. Hopefully this will cover it. Thanks for your prompt reply.

@mike-marcacci (Collaborator)

No problem, happy to help!

@twelve17

Hmm, it appears Koa attaches a default error function as the error listener. I've tried to crash Koa by throwing and/or emitting errors without success, but I know that error handling is a big, messy topic that can consist of many unique use cases.

It's possible this may be an issue with Koa's own error handling, but I will need to dig deeper into how the Request disconnected during file upload stream parsing error is causing Koa to exit. If I find a lead, I'll post a follow-up here.

@twelve17

@mike-marcacci I'm doing some more debugging and am able to replicate the error locally, so this is a bit of progress. If I may, could I ask a really dumb question about this bit of handling?

https://github.com/jaydenseric/graphql-upload/blob/master/src/processRequest.mjs#L153

/**
 * Handles when the request is closed before it properly ended.
 * @kind function
 * @name processRequest~abort
 * @ignore
 */
const abort = () => {
  exit(
    createError(
      499,
      'Request disconnected during file upload stream parsing.'
    )
  )
}

If the request disconnected, does it make sense to try to send back an HTTP response here?

@mike-marcacci (Collaborator) commented Oct 17, 2019

Hi @twelve17, indeed a response to the client will not be received, which is fine. We still need to exit out of the running behavior and destroy the streams to avoid leaking resources.

I still suspect you just haven't attached an error handler to the stream in your resolver, as Koa has no way of "catching" an error of this sort. This is the kind of thing you would expect to write any time you create a stream or other event emitter:

const stream = createReadStream();

stream.on("error", error => {
  console.log("Here is the error in the upload stream:", error);
  // cleanup code goes here...
});

@twelve17

@mike-marcacci I was handling errors on the GraphQL upload stream, but failed to do so on the other stream that was sending the data to the backend! So, thanks again for your help in steering me in the right direction. I owe you a 🍺!

@ChRomain commented Jul 7, 2020

Does anyone have news about this bug? I'm facing the same problem.
