stream: ensure pipeline always destroys streams #31940
Conversation
There was an edge case where an incorrect assumption was made in regards to whether eos/finished means that the stream is actually destroyed or not.
This is blocking backport of #31223 to Node v13, which does not have
lgtm
If this is a backport it should target the relevant branch (v13-staging in this case). Moreover, I would expect to see a rather different change compared to what is on master.
This is not a backport. It fixes a problem on master for streams that do not
lgtm
There was an edge case where an incorrect assumption was made in regards to whether eos/finished means that the stream is actually destroyed or not. PR-URL: #31940 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Ruben Bridgewater <[email protected]> Reviewed-By: Luigi Pinca <[email protected]>
Landed in b2be348
There was an edge case where an incorrect assumption was made in regards to whether eos/finished means that the stream is actually destroyed or not. Backport-PR-URL: #31975 PR-URL: #31940 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Ruben Bridgewater <[email protected]> Reviewed-By: Luigi Pinca <[email protected]>
@ronag It seems this diff has introduced a major regression in Got. Note that I don't know if it's Got that relied on buggy behavior or if it's an actual regression, but I figured I should ref the issues together 🙂
If the last stream in a pipeline is still usable/readable don't destroy it to allow further composition. Fixes: #32105 PR-URL: #32110 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Luigi Pinca <[email protected]>
Should this land on v12.x-staging? If so, please open a backport PR.
Should not land on 12.x. Has indirect consequences that can cause breakage.
Does this work with things like a TCP stream, where destroy actually destroys the underlying fd, whereas finish can still mean it's waiting for a FIN ack?
Sorry, I didn't get that?
Simple example: pipeline(socket, socket). This calls destroy automatically after eos now, yes? I.e. it basically forces autoDestroy, if I'm understanding that correctly. For streams that have post-finish state this can be problematic. For example TCP (this may have changed since I looked at it last): the stream emits finish when all pending data has been sent, but end() has a side effect: it also sends a FIN packet. Now you auto-destroy it, meaning the FIN won't get resent if that packet is lost. My utp-native (TCP-like stream over UDP) module will definitely break because of this.
I'm not familiar with this so I can't really answer. Maybe @addaleax or @jasnell? If more work needs to be done, e.g. sending a FIN packet, I think that should be handled by
Though, in v14 pipeline won't call destroy, as it assumes that "standard" streams (such as net.Socket, since #31806) will handle it themselves (#32158).
Follow-up: won't this break all duplex streams?
Won't this destroy the socket when either of those pipelines is done?
I can see TCP uses autoDestroy, so that part is probably fine, re my first question.
There have been follow-up PRs that ensure that if:
The current implementation is a little bit messy, but if you look at this PR, which cleans it up a bit, it might be clearer.
@mafintosh Anything still left unanswered?
@ronag I think the other PR solves my above issue. Didn't realise this was older.
Have you tested this with HTTP servers? Calling destroy on a req destroys the full socket, I think.
Heads up, this breaks tar-stream also, due to the same reasons as HTTP. Just verified on latest 13.
Test case:

```sh
echo hello > hello.txt
echo world > world.txt
tar c hello.txt world.txt > test.tar
```

```js
const tar = require('tar-stream')
const fs = require('fs')
const path = require('path')
const pipeline = require('stream').pipeline

fs.createReadStream('test.tar')
  .pipe(tar.extract())
  .on('entry', function (header, stream, done) {
    // in 13 this will only unpack one file due to
    // pipeline destroying the entire stream
    console.log(header.name)
    pipeline(stream, fs.createWriteStream(path.join('/tmp', header.name)), done)
  })
```
Let me put that into a proper issue
Thanks for the navigation help @ronag :)
This PR logically reverts #31940, which has caused lots of unnecessary breakage in the ecosystem. This PR also aligns better with the actual documented behavior: `stream.pipeline()` will call `stream.destroy(err)` on all streams except: * `Readable` streams which have emitted `'end'` or `'close'`. * `Writable` streams which have emitted `'finish'` or `'close'`. The behavior introduced in #31940 was much more aggressive in terms of destroying streams. This was good for avoiding potential resource leaks; however, it breaks some common assumptions in legacy streams. Furthermore, it makes the code simpler and removes some hacks. Fixes: #32954 Fixes: #32955 PR-URL: #32968 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Mathias Buus <[email protected]>
This PR logically reverts nodejs#31940, which has caused lots of unnecessary breakage in the ecosystem. This PR also aligns better with the actual documented behavior: `stream.pipeline()` will call `stream.destroy(err)` on all streams except: * `Readable` streams which have emitted `'end'` or `'close'`. * `Writable` streams which have emitted `'finish'` or `'close'`. The behavior introduced in nodejs#31940 was much more aggressive in terms of destroying streams. This was good for avoiding potential resource leaks; however, it breaks some common assumptions in legacy streams. Furthermore, it makes the code simpler and removes some hacks. Fixes: nodejs#32954 Fixes: nodejs#32955 PR-URL: nodejs#32968 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Mathias Buus <[email protected]> Backport-PR-URL: nodejs#32980
There was an edge case where an incorrect assumption was made
in regards to whether eos/finished means that the stream is
actually destroyed or not.
Checklist
- `make -j4 test` (UNIX) or `vcbuild test` (Windows) passes