process: always make stdio blocking #12970
Conversation
Always make stdio blocking. There have only been a few exceptions to this rule and I can’t seem to find any reason for those that outweighs the benefits.

Fixes: nodejs#6456
Fixes: nodejs#12921
This will mean that if you are piping to a child process and that process stalls, your main process will end up stalled too. (IIRC, that was the main reason.) I'm not sure this is a good idea.
That being said, we should make it non-blocking for pipes on Windows, as it was only ever blocking there for compatibility reasons which have long since changed. Also note that either: writing to a Console on Windows is still async, OR your process will never return from writing to a Windows console. (The reason being that Windows consoles do not have any sort of write-notification "event".)
Sorry, could you clarify this a bit… Which process stalls? Why does that affect the main (= parent?) process? And why doesn’t that cause problems on Windows?
Hmmm, child/parent might be the wrong description. If you are piping to a process, and the process being piped to stalls, a write from stdout/stderr may never complete, and your process may get stuck waiting for the consumer to signal that the write has completed.
My understanding is that it does, but that this point was never reconsidered.
But this is about stdio, so anything that deals with more than 1 process is about child processes?
Just to be clear, by “stalls” you mean it ends up waiting to be able to do a blocking write on stdio forever?
Why does the other process care about that?
I suppose but it might not actually be a child, e.g.
Yes.
The other process doesn't; in my example it is the one that's stuck, but it causes the upstream process to become stuck too.
@Fishrock123 sorry, but can you give a full example of what you’re talking about, or otherwise explain why one process waiting to write to stdio causes problems elsewhere? :)
Here is the reasoning for why pipes on Windows are blocking: nodejs/node-v0.x-archive#3584. This comment by @bnoordhuis seems important: #1771 (comment). I can't find the comment about the stalling issue off-hand, but I know it is floating around somewhere on GitHub.
Thanks, that comment by @bnoordhuis explains it, I guess… ugh. Yea, we might need to close this without landing. :/ (I also agree with Ben, ideally stdio wouldn’t have to be blocking and instead we’d have ways to fix those problems properly…)
One thing to consider is that in Docker and other cloud deployments, it's required that app logs are written to stdout, which the platform will usually collect (via a pipe). So logging would become a synchronous call, possibly blocking the event loop in production.
Closing, given the conversation with @Fishrock123 here and on IRC.
Isn't that only when the system's buffer limit is reached due to the child process not draining the pipe? Is there a practical way to solve the problem of …

Here's a proof, for anyone interested:

```c
// flood.c
#include <signal.h>
#include <stdio.h>

#define FLOOD_SIZE 2048

static volatile sig_atomic_t running = 1;

void handle_sigint(int sig) {
    (void) sig;
    running = 0;
}

int main(void) {
    signal(SIGINT, handle_sigint);
    FILE *urand = fopen("/dev/urandom", "rb");
    if (urand == NULL) {
        fputs("Could not open /dev/urandom\n", stderr);
        return 1;
    }
    char buf[FLOOD_SIZE];
    long count = 0;
    while (running) {
        count += 1;
        if (fread(buf, FLOOD_SIZE, 1, urand) != 1)
            break;
        if (fwrite(buf, FLOOD_SIZE, 1, stdout) != 1)
            break;
        fprintf(stderr, "\x1b[32mwrote %llu bytes\x1b[m\n",
                (unsigned long long) (count * FLOOD_SIZE));
    }
    fclose(urand);
    return 0;
}
```

```c
// clogged-sink.c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define SINK_SIZE 1024

static volatile sig_atomic_t running = 1;

void handle_sigint(int sig) {
    (void) sig;
    running = 0;
}

int main(void) {
    signal(SIGINT, handle_sigint);
    char buf[SINK_SIZE];
    while (running) {
        sleep(1);
        if (fread(buf, SINK_SIZE, 1, stdin) != 1)
            break;
        fprintf(stderr, "\x1b[35mread %d bytes\x1b[m\n", SINK_SIZE);
    }
    return 0;
}
```

```
$ cc -o flood flood.c
$ cc -o clogged-sink clogged-sink.c
$ ./flood | ./clogged-sink
```

This is going to be an issue any way you spin it: something has to buffer the output, and you run the risk of buffers filling up if the receiving child process isn't fast enough. What happens in that case? If you can define what you want to happen in those scenarios, this problem becomes much easier. Does the parent script crash? Does it block? Do we allocate more memory until the system runs out of it?

An example of the problem with approaching this incorrectly, of course, is an incorrectly configured production system that logs to stdout, piping it to a child process (not all that uncommon), the child process buggering up for some reason, and production systems randomly crashing or blocking (which looks like a crash) with extremely hard-to-diagnose bugs (assuming monitoring isn't set up correctly, and let's face it, a lot of the time it's not).

What do we expect Node to do when the output buffer inevitably fills up? That is the question we need to answer.
Always make stdio blocking. There have only been a few exceptions to
this rule and I can’t seem to find any reason for those that outweighs
the benefits (unfortunately).
Fixes: #6456
Fixes: #12921
Checklist
- make -j4 test (UNIX), or vcbuild test (Windows) passes

Affected core subsystem(s)
process/stdio