Error writing file: Broken pipe (IO::Error) #9065
Comments
Duplicate of #2713.
Well... apparently I keep running in circles 😆
@jwoertink note that this also happens in Ruby. It's just that you need to write more items to trigger it:
Ah interesting. So Crystal tanks out earlier, where Ruby can handle a little bit more, it seems. I wonder what Node does differently... This takes a while to run, but it finishes.
Now, I'm not actually doing any command line piping in Lucky. I am currently using a slightly older version of Dexter, which just wraps the old logger. There is a chance that the new logger might fix the issue, but we saw several hours of downtime on our app, and there's a worry that pushing this up could take the app down again.
You're talking about this error, right? https://forum.crystal-lang.org/t/unhandled-exception-error-writing-file-broken-pipe-errno/1396/4 In that case, it's unrelated because it's the TCP socket that is closed. Maybe the client moved to another page already? Any idea what triggers those? Client disconnection should be handled (and ignored) by the HTTP server.
Yes, that error. And actually, in that case, that was me booting my app locally and hitting the home page. I wish I could just say "here's my app!" 😂 Is there a good way I can extract some of this out? I guess I'll try and make a mini HTTP server and logger and see what I can get to happen.
Ok, so I just checked Bugsnag, here's the last error we got, which was before dropping back down to 0.33:
In case that helps any. I think I'm going to give the new logger a try and see if that makes a difference. But as for the original post, this issue is still valid as far as saying there's a Crystal compiler issue, even though the error is technically valid, right?
Could it be that the server is writing the response but the client already closed the connection? I'll try to reproduce that scenario...
Yes, the error from the compiler should be handled somehow to avoid that nasty output. But it's totally unrelated despite the message similarity. Regarding your error with the HTTP server, I can see in that backtrace which exception is raised. I just tried with this simple server:

```crystal
require "http/server"

server = HTTP::Server.new do |context|
  context.response.content_type = "text/plain"
  10000000.times do |i|
    context.response.print "Hello world! #{i}\n"
  end
rescue e
  puts e.inspect
  raise e
end

address = server.bind_tcp 8080
puts "Listening on http://#{address}"
server.listen
```
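For reference, one way to provoke the client-disconnect scenario against a server like the one above is to close the socket before the body has finished streaming. This is only an illustrative sketch (the raw HTTP request and the `localhost:8080` address are assumptions matching the sample server), not part of the original report:

```crystal
require "socket"

# Connect to the sample server above, read only the beginning of the
# response, then close the socket while the server is still writing.
socket = TCPSocket.new("localhost", 8080)
socket << "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
socket.flush
puts socket.gets # read just the status line
socket.close     # the server's remaining writes should now fail with a broken pipe
```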
Oh yes, you're right. I have several different errors in here with the same error message. This is the right one:
@asterite you're right, I thought HTTP::Server was already silently ignoring client disconnections, but it wasn't logging anything at all. Now with the new logger we could test that more effectively.
I appreciate you guys helping me through all this ❤️
See https://pmhahn.github.io/SIGPIPE/

We want to handle EPIPE where it occurs:

```crystal
Signal::PIPE.reset
loop { print "foo\n" }
```

Nobody wants an HTTP server to exit because a client closed a connection. But we're so used to simple command line tools that merely trap SIGPIPE to exit the program ASAP, that we assume that a failed read/write on STDIN and STDOUT isn't important, but... it's still an error and it must be handled.

Now, the exit-on-SIGPIPE behavior can be achieved in Crystal with the following:

```crystal
LibC.signal(LibC::SIGPIPE, ->(signal : Int32) { LibC._exit(0) })
loop { print "foo\n" }
```

We can't use ...
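For what it's worth, here is a minimal sketch of the "treat it as an error and handle it" option. It assumes only the default runtime behaviour described above: Crystal ignores SIGPIPE, so the failed write surfaces as the `IO::Error` in this issue's title.

```crystal
# Crystal ignores SIGPIPE by default, so a write to a closed pipe surfaces as
# "Error writing file: Broken pipe (IO::Error)" instead of killing the process.
begin
  loop { print "foo\n" }
rescue ex : IO::Error
  # The reader (e.g. `head`) went away: exit quietly, mimicking the classic
  # die-on-SIGPIPE behaviour of command line tools. A stricter handler could
  # also inspect the underlying errno and only swallow EPIPE.
  exit 0
end
```

Piping such a program into `head` should then terminate cleanly instead of printing the unhandled exception trace.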
@ysbaddaden would a ...
Maybe something on IO would fit better? Like ...
Maybe we should have a special exception for ...
Is there an easy way to repro this failing (and not failing in 0.33)?
Not really, because it happened in 0.33 too (intermittently). It's just something changed in 0.34 that made it more apparent. I do see it occasionally locally when running Lucky through overmind, but it's very rare. On a side note, there was an issue in Lucky with the old logger that seems to be fixed with the new logger. I'm going to deploy today to see if, magically, this issue is also fixed 🤷‍♂️
Yeah, I still see the issue in 0.35.1. It's just never consistent enough to really understand what part is causing it. My guess is maybe something with a connection pool. I see you're using Kemal @Dan-Do, are you doing any database stuff? Or just straight web requests?
I am using ArangoDB but I don't think that's the cause. This issue only occurred with static resources.
I wonder if other functions were not adapted to the following pull request:
Not sure if that'll be any help, but I also see this often on close's flush on our invidious instance:
Then, if it happens enough, we somehow run out of fds and also get:
As a workaround we restart invidious in a crontab. It's a bit weird: it normally doesn't happen, but when it does we get a lot of them, e.g. with a restart every 10 minutes we get:
The numbers make me think it's an attack of sorts (I, err, don't think we'd normally get 300k pages opened in 10 minutes for our small instance that keeps breaking down, but it got worse after invidio.us stopped -- I've just enabled access logs so I will be able to confirm that soon). It's just too weird that we get almost none for a while, then a burst of them, and restarting doesn't make it stop. I've tried reproducing and I can semi-reliably get the Broken pipe to happen if I load a page and press escape before it's done loading, so nginx will close the socket while invidious still tries to flush data, and we pretty much get what I'd expect to get. I'd say these messages could just be silenced as they likely used to be, and that part is an artefact of the log changes. OTOH the too many open files error makes me think we're leaking a number of fds when this happens, and I'd like to understand why -- in HTTP::Server the 'close' method is just setting a bool, so I assume this is supposed to be closed by garbage collection once the server is no longer referenced? Thanks!
I have the same issue using Kemal. How to reproduce (I don't know if it works in your environment):
Good call @Dan-Do, it looks like that same basic flow is happening on Lucky too. Which really means it's something in ...
Is there any update on this?
I haven't updated to Crystal 1.0 yet, so it may or may not be fixed, but with 0.36.1 we still see the issue.
Bump! On invidious, we simply removed the exception here:
Using Crystal 1.1.1, and I still see this from time to time. It seems to only happen when a connection has gone stale. Like if I'm sitting on a page on my site, then my session times out, then on my next interaction I'm logged out, redirected, and the broken pipe is thrown. It doesn't seem to really affect anything other than noise in the logs though.
Looks like @BrucePerens may have found the issue in Lucky luckyframework/lucky#1608; however, I wonder if just rescuing ...
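Whatever exact exception that truncated sentence refers to, a handler-level rescue would look roughly like the sketch below. `IgnoreBrokenPipeHandler` is an illustrative name, not an existing Lucky or Kemal API:

```crystal
require "http/server"

# Illustrative only: swallow the IO::Error raised when a client disconnects
# mid-response so it doesn't bubble up as a noisy unhandled exception.
class IgnoreBrokenPipeHandler
  include HTTP::Handler

  def call(context : HTTP::Server::Context)
    call_next(context)
  rescue IO::Error
    # The client went away before the response finished; nothing left to send.
    # (This could log at debug level instead of staying completely silent.)
  end
end
```

Such a handler would be registered alongside an app's other handlers when the `HTTP::Server` is constructed; whether that is the right layer to swallow the error is exactly the open question in this thread.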
If you want to test this, just delay your HTTP response until the browser times out.
I've tried to reproduce these conditions with this simple program:

```crystal
require "http"

channel = Channel(Nil).new

server = HTTP::Server.new do |context|
  channel.receive
  context.response << "foo"
  context.response.flush
  puts "wrote response"
end

address = server.bind_unused_port
puts "Listening on #{address}"

spawn do
  client = HTTP::Client.new(address.address, address.port)
  client.read_timeout = 1.seconds
  response = client.get("/")
  puts response.body
rescue exc
  exc.inspect_with_backtrace(STDERR)
  channel.send(nil)
  sleep 1.seconds
  exit
end

server.listen
```

It raises a ...
I wonder if it has anything to do with the static file handler 🤔 I can't remember if I've seen it with an API-only Lucky app, but I definitely see it all the time on every Lucky app (including the website) that I've worked with. It's for sure a tricky issue.
I tried some variations of the @straight-shoota example and could not break it. Let's close this one and investigate more on the Lucky side.
Yeah, we can investigate more on the Lucky side. Though some of the people in this thread got the error using Kemal, so it doesn't seem to be Lucky-specific.
The stack trace says it's happening in the Crystal stdlib. We just have to learn the right conditions before we ask any more of the Crystal folks.
The original error doesn't seem to happen anymore. Your program doesn't try to write to the socket while it's closed (I have no idea how async works in Crystal), so I just removed the client and ran curl manually with this, hitting ^C before 2s:
At which point the server does get SIGPIPEs writing its response:
But it ignores them (no stack trace printed despite the rescue block).
I think this deserves more love: high-load Crystal web apps still see this error. Here's a comment from the Kemal issue list: kemalcr/kemal#591 (comment)
I believe we can easily ignore errors from broken pipes in some places, such as ... That said, it seems we still don't quite understand the exact error conditions. There's no reproduction with plain stdlib, right?
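To make that concrete, the kind of guard being suggested might look roughly like this; `write_ignoring_broken_pipe` is an illustrative helper, not an existing stdlib method, and it assumes `IO::Error#os_error` exposes the underlying errno:

```crystal
# Sketch of selectively ignoring only the broken-pipe case on a write,
# re-raising every other IO error.
def write_ignoring_broken_pipe(io : IO, data : String)
  io << data
  io.flush
rescue ex : IO::Error
  raise ex unless ex.os_error == Errno::EPIPE
  # EPIPE: the peer closed the connection; drop the rest of the output.
end
```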
I've still never been able to reproduce it outside of my app, but I'm on Crystal 1.12 and I still get it. Though, for me, I don't need high load to reproduce it locally.
I believe it should be enough to close the connection from the client side before the server is able to fulfill the request. You could simulate it with some long-running requests (...)
It happens 2 times in around 300 requests (kemalcr's static file handler) on my localhost PC. I don't know how to reproduce it because it's random.
I posted this on the Kemal repo, but wanted to post it here too so that it gets more visibility. I am using ... I ran into this issue today when about 60 people tried to access my server at the same time. Here is the error I saw (multiple times), and the server became unresponsive. I just wanted to share this here. Is there a solution to this? It seems like it is not only a problem of logs getting cluttered with these messages; it also affects the responsiveness of the server. Thank you.
I still get this all the time. I haven't been able to reliably recreate the issue in simple Crystal though. No clue what causes it or how to fix it.
I'm seeing this error in my production Lucky app pretty often. Prior to Crystal 0.34 I would see it maybe a handful of times a day, but after upgrading to Crystal 0.34, we noticed we were seeing it on almost every request coming in.
As per @waj's comment, the easiest way to get a similar error is:

```
crystal eval "1.upto(1000) { |i| puts i }" | head
```

Or just put those lines into a separate Crystal file and run it like normal. There are several other ways to get a similar error, as shown in this thread.
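For completeness, here is what that standalone-file variant might look like (the file name is arbitrary):

```crystal
# count.cr -- the same trigger as the one-liner above. Pipe it into a program
# that exits early, e.g.:
#
#   crystal run count.cr | head
#
# `head` closes the pipe after 10 lines, and the remaining writes to STDOUT
# fail with "Error writing file: Broken pipe (IO::Error)".
1.upto(1000) { |i| puts i }
```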