net: Fix handling of upstream connection timeout events. #4040 #4107
Conversation
Signed-off-by: Ramya <[email protected]>
Is the issue still happening with v1.8.7?
I have been testing with the
Please confirm if you can still repro the issue with v1.8.7.
Config file:
valgrind_1_8_7.txt
assigned @leonardo-albertovich for review
Debug log & valgrind output without this PR: flb_segv_bug.log
Debug log & valgrind output with this PR: flb_segv_fix.log
The test sent ~500 log records of size 1 KB for 60 seconds.
Other stack traces seen for the same issue:
Hi @krispraws, I mostly agree with you on the diagnosis; however, the reason why your patch works is that you removed the

The reason I think about it this way is that when the flag is set to FLB_FALSE, the coroutine actually finished and the outer error handling code was executed (ie.

I think two things could happen:

1. There's a connection timeout caused by a DNS lookup timeout, where the timeout value is low enough that the events fire in the same event loop cycle, or, even worse, the DNS timer fires at a later cycle (this sounds very strange to me and I could elaborate on it later if you want to talk about it).
2. There's a connection timeout caused by an actual connection attempt.

In scenario 1

In scenario 2

I'd love to hear what you @krispraws and @edsiper think about it, as this is an issue I have been thinking about in the background for some time (I was focusing on a different way to fix it, which required more effort when it comes to detecting regressions caused by the change). If you have any questions or want to discuss it further, feel free to reach me in the fluent slack.
@leonardo-albertovich , thank you for looking at this, and for your detailed comment.
(Edited) This bug definitely occurs today in scenario 1, where the DNS timeout doesn't get executed and the DNS lookup is still pending. There is no valid socket linked to the connection, since it cannot be created until the DNS lookup returns the address family. The

I didn't fully grasp your idea about the flag and what side effects it would prevent. I am not on the slack yet, but I'll try to join later today.
There is an underlying root cause here with slow DNS lookups and DNS timers not firing on time, which I have not diagnosed yet. I added logs to track the UDP DNS timer and saw it fire almost 30 seconds late in some cases. In my workflow, I am also trying to stress the kinesis.firehose output plugin to its limits, so I don't use the threaded mode. I saw around 40 events being processed in one iteration of the event loop, and each iteration took multiple seconds, which may explain why the timers fire late. However, the upstream timer always fires earlier than the DNS timer. I'd love to hear your thoughts on this.
Here is a snippet of logs with the traces I added. It shows the DNS timer event executing very late.
In scenario 1 we don't really care about, nor do we need, the socket / shutdown at all, so that's fine. What's certainly terrible is having timers fire 30 seconds after they should. In this case you're using the default connection timeout of 10 seconds, which means the DNS lookup timeout should be 9 seconds, which makes that delay terrible. The event loop in flb_engine_start is initialized with 256 slots, so in order to have events spread over multiple epoll invocations the result count would have to be huge, which makes no sense.
I also considered adding some kind of delay before the connection context is actually destroyed: set a timestamp when moving a connection to the destroy queue and only destroy it N or more seconds later. I think your flag approach achieves the same.
Exactly, it achieves the same in a more controlled way. After all, we're seeing some very weird timing stuff there, so I'd rather not depend on that.
I had a few questions about c-ares and the async DNS lookups as well. Creating an ares_channel for each connection seems expensive. I don't have much knowledge/experience with c-ares, but it seems to be designed for a few channels with many queries on each. I could be wrong about that.
It's true that using a channel per lookup is slightly more wasteful than desirable. At the moment the choice was made mostly to avoid making the system overly complex, especially when it comes to timeouts and the coroutine life cycle.
@leonardo-albertovich, @edsiper, I dug into why the event loop and DNS lookups were slow for my workflow and I found something interesting. I suspected earlier that the change to use c-ares for async DNS was the cause. However, I ran a test with a code change to use the old
@@ -1081,6 +1083,30 @@ flb_sockfd_t flb_net_tcp_connect(const char *host, unsigned long port,
     if (is_async) {
         ret = flb_net_getaddrinfo(host, _port, &hints, &res,
                                   u_conn->u->net.dns_mode, connect_timeout);
     /*
Some context to explain this change better:
1. The check for u_conn->net_error is unconditional by intention and doesn't distinguish between the DNS lookup succeeding or failing.
2. This is my understanding of the current code base (pseudo-code):
coroutine_1_flb_net_tcp_connect:
dns_lookup_async
----yield/resume_1---
fd = create_socket
conn->fd = fd
tcp_connect_async(fd)
----yield/resume_2---
http_proxy_connect
tls_connect_async
----yield/resume_3---
check tls_connect return val and return connection
coroutine_2_flb_upstream_conn_timeouts:
shutdown(fd) // if fd is not valid, this has no effect
conn->net_error = CONNECT_TIMEOUT
prepare_destroy_conn // delete event, close socket, move connection to pending destroy queue
coroutine_3_flb_upstream_conn_pending_destroy:
for c in pending_destroy_queue:
destroy_conn(c) //free up connection memory
coroutine_2 and coroutine_3 can execute after any of the yield/resume
points in coroutine_1.
3. I saw from logs that the DNS lookup timeout handler can execute very late when the plugin is under load. I want to avoid relying on it.
4. This specific change is for the following scenario that I saw in my logs for the bug:
coroutine_1
yield/resume_1
coroutine_2
coroutine_3
coroutine_1
Since there is no file descriptor set in the connection object yet, coroutine_1 doesn't know that the connection timeout triggered in upstream. The DNS lookup can succeed or fail. An error must be returned even if DNS succeeded. Otherwise the connection will be returned as a successful connection.
u_conn->net_error = ETIMEDOUT;
prepare_destroy_conn(u_conn);
I discussed with @leonardo-albertovich on Slack. He thinks this change may have side effects. I'd like to understand under what scenario prepare_destroy_conn is needed here, i.e. the scenario where it wouldn't be handled by the code that is trying to create the connection.
@krispraws I created this branch as I told you yesterday: https://github.com/leonardo-albertovich/fluent-bit/tree/upstream_conn_busy_flag_addition In that branch I incorporated your conditional shutdown change, the "in-between" timeout detection and the busy flag. I have been testing this using https://github.com/pwhelan/donotshout, playing around with MinJitter and MaxJitter and disabling both TruncatePercent and DropPercent. This is what my config file looks like:
This causes enough delays to naturally trigger the condition.
@leonardo-albertovich - thanks. I was not able to look at it today, but I will check it out and run it in my environment tomorrow. I noticed another issue that may need additional changes related to connection cleanup. When the keepalive count for a connection expires, it is marked for destruction and the tcp socket is closed by calling

I don't understand why the tcp socket is closed before calling
Hey @leonardo-albertovich, I just went through your changes and they won't address multiple scenarios where the invalid memory access can happen. I can set up a repro in some time. If you want to use the flag approach, the flag should be set to TRUE before this line: https://github.com/fluent/fluent-bit/blob/master/src/flb_upstream.c#L523 and set to FALSE here: https://github.com/fluent/fluent-bit/blob/master/src/flb_upstream.c#L537
Actually, you are correct. I hadn't faced that issue, so I didn't even consider TLS and was trying to keep it at the lowest level I knew about, so that if someone ended up using that function directly instead of through one of the wrapper layers it would be covered. I'll change it to the points you correctly marked and recreate the branch and PR so it's nice and tidy.
I think this PR should not be merged because the main issue was addressed by another PR that was created as a result of the discussion in this one.
However, the log line additions to net_connect_async and flb_tls_session_create are still relevant, and I think making a new PR that isolates those changes would be the way to go.
I'll create a new PR tomorrow.
Any update on this issue?
It was addressed in the related PRs issued by me, which have already been merged in master.
Signed-off-by: Ramya <[email protected]>
Fluent Bit crashes with SIGSEGV consistently for my AWS workflow that creates a burst of TLS connections under high load.
The workflow uses the tail input plugin and the kinesis.firehose output plugin. The bug happens when DNS lookups are slow and the event loop processing is slow, so the DNS timeout doesn't get invoked on time. The upstream timeout handler gets invoked and moves the connection to the destroy queue. The scheduled event that removes the connection from the destroy queue and frees the memory also gets invoked. When the DNS call finally returns, the connection pointer is invalid. This causes segmentation faults with a variety of stack traces. I will post some of them in the comments.
I verified my initial hypothesis in #4040 (comment) by adding a lot of logs. This is one stack trace, with corresponding logs:
Logs from my branch with trace statements that print connection pointer:
The fix in this PR is not perfect because I haven't fully figured out how to make the upstream timeout handler cancel a pending DNS call. I have a couple of ideas around it but haven't tested them. It does prevent the invalid memory reference.
I will attach valgrind output and debug logs with and without the fix. Running the task with valgrind makes every connection timeout, however with the fix, the crashes don't happen.
There is also some performance regression in versions after 1.7.5 that makes DNS lookups or TCP connections slow, because I can run the same load on 1.7.5 with no issues.
Addresses #4040
Enter [N/A] in the box, if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:
Documentation
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.