
net: Fix handling of upstream connection timeout events. #4040 #4107

Closed

Conversation

krispraws
Contributor

@krispraws krispraws commented Sep 21, 2021

Signed-off-by: Ramya [email protected]

Fluent Bit crashes with SIGSEGV consistently for my AWS workflow that creates a burst of TLS connections under high load.
The workflow uses the tail input plugin and the kinesis_firehose output plugin. The bug happens when DNS lookups and event loop processing are slow, so the DNS timeout handler doesn't get invoked on time. The upstream timeout handler gets invoked instead and moves the connection to the destroy queue, and the scheduled event that drains the destroy queue then frees the connection's memory. When the DNS call finally returns, the connection pointer is no longer valid. This causes segmentation faults with a variety of stack traces; I will post some of them in the comments.

I verified my initial hypothesis in #4040 (comment) by adding a lot of logs. This is one stack trace, with corresponding logs:

Thread 3 (Thread 0x7f639fa85700 (LWP 18786)):
#0  0x00007f63a07893dc in epoll_wait (epfd=9, events=0x7f6398004870, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
#1  0x000000000077800f in _mk_event_wait (loop=0x7f6398004a00) at /home/ec2-user/fluent-bit/lib/monkey/mk_core/mk_event_epoll.c:366
#2  0x0000000000778324 in mk_event_wait (loop=0x7f6398004a00) at /home/ec2-user/fluent-bit/lib/monkey/mk_core/mk_event.c:169
#3  0x0000000000444bbc in log_worker_collector (data=0x7f6398004780) at /home/ec2-user/fluent-bit/src/flb_log.c:132
#4  0x000000000046b3c1 in step_callback (data=0x7f63980050d0) at /home/ec2-user/fluent-bit/src/flb_worker.c:44
#5  0x00007f63a1b1240b in start_thread (arg=0x7f639fa85700) at pthread_create.c:465
#6  0x00007f63a07890bf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f63a1f3f340 (LWP 18784)):
#0  0x00007f63a075977b in __GI___nanosleep (requested_time=requested_time@entry=0x7ffd5112f7b0, remaining=remaining@entry=0x7ffd5112f7b0)
    at ../sysdeps/unix/sysv/linux/nanosleep.c:27
#1  0x00007f63a075969a in __sleep (seconds=0) at ../sysdeps/posix/sleep.c:55
#2  0x0000000000439bf6 in flb_main (argc=5, argv=0x7ffd5112f958) at /home/ec2-user/fluent-bit/src/fluent-bit.c:1286
#3  0x0000000000439c6d in main (argc=5, argv=0x7ffd5112f958) at /home/ec2-user/fluent-bit/src/fluent-bit.c:1303

Thread 1 (Thread 0x7f63a0486700 (LWP 18785)):
#0  x86_64_fallback_frame_state (context=0x7f6399997670, context=0x7f6399997670, fs=0x7f6399997760) at ./md-unwind-support.h:58
#1  uw_frame_state_for (context=context@entry=0x7f6399997670, fs=fs@entry=0x7f6399997760) at ../../../libgcc/unwind-dw2.c:1257
#2  0x00007f63a0a56e52 in _Unwind_Backtrace (trace=trace@entry=0x43a540 <unwind>, trace_argument=trace_argument@entry=0x7f6399997920)
    at ../../../libgcc/unwind.inc:290
#3  0x000000000043a637 in backtrace_full (state=0x7f63a1f4b000, skip=<optimized out>, callback=<optimized out>, 
    error_callback=<optimized out>, data=<optimized out>) at /home/ec2-user/fluent-bit/lib/libbacktrace-ca0de05/backtrace.c:127
#4  0x0000000000436ebb in flb_stacktrace_print (st=0xad4df0 <flb_st>) at /home/ec2-user/fluent-bit/include/fluent-bit/flb_stacktrace.h:71
#5  0x000000000043850c in flb_signal_handler (signal=11) at /home/ec2-user/fluent-bit/src/fluent-bit.c:559
#6  <signal handler called>
#7  0x000000000046894c in __mk_list_del (prev=0x0, next=0x0) at /home/ec2-user/fluent-bit/lib/monkey/include/monkey/mk_core/mk_list.h:87
#8  0x0000000000468984 in mk_list_del (entry=0x7f6398d725f0) at /home/ec2-user/fluent-bit/lib/monkey/include/monkey/mk_core/mk_list.h:93
#9  0x0000000000469548 in prepare_destroy_conn (u_conn=0x7f6398d72570) at /home/ec2-user/fluent-bit/src/flb_upstream.c:416
#10 0x00000000004695aa in prepare_destroy_conn_safe (u_conn=0x7f6398d72570) at /home/ec2-user/fluent-bit/src/flb_upstream.c:438
#11 0x0000000000469935 in create_conn (u=0x7f63982b36e0) at /home/ec2-user/fluent-bit/src/flb_upstream.c:534
#12 0x0000000000469e5e in flb_upstream_conn_get (u=0x7f63982b36e0) at /home/ec2-user/fluent-bit/src/flb_upstream.c:677
#13 0x000000000055b27a in request_do (aws_client=0x7f63982b3650, method=1, uri=0x7a2a1d "/", 
    body=0x7f639b139b10 "{\"DeliveryStreamName\":\"PUT-S3-4vsGw\",\"Records\":[{\"Data\":\"eyJpbnB1dF9maWxlIjoiL2hvbWUvZWMyLXVzZXIvZmxiLWRhdGEvaW5wdXRfZGF0YS9pbnB1dF8xNjMyMDk0OTk2LmxvZyIsImlucHV0X29mZnNldCI6MjcwNTg3NTIsImxvZyI6InsgXCJ"..., body_len=831653, 
    dynamic_headers=0xac9e60 <put_record_batch_header>, dynamic_headers_len=1) at /home/ec2-user/fluent-bit/src/aws/flb_aws_util.c:292
#14 0x000000000055ae0b in flb_aws_client_request (aws_client=0x7f63982b3650, method=1, uri=0x7a2a1d "/", 
    body=0x7f639b139b10 "{\"DeliveryStreamName\":\"PUT-S3-4vsGw\",\"Records\":[{\"Data\":\"eyJpbnB1dF9maWxlIjoiL2hvbWUvZWMyLXVzZXIvZmxiLWRhdGEvaW5wdXRfZGF0YS9pbnB1dF8xNjMyMDk0OTk2LmxvZyIsImlucHV0X29mZnNldCI6MjcwNTg3NTIsImxvZyI6InsgXCJ"..., body_len=831653, 
    dynamic_headers=0xac9e60 <put_record_batch_header>, dynamic_headers_len=1) at /home/ec2-user/fluent-bit/src/aws/flb_aws_util.c:160
#15 0x0000000000519e17 in put_record_batch (ctx=0x7f6398041770, buf=0x7f6398435dc0, payload_size=831653, num_records=500)
    at /home/ec2-user/fluent-bit/plugins/out_kinesis_firehose/firehose_api.c:828
#16 0x00000000005187e4 in send_log_events (ctx=0x7f6398041770, buf=0x7f6398435dc0)

Logs from my branch with trace statements that print the connection pointer:

[2021/09/19 23:44:18] [trace] [upstream][conn=0x7f6398d72570][fd=-1][coro=0x7f6398d70a00] create_conn set connect_timeout=10
[2021/09/19 23:44:18] [trace] [io][conn=0x7f6398d72570][coro=0x7f6398d70a00] flb_io_net_connect called
[2021/09/19 23:44:30] [debug] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_timeouts connection #-1 to firehose.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/09/19 23:44:30] [debug] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_timeouts cleaning up connection #-1 due to connect timeout
[2021/09/19 23:44:30] [trace] [upstream][conn=0x7f6398d72570][fd=-1] prepare_destroy_conn #-1 to firehose.us-east-1.amazonaws.com:443
[2021/09/19 23:44:32] [trace] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_pending_destroy connection #-1 net_error=110
[2021/09/19 23:44:32] [trace] [upstream][conn=0x7f6398d72570][fd=-1] destroy_conn connection #-1 tls_session=(nil). Memory for connection freed.
[2021/09/19 23:44:34] [ warn] [net][conn=0x7f6398d72570] flb_net_tcp_connect async getaddrinfo(host='firehose.us-east-1.amazonaws.com', err=12): Timeout while contacting DNS servers
[2021/09/19 23:44:34] [debug] [io][conn=0x7f6398d72570][coro=0x7f6398d70a00] flb_io_net_connect tcp connection failed
[2021/09/19 23:44:34] [debug] [upstream][conn=0x7f6398d72570][fd=-1][coro=0x7f6398d70a00] create_conn connection #-1 to firehose.us-east-1.amazonaws.com:443 failed, cleaning up
[2021/09/19 23:44:34] [trace] [upstream][conn=0x7f6398d72570][fd=-1] prepare_destroy_conn #-1 to firehose.us-east-1.amazonaws.com:443

The fix in this PR is not perfect because I haven't fully figured out how to make the upstream timeout handler cancel a pending DNS call. I have a couple of ideas but haven't tested them yet. It does, however, prevent the invalid memory reference.

I will attach valgrind output and debug logs with and without the fix. Running the task under valgrind makes every connection time out, but with the fix the crashes don't happen.

There is also some performance regression in versions after 1.7.5 that makes DNS lookups or TCP connections slow; I can run the same load on 1.7.5 with no issues.


Addresses #4040


Enter [N/A] in the box if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • [Y] Example configuration file for the change
  • [Y] Debug log output from testing the change
  • [Y] Attached Valgrind output that shows no leaks or memory corruption was found

Documentation

  • [N/A] Documentation required for this feature

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

@edsiper
Member

edsiper commented Sep 21, 2021

Is the issue still happening with v1.8.7?

@krispraws
Contributor Author

Is the issue still happening with v1.8.7?

I have been testing with the master branch today. But I ran into this in 1.8.5 almost a month ago.

@edsiper
Member

edsiper commented Sep 21, 2021

Please confirm if you can still repro the issue with v1.8.7

@krispraws
Contributor Author

krispraws commented Sep 21, 2021

Config file:

[SERVICE]
    Flush           1
    Daemon          off
    Log_Level       debug

[INPUT]
    Name              tail
    Path              /home/ec2-user/flb-data/input_data/*.log
    Path_Key          input_file  
    Offset_Key        input_offset  
    refresh_interval  2
    rotate_wait       5
    db                /home/ec2-user/flb-data/input_db/fluentbit-logs.db
    db.sync           normal
    db.locking        true
    buffer_chunk_size 1MB
    buffer_max_size   50MB
    skip_long_lines   on
    mem_buf_limit     199800KB

[OUTPUT]
    name              kinesis_firehose
    match             *
    region            us-east-1
    delivery_stream   <redacted>
    role_arn          <redacted>            
    Retry_Limit       3

@krispraws
Contributor Author

Please confirm if you can still repro the issue with v1.8.7

I just reproduced it with 1.8.7 and valgrind. See the attached output: valgrind_1_8_7.txt

@edsiper
Member

edsiper commented Sep 21, 2021

assigned @leonardo-albertovich for review

@krispraws
Contributor Author

krispraws commented Sep 21, 2021

Debug log & valgrind output on master branch without changes:

flb_segv_bug.log
valgrind_bug.txt

Debug log & valgrind output with this PR:

flb_segv_fix.log
valgrind_fix.txt

The test sent ~500 log records of size 1 KB for 60 seconds.

@krispraws
Contributor Author

Other stack traces seen for the same issue:

[2021/09/19 20:42:51] [engine] caught signal (SIGSEGV)
#0  0x777d85            in  mk_event_add() at lib/monkey/mk_core/mk_event.c:96
#1  0x45b824            in  net_connect_async() at src/flb_network.c:416
#2  0x45d4a2            in  flb_net_tcp_connect() at src/flb_network.c:1274
#3  0x484cec            in  flb_io_net_connect() at src/flb_io.c:93
#4  0x46955e            in  create_conn() at src/flb_upstream.c:529
#5  0x469afb            in  flb_upstream_conn_get() at src/flb_upstream.c:677
#6  0x55aeb7            in  request_do() at src/aws/flb_aws_util.c:292
#7  0x55aa48            in  flb_aws_client_request() at src/aws/flb_aws_util.c:160
#8  0x519a54            in  put_record_batch() at plugins/out_kinesis_firehose/firehose_api.c:828
#9  0x518421            in  send_log_events() at plugins/out_kinesis_firehose/firehose_api.c:376
#10 0x51877d            in  add_event() at plugins/out_kinesis_firehose/firehose_api.c:451
#11 0x518aea            in  process_and_send_records() at plugins/out_kinesis_firehose/firehose_api.c:551
#12 0x517129            in  cb_firehose_flush() at plugins/out_kinesis_firehose/firehose.c:327
#13 0x450eff            in  output_pre_cb_flush() at include/fluent-bit/flb_output.h:509
#14 0x77a1c6            in  co_init() at lib/monkey/deps/flb_libco/amd64.c:117
#15 0xffffffffffffffff  in  ???() at ???:0

destroy_conn() also destroys the TLS session.

[2021/09/19 18:54:04] [engine] caught signal (SIGSEGV)
#0  0x7f3dfd5993a4      in  tcache_get() at 193-ga0bc5dd3be/malloc/malloc.c:2952
#1  0x7f3dfd5993a4      in  __GI___libc_malloc() at 193-ga0bc5dd3be/malloc/malloc.c:3053
#2  0x7f3dfe19ef18      in  CRYPTO_zalloc() at de/bits/string_fortified.h:230
#3  0x7f3dfe1242f2      in  EC_GROUP_new() at sr/include/bits/string_fortified.h:33
#4  0x7f3dfe124e15      in  EC_GROUP_dup() at sr/include/bits/string_fortified.h:242
#5  0x7f3dfe12349e      in  EC_KEY_set_group() at clude/internal/refcount.h:480
#6  0x7f3dfe128c3a      in  pkey_ec_paramgen() at sr/include/stdlib.h:407
#7  0x7f3dfe18bb5a      in  EVP_PKEY_paramgen() at sr/lib/gcc/x86_64-redhat-linux/7/include/stddef.h:61
#8  0x7f3dfe52229a      in  ssl_generate_param_group() at enssl/x509.h:4777
#9  0x7f3dfe54691d      in  tls_process_ske_ecdhe() at :2214
#10 0x7f3dfe54691d      in  tls_process_key_exchange() at :2290
#11 0x7f3dfe5470a4      in  ossl_statem_client_process_message() at :1048
#12 0x7f3dfe540dee      in  read_state_machine() at l/packet_local.h:636
#13 0x7f3dfe540dee      in  state_machine() at l/packet_local.h:434
#14 0x7f3dfe52dfff      in  ???() at ternal/dane.h:3670
#15 0x479f11            in  tls_net_handshake() at src/tls/openssl.c:417
#16 0x47a692            in  flb_tls_session_create() at src/tls/flb_tls.c:318
#17 0x484efe            in  flb_io_net_connect() at src/flb_io.c:125
#18 0x46955e            in  create_conn() at src/flb_upstream.c:529
#19 0x469afb            in  flb_upstream_conn_get() at src/flb_upstream.c:677

@krispraws
Contributor Author

krispraws commented Sep 21, 2021

#3894 , #3866 , #3823 , #3687 , #3940 are possibly the same issue.

@leonardo-albertovich
Collaborator

Hi @krispraws, I mostly agree with your diagnosis. However, the reason your patch works is that you removed the prepare_destroy_conn call in flb_upstream_conn_timeouts, which is not really viable without properly assessing the possible side effects; that is why this fix has not been released yet. Given the way the event loop works, I think the best fix wouldn't be to make the upstream timeout handler aware of DNS lookups, but rather to add a flag to flb_upstream_conn that marks it as busy (i.e. set and cleared in flb_io_net_connect around the flb_net_tcp_connect call, or within flb_net_tcp_connect itself, which would be a bit messier) and to have flb_upstream_conn_active_destroy wait until that flag is FLB_FALSE before destroying the context.

The reason I think about it this way is that once the flag is set back to FLB_FALSE, the coroutine has actually finished and the outer error handling code has executed (i.e. create_conn -> flb_io_net_connect -> flb_net_tcp_connect, where create_conn calls prepare_destroy_conn_safe when flb_io_net_connect fails). A sketch of this idea follows the two scenarios below.

I think two things could happen:

1 - There's a connection timeout caused by a DNS lookup timeout, where the timeout value is low enough that the events fire in the same event loop cycle or, even worse, the DNS timer fires in a later cycle (this sounds very strange to me and I can elaborate on it later if you want to talk about it).

2 - There's a connection timeout caused by an actual connection attempt.

In scenario 1, flb_net_tcp_connect exits on its own once the DNS lookup timer executes and cancels the lookup, which toggles the flb_upstream_conn flag; once the coroutine finishes and yields execution (and other events have been processed), flb_upstream_conn_active_destroy takes care of that connection.

In scenario 2, flb_upstream_conn_timeouts calls shutdown on the connection file descriptor, which causes the syscall to abort and lets flb_net_tcp_connect return (toggling the flag as in scenario 1).
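To make the busy-flag idea concrete, here is a minimal, self-contained sketch. The struct, field and function names are simplified stand-ins for flb_upstream_conn, flb_io_net_connect, flb_net_tcp_connect and the pending-destroy sweep; they are assumptions for illustration, not the actual Fluent Bit implementation.

```c
#include <stdio.h>

/* Hypothetical, simplified connection context. The real
 * struct flb_upstream_conn in Fluent Bit has many more fields. */
struct conn_sketch {
    int fd;
    int net_error;
    int busy;   /* non-zero while a connect coroutine still owns it */
};

/* Stand-in for flb_net_tcp_connect(); in Fluent Bit this may yield the
 * coroutine while the async DNS lookup and connect() are in flight. */
static int tcp_connect_sketch(struct conn_sketch *conn)
{
    return conn->net_error ? -1 : 0;
}

/* Stand-in for flb_io_net_connect(): mark the context busy for the
 * whole duration of the (possibly yielding) connect attempt. */
static int io_net_connect_sketch(struct conn_sketch *conn)
{
    int ret;

    conn->busy = 1;                   /* coroutine owns the context */
    ret = tcp_connect_sketch(conn);   /* may yield on DNS / connect */
    conn->busy = 0;                   /* destruction is now safe    */
    return ret;
}

/* Stand-in for the pending-destroy sweep: skip busy contexts instead
 * of freeing memory a suspended coroutine may still dereference. */
static void pending_destroy_sketch(struct conn_sketch *conn)
{
    if (conn->busy) {
        printf("connection %d still busy, deferring destroy\n", conn->fd);
        return;   /* retry on the next scheduler cycle */
    }
    printf("connection %d destroyed\n", conn->fd);
}

int main(void)
{
    struct conn_sketch c = { .fd = -1, .net_error = 0, .busy = 0 };
    io_net_connect_sketch(&c);
    pending_destroy_sketch(&c);
    return 0;
}
```

The point is that the sweep only frees contexts whose owning coroutine has already returned through its normal error-handling path.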

I'd love to hear what you @krispraws and @edsiper think about it, as this is an issue I have been thinking about in the background for some time (I was focusing on a different way to fix it which required more effort when it comes to detecting regressions caused by the change).

If you have any questions or want to discuss it further, feel free to reach me on the Fluent Slack.

@krispraws
Contributor Author

krispraws commented Sep 21, 2021

@leonardo-albertovich , thank you for looking at this, and for your detailed comment.
My reasoning for removing the prepare_destroy_conn call in flb_upstream_conn_timeouts for connect timeouts was:

  • connect_timeout only applies to new connections
  • There should be another coroutine that is "responsible" for checking if the connection succeeded or failed, which can call prepare_destroy_conn to do the cleanup. In this case, it would be the coroutine that calls create_conn.
  • prepare_destroy_conn does three things: it deletes the connection's event from the event loop, closes the socket, and moves the connection from the busy queue to the pending destroy queue. With my change, removing the event gets deferred. I may be missing some implication here. The socket close can (imo) wait until create_conn returns or the code in flb_network detects the syscall failure.

(Edited) This bug definitely occurs today in scenario 1, where the DNS timeout handler doesn't get executed and the DNS lookup is still pending. There is no valid socket linked to the connection, since it cannot be created until the DNS lookup returns the address family, so the shutdown call in flb_upstream_conn_timeouts has no effect.
But it also exists even when there is a valid file descriptor, since flb_net_tcp_connect will still try to access u_conn if the syscall fails.

I didn't fully grasp your idea about the flag and what side effects it would prevent. I am not on Slack yet but I'll try to join later today.

There's a connection timeout caused by a DNS lookup timeout, where the timeout value is low enough that the events fire in the same event loop cycle or, even worse, the DNS timer fires in a later cycle (this sounds very strange to me and I can elaborate on it later if you want to talk about it).

There is an underlying root cause here with slow DNS lookups and DNS timers not firing on time that I have not diagnosed yet. I added logs to track the UDP DNS timer and saw it fire almost 30 seconds late in some cases. In my workflow, I am also trying to stress the kinesis_firehose output plugin to its limits, so I don't use the threaded mode. I saw around 40 events being processed in one iteration of the event loop, and each iteration took multiple seconds, which may explain why the timers fire late. However, the upstream timer always fires earlier than the DNS timer. I'd love to hear your thoughts on this.

@krispraws
Contributor Author

Here is a snippet of logs with the traces I added. It shows the DNS timer event executing very late.

[2021/09/19 23:44:18] [trace] [upstream][conn=0x7f6398d72570][fd=-1][coro=0x7f6398d70a00] create_conn set connect_timeout=10
[2021/09/19 23:44:18] [trace] [io][conn=0x7f6398d72570][coro=0x7f6398d70a00] flb_io_net_connect called
[2021/09/19 23:44:18] [trace] [net] flb_net_getaddrinfo coro=0x7f6398d70a00, dns_mode_textual=(null), timeout=10, dns_mode=U
[2021/09/19 23:44:18] [trace] [dns] flb_net_dns_lookup_context_create: coro=0x7f6398d70a00 dns_mode=U
[2021/09/19 23:44:18] [trace] [dns] flb_net_dns_lookup_context_create: coro=0x7f6398d70a00 created lookup_context=0x7f63982b1040, ares_channel=0x7f639927b720
[2021/09/19 23:44:18] [trace] [net] flb_net_getaddrinfo coro=0x7f6398d70a00, created lookup_context=0x7f63982b1040, ares_channel=0x7f639927b720
[2021/09/19 23:44:18] [trace] [dns] flb_dns_ares_socket: af=2, type=2, protocol=0, lookup_ctx=0x7f63982b1040
[2021/09/19 23:44:18] [trace] [dns] flb_dns_ares_socket: coro=0x7f6398d70a00, lookup_ctx=0x7f63982b1040, lookup_ctx->ares_channel=0x7f639927b720, lookup_ctx->finished=0, lookup_ctx->dropped=0, lookup_ctx->response_event.fd=0, ares_socket_created=0, ares_socket_registered=0, lookup_ctx->udp_timer=(nil)
[2021/09/19 23:44:18] [trace] [net] flb_dns_ares_socket Added event=0x7f63982b1040, type=4, mask=1
[2021/09/19 23:44:18] [trace] [dns] flb_dns_ares_connect: sockfd=62, sockaddr=0x7f6399997b20, userdata=0x7f63982b1040, ip=172.31.0.2, ip_ret=0 
[2021/09/19 23:44:18] [trace] [dns] flb_dns_ares_send: sockfd=62, iovec=0x7f6399997b50, len=1, userdata=0x7f63982b1040
[2021/09/19 23:44:18] [trace] [dns] flb_dns_ares_send: sockfd=62, iovec=0x7f6399997b50, len=1, userdata=0x7f63982b1040
[2021/09/19 23:44:18] [trace] [net] flb_net_getaddrinfo coro=0x7f6398d70a00, lookup_context=0x7f63982b1040, ares_channel=0x7f639927b720, computed timeout=9000, ares_socket_created=1, ares_socket_type=2
[2021/09/19 23:44:18] [debug] [net] flb_net_getaddrinfo coro=0x7f6398d70a00, lookup_context=0x7f63982b1040, ares_channel=0x7f639927b720 created udp timer=0x7f63982b10d0
[2021/09/19 23:44:22] [trace] [engine] Received custom event=0x7f63982b1040
[2021/09/19 23:44:22] [trace] [dns] flb_net_getaddrinfo_event_handler: coro=0x7f63982b1040, arg=0x7f6398d70a00, lookup_ctx=0x7f63982b1040, lookup_ctx->ares_channel=0x7f639927b720, lookup_ctx->finished=0, lookup_ctx->dropped=0, lookup_ctx->response_event.fd=62, ares_socket_type=2, ares_socket_created=1, ares_socket_registered=1, lookup_ctx->udp_timeout_detected=0x7f6399997dc0
[2021/09/19 23:44:22] [trace] [dns] flb_dns_ares_recvfrom: sockfd=62, from_sockaddr=0x7f63a0484a40, data=0x7f63a0484a60, data_len=4097, userdata=0x7f63982b1040 
[2021/09/19 23:44:22] [trace] [dns] flb_dns_ares_recvfrom: sockfd=62, from_sockaddr=0x7f63a0484a40, data=0x7f63a0484a60, data_len=4097, userdata=0x7f63982b1040 
[2021/09/19 23:44:30] [debug] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_timeouts connection #-1 to firehose.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/09/19 23:44:30] [debug] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_timeouts cleaning up connection #-1 due to connect timeout
[2021/09/19 23:44:30] [trace] [upstream][conn=0x7f6398d72570][fd=-1] prepare_destroy_conn #-1 to firehose.us-east-1.amazonaws.com:443
[2021/09/19 23:44:32] [trace] [upstream][conn=0x7f6398d72570][fd=-1] flb_upstream_conn_pending_destroy connection #-1 net_error=110
[2021/09/19 23:44:32] [trace] [upstream][conn=0x7f6398d72570][fd=-1] destroy_conn connection #-1 tls_session=(nil). Memory for connection freed.
[2021/09/19 23:44:32] [trace] [scheduler] handling FLB_SCHED_TIMER_CB_ONESHOT event: event=0x7f63992af540, timer=0x7f63992af540, timer_type=3, timer_active=1, timer_fd=64, timer_data=0x7f63982b1040
[2021/09/19 23:44:32] [trace] [dns] flb_net_getaddrinfo_timeout_handler: coro=0x7f6398d70a00, lookup_ctx=0x7f63982b1040, lookup_ctx->ares_channel=0x7f639927b720, lookup_ctx->finished=0, lookup_ctx->dropped=0, lookup_ctx->response_event.fd=62, ares_socket_type=2, ares_socket_created=1, ares_socket_registered=1, lookup_ctx->udp_timer=0x7f63992af540
[2021/09/19 23:44:32] [trace] [dns] flb_net_getaddrinfo_callback: coro=0x7f6398d70a00, lookup_ctx=0x7f63982b1040, status=24, timeouts=0, res=(nil), lookup_ctx->ares_channel=0x7f639927b720, lookup_ctx->finished=1, lookup_ctx->dropped=0, lookup_ctx->response_event.fd=62, ares_socket_type=2, ares_socket_created=1, ares_socket_registered=1, lookup_ctx->udp_timeout_detected=0x7f6399997dc0
[2021/09/19 23:44:32] [trace] [dns] flb_dns_ares_close: coro=0x7f6398d70a00, sockfd=62, lookup_ctx=0x7f63982b1040, lookup_ctx->ares_channel=0x7f639927b720, lookup_ctx->finished=1, lookup_ctx->dropped=0, lookup_ctx->response_event.fd=62, ares_socket_created=1, ares_socket_registered=1, lookup_ctx->udp_timer=(nil)
[2021/09/19 23:44:34] [ warn] [net][conn=0x7f6398d72570] flb_net_tcp_connect async getaddrinfo(host='firehose.us-east-1.amazonaws.com', err=12): Timeout while contacting DNS servers
[2021/09/19 23:44:34] [debug] [io][conn=0x7f6398d72570][coro=0x7f6398d70a00] flb_io_net_connect tcp connection failed
[2021/09/19 23:44:34] [debug] [upstream][conn=0x7f6398d72570][fd=-1][coro=0x7f6398d70a00] create_conn connection #-1 to firehose.us-east-1.amazonaws.com:443 failed, cleaning up
[2021/09/19 23:44:34] [trace] [upstream][conn=0x7f6398d72570][fd=-1] prepare_destroy_conn #-1 to firehose.us-east-1.amazonaws.com:443
<SIGSEGV after this>

@leonardo-albertovich
Collaborator

In scenario 1 we don't really care about the socket or the shutdown call, nor do we need them, so that's fine. What's certainly terrible is having timers fire 30 seconds after they should: in this case you're using the default connection timeout of 10 seconds, which means the DNS lookup timeout should be 9 seconds, so a 30-second delay is awful. The event loop in flb_engine_start is initialized with 256 slots, so for events to be spread over multiple epoll invocations the result count would have to be huge, which makes no sense.

@krispraws
Contributor Author

krispraws commented Sep 21, 2021

I also considered adding some kind of delay before the connection context is actually destroyed (set a timestamp when moving a connection to the destroy queue and only destroy it N or more seconds later) - I think your flag approach achieves the same thing.
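A minimal sketch of that timestamp/grace-period idea; DESTROY_GRACE_SECONDS, the struct and the helper names are hypothetical stand-ins, not Fluent Bit code.

```c
#include <stdio.h>
#include <time.h>

#define DESTROY_GRACE_SECONDS 5

struct pending_conn_sketch {
    int fd;
    time_t queued_at;   /* set when moved to the pending-destroy queue */
};

static void queue_for_destroy_sketch(struct pending_conn_sketch *conn)
{
    conn->queued_at = time(NULL);
}

/* Called on every scheduler sweep of the pending-destroy queue. */
static int try_destroy_sketch(struct pending_conn_sketch *conn)
{
    if (time(NULL) - conn->queued_at < DESTROY_GRACE_SECONDS) {
        return 0;   /* too soon: a suspended coroutine may still hold it */
    }
    printf("freeing connection %d\n", conn->fd);
    return 1;
}

int main(void)
{
    struct pending_conn_sketch c = { .fd = 42 };
    queue_for_destroy_sketch(&c);
    printf("destroyed now? %d\n", try_destroy_sketch(&c));
    return 0;
}
```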

@leonardo-albertovich
Collaborator

Exactly, it achieves the same thing in a more controlled way. After all, we're seeing some very weird timing behavior there, so I'd rather not depend on it.

@krispraws
Contributor Author

I had a few questions about c-ares and the async DNS lookups as well. Creating an ares_channel for each connection seems expensive. I don't have much knowledge of or experience with c-ares, but it seems to be designed for a few channels with many queries on each. I could be wrong about that.

@leonardo-albertovich
Collaborator

It's true that using a channel per lookup is slightly more wasteful than desirable; at the moment the choice was made mostly to avoid making the system overly complex, especially when it comes to timeouts and the coroutine life cycle.
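For reference, the "few channels, many queries" usage looks roughly like this with the plain c-ares API: a standalone example using the documented ares_fds()/ares_process() select loop, not Fluent Bit's event-loop integration. The host names are placeholders.

```c
#include <stdio.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <ares.h>

/* One ares_channel shared by several lookups, driven by the classic
 * ares_fds()/ares_process() loop. */
static void on_host(void *arg, int status, int timeouts,
                    struct hostent *host)
{
    (void) timeouts;
    if (status != ARES_SUCCESS || host == NULL) {
        printf("%s: %s\n", (char *) arg, ares_strerror(status));
        return;
    }
    printf("%s resolved (h_name=%s)\n", (char *) arg, host->h_name);
}

int main(void)
{
    ares_channel channel;
    const char *hosts[] = { "example.com", "example.org" };
    size_t i;

    if (ares_library_init(ARES_LIB_INIT_ALL) != ARES_SUCCESS ||
        ares_init(&channel) != ARES_SUCCESS) {
        return 1;
    }

    /* Many queries on a single channel. */
    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++) {
        ares_gethostbyname(channel, hosts[i], AF_INET, on_host,
                           (void *) hosts[i]);
    }

    /* Drive the channel until all queries complete. */
    for (;;) {
        fd_set readers, writers;
        struct timeval tv, *tvp;
        int nfds;

        FD_ZERO(&readers);
        FD_ZERO(&writers);
        nfds = ares_fds(channel, &readers, &writers);
        if (nfds == 0) {
            break;   /* no pending queries left */
        }
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &readers, &writers, NULL, tvp);
        ares_process(channel, &readers, &writers);
    }

    ares_destroy(channel);
    ares_library_cleanup();
    return 0;
}
```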

@krispraws
Contributor Author

@leonardo-albertovich, @edsiper, I dug into why the event loop and DNS lookups were slow for my workflow, and I found something interesting. I suspected earlier that the change to use c-ares for async DNS was the cause. However, I ran a test with a code change to use the old getaddrinfo API and saw similar issues, where a majority of connections time out. I root-caused it to the change in base64 encoding introduced in mbedtls 2.26.0; mbedtls was upgraded from 2.24 in v1.8.0. Downgrading to 2.24 and running the same workload led to dramatically better performance. I'll create a separate issue with more details to track this and discuss what the right solution is.

@@ -1081,6 +1083,30 @@ flb_sockfd_t flb_net_tcp_connect(const char *host, unsigned long port,
if (is_async) {
ret = flb_net_getaddrinfo(host, _port, &hints, &res,
u_conn->u->net.dns_mode, connect_timeout);
/*
Contributor Author


Some context to explain this change better:

  1. The check for u_conn->net_error is intentionally unconditional and doesn't distinguish between the DNS lookup succeeding or failing.
  2. This is my understanding of the current code base (pseudo-code):
coroutine_1_flb_net_tcp_connect:
dns_lookup_async
----yield/resume_1---
fd = create_socket
conn->fd = fd
tcp_connect_async(fd)
----yield/resume_2---
http_proxy_connect
tls_connect_async
----yield/resume_3---
check tls_connect return val and return connection

coroutine_2_flb_upstream_conn_timeouts:
shutdown(fd) // if fd is not valid, this has no effect
conn->net_error = CONNECT_TIMEOUT
prepare_destroy_conn // delete event, close socket, move connection to pending destroy queue

coroutine_3_flb_upstream_conn_pending_destroy:
for c in pending_destroy_queue:
   destroy_conn(c) //free up connection memory

coroutine_2 and coroutine_3 can execute after any of the yield/resume points in coroutine_1.
3. I saw from logs that the DNS lookup timeout handler can execute very late when the plugin is under load. I want to avoid relying on it.
4. This specific change is for the following scenario that I saw in my logs for the bug:

coroutine_1 
yield/resume_1
coroutine_2
coroutine_3
coroutine_1

Since there is no file descriptor set in the connection object yet, coroutine_1 doesn't know that the connection timeout triggered in upstream. The DNS lookup can succeed or fail, but an error must be returned even if DNS succeeded; otherwise the connection would be returned as if it had connected successfully. See the sketch below.
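A simplified illustration of the check described above; the struct and helper names are stand-ins, not the actual flb_net_tcp_connect code.

```c
#include <stdio.h>
#include <errno.h>

/* Mimics only the relevant field of struct flb_upstream_conn. */
struct conn_sketch {
    int net_error;   /* set to ETIMEDOUT by the upstream timeout handler */
};

/* Stand-in for the async getaddrinfo call; returns 0 on success. */
static int getaddrinfo_sketch(void)
{
    return 0;   /* pretend the (late) DNS lookup actually succeeded */
}

static int tcp_connect_sketch(struct conn_sketch *u_conn)
{
    int ret = getaddrinfo_sketch();

    /* Check the timeout flag unconditionally: even if DNS succeeded,
     * the upstream timeout handler may have fired while the coroutine
     * was suspended, so the connection must not be reported as good. */
    if (u_conn->net_error == ETIMEDOUT) {
        printf("connect timed out while waiting for DNS, aborting\n");
        return -1;
    }
    return ret;
}

int main(void)
{
    struct conn_sketch c = { .net_error = ETIMEDOUT };
    return tcp_connect_sketch(&c) == -1 ? 0 : 1;
}
```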

u_conn->net_error = ETIMEDOUT;
prepare_destroy_conn(u_conn);
Contributor Author


I discussed this with @leonardo-albertovich on Slack. He thinks this change may have side effects. I'd like to understand under what scenario prepare_destroy_conn is needed here, i.e. a scenario where the cleanup wouldn't be handled by the code that is trying to create the connection.

@leonardo-albertovich
Collaborator

@krispraws I created this branch, as I told you yesterday: https://github.com/leonardo-albertovich/fluent-bit/tree/upstream_conn_busy_flag_addition

In that branch I incorporated your conditional shutdown change, the "in-between" timeout detection and the busy flag. I have been testing this using https://github.com/pwhelan/donotshout, playing around with MinJitter and MaxJitter and disabling both TruncatePercent and DropPercent.

This is what my config file looks like:

[SERVICE]
    Log_Level           info
    flush               1

[INPUT]
    name                dummy
    rate                1000

[INPUT]
    name                dummy
    rate                1000

[OUTPUT]
    workers             20
    match               *
    name                tcp
    host                vm1.vcap.me
    port                2021
    net.keepalive       false
    net.connect_timeout 3

[OUTPUT]
    workers             20
    match               *
    name                tcp
    host                vm1.vcap.me
    port                2021
    net.keepalive       false
    net.connect_timeout 3

[OUTPUT]
    workers             20
    match               *
    name                tcp
    host                vm1.vcap.me
    port                2021
    net.keepalive       false
    net.connect_timeout 3

This causes enough delays to naturally trigger the condition.

@krispraws
Contributor Author

@krispraws I created this branch, as I told you yesterday: https://github.com/leonardo-albertovich/fluent-bit/tree/upstream_conn_busy_flag_addition

@leonardo-albertovich - thanks. I was not able to look at it today but I will check it out and run it in my environment tomorrow.

I noticed another issue that may need additional changes related to connection cleanup. When the keepalive count for a connection expires, it is marked for destruction and the TCP socket is closed by calling flb_socket_close(fd), but the TLS session is not destroyed until the real destroy happens in destroy_conn(). I frequently see the old connection's file descriptor re-assigned almost immediately to a new connection, before the old connection's TLS session is destroyed. When the new connection initiates the TLS handshake, it always fails with SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:332, the session is destroyed, and the connection fails. If the file descriptor is then assigned to yet another new connection, that one seems to work.

I don't understand why the TCP socket is closed before calling SSL_shutdown(). Shouldn't the order be reversed? I don't see any way an SSL object can be re-used for a new TCP connection in the current code, so why is it not destroyed sooner? Could destroying the TLS session be moved to prepare_destroy_conn()?
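For reference, the conventional OpenSSL teardown order sends the close_notify alert while the socket is still open and only then closes the descriptor. This is a generic sketch of that ordering, not a copy of Fluent Bit's destroy_conn()/flb_tls code:

```c
#include <unistd.h>
#include <openssl/ssl.h>

/* Generic teardown order for an OpenSSL client connection: shut the TLS
 * session down while the socket is still open, then free the SSL object,
 * then close the file descriptor. */
void tls_conn_teardown(SSL *ssl, int fd)
{
    if (ssl != NULL) {
        /* Sends the close_notify alert over the still-open socket.
         * A non-blocking socket may need a second SSL_shutdown() call to
         * complete the bidirectional shutdown; omitted in this sketch. */
        SSL_shutdown(ssl);
        SSL_free(ssl);
    }
    if (fd != -1) {
        close(fd);
    }
}
```

The logs below show the wrong-version-number handshake failure observed when the descriptor is recycled before the old session is torn down.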

[2021/09/23 09:57:39] [debug] [upstream] KA connection #50 to firehose.us-east-1.amazonaws.com:443 is now available
[2021/09/23 09:57:39] [debug] [upstream] KA connection #50 to firehose.us-east-1.amazonaws.com:443 has been assigned (recycled)
[2021/09/23 09:57:39] [trace] [openssl] [tid=10170][fd=50] SSL_write start write
[2021/09/23 09:57:39] [trace] [openssl] [tid=10170][fd=50] SSL_write start write
[2021/09/23 09:57:41] [trace] [openssl] [tid=10170][fd=50] SSL_write start write
[2021/09/23 09:57:41] [trace] [openssl] [tid=10170][fd=50] SSL_read start read
[2021/09/23 09:57:41] [trace] [openssl] [tid=10170][fd=50] SSL_read start read
........
[2021/09/23 09:57:41] [trace] [openssl] [tid=10170][fd=50] SSL_read start read
[2021/09/23 09:57:41] [trace] [openssl] [tid=10170][fd=50] SSL_read start read
[2021/09/23 09:57:41] [debug] [http_client] server firehose.us-east-1.amazonaws.com:443 will close connection #50
[2021/09/23 09:57:41] [trace] [upstream] destroy connection #50 to firehose.us-east-1.amazonaws.com:443
[2021/09/23 09:57:42] [trace] [net] connection #50 in process to firehose.us-east-1.amazonaws.com:443
[2021/09/23 09:57:42] [trace] [openssl][tid=10170][fd=50] destroy_session
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL3 alert write:warning:close notify
[2021/09/23 09:57:42] [trace] [openssl][tid=10170][fd=50] create_session 
[2021/09/23 09:57:42] [trace] [tls] created TLS session for firehose.us-east-1.amazonaws.com:443, fd=50
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect start handshake
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: before SSL initialization
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write client hello
[2021/09/23 09:57:42] [error] [tls][tid=10170] connection #50 SSL_connect: error in error
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect SSL_get_error returned 1
[2021/09/23 09:57:42] [error] [openssl] [tid=10170][fd=50] SSL_connect non-retryable error: ret=-1, ssl_err=1
[2021/09/23 09:57:42] [error] [openssl] [tid=10170][fd=50] SSL_connect error queue entry=140028891535104:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:332:
[2021/09/23 09:57:42] [trace] [tls] net_handshake for firehose.us-east-1.amazonaws.com:443, fd=50 returned -1
[2021/09/23 09:57:42] [trace] [openssl][tid=10170][fd=50] destroy_session
[2021/09/23 09:57:42] [debug] [upstream] connection #50 failed to firehose.us-east-1.amazonaws.com:443
[2021/09/23 09:57:42] [trace] [upstream] destroy connection #50 to firehose.us-east-1.amazonaws.com:443
[2021/09/23 09:57:42] [trace] [net] connection #50 in process to firehose.us-east-1.amazonaws.com:443
[2021/09/23 09:57:42] [trace] [openssl][tid=10170][fd=50] create_session 
[2021/09/23 09:57:42] [trace] [tls] created TLS session for firehose.us-east-1.amazonaws.com:443, fd=50
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect start handshake
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: before SSL initialization
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write client hello
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 WANT_READ
[2021/09/23 09:57:42] [trace] [tls] net_handshake for firehose.us-east-1.amazonaws.com:443, fd=50 returned -2020
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect start handshake
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write client hello
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read server hello
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read server certificate
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read server key exchange
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read server done
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write client key exchange
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write change cipher spec
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write finished
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 WANT_READ
[2021/09/23 09:57:42] [trace] [tls] net_handshake for firehose.us-east-1.amazonaws.com:443, fd=50 returned -2020
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect start handshake
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS write finished
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read change cipher spec
[2021/09/23 09:57:42] [debug] [tls][tid=10170] connection #50 SSL_connect: SSLv3/TLS read finished
[2021/09/23 09:57:42] [trace] [openssl] [tid=10170][fd=50] SSL_connect connection and handshake OK

@krispraws
Contributor Author

Hey @leonardo-albertovich, I just went through your changes and they won't address multiple scenarios where the invalid memory access can happen. I can set up a repro in some time. If you want to use the flag approach, the flag should be set to TRUE before this line: https://github.com/fluent/fluent-bit/blob/master/src/flb_upstream.c#L523 and set to FALSE here: https://github.com/fluent/fluent-bit/blob/master/src/flb_upstream.c#L537
Otherwise setting/unsetting the flag will be spread all over the code.

@leonardo-albertovich
Collaborator


Actually, you are correct. I hadn't faced that issue, so I didn't even consider TLS; I was trying to keep it at the lowest level I knew about so that if someone ended up using that function directly instead of through one of the wrapper layers it would still be covered. I'll change it to the points you correctly marked and recreate the branch and PR so it's nice and tidy.

Collaborator

@leonardo-albertovich leonardo-albertovich left a comment


I think this PR should not be merged because the main issue was addressed by another PR that was created as a result of the discussion in this one.
However, the log line additions to net_connect_async and flb_tls_session_create are still relevant and I think making a new PR that isolates those changes would be the way to go.

@krispraws
Contributor Author

I think this PR should not be merged because the main issue was addressed by another PR that was created as a result of the discussion in this one.
However, the log line additions to net_connect_async and flb_tls_session_create are still relevant and I think making a new PR that isolates those changes would be the way to go.

I'll create a new PR tomorrow.

@JeffLuoo
Contributor

JeffLuoo commented Nov 4, 2021

any update on this issue?

@leonardo-albertovich
Collaborator

any update on this issue?

It was addressed in the related PRs I created, which have already been merged into master.
