
Segmentation fault when uploading logs to s3 #3823

Closed

lucastt opened this issue Jul 21, 2021 · 7 comments

lucastt commented Jul 21, 2021

Bug Report

Describe the bug

I run Fluent Bit as a DaemonSet in a Kubernetes cluster and use the S3 output plugin to send logs to S3. Every now and then one of the Fluent Bit pods restarts due to a SIGSEGV after failing to upload a log file to the S3 bucket. It does not happen with all pods, and I'm not really sure why it is happening.
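For context, input-kubernetes.conf uses the standard tail-on-/var/log/containers pattern for a DaemonSet. The sketch below is a generic version consistent with the kube-apps.* tags in the logs, not the exact file (which I've omitted); the parser name and buffer values in particular are placeholders:

[INPUT]
    Name              tail
    Tag               kube-apps.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    Refresh_Interval  10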

[2021/07/21 20:52:37] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4183354 watch_fd=107 name=/var/lib/docker/containers/9f6556518167288f11595db2bd3aebeef147b35492475abae1928d32cc855fcc/9f6556518167288f11595db2bd3aebeef147b35492475abae1928d32cc855fcc-json.log.1
[2021/07/21 20:52:37] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4183553 watch_fd=108 name=/var/log/containers/login-8b5469c5c-bftpc_prod_login-9f6556518167288f11595db2bd3aebeef147b35492475abae1928d32cc855fcc.log
[2021/07/21 20:52:44] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/42/n3L0C6Rz.gz
[2021/07/21 20:52:44] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/42/S9uUy3CM.gz
[2021/07/21 20:52:44] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4183354 watch_fd=107
[2021/07/21 20:53:42] [ info] [input:tail:tail.0] inode=4182696 handle rotation(): /var/log/containers/api-mobile-ro-564bbbf8f7-qxbvc_prod_api-mobile-ro-50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce.log => /var/lib/docker/containers/50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce/50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce-json.log.1
[2021/07/21 20:53:42] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182696 watch_fd=97
[2021/07/21 20:53:42] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4182696 watch_fd=109 name=/var/lib/docker/containers/50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce/50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce-json.log.1
[2021/07/21 20:53:42] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4182676 watch_fd=110 name=/var/log/containers/api-mobile-ro-564bbbf8f7-qxbvc_prod_api-mobile-ro-50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce.log
[2021/07/21 20:53:44] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/42/kGOlU05z.gz
[2021/07/21 20:53:49] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182696 watch_fd=109
[2021/07/21 20:55:34] [ warn] [engine] failed to flush chunk '1-1626900925.790141043.flb', retry in 9 seconds: task_id=6, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:34] [ warn] [engine] failed to flush chunk '1-1626900931.400691868.flb', retry in 7 seconds: task_id=17, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900924.702792210.flb', retry in 10 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900926.871787800.flb', retry in 9 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900924.661554381.flb', retry in 10 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900924.664490596.flb', retry in 6 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900929.745265037.flb', retry in 6 seconds: task_id=14, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900926.557089830.flb', retry in 6 seconds: task_id=10, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900926.112047034.flb', retry in 8 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:35] [ warn] [engine] failed to flush chunk '1-1626900924.665241524.flb', retry in 10 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2021/07/21 20:55:36] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4183552 watch_fd=104
[2021/07/21 20:55:36] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182847 watch_fd=102
[2021/07/21 20:55:36] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182790 watch_fd=103
[2021/07/21 20:55:36] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182772 watch_fd=100
[2021/07/21 20:55:36] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182755 watch_fd=101
[2021/07/21 20:56:54] [ info] [input:tail:tail.0] inode=4182865 handle rotation(): /var/log/containers/mini-main-6d65d7f499-xtv7s_prod_mini-main-76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588.log => /var/lib/docker/containers/76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588/76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588-json.log.1
[2021/07/21 20:56:54] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182865 watch_fd=38
[2021/07/21 20:56:54] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4182865 watch_fd=111 name=/var/lib/docker/containers/76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588/76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588-json.log.1
[2021/07/21 20:56:54] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4182703 watch_fd=112 name=/var/log/containers/mini-main-6d65d7f499-xtv7s_prod_mini-main-76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.imoveis-56f6cf86fc-mvw2k_prod_imoveis-d68d8b57fdbeb5c6295750f8b60036fdd90308d0981e06f863300f8c10ff1091.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.api-mobile-ro-564bbbf8f7-qxbvc_prod_api-mobile-ro-50468aaf5c32903a8ee7ad10acc23c32b8560e034ca42fa1753daf836f7e4bce.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.markito-68984697c7-4pvc2_prod_markito-58cffa80b7c65478ac4f643ec6665f7e6d0de00ea78617eb1cb37c68a57139bb.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.sauron-worker-75d9c5fb4c-ztzww_prod_sauron-worker-168795c577199c8771a2cdf7e399d2bc7e7c313fde93999c1540b5d85d4b391b.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.login-8b5469c5c-bftpc_prod_login-9f6556518167288f11595db2bd3aebeef147b35492475abae1928d32cc855fcc.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.instana-agent-mz6bc_monitoring_instana-agent-leader-elector-9cd738370d204aaa980082a4dace1d226d29a5b309e28216d2470bf1be1f6a09.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.quinto-messenger-sqs-worker-85b67c49dc-jd9cg_prod_quinto-messenger-sqs-worker-4ef5947f2163586ee661d09669035df9257cae304432282ee16afaff49a3c03a.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.bigfone-5ffcfc7d5f-4bg4k_prod_bigfone-bb8c9b0cdc7b97c0d8d7add15c4e5d05006e21b2e2ff537c7f989a54d273b621.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.kube2iam-ks2rb_kube-system_kube2iam-864851be6bd395d30269c9f7ac275ea3686f3788229ace78e2310c9e652429c4.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.mini-main-6d65d7f499-xtv7s_prod_mini-main-76182b9a168559b2a4a8b4cac2bb9382b519227ed76d72862e0a0f2b7840c588.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.linhadireta-5cc8456857-rkdvc_prod_linhadireta-6c4c6dd4862d6684a66049452e877999ecf36ebc640546cbb052587f2eba00a6.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.owner-fees-api-7f89d5d974-848l2_prod_owner-fees-api-aa68696e35987ae4f5542c58d184876e970a34368e6a2f30164c9f0e4472007e.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.spot-termination-exporter-glrn2_monitoring_spot-termination-exporter-837e7d438ff0d566def6da8805d1d2519389c0fcc5d68a5c9d98e2445e82331c.log
[2021/07/21 20:56:54] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.fluent-bit-nsb8p_kube-system_fluent-bit-25b4b66018575ff81c3d51bea281c05861507de5c45814789ac599f1da6a8d4a.log
[2021/07/21 20:56:55] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/DbWAPxzU.gz
[2021/07/21 20:56:55] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/sslCzMIe.gz
[2021/07/21 20:56:56] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/YTnZ9S38.gz
[2021/07/21 20:56:56] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/F7DazpTS.gz
[2021/07/21 20:56:56] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/95226I0P.gz
[2021/07/21 20:56:56] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/zHlIfdY7.gz
[2021/07/21 20:56:57] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/V3t0rufr.gz
[2021/07/21 20:56:57] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/BkZq3wv4.gz
[2021/07/21 20:56:57] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/WLqgX96Y.gz
[2021/07/21 20:56:57] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/GkcAXgoF.gz
[2021/07/21 20:56:58] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/lotxBaAn.gz
[2021/07/21 20:56:59] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/Sgq3pDm3.gz
[2021/07/21 20:56:59] [ info] [output:s3:s3.2] Successfully uploaded object /kube/k8s-prod-02/2021/07/21/20/46/icgUM6gK.gz
[2021/07/21 20:56:59] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=4182865 watch_fd=111
[2021/07/21 20:57:04] [error] [upstream] connection #-1 to s3.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/07/21 20:57:04] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.quinto-messenger-kafka-worker-55dc6cbc77-cpz6j_prod_quinto-messenger-kafka-worker-bcaa6d3c6734f973a1bc172e475d3ed6c8c563969cf0f1e4598f7db13fb5f3df.log
[2021/07/21 20:57:04] [error] [src/flb_http_client.c:1163 errno=32] Broken pipe
[2021/07/21 20:57:04] [error] [output:s3:s3.2] PutObject request failed
[2021/07/21 20:57:04] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.bigfone-twilio-worker-7fdd449768-csggn_prod_bigfone-twilio-worker-6299ad4941d4e6a9240b97ac04a3db835462b920aa4785acd552e4bb96b03774.log
[2021/07/21 20:57:04] [error] [src/flb_http_client.c:1163 errno=32] Broken pipe
[2021/07/21 20:57:04] [error] [output:s3:s3.2] PutObject request failed
[2021/07/21 20:57:04] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.calico-node-2ww84_kube-system_calico-node-fb3ab7bda10201e51e47872871ba1b89bb10519b22f67683411e09db1347aa00.log
[2021/07/21 20:57:04] [error] [output:s3:s3.2] PutObject: Could not parse response
[2021/07/21 20:57:04] [error] [output:s3:s3.2] Raw PutObject response: 
[2021/07/21 20:57:04] [error] [output:s3:s3.2] PutObject request failed
[2021/07/21 20:57:04] [ warn] [engine] failed to flush chunk '1-1626901015.497206755.flb', retry in 11 seconds: task_id=5, input=tail.0 > output=s3.2 (out_id=2)
[2021/07/21 20:57:04] [ warn] [engine] failed to flush chunk '1-1626901017.177277853.flb', retry in 8 seconds: task_id=13, input=tail.0 > output=s3.2 (out_id=2)
[2021/07/21 20:57:04] [ warn] [engine] failed to flush chunk '1-1626901017.654403234.flb', retry in 8 seconds: task_id=15, input=tail.0 > output=s3.2 (out_id=2)
[2021/07/21 20:57:05] [  Error] epoll_ctl: Invalid argument, errno=22 at /tmp/fluent-bit/lib/monkey/mk_core/mk_event_epoll.c:136
[2021/07/21 20:57:05] [error] [net] socket #74 could not connect to s3.us-east-1.amazonaws.com:443
[2021/07/21 20:57:05] [engine] caught signal (SIGSEGV)
#0  0x5614f349143d      in  __mk_list_del() at lib/monkey/include/monkey/mk_core/mk_list.h:88
#1  0x5614f3491468      in  mk_list_del() at lib/monkey/include/monkey/mk_core/mk_list.h:93
#2  0x5614f3491f1a      in  prepare_destroy_conn() at src/flb_upstream.c:390
#3  0x5614f3491f7c      in  prepare_destroy_conn_safe() at src/flb_upstream.c:412
#4  0x5614f3492252      in  create_conn() at src/flb_upstream.c:501
#5  0x5614f34926b4      in  flb_upstream_conn_get() at src/flb_upstream.c:640
#6  0x5614f356a32a      in  request_do() at src/aws/flb_aws_util.c:284
#7  0x5614f3569f3d      in  flb_aws_client_request() at src/aws/flb_aws_util.c:160
#8  0x5614f353ae0c      in  s3_put_object() at plugins/out_s3/s3.c:1128
#9  0x5614f3539dc2      in  upload_data() at plugins/out_s3/s3.c:811
#10 0x5614f353c35f      in  cb_s3_flush() at plugins/out_s3/s3.c:1477
#11 0x5614f347c800      in  output_pre_cb_flush() at include/fluent-bit/flb_output.h:470
#12 0x5614f392f706      in  co_init() at lib/monkey/deps/flb_libco/amd64.c:117
#13 0xffffffffffffffff  in  ???() at ???:0
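For what it's worth, the backtrace ends in __mk_list_del(), i.e. unlinking a node from a doubly linked list whose neighbour pointers are no longer valid (for example a connection that failed before it was ever enqueued, or one that was already unlinked). The sketch below only illustrates that failure pattern with simplified, made-up types; it is not Fluent Bit's actual flb_upstream code:

#include <stdio.h>
#include <stdlib.h>

/* Simplified doubly linked list node, loosely modelled on mk_list.
 * Illustration only: these are not Fluent Bit's real structures. */
struct list_node {
    struct list_node *prev;
    struct list_node *next;
};

/* Unlink a node by rewiring its neighbours, like __mk_list_del() does.
 * If prev/next were never set (or were already cleared), this
 * dereferences an invalid pointer and the process gets SIGSEGV. */
static void list_del(struct list_node *node)
{
    node->prev->next = node->next;
    node->next->prev = node->prev;
    node->prev = NULL;
    node->next = NULL;
}

/* Hypothetical connection object carrying a list node, standing in
 * for an upstream connection that failed during setup. */
struct fake_conn {
    struct list_node link;
    int fd;
};

int main(void)
{
    /* A connection whose setup failed before it was enqueued anywhere:
     * calloc() leaves link.prev and link.next as NULL. */
    struct fake_conn *conn = calloc(1, sizeof(*conn));
    conn->fd = -1;   /* mirrors the "connection #-1" lines in the log */

    /* Defensive cleanup: only unlink when the node is actually linked.
     * Calling list_del(&conn->link) unconditionally here would crash,
     * which is the shape of failure the backtrace points at. */
    if (conn->link.prev && conn->link.next) {
        list_del(&conn->link);
    }
    else {
        printf("connection was never enqueued; skipping unlink\n");
    }

    free(conn);
    return 0;
}

Whether the real bug is exactly this (a never-linked node versus a double unlink) would need the flb_upstream.c code for the affected versions; the sketch only shows why unlinking with invalid neighbour pointers faults.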

Expected behavior

Fluent Bit should handle this kind of failure without a segmentation fault.

Your Environment

  • Version used: 1.7.0
  • Configuration:
    [SERVICE]
        Flush         10
        Config_Watch  On
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf
    @INCLUDE output-s3.conf
  • Environment name and version (e.g. Kubernetes? What version?): Kubernetes 1.17.17
  • Filters and plugins:
    [OUTPUT]
        Name                         s3
        Match                        kube*
        bucket                       ${S3_BUCKET}
        region                       us-east-1
        total_file_size              50M
        use_put_object               On
        send_content_md5             On
        s3_key_format                /%Y/%m/%d/%H/%M/$UUID.gz

Additional context

I also have another output to Elasticsearch, and some logs are routed to both outputs. I didn't post it or the filters because I don't think they are related to the problem, but if you think it would help I can provide them.

atheriel (Contributor) commented

I believe there were some connection-related improvements in the 1.7.x series that could have fixed this; it's probably worth upgrading to 1.7.9 or the 1.8.x releases.

lucastt (Author) commented Aug 4, 2021


Sorry, @atheriel, for my delayed response; I've had some problems lately and couldn't come back to this issue. I tested with versions 1.8.2 and 1.8.3 and had different problems, so I'd like to avoid updating to 1.8.x. I'll try 1.7.9 and see what happens. Thanks!!

lucastt (Author) commented Aug 4, 2021


Sadly, it looks like the problem keeps happening:

[2021/08/04 19:36:37] [ info] [output:s3:s3.2] upload_timeout reached for kube-apps.var.log.containers.nodejs-example-keepalive-5766ff5c6-nodejs-example-keepalive-8ed71ee0cc4f4a8d760cb1ee70fd04e163427df82b181204f3466711e4c49f0d.log
[2021/08/04 19:36:37] [error] [output:s3:s3.2] PutObject: Could not parse response
[2021/08/04 19:36:37] [error] [output:s3:s3.2] Raw PutObject response: 
[2021/08/04 19:36:37] [error] [output:s3:s3.2] PutObject request failed
[2021/08/04 19:36:37] [ warn] [engine] failed to flush chunk '1-1628105787.931152737.flb', retry in 6 seconds: task_id=0, input=tail.0 > output=s3.2 (out_id=2)
[2021/08/04 19:36:37] [ warn] [engine] failed to flush chunk '1-1628105788.788960349.flb', retry in 8 seconds: task_id=9, input=tail.0 > output=s3.2 (out_id=2)
[2021/08/04 19:36:37] [error] [upstream] connection #-1 to s3.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/08/04 19:36:37] [error] [upstream] connection #-1 to s3.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/08/04 19:36:37] [error] [upstream] connection #-1 to s3.us-east-1.amazonaws.com:443 timed out after 10 seconds
[2021/08/04 19:36:37] [  Error] epoll_ctl: Bad file descriptor, errno=9 at /tmp/fluent-bit/lib/monkey/mk_core/mk_event_epoll.c:136
[2021/08/04 19:36:37] [engine] caught signal (SIGSEGV)
#0  0x56357e00ae1b      in  prepare_destroy_conn_safe() at src/flb_upstream.c:408
#1  0x56357e00b114      in  create_conn() at src/flb_upstream.c:501
#2  0x56357e00b576      in  flb_upstream_conn_get() at src/flb_upstream.c:640
#3  0x56357e0e2c75      in  request_do() at src/aws/flb_aws_util.c:284
#4  0x56357e0e2888      in  flb_aws_client_request() at src/aws/flb_aws_util.c:160
#5  0x56357e0b3757      in  s3_put_object() at plugins/out_s3/s3.c:1128
#6  0x56357e0b270d      in  upload_data() at plugins/out_s3/s3.c:811
#7  0x56357e0b4caa      in  cb_s3_flush() at plugins/out_s3/s3.c:1477
#8  0x56357dff5800      in  output_pre_cb_flush() at include/fluent-bit/flb_output.h:470
#9  0x56357e4a8046      in  co_init() at lib/monkey/deps/flb_libco/amd64.c:117

It does not happen a lot, but every now and then one of the pods restarts due to this problem, which is a bit annoying.

lucastt (Author) commented Aug 4, 2021


This last sample happened after updating to 1.7.9.

lucastt (Author) commented Aug 4, 2021


I managed to test with 1.8.3 and it happened again; the trace is a bit different though:

[2021/08/04 20:59:00] [error] [upstream] connection #-1 to vpc-logs-stag-abxpy5evamyo4bjqceb3i3p7ee.us-east-1.es.amazonaws.com:443 timed out after 10 seconds
[2021/08/04 20:59:04] [engine] caught signal (SIGSEGV)
#0  0x55cb05668564      in  mk_event_add() at lib/monkey/mk_core/mk_event.c:96
#1  0x55cb05187f22      in  net_connect_async() at src/flb_network.c:369
#2  0x55cb05188bf2      in  flb_net_tcp_connect() at src/flb_network.c:832
#3  0x55cb051ae254      in  flb_io_net_connect() at src/flb_io.c:89
#4  0x55cb05193eb1      in  create_conn() at src/flb_upstream.c:497
#5  0x55cb051941a0      in  flb_upstream_conn_get() at src/flb_upstream.c:586
#6  0x55cb0520714b      in  cb_es_flush() at plugins/out_es/es.c:766
#7  0x55cb0517e0de      in  output_pre_cb_flush() at include/fluent-bit/flb_output.h:490
#8  0x55cb0566a9a6      in  co_init() at lib/monkey/deps/flb_libco/amd64.c:117
#9  0xffffffffffffffff  in  ???() at ???:

Sometimes this connection #-1 error log repeats many times before the segmentation fault. I noticed it happens faster with 1.8.3 too. I think the error on 1.7.9 is different because of the Bad file descriptor... message.

I also checked this other issue #2507 and used the config:

[OUTPUT]
    name           something
    match          *
    net.keepalive  false

It did not seem to change the behavior.
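For reference, applied to the S3 output from this report the workaround looks roughly like the block below. net.connect_timeout is an extra assumption on my side (Fluent Bit's default is 10 seconds, which matches the "timed out after 10 seconds" messages); neither option addresses the crash itself, they only change the connection handling around it:

[OUTPUT]
    Name                 s3
    Match                kube*
    bucket               ${S3_BUCKET}
    region               us-east-1
    total_file_size      50M
    use_put_object       On
    send_content_md5     On
    s3_key_format        /%Y/%m/%d/%H/%M/$UUID.gz
    net.keepalive        false
    net.connect_timeout  30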

github-actions bot commented Sep 5, 2021


This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the Stale label Sep 5, 2021

github-actions bot commented:

This issue was closed because it has been stalled for 5 days with no activity.
