
Kubernetes tail file crashes #6958

Closed
jensg-st opened this issue Mar 6, 2023 · 5 comments
Labels
waiting-for-release This has been fixed/merged but it's waiting to be included in a release.

Comments

@jensg-st

jensg-st commented Mar 6, 2023

Bug Report

Describe the bug

We have just updated fluent-bit from 2.0.8 to 2.0.9, and now the fluent-bit pod crashes with the attached logs. The exact same configuration works with 2.0.8, and the configuration is rather simple.

To Reproduce

logLevel: debug
config:
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*api*.log,/var/log/containers/*flow*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
  outputs: |
    [OUTPUT]
        Name stdout
  • Example log message if applicable:
[2023/03/06 08:56:36] [debug] [input:tail:tail.0] [static files] processed 0b, done
[2023/03/06 08:56:47] [engine] caught signal (SIGSEGV)
#0  0x7f07e2fcd319      in  ???() at ???:0
#1  0x7f07e2ed7f75      in  ???() at ???:0
#2  0x7f07e2ee99c5      in  ???() at ???:0
#3  0x559e77e486f2      in  flb_sds_printf() at src/flb_sds.c:429
#4  0x559e77ff0452      in  debug_event_mask() at plugins/in_tail/tail_fs_inotify.c:69
#5  0x559e77ff0924      in  tail_fs_event() at plugins/in_tail/tail_fs_inotify.c:199
#6  0x559e77e57d4a      in  flb_input_collector_fd() at src/flb_input.c:1882
#7  0x559e77e8a7aa      in  flb_engine_handle_event() at src/flb_engine.c:490
#8  0x559e77e8a7aa      in  flb_engine_start() at src/flb_engine.c:853
#9  0x559e77e31b24      in  flb_lib_worker() at src/flb_lib.c:629
#10 0x7f07e36b7ea6      in  ???() at ???:0
#11 0x7f07e2f6ba2e      in  ???() at ???:0
#12 0xffffffffffffffff  in  ???() at ???:0
  • Steps to reproduce the problem:

Your Environment

  • Version used: 2.0.9 / Helm chart 0.24.0
  • Environment name and version (e.g. Kubernetes? What version?): k3s 1.25
  • Server type and version: Kubernetes k3s
  • Operating System and version: 5.15.0-43-generic 20.04.1-Ubuntu SMP Thu Jul 14 15:20:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
@patrick-stephens
Contributor

@leonardo-albertovich I think you looked at a fix in this area? I'm sure I recall seeing a PR merged to master for it.

@jensg-st does it work when the log level is higher than debug?
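A quick way to test that is to change the level in the Helm values from the report above. This is only a sketch: it assumes the same chart layout as the reproduction config, where the chart's logLevel value ends up as the service-level log_level:

    # values.yaml -- same layout as the reproduction config above (assumed)
    logLevel: info    # anything less verbose than debug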

@leonardo-albertovich
Collaborator

Indeed, #6798 in master and #6913 in 2.0, both of them already merged.

@patrick-stephens
Contributor

Ah, in which case this should be fixed in the next release @jensg-st

@patrick-stephens patrick-stephens added waiting-for-release This has been fixed/merged but it's waiting to be included in a release. and removed status: waiting-for-triage labels Mar 6, 2023
@jensg-st
Author

jensg-st commented Mar 6, 2023

Thanks a lot. Looks like we can close this issue?

@patrick-stephens
Contributor

For unstable releases you can use these nightly updated containers (not recommended for production):

  • ghcr.io/fluent/fluent-bit/unstable:2.0
  • ghcr.io/fluent/fluent-bit/unstable:master
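To try one of those with the same chart, the override would look roughly like this. The image.repository / image.tag field names are an assumption based on the upstream fluent-bit Helm chart, so double-check them against your chart version:

    # values.yaml -- assumed field names, verify against your chart version
    image:
      repository: ghcr.io/fluent/fluent-bit/unstable
      tag: "2.0"    # or "master"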

For now the workaround is to set log_level info (or basically anything higher than debug).
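In classic config terms that is the [SERVICE] section, roughly:

    [SERVICE]
        # the backtrace points at debug_event_mask() in tail_fs_inotify.c,
        # which appears to be a debug-only code path, so any less verbose
        # level avoids it until the fix ships
        log_level info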
