Filter plugin memory leak with latest master build #3197

Closed
JeffLuoo opened this issue Mar 9, 2021 · 4 comments

JeffLuoo (Contributor) commented Mar 9, 2021

Is your feature request related to a problem? Please describe.

A memory leak is detected when Fluent Bit is run under valgrind with the following configuration:

[SERVICE]
    Flush                      1
    Log_Level                  debug
    Parsers_File               parsers.conf

[INPUT]
    Name              dummy
    Tag               k8s_container.test1.test2.test3
    dummy             {"key": 1, "sub": {"stream": "stdout", "id": "some id"}, "kubernetes": {"labels": {"team": "Santiago Wanderers"}}}

[FILTER]
    Name          record_modifier
    Match_Regex   *
    Record        test test

[OUTPUT]
    Name  stdout
    Match *
[email protected] ~/fluent-bit/build% sudo valgrind --leak-check=full ./bin/fluent-bit -c ../conf/in_dummy.conf
==2478901== Memcheck, a memory error detector
==2478901== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==2478901== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==2478901== Command: ./bin/fluent-bit -c ../conf/in_dummy.conf
==2478901== 
Fluent Bit v1.8.0
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/03/09 16:59:02] [ info] Configuration:
[2021/03/09 16:59:02] [ info]  flush time     | 1.000000 seconds
[2021/03/09 16:59:02] [ info]  grace          | 5 seconds
[2021/03/09 16:59:02] [ info]  daemon         | 0
[2021/03/09 16:59:02] [ info] ___________
[2021/03/09 16:59:02] [ info]  inputs:
[2021/03/09 16:59:02] [ info]      dummy
[2021/03/09 16:59:02] [ info] ___________
[2021/03/09 16:59:02] [ info]  filters:
[2021/03/09 16:59:02] [ info]      record_modifier.0
[2021/03/09 16:59:02] [ info] ___________
[2021/03/09 16:59:02] [ info]  outputs:
[2021/03/09 16:59:02] [ info]      stdout.0
[2021/03/09 16:59:02] [ info] ___________
[2021/03/09 16:59:02] [ info]  collectors:
[2021/03/09 16:59:02] [ info] [engine] started (pid=2478901)
[2021/03/09 16:59:02] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2021/03/09 16:59:02] [debug] [storage] [cio stream] new stream registered: dummy.0
[2021/03/09 16:59:02] [ info] [storage] version=1.1.1, initializing...
[2021/03/09 16:59:02] [ info] [storage] in-memory
[2021/03/09 16:59:02] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/03/09 16:59:02] [ warn] [filter] NO match rule for record_modifier.0 filter instance, unloading.
[2021/03/09 16:59:02] [debug] [stdout:stdout.0] created event channels: read=18 write=19
[2021/03/09 16:59:02] [debug] [router] match rule dummy.0:stdout.0
[2021/03/09 16:59:02] [ info] [sp] stream processor started
[2021/03/09 16:59:04] [debug] [task] created task=0x5316d10 id=0 OK
[0] k8s_container.test1.test2.test3: [1615309143.704854634, {"key"=>1, "sub"=>{"stream"=>"stdout", "id"=>"some id"}, "kubernetes"=>{"labels"=>{"team"=>"Santiago Wanderers"}}}]
[2021/03/09 16:59:04] [debug] [out coro] cb_destroy coro_id=0
[2021/03/09 16:59:04] [debug] [task] destroy task=0x5316d10 (task_id=0)
[0] k8s_container.test1.test2.test3: [1615309144.726497477, {"key"=>1, "sub"=>{"stream"=>"stdout", "id"=>"some id"}, "kubernetes"=>{"labels"=>{"team"=>"Santiago Wanderers"}}}]
[2021/03/09 16:59:05] [debug] [task] created task=0x5369ce0 id=0 OK
[2021/03/09 16:59:05] [debug] [out coro] cb_destroy coro_id=1
[2021/03/09 16:59:05] [debug] [task] destroy task=0x5369ce0 (task_id=0)
[0] k8s_container.test1.test2.test3: [1615309145.691450227, {"key"=>1, "sub"=>{"stream"=>"stdout", "id"=>"some id"}, "kubernetes"=>{"labels"=>{"team"=>"Santiago Wanderers"}}}]
[2021/03/09 16:59:06] [debug] [task] created task=0x53bc7e0 id=0 OK
[2021/03/09 16:59:06] [debug] [out coro] cb_destroy coro_id=2
[2021/03/09 16:59:06] [debug] [task] destroy task=0x53bc7e0 (task_id=0)
[0] k8s_container.test1.test2.test3: [1615309146.691257140, {"key"=>1, "sub"=>{"stream"=>"stdout", "id"=>"some id"}, "kubernetes"=>{"labels"=>{"team"=>"Santiago Wanderers"}}}]
[2021/03/09 16:59:07] [debug] [task] created task=0x540f2e0 id=0 OK
[2021/03/09 16:59:07] [debug] [out coro] cb_destroy coro_id=3
[2021/03/09 16:59:07] [debug] [task] destroy task=0x540f2e0 (task_id=0)
^C[2021/03/09 16:59:08] [engine] caught signal (SIGINT)
[0] k8s_container.test1.test2.test3: [1615309147.691189030, {"key"=>1, "sub"=>{"stream"=>"stdout", "id"=>"some id"}, "kubernetes"=>{"labels"=>{"team"=>"Santiago Wanderers"}}}]
[2021/03/09 16:59:08] [debug] [task] created task=0x5461e30 id=0 OK
[2021/03/09 16:59:08] [ warn] [engine] service will stop in 5 seconds
[2021/03/09 16:59:08] [debug] [out coro] cb_destroy coro_id=4
[2021/03/09 16:59:08] [debug] [task] destroy task=0x5461e30 (task_id=0)
[2021/03/09 16:59:08] [debug] [input chunk] dummy.0 is paused, cannot append records
[2021/03/09 16:59:09] [debug] [input chunk] dummy.0 is paused, cannot append records
[2021/03/09 16:59:10] [debug] [input chunk] dummy.0 is paused, cannot append records
[2021/03/09 16:59:11] [debug] [input chunk] dummy.0 is paused, cannot append records
[2021/03/09 16:59:12] [ info] [engine] service stopped
==2478901== 
==2478901== HEAP SUMMARY:
==2478901==     in use at exit: 81 bytes in 3 blocks
==2478901==   total heap usage: 2,553 allocs, 2,550 frees, 2,095,806 bytes allocated
==2478901== 
==2478901== 81 (32 direct, 49 indirect) bytes in 1 blocks are definitely lost in loss record 3 of 3
==2478901==    at 0x483AB65: calloc (vg_replace_malloc.c:760)
==2478901==    by 0x19B4FE: flb_calloc (flb_mem.h:78)
==2478901==    by 0x19B580: flb_kv_item_create_len (flb_kv.c:37)
==2478901==    by 0x19B6C9: flb_kv_item_create (flb_kv.c:77)
==2478901==    by 0x16DA88: flb_filter_set_property (flb_filter.c:232)
==2478901==    by 0x1559B4: flb_service_conf (fluent-bit.c:830)
==2478901==    by 0x1560BB: flb_main (fluent-bit.c:1097)
==2478901==    by 0x15626A: main (fluent-bit.c:1170)
==2478901== 
==2478901== LEAK SUMMARY:
==2478901==    definitely lost: 32 bytes in 1 blocks
==2478901==    indirectly lost: 49 bytes in 2 blocks
==2478901==      possibly lost: 0 bytes in 0 blocks
==2478901==    still reachable: 0 bytes in 0 blocks
==2478901==         suppressed: 0 bytes in 0 blocks
==2478901== 
==2478901== For lists of detected and suppressed errors, rerun with: -s
==2478901== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
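Reading the report: the 32 "definitely lost" bytes are the key/value item calloc'd for the filter's Record property (flb_kv_item_create called from flb_filter_set_property), and the 49 "indirectly lost" bytes are the key and value strings reachable only through that item. A minimal standalone C sketch of the same pattern, with illustrative names only (this is not the actual fluent-bit code):

/* leak_demo.c -- reproduces the shape of the report above.
 * Build: gcc -g leak_demo.c -o leak_demo
 * Run:   valgrind --leak-check=full ./leak_demo */
#include <stdlib.h>
#include <string.h>

struct kv {
    char *key;   /* strdup'd copy: reported as "indirectly lost" */
    char *val;   /* strdup'd copy: reported as "indirectly lost" */
};

int main(void)
{
    /* Like flb_kv_item_create(): calloc the item, copy key and value. */
    struct kv *item = calloc(1, sizeof(*item));
    if (item == NULL) {
        return 1;
    }
    item->key = strdup("Record");
    item->val = strdup("test test");

    /* On the failed-init path nothing ever frees the item; dropping
     * the only reference makes the calloc'd block "definitely lost". */
    item = NULL;

    return 0;
}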


@JeffLuoo JeffLuoo changed the title Memory Leak: latest master build Filter plugin memory leak with latest master build Mar 9, 2021
nokute78 (Collaborator) commented

I can also reproduce it.

Note: the Match_Regex property seems to be the trigger; the Record test test property is not released when Match_Regex is used.

In contrast, this configuration does not leak (s/Match_Regex/Match/):

[SERVICE]
    Flush                      1
    Log_Level                  debug
    Parsers_File               parsers.conf

[INPUT]
    Name              dummy
    Tag               k8s_container.test1.test2.test3
    dummy             {"key": 1, "sub": {"stream": "stdout", "id": "some id"}, "kubernetes": {"labels": {"team": "Santiago Wanderers"}}}

[FILTER]
    Name          record_modifier
    Match         *
#    Match_Regex   *
    Record        test test

[OUTPUT]
    Name  stdout
    Match *
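A side note on why the two properties behave differently: Match takes a glob-style pattern, where a bare * is valid, while Match_Regex expects a regular expression, and * on its own does not compile (there is nothing for the repeat operator to apply to). That is consistent with the "NO match rule for record_modifier.0 filter instance, unloading" warning in the log above: the instance is dropped before it ever runs, and the already-set Record property goes with it. A regex equivalent of the wildcard would look something like this (illustrative only):

[FILTER]
    Name          record_modifier
    Match_Regex   .*
    Record        test test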

JeffLuoo (Contributor, Author) commented

@nokute78 Thank you for the reply!

edsiper added a commit that referenced this issue Mar 15, 2021
@edsiper edsiper self-assigned this Mar 15, 2021
edsiper (Member) commented Mar 15, 2021

Thanks.

The leak occurred when the plugin failed to initialize due to an invalid match rule; just fixed in e312ef1.
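For reference, the general shape of such a fix, as a hedged self-contained sketch with hypothetical names (see e312ef1 for the actual change): when the match rule fails to compile and the instance is unloaded, the properties collected during config parsing have to be released together with the instance, not just the instance struct itself.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified filter instance; in fluent-bit the
 * properties are flb_kv items kept on a linked list. */
struct kv { char *key; char *val; };

struct filter_instance {
    struct kv *props;
    size_t     n_props;
    char      *match;   /* NULL when the match rule failed to compile */
};

/* Release everything the instance owns, including the properties --
 * the step the failing initialization path was missing. */
static void instance_destroy(struct filter_instance *ins)
{
    for (size_t i = 0; i < ins->n_props; i++) {
        free(ins->props[i].key);
        free(ins->props[i].val);
    }
    free(ins->props);
    free(ins->match);
    free(ins);
}

static int instance_init(struct filter_instance *ins)
{
    if (ins->match == NULL) {
        fprintf(stderr, "NO match rule, unloading\n");
        instance_destroy(ins);   /* free the properties too */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct filter_instance *ins = calloc(1, sizeof(*ins));
    if (ins == NULL) {
        return 1;
    }
    ins->props = calloc(1, sizeof(struct kv));
    if (ins->props == NULL) {
        free(ins);
        return 1;
    }
    ins->props[0].key = strdup("Record");
    ins->props[0].val = strdup("test test");
    ins->n_props = 1;
    /* ins->match stays NULL, as with an invalid Match_Regex pattern */
    instance_init(ins);   /* unloads cleanly; valgrind reports no leak */
    return 0;
}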

@edsiper edsiper closed this as completed Mar 15, 2021
JeffLuoo (Contributor, Author) commented

Thanks for the fix.

DrewZhang13 pushed a commit to DrewZhang13/fluent-bit that referenced this issue May 3, 2021