[backport v1.1] pkg/sensors: reduce stack trace map memory footprint #2548

Merged 1 commit into v1.1 on Jun 13, 2024

Conversation

@mtardy (Member) commented Jun 13, 2024

Backport of upstream commit 22510d9

We came across a stack trace map with a max_entries of 32768. Each entry is a stack of 64-bit pointers, PERF_MAX_STACK_DEPTH (fixed at 127 for now) deep, so 127*64/8=1016 bytes, plus its key_size of 32 bits (4 bytes), for 1020 bytes per entry. So 1020 * 32768 = 33,423,360 bytes. From bpftool, this map has a total bytes_memlock of 34,079,040 bytes. So each stack trace map we loaded cost about 34 MB of kernel memory, and it was loaded many times: once for every tracing policy we loaded.

Since the map is used by the generic program, the loader allocates the memory needed for the map even when we don't create a reference from the agent side; it simply creates an anonymous map. So we now allocate a small map with max_entries 1 by default and resize it only when a tracing policy's matchAction actually sets kernelStackTrace or userStackTrace to true. This should drastically reduce the memory footprint of this feature when it's unused.

Reduce the kernel memory footprint (accounted by the cgroup memory controller) of the stack trace feature when unused.

@mtardy mtardy added kind/backport This PR provides functionality previously merged into master. release-note/bug This PR fixes an issue in a previous release of Tetragon. labels Jun 13, 2024
@mtardy mtardy requested a review from a team as a code owner June 13, 2024 09:02
@mtardy mtardy requested review from tpapagian and removed request for a team June 13, 2024 09:02
[ upstream commit 22510d9 ]

We came across a stack trace map with a max_entries of 32768. Each
entry is a stack of 64-bit pointers, PERF_MAX_STACK_DEPTH (fixed at
127 for now) deep, so 127*64/8=1016 bytes, plus its key_size of 32
bits (4 bytes), for 1020 bytes per entry. So 1020 * 32768 =
33,423,360 bytes. From bpftool, this map has a total bytes_memlock of
34,079,040 bytes. So each stack trace map we loaded cost about 34 MB
of kernel memory, and it was loaded many times: once for every
tracing policy we loaded.

Since the map is used by the generic program, the loader allocates
the memory needed for the map even when we don't create a reference
from the agent side; it simply creates an anonymous map. So we now
allocate a small map with max_entries 1 by default and resize it only
when a tracing policy's matchAction actually sets kernelStackTrace or
userStackTrace to true. This should drastically reduce the memory
footprint of this feature when it's unused.

Signed-off-by: Mahe Tardy <[email protected]>
@mtardy mtardy force-pushed the pr/mtardy/backport-v1.1-stacktrace-memory branch from 1ba7dac to c4e5412 Compare June 13, 2024 09:18
@mtardy mtardy merged commit 4fe3b44 into v1.1 Jun 13, 2024
40 checks passed
@mtardy mtardy deleted the pr/mtardy/backport-v1.1-stacktrace-memory branch June 13, 2024 13:19