
Change the PID filter implementation to use bloom filter #851

Merged
merged 3 commits into grafana:main from new_pids_filter on May 22, 2024

Conversation

@grcevski (Contributor)

Our existing PID filtering for kprobes used an LRU hashmap, which can easily overflow with a large number of PIDs. This PR changes the underlying implementation to use a bloom filter, which favours false positives over the false negatives that collisions in the map would cause.

Essentially, we set a bit in a bit array for a given (namespace, pid) tuple. If this PID matches, we create an event. The userspace code in Go then consults its internal map to check whether the event is a false positive and, if so, filters it out.
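
To make the mechanism concrete, here is a minimal Go sketch of the idea, not the PR's actual code: the names (Filter, trackedPIDs, bitSlot), the single modulo hash, and the 3001-slot size are all assumptions for illustration.

package pidfilter

import "github.com/cilium/ebpf"

const maxConcurrentPIDs = 3001 // assumed to mirror the BPF-side MAX_CONCURRENT_PIDS

type nsPid struct{ ns, pid uint32 }

type Filter struct {
    trackedPIDs map[nsPid]struct{} // authoritative set of tracked PIDs
    pidsMap     *ebpf.Map          // BPF array holding the filter's bits
}

// bitSlot hashes a (namespace, pid) tuple to a byte index and bit mask in the
// bit array. Distinct tuples may collide on the same bit, which is exactly
// what produces the false positives described above.
func bitSlot(ns, pid uint32) (index uint32, mask uint8) {
    h := (uint64(ns)<<32 | uint64(pid)) % maxConcurrentPIDs
    return uint32(h / 8), uint8(1) << (h % 8)
}

// allow is the userspace half of the check: the BPF program already matched
// a set bit, so we only need to reject false positives by consulting the
// authoritative Go map.
func (f *Filter) allow(ns, pid uint32) bool {
    _, tracked := f.trackedPIDs[nsPid{ns, pid}]
    return tracked
}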

Adding and removing tracked processes is more expensive now, since we rebuild the PID bit array from scratch each time and update the BPF array.
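
Continuing the hypothetical types from the sketch above (and assuming the cilium/ebpf Map.Put API for the userspace update), the rebuild could look like this:

// rebuild recomputes the whole bit array from the authoritative PID set and
// rewrites it into the BPF array map; this full pass over all tracked PIDs
// is what makes add/remove more expensive than the old LRU hashmap update.
func (f *Filter) rebuild() error {
    bits := make([]uint8, (maxConcurrentPIDs+7)/8)
    for p := range f.trackedPIDs {
        idx, mask := bitSlot(p.ns, p.pid)
        bits[idx] |= mask
    }
    for i, b := range bits {
        if err := f.pidsMap.Put(uint32(i), b); err != nil {
            return err
        }
    }
    return nil
}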

@grcevski requested review from mariomac and marctc as code owners May 21, 2024 19:13
@codecov-commenter commented May 21, 2024

Codecov Report

Attention: Patch coverage is 96.42857% with 1 line in your changes missing coverage. Please review.

Project coverage is 76.16%. Comparing base (a22db74) to head (954dd5e).
Report is 3 commits behind head on main.

Files                                     Patch %   Lines
pkg/internal/ebpf/httpfltr/httpfltr.go    96.42%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #851      +/-   ##
==========================================
- Coverage   79.63%   76.16%   -3.48%     
==========================================
  Files         117      117              
  Lines        8050     8044       -6     
==========================================
- Hits         6411     6127     -284     
- Misses       1231     1496     +265     
- Partials      408      421      +13     
Flag                   Coverage Δ
integration-test       ?
k8s-integration-test   65.13% <96.42%> (-0.06%) ⬇️
oats-test              35.81% <89.28%> (+0.26%) ⬆️
unittests              46.07% <17.85%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.


@mariomac (Contributor) left a comment:

Amazing

@@ -6,15 +6,16 @@
 #include "bpf_core_read.h"
 #include "pid_types.h"

-#define MAX_CONCURRENT_PIDS 3000 // estimate: 1000 concurrent processes (including children) * 3 namespaces per pid
+#define MAX_CONCURRENT_PIDS 3001 // estimate: 1000 concurrent processes (including children) * 3 namespaces per pid
Contributor
In another PR, I'd make this value configurable by the user.

Here is an example about how we do this for the Network flows:

beyla/bpf/flows_common.h, lines 39 to 45 in a22db74:
// Key: the flow identifier. Value: the flow metrics for that identifier.
// The userspace will aggregate them into a single flow.
struct {
    __uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
    __type(key, flow_id);
    __type(value, flow_metrics);
} aggregated_flows SEC(".maps");

// Resize aggregated flows map according to user-provided configuration
spec.Maps[aggregatedFlowsMap].MaxEntries = uint32(cacheMaxSize)
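
Applied to this PR, a hypothetical equivalent could look like the line below (the map name and config field are invented for illustration); since the C code derives the hash slot from a compile-time #define, the BPF side would also need to receive the configured size, e.g. via a constant rewritten at load time:

// Hypothetical: resize the PID filter map from user-provided configuration
spec.Maps["valid_pids"].MaxEntries = uint32(cfg.MaxTrackedPIDs)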

@grcevski (Contributor, Author)
Cool!

@grcevski merged commit 6137faf into grafana:main May 22, 2024
6 checks passed
@grcevski deleted the new_pids_filter branch May 22, 2024 17:08