usm: map cleaner: Use batch operations #20907
Conversation
Force-pushed from 87ee980 to 404495b
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: a6707cbb-2a67-4215-943f-0452d734d53d

Explanation: A regression test is an integrated performance test. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control; we represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. A change in performance is flagged as a "regression", a change worth investigating further, only when both of the detector's criteria hold.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence, or that have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table is omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Using batch operations allows us to cut allocations, memory pressure, and overall runtime by 50%.
Force-pushed from 404495b to 5d65920
Where are the allocation savings coming from? You still have to accumulate all the keys you want to delete.
// we resort to unsafe.Pointers because by doing so the underlying eBPF
// library avoids marshaling the key/value variables while traversing the map
Please re-add the comment (not necessarily here) as this is still relevant. You're still passing an unsafe.Pointer to MapIterator.Next().
The reason why this was done originally was to avoid the marshalling cost as noted in the comment.
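For reference, a minimal sketch of the pattern this refers to, assuming the unsafe.Pointer fast path in the cilium/ebpf version in use; the map handle and key/value types are placeholders:

```go
package main

import (
	"log"
	"unsafe"

	cebpf "github.com/cilium/ebpf"
)

// iterateRaw walks an eBPF map while handing unsafe.Pointer arguments to
// MapIterator.Next. The library detects unsafe.Pointer and copies kernel data
// directly into the backing memory of key/val, skipping its reflection-based
// marshalling path (the cost the deleted comment was warning about).
func iterateRaw(m *cebpf.Map) error {
	var key uint32 // placeholder key type
	var val uint64 // placeholder value type

	it := m.Iterate()
	for it.Next(unsafe.Pointer(&key), unsafe.Pointer(&val)) {
		log.Printf("key=%d val=%d", key, val)
	}
	return it.Err()
}
```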
keyPtr: unsafe.Pointer(reflect.ValueOf(key).Elem().Addr().Pointer()),
valPtr: unsafe.Pointer(reflect.ValueOf(val).Elem().Addr().Pointer()),
it's really nice to see a bunch of hacks go away now that we have generics :)
Also, the use of generics introduces an improvement, as we no longer need marshalBytes!
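A small illustrative contrast of the two approaches (the function names are hypothetical, not the PR's actual code):

```go
package main

import (
	"reflect"
	"unsafe"
)

// oldStyle mirrors the pre-generics approach quoted above: key and val arrive
// as interface{} values, so raw pointers have to be recovered via reflection
// (and byte-level helpers such as marshalBytes were needed elsewhere).
func oldStyle(key, val interface{}) (unsafe.Pointer, unsafe.Pointer) {
	keyPtr := unsafe.Pointer(reflect.ValueOf(key).Elem().Addr().Pointer())
	valPtr := unsafe.Pointer(reflect.ValueOf(val).Elem().Addr().Pointer())
	return keyPtr, valPtr
}

// newStyle shows the generic equivalent: the cleaner declares concretely typed
// storage itself, so no reflection or manual marshalling is required.
func newStyle[K any, V any]() (*K, *V) {
	var key K
	var val V
	return &key, &val
}
```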
totalCount, deletedCount := 0, 0
var next K
var n int
for {
Do we really want to execute this in a loop? I think it's not that unlikely that in a real-world workload this could run forever, which would cause the MapCleaner to hang during termination, no? In this pathological scenario you would also supply a stale timestamp to the callback.
I'm not sure about running forever / adding a timed context here. Instead, we can "halt" once we've processed mc.emap.MaxEntries entries; what do you think about that? Alongside aborting the loop when we get 0 entries from the batch API, it will guarantee we won't be stuck forever.
sounds reasonable to me 👍
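A hedged sketch of what those guards could look like, assuming the cursor-based BatchLookup API of recent cilium/ebpf releases (older releases take prevKey/nextKeyOut arguments instead); the MapCleaner fields and method name here are illustrative, not the PR's exact code:

```go
package main

import (
	"errors"

	cebpf "github.com/cilium/ebpf"
)

// MapCleaner is a trimmed-down placeholder for the real type.
type MapCleaner[K any, V any] struct {
	emap      *cebpf.Map
	batchSize uint32
}

// cleanOnce sketches the two guards discussed above: it stops when the batch
// API reports the end of the map (zero entries / ErrKeyNotExist) and, as a
// backstop, once MaxEntries keys have been processed, so a map that is being
// refilled concurrently cannot keep the loop spinning forever.
func (mc *MapCleaner[K, V]) cleanOnce(shouldClean func(k K, v V) bool) error {
	keys := make([]K, mc.batchSize)
	values := make([]V, mc.batchSize)
	toDelete := make([]K, 0, mc.batchSize)

	var cursor cebpf.MapBatchCursor
	totalCount := 0
	maxEntries := int(mc.emap.MaxEntries())

	for {
		n, err := mc.emap.BatchLookup(&cursor, keys, values, nil)
		for i := 0; i < n; i++ {
			if shouldClean(keys[i], values[i]) {
				toDelete = append(toDelete, keys[i])
			}
		}
		totalCount += n

		// Guard 1: the batch API signals it has walked the whole map.
		if n == 0 || errors.Is(err, cebpf.ErrKeyNotExist) {
			break
		}
		if err != nil {
			return err
		}
		// Guard 2: never process more entries than the map can hold.
		if totalCount >= maxEntries {
			break
		}
	}

	if len(toDelete) == 0 {
		return nil
	}
	_, err := mc.emap.BatchDelete(toDelete, nil)
	return err
}
```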
// we force types to be of pointer kind because of the reasons mentioned above
if reflect.ValueOf(key).Kind() != reflect.Ptr {
	return nil, fmt.Errorf("%T is not a pointer kind", key)
func NewMapCleaner[K any, V any](emap *cebpf.Map, defaultBatchSize uint32) (*MapCleaner[K, V], error) {
nit/suggestion: let's try to be more strict with the type constraints here. For example, we don't want pointers, interfaces, etc., since these would cause issues with the code downstream.
I tried to find a way of doing that (excluding interface / pointer kinds), but couldn't find one. If you have a way, I'd be happy to learn. Having said that, we still have a guard: the unit tests.
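Since `K any, V any` cannot exclude pointer or interface kinds at compile time, one possible compromise (purely illustrative, not what the PR does) is a runtime reflect guard in addition to the unit-test coverage mentioned above:

```go
package main

import (
	"fmt"
	"reflect"
)

// checkKind rejects type parameters that cannot be used directly as eBPF map
// keys/values: pointers, interfaces, and other reference kinds. A constructor
// like NewMapCleaner could call checkKind[K]() and checkKind[V]() and fail
// fast instead of relying solely on unit tests.
func checkKind[T any]() error {
	t := reflect.TypeOf((*T)(nil)).Elem()
	switch t.Kind() {
	case reflect.Ptr, reflect.Interface, reflect.Chan, reflect.Func, reflect.Map, reflect.Slice:
		return fmt.Errorf("%s (kind %s) cannot be used as an eBPF map key/value", t, t.Kind())
	}
	return nil
}
```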
What does this PR do?
Migrates the map cleaner to use batch operations.
The new method reduces the performance cost of the map cleaner and the number of syscalls we perform.
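For illustration, a minimal sketch of where the syscall reduction comes from (the key type and helper name are hypothetical; the real cleaner differs):

```go
package main

import (
	cebpf "github.com/cilium/ebpf"
)

// deleteStale contrasts the old per-key path with the batch path. staleKeys is
// assumed to hold keys already identified for removal by the cleaner callback.
func deleteStale(m *cebpf.Map, staleKeys []uint32, useBatch bool) error {
	if !useBatch {
		// Old approach: one bpf(BPF_MAP_DELETE_ELEM) syscall per stale key.
		for i := range staleKeys {
			if err := m.Delete(&staleKeys[i]); err != nil {
				return err
			}
		}
		return nil
	}
	// New approach: a single bpf(BPF_MAP_DELETE_BATCH) syscall for all keys.
	_, err := m.BatchDelete(staleKeys, nil)
	return err
}
```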
Motivation
Using batch operations allows us to cut allocations, memory pressure, and overall runtime by 50%.
Additional Notes
Benchmarks:
Batches - improves memory, allocations, and runtime by 50%.
Without batches - improves allocations by 20%, memory by 33%, and runtime by 15%.
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
- Triage milestone is set.
- The major_change label has been applied if your change either has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- The changelog/no-changelog label has been applied.
- The qa/skip-qa label is not applied.
- A team/.. label has been applied, indicating the team(s) that should QA this change.
- The need-change/operator and need-change/helm labels have been applied.
- The k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.