Randomized allocation sampling #104955
This change is some preparatory refactoring for the randomized allocation sampling feature. We need to add more state to the allocation context, but we don't want to make a breaking change to the GC interface. The new state only needs to be visible to the EE, but we want it physically near the existing alloc context state for good cache locality. To accomplish this we created a new ee_alloc_context struct which contains an instance of gc_alloc_context within it. The new ee_alloc_context.combined_limit field should be used by fast allocation helpers to determine when to go down the slow path. Most of the time combined_limit has the same value as alloc_limit, but periodically we need to emit an allocation sampling event on an object that is somewhere in the middle of an AC. Using combined_limit rather than alloc_limit as the slow path trigger allows us to keep all the sampling event logic in the slow path.
combined_limit is now synchronized in GcEnumAllocContexts instead of RestartEE. This requires constraining how the GC updates alloc_ptr and alloc_limit. In practice no GC behavior changed, but the constraints are now part of the EE<->GC contract so that we can rely on them in the EE code.
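To illustrate the layout and the fast-path check described above, here is a hypothetical simplified sketch. The field and type names mirror the description in this PR, but this is not the exact runtime code:

```cpp
#include <cstdint>
#include <cstddef>

// Simplified sketch of the GC's per-thread allocation context (AC).
struct gc_alloc_context
{
    uint8_t* alloc_ptr;    // next free byte in the AC
    uint8_t* alloc_limit;  // end of the AC owned by this thread
};

// EE-only state lives physically next to the GC state for cache locality,
// without changing the GC interface itself.
struct ee_alloc_context
{
    // Invariant: combined_limit <= gc.alloc_limit. It is pulled below
    // alloc_limit when a sampling event should fire before the AC is
    // exhausted.
    uint8_t* combined_limit;
    gc_alloc_context gc;
};

// Fast allocation helper: a single compare against combined_limit decides
// between bump allocation and the slow path. The slow path then handles
// both AC refill and sampling-event emission.
inline void* fast_alloc(ee_alloc_context& ctx, size_t size)
{
    uint8_t* result = ctx.gc.alloc_ptr;
    if (result + size <= ctx.combined_limit)
    {
        ctx.gc.alloc_ptr = result + size;
        return result;
    }
    return nullptr; // caller takes the slow path
}
```

Because the fast path only ever compares against combined_limit, the sampling logic never appears in the hot allocation sequence; when no sample is pending, combined_limit equals alloc_limit and behavior is unchanged.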
Co-authored-by: Jan Kotas <[email protected]>
Co-authored-by: Jan Kotas <[email protected]>
The code of GetAllocContext() was constructing a PTR_gc_alloc_context, which does a host->target pointer conversion. Those conversions work by doing a lookup in a dictionary of blocks of memory that we have previously marshalled, and the pointer being converted is expected to be the start of the memory block. In this case we had never previously marshalled the gc_alloc_context on its own. We had only marshalled the m_pRuntimeThreadLocals block, which includes the gc_alloc_context inside of it at a non-zero offset. This caused the host->target pointer conversion to fail, which in turn meant commands like !threads in SOS would fail. The fix is pretty trivial. We don't need to do a host->target conversion here at all because the calling code in the DAC is going to immediately convert right back to a host pointer. We can avoid the conversion in both directions by eliminating the cast and returning the host pointer directly.
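A toy model (not the real DAC) of the failure mode described above: the host->target dictionary is keyed by the *start* address of each marshalled block, so looking up an interior pointer, such as a struct embedded at a non-zero offset inside m_pRuntimeThreadLocals, finds no entry and the conversion fails:

```cpp
#include <cstdint>
#include <map>

// Illustrative model of the marshalled-block dictionary; names and
// signatures are assumptions, not the actual DAC implementation.
struct MarshalledBlocks
{
    // host block start address -> target address of that block
    std::map<const void*, uint64_t> blocks;

    void record(const void* hostStart, uint64_t targetAddr)
    {
        blocks[hostStart] = targetAddr;
    }

    // Returns 0 on failure, mimicking a failed host->target conversion.
    uint64_t hostToTarget(const void* hostPtr) const
    {
        auto it = blocks.find(hostPtr); // exact-start lookup only
        return it == blocks.end() ? 0 : it->second;
    }
};
```

An interior pointer like `threadLocals + offsetof(..., alloc_context)` never matches a recorded block start, which is why returning the host pointer directly (skipping the round-trip conversion entirely) is the simpler and correct fix here.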
- Removed unnecessary UpdateCombinedLimit() in thread detach
- Updated comment for workaround on 96081
- Swapped to updating combined_limit inside GcEnumAllocContexts() instead of in RestartEE()
Co-authored-by: Jan Kotas <[email protected]>
This feature allows profilers to do allocation profiling based on randomized samples. It has better theoretical and empirically observed accuracy than our current allocation profiling approaches while also maintaining low performance overhead. It is designed for use in production profiling scenarios. For more information about usage and implementation, see the included doc, docs/design/features/RandomizedAllocationSampling.md.
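The core idea behind byte-based randomized sampling can be sketched as follows. This is an illustrative sketch only; the constants and helper names are assumptions, not the runtime's actual code. Each sample point is drawn from an exponential distribution over allocated bytes, which makes every allocated byte equally likely to be sampled regardless of object size, and the sample point is folded into the allocation fast path by lowering combined_limit:

```cpp
#include <cstdint>
#include <random>
#include <algorithm>

// Assumed mean sampling interval in bytes (illustrative constant).
constexpr double kSamplingMeanBytes = 100.0 * 1024;

// Draw the number of bytes to allocate before the next sample fires.
uint64_t next_sample_interval(std::mt19937_64& rng)
{
    std::exponential_distribution<double> dist(1.0 / kSamplingMeanBytes);
    return static_cast<uint64_t>(dist(rng)) + 1;
}

// combined_limit is pulled below alloc_limit when the next sample point
// lands inside the current allocation context; otherwise it equals
// alloc_limit and the fast path behaves exactly as before.
uint8_t* compute_combined_limit(uint8_t* alloc_ptr, uint8_t* alloc_limit,
                                uint64_t bytes_until_sample)
{
    uint64_t ac_bytes = static_cast<uint64_t>(alloc_limit - alloc_ptr);
    return alloc_ptr + std::min(ac_bytes, bytes_until_sample);
}
```

The exponential (memoryless) distribution is what gives the scheme its statistical properties: the expected number of samples a given object receives is proportional to its size, which lets a profiler unbias the sampled counts.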
Functional testing found and fixed an off-by-one error in the RNG code, but otherwise things looked fine. I also resynced this PR on top of the latest changes in #104849 and #104851. The last commit, now number 10, remains the interesting one.

I also did some performance testing using GCPerfSim as an allocation benchmark. My default configuration was 4 threads, workstation mode, and 500GB of allocations entirely with small objects and no survival. It is intended to put maximum stress on the allocation code paths. GCPerfSim command line:

EDIT: Don't rely on these numbers, they are misleading. See #104955 (comment)

Benchmarks - No Tracing enabled
Benchmarks - Tracing AllocSampling+GC keywords, verbose level
Overall it looks like around ~0.9 additional seconds for the PR to do 500GB of allocations. On a tight microbenchmark it's noticeable, and as other GC or non-allocation costs increase it becomes relatively less noticeable. I'm investigating to see if it can be improved at all.
Continued perf investigation+testing has cast my previous results into doubt. After lots of searching for what could have caused the regression, my best explanation is that it actually had nothing to do with the source changes in this PR and instead it is either non-determinism in the build process or some user error on my part. I reached that conclusion by doing the following: I've had a folder on my machine C:\git\runtime3 that throughout the entire process has been synced here:
This folder has no changes from any of my PRs in it and I've been using the build here for all the baseline measurements. Then I executed the following changes:
I can consistently reproduce the same magnitude perf regression using the coreclr built in the artifacts directory, but the regression doesn't appear using the build in the backup or backup_2 directory. I've done many runs on each binary, switching between them in a semi-randomized ordering, trying to ensure that the results for each binary are repeatable and robust relative to background noise on the machine.

Beyond that, I've also got many other builds that include different subsets of the change, but there is no clear relationship between the source and the perf results. During one period I progressively added functionality starting from the baseline without triggering the regression to occur; during another period I progressively removed functionality from the final PR and the regression would always occur. Even deleting the entirety of the source changes in that folder and syncing it back to the baseline didn't eliminate the perf overhead. Every build was done in a new folder starting without an artifacts folder, to remove any opportunity for incremental build problems to play a role.

The only explanations that make sense to me are either:
(a) non-deterministic builds are giving bi-modal perf results for the same input source code, or
(b) I am making some other repeated error in my testing methodology.

I'm going to see if I can get another machine to repeat some of the original experiments, but at the moment I no longer have any evidence that the PR is causing a regression.
These tests were never intended to be built or run automatically, but recursive globbing patterns are causing them to get included. I considered locating each such globbing pattern and adding an exclusion, or changing the tests so that they would build successfully in the automated build, but those options seemed like more work now and potentially more work in the future to maintain. Given these manual tests will probably have very little ongoing usage, I went with the cheap and simple option of adding an underscore to the csproj files.
@jkotas @MichalStrehovsky - Functional and perf testing both looked good now, all outstanding comments on the PRs have been addressed, and CI is green. From my perspective this is ready to be merged unless any further review is planned? I could check in #104849, #104851, and then this PR in sequence, but I'm not sure that gives any advantage over just checking in this PR alone and closing #104849 and #104851 as no longer needed.
src/tests/tracing/eventpipe/randomizedallocationsampling/manual/README.md
> this PR in sequence but I'm not sure that gives any advantage over just checking in this PR alone
I think it still makes sense. I am not familiar enough with tracing to tell whether it is hooked up correctly. It would be best for somebody familiar with the tracing to review that part, and it will be much easier if the delta does not show the other changes.
Also, this is a non-trivial feature (a few thousand lines) with risk of introducing regressions (as demonstrated by the GC stress crash). Given that we are feature complete for .NET 9, should this get an approval from Jeff or tactics before merging for .NET 9?
(flags & GC_ALLOC_LARGE_OBJECT_HEAP) ? 1 : 0; // SOH
unsigned int heapIndex = 0;
#ifdef BACKGROUND_GC
BACKGROUND_GC is a GC-specific ifdef. It has no meaning in the VM.
(flags & GC_ALLOC_LARGE_OBJECT_HEAP) ? 1 : 0; // SOH
unsigned int heapIndex = 0;
#ifdef BACKGROUND_GC
Same
Yes, I was assuming that would be part of the process.
Tagging subscribers to this area: @tommcdon
This PR is intended to supersede #100356. Currently it includes 5 commits, the first 4 of which are the same changes found in #104849 and #104851. Once those PRs are merged, this PR will be rebased to remove that portion of the changes. The interesting commit is the last one, which adds the final code necessary to enable the feature on both CoreCLR and NativeAOT. The PR is currently in draft mode because I have yet to bring over the functional tests and validate that functionality and perf are operating as expected.