release-19.2: rowexec: release buckets from hash aggregator eagerly #47519
Backport 1/2 commits from #47466.
/cc @cockroachdb/release
rowexec: release buckets from hash aggregator eagerly
This commit makes the hash aggregator release the memory under its
buckets eagerly (once we're done with a bucket) so that it can be
returned to the system. This matters a lot when there is a large number
of buckets (on the order of 100k). Previously, this would happen only on
flow shutdown, once we lose the references to the `hashAggregator`
processor. That was problematic: we "released" the associated memory
from the memory accounting, yet we were still holding the references.
With this commit we reduce the memory footprint and come a lot closer
to what our memory accounting thinks we're using.
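
To illustrate the pattern (a minimal sketch, not CockroachDB's actual
code: `hashAgg`, `aggBucket`, and `emitNext` are hypothetical stand-ins
for the real processor types), the key idea is to delete each bucket's
map entry as soon as its result has been emitted, so the garbage
collector can reclaim the per-group state immediately instead of only
after the whole aggregator is torn down:

```go
package main

import "fmt"

// aggBucket stands in for the per-group aggregation state the hash
// aggregator keeps for every group (hypothetical, simplified).
type aggBucket struct {
	sum int64
}

// hashAgg is a minimal stand-in for the hash aggregator: it maps a
// group key to that group's bucket of aggregation state.
type hashAgg struct {
	buckets map[string]*aggBucket
	keys    []string // iteration order for emitting results
	next    int      // index of the next bucket to emit
}

// emitNext returns the result for the next group and eagerly drops the
// bucket's map entry, so the bucket becomes unreachable (and thus
// collectible) right away rather than at flow shutdown.
func (h *hashAgg) emitNext() (string, int64, bool) {
	if h.next >= len(h.keys) {
		return "", 0, false
	}
	key := h.keys[h.next]
	h.next++
	result := h.buckets[key].sum
	// Eager release: without this delete, every bucket stays reachable
	// through h.buckets until the aggregator itself is dropped, even
	// though the memory accounting has already "released" it.
	delete(h.buckets, key)
	return key, result, true
}

func main() {
	h := &hashAgg{
		buckets: map[string]*aggBucket{"a": {sum: 3}, "b": {sum: 7}},
		keys:    []string{"a", "b"},
	}
	for {
		key, sum, ok := h.emitNext()
		if !ok {
			break
		}
		fmt.Printf("group %s: %d (buckets still held: %d)\n", key, sum, len(h.buckets))
	}
}
```

Note that `delete` on a Go map does not shrink the map's internal
storage, but it does drop the reference to the bucket value, which is
what lets the GC reclaim the per-group state.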
Fixes: #47205.
Release note (bug fix): Previously, CockroachDB was incorrectly
releasing memory used by hash aggregation: we were releasing the
correct amount from the internal memory accounting system but, by
mistake, were keeping references to the actual memory for some time,
which prevented it from being garbage collected. This could lead to
a crash (more likely when the hash aggregation had to store on the
order of 100k groups) and is now fixed.