RowColumnRangeExtractor uses smaller presized IdentityHashMaps #6136
Conversation
Looks good - just one concern about the semantics, but otherwise this is nice. Thank you!
IdentityHashMap<byte[], LinkedHashMap<Cell, Value>> collector = new IdentityHashMap<>(canonicalRows.size());
IdentityHashMap<byte[], Column> rowsToLastCompositeColumns = new IdentityHashMap<>(canonicalRows.size());
IdentityHashMap<byte[], Integer> rowsToRawColumnCount = new IdentityHashMap<>(canonicalRows.size());
Set<byte[]> emptyRows = Collections.newSetFromMap(new IdentityHashMap<>(0));
Is there any reason we went with 0 instead of size here? I get that we're not going to have this as the same size as the others, but are we expecting the size of emptyRows to be closer to 4 than 32 (the minimum and default size)? I don't actually have any insight into this, so it could go either way.
main intent here was generally just assuming that most queries will not have any empty rows, so we shouldn't allocate for them, and if there are empty rows, we'll just eat the amortized cost of growing this IdentityHashMap
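For illustration, a minimal sketch (stand-in types and sizes, not the extractor code itself) of the sizing trade-off being discussed: IdentityHashMap's expectedMaxSize argument only sets the initial table size, so passing 0 keeps the common no-empty-rows case cheap and simply pays an amortized resize cost if empty rows do show up.

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Sketch of the sizing trade-off: an expectedMaxSize of 0 just means the map starts at its
// minimum capacity and grows on demand, which is fine if empty rows are assumed to be rare.
public final class EmptyRowsSizingSketch {
    public static void main(String[] args) {
        int expectedRows = 1000; // stand-in for canonicalRows.size()

        // Presized for the common case: every requested row is expected to land here.
        IdentityHashMap<byte[], Integer> rowsToRawColumnCount = new IdentityHashMap<>(expectedRows);

        // Deliberately not presized: most queries are assumed to have no empty rows at all.
        Set<byte[]> emptyRows = Collections.newSetFromMap(new IdentityHashMap<>(0));

        byte[] row = new byte[] {1};
        rowsToRawColumnCount.put(row, 3);
        emptyRows.add(row); // still works; the backing map resizes as needed
        System.out.println(emptyRows.size()); // 1
    }
}
```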
.put(
        Cell.create(row, pair.lhSide),
        Value.create(c.getColumn().getValue(), pair.rhSide));
if (previous != null) {
    notLatestVisibleValueCellFilterCounter.get().inc();
I think the semantics are different from before. The main change is that we now put regardless, and increment the metric if a value existed beforehand. The difference I'm concerned about is that, previously, if the key already existed we didn't overwrite the value - here, we do, blindly. I don't know this codepath very well, but I don't think that's what we want here.
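To make the concern concrete, here is a hedged sketch of the two behaviours with simplified types (plain String keys/values, and an AtomicLong standing in for the real metric): put-then-check lets the later value overwrite the earlier one, while putIfAbsent keeps the first value and only counts the duplicate.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the semantic difference under discussion; Cell/Value are stand-ins.
public final class OverwriteSemanticsSketch {
    public static void main(String[] args) {
        AtomicLong notLatestVisibleValueCounter = new AtomicLong();

        // Put unconditionally, then bump the metric if a value was already there:
        // the later value wins, i.e. the earlier one is overwritten.
        Map<String, String> putThenCheck = new LinkedHashMap<>();
        putThenCheck.put("cell", "first");
        String previous = putThenCheck.put("cell", "second");
        if (previous != null) {
            notLatestVisibleValueCounter.incrementAndGet();
        }
        System.out.println(putThenCheck.get("cell")); // "second"

        // Keep-first behaviour: putIfAbsent never overwrites, it only reports the duplicate.
        Map<String, String> keepFirst = new LinkedHashMap<>();
        keepFirst.putIfAbsent("cell", "first");
        String existing = keepFirst.putIfAbsent("cell", "second");
        if (existing != null) {
            notLatestVisibleValueCounter.incrementAndGet();
        }
        System.out.println(keepFirst.get("cell")); // "first"
    }
}
```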
LinkedHashMap<Cell, Value> cellToValue = collector.get(row);
if (cellToValue == null) {
    cellToValue = collector.computeIfAbsent(row, _b -> new LinkedHashMap<>(1));
}
I think your computeIfAbsent was correct before - the main concern I had was fixed by using putIfAbsent below.
(But this is correct too!)
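For reference, a small sketch (simplified value types) of the equivalence being pointed at: the explicit get/null-check wrapper and a bare computeIfAbsent leave the map in the same state, because computeIfAbsent already creates the value only when the key is missing.

```java
import java.util.IdentityHashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Both forms produce the same map contents; the explicit null check is simply redundant.
public final class ComputeIfAbsentSketch {
    public static void main(String[] args) {
        byte[] row = new byte[] {1};

        // Explicit lookup, then computeIfAbsent as a fallback (the form quoted in the diff).
        Map<byte[], LinkedHashMap<String, String>> collectorA = new IdentityHashMap<>();
        LinkedHashMap<String, String> cellToValueA = collectorA.get(row);
        if (cellToValueA == null) {
            cellToValueA = collectorA.computeIfAbsent(row, _b -> new LinkedHashMap<>(1));
        }

        // Equivalent shorter form: computeIfAbsent handles both the hit and the miss.
        Map<byte[], LinkedHashMap<String, String>> collectorB = new IdentityHashMap<>();
        LinkedHashMap<String, String> cellToValueB =
                collectorB.computeIfAbsent(row, _b -> new LinkedHashMap<>(1));

        System.out.println(cellToValueA.equals(cellToValueB)); // true (both empty)
    }
}
```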
I went back and forth a bit on this, and ultimately went with cellToValue.containsKey & cellToValue.put rather than putIfAbsent, to avoid always allocating the Value in cases where we have invisible values. This still removes the extra collector.get via computeIfAbsent.
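A short sketch of the allocation trade-off described above, with a simplified value type and a hypothetical createValue() standing in for Value.create(...): putIfAbsent has to build the value before the call even when the key is already present, whereas containsKey followed by put only builds it when it will actually be stored.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// createValue() stands in for Value.create(...), assumed to be the allocation worth skipping
// when the cell was already seen (i.e. the value would be invisible anyway).
public final class ContainsKeyVsPutIfAbsentSketch {
    private static String createValue() {
        return new String(new char[] {'v'}); // hypothetical allocation we would rather avoid
    }

    public static void main(String[] args) {
        Map<String, String> cellToValue = new LinkedHashMap<>();
        cellToValue.put("cell", "existing");

        // putIfAbsent: the value is constructed unconditionally, then discarded on a hit.
        cellToValue.putIfAbsent("cell", createValue());

        // containsKey + put: the value is only constructed when it will actually be inserted.
        long duplicates = 0;
        if (cellToValue.containsKey("cell")) {
            duplicates++; // stands in for the notLatestVisibleValueCellFilterCounter increment
        } else {
            cellToValue.put("cell", createValue());
        }

        System.out.println(cellToValue.get("cell") + ", duplicates=" + duplicates); // "existing, duplicates=1"
    }
}
```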
This looks correct - leave it to you whether you want to clean up that last part by doing what you had before with putIfAbsent, or keep what you have.
General
Before this PR:
Atlas Cassandra clients processing lots of multiget_slice requests allocate significant Object[] as part of constructing multiple IdentityHashMap<byte[], V> for each page of results.

After this PR:
We presize the IdentityHashMaps to the expected number of rows we're getting, and simplify the RowColumnRangeExtractor extract API.

==COMMIT_MSG==
RowColumnRangeExtractor uses smaller presized IdentityHashMaps
==COMMIT_MSG==
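As a rough, hedged illustration of the description above (simplified types, not the actual RowColumnRangeExtractor code), the before/after allocation pattern looks something like this, assuming canonicalRows holds the rows requested for the page:

```java
import java.util.IdentityHashMap;
import java.util.List;

// Illustrative sketch only. A default-sized IdentityHashMap starts at its default capacity and
// grows as rows are added; presizing to the known row count allocates the backing Object[]
// once, at roughly the right size, per page of results.
public final class PresizingSketch {
    public static void main(String[] args) {
        List<byte[]> canonicalRows = List.of(new byte[] {1}, new byte[] {2}, new byte[] {3});

        // Conceptually the old pattern: default capacity regardless of the expected row count.
        IdentityHashMap<byte[], Integer> defaultSized = new IdentityHashMap<>();

        // The new pattern: presized to the expected number of rows for this page.
        IdentityHashMap<byte[], Integer> presized = new IdentityHashMap<>(canonicalRows.size());

        for (byte[] row : canonicalRows) {
            defaultSized.put(row, 0);
            presized.put(row, 0);
        }
        System.out.println(presized.size()); // 3
    }
}
```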
Priority: P2
Concerns / possible downsides (what feedback would you like?): Does this seem like a reasonable performance change and refactor?
Is documentation needed?: no
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?: Changes the Java API for RowColumnRangeExtractor slightly, but it is all package private and I do not see any non-Atlas changes that would be impacted.
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?: no
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.): yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?: n/a
Does this PR need a schema migration? no
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
What was existing testing like? What have you done to improve it?: these code paths are hit by existing Cassandra ETE tests and some unit tests.
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: n/a
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: n/a
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Has the safety of all log arguments been decided correctly?: n/a
Will this change significantly affect our spending on metrics or logs?: n/a
How would I tell that this PR does not work in production? (monitors, etc.):
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju