Problem
Benchmarks for oxc_semantic are very unstable. This is bad for 2 reasons:
- It's annoying to have PRs constantly bombarded with erroneous "semantic got 3% faster" / "semantic got 5% slower" messages from CodSpeed. This noise obscures what effect a PR actually has on performance.
- If we want to optimize semantic, we need stable measures to evaluate changes against.
Cause
I believe the cause to be reallocations of the various Vecs in the Semantic struct.
Often a Vec can grow without changing location in memory - the allocator just extends the allocation in place. But sometimes it can't, and then there's a massive memory copy to move the contents of the Vec to a new location.
I believe this non-deterministic element is what makes the benchmarks so noisy.
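A minimal sketch (not oxc code) of the effect described above: pushing past a Vec's capacity forces a reallocation, and the allocator may or may not be able to extend the block in place, so the buffer's address may or may not change between runs.

```rust
fn main() {
    // Start with a deliberately small capacity.
    let mut v: Vec<u64> = Vec::with_capacity(4);
    let before = v.as_ptr();

    // Growing far past capacity triggers one or more reallocations.
    for i in 0..1024 {
        v.push(i);
    }
    let after = v.as_ptr();

    // Whether `before == after` depends on whether the allocator could
    // extend the block in place - this is the non-deterministic part
    // that shows up as benchmark noise.
    println!("buffer moved: {}", before != after);
}
```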
Possible solution
Cache the Vecs semantic needs in the Allocator. Re-use them on each run of semantic and then put them back into the allocator for the next run.
During benchmark warmup, these Vecs will grow to the size they need to be, so there'll be no reallocations during the measured run.
This will also improve semantic's performance in the real world where you're running it on multiple files in series.
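The reuse scheme could look something like the sketch below (all names here - `Cache`, `run_semantic` - are hypothetical stand-ins, not oxc's actual types): each run takes the Vec, clears it without dropping its capacity, and puts it back, so a warm run finds an already-grown buffer and never reallocates.

```rust
// Hypothetical cache holding a Vec across runs of semantic.
struct Cache {
    nodes: Vec<u32>, // stand-in for semantic's real node storage
}

impl Cache {
    fn new() -> Self {
        Cache { nodes: Vec::new() }
    }

    // Take the Vec, use it for this run, then return it for the next run.
    fn run_semantic(&mut self, input_len: u32) -> usize {
        let mut nodes = std::mem::take(&mut self.nodes);
        nodes.clear(); // drops stale contents, keeps the allocation
        for i in 0..input_len {
            nodes.push(i);
        }
        let len = nodes.len();
        self.nodes = nodes; // put back into the cache
        len
    }
}
```

After the first (warmup) run grows the Vec to the required size, later runs on same-sized inputs reuse the allocation, so capacity stays stable and no reallocation happens during the measured run.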
Difficulty
Allocator uses interior mutability so you only need a &Allocator. How to combine that with this scheme?
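One way to square this with &Allocator - a sketch under the assumption that a Cell is acceptable here (the Allocator struct and method names below are hypothetical, not oxc's actual API): stash the cached Vec in a Cell<Option<Vec<_>>>, move it out with take() through a shared reference, and set it back when done. take() leaves None behind, so there's no aliasing and no RefCell borrow-tracking overhead.

```rust
use std::cell::Cell;

// Hypothetical allocator caching a Vec behind interior mutability.
#[derive(Default)]
struct Allocator {
    cached_nodes: Cell<Option<Vec<u32>>>,
}

impl Allocator {
    // Move the cached Vec out through &self; first call gets a fresh Vec.
    fn take_nodes(&self) -> Vec<u32> {
        self.cached_nodes.take().unwrap_or_default()
    }

    // Clear and return the Vec so the next run reuses its capacity.
    fn return_nodes(&self, mut nodes: Vec<u32>) {
        nodes.clear();
        self.cached_nodes.set(Some(nodes));
    }
}
```

A caller would `take_nodes()` at the start of a run and `return_nodes()` at the end; the only cost of the Cell is the Option move, and the grown capacity survives across runs.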
Hopefully #3776 has solved this. #3784 re-enabled semantic benchmarks and we can see if they stay steady now.
Leaving this open for now until we see if it's worked.
NB The "Possible solution" section above would still be of benefit - it'd improve semantic's real-world perf. Although oxc-project/backlog#31 would also have the same effect.