try to use flyweight pattern with FileBufferCache #12
From TechPaper: "QuestDB is still the fastest for *Iteration* because it utilises the flyweight pattern (Gamma et al. 1994, p. 218)."
i9-13900HX: Ran some tests on a strategy (PerformanceTraderInnerStrategyTest with precalculated values) comparing IPrimitiveArrayAllocator with classical Serde implementations (instantiating actual objects from the decompressed buffer):

- Classical serdes, "None (as before)": 26.78/ms
- With IFlyweightSerdeProvider integrated, "None (as before)": 26.59/ms

Thus using buffers or flyweight serdes in FileBufferCache alone is slower. Heap objects are more efficient here because deserialization (via ISerde) runs in parallel to the strategy, and objects on the heap are the fastest to access for this test case.

- [x] Next: test whether turning off compression and using the underlying mmap-buffer for the flyweight serdes improves speed. That way the flyweight pattern is used as intended (even though we still allocate some shallow buffer slice wrappers).
- [x] Also try using Bisect in ArrayAllocatorFileBufferCacheResult.
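To illustrate what a flyweight serde does differently from a classical serde, here is a minimal sketch: instead of eagerly deserializing a heap object per record, a small reusable wrapper reads fields on demand from the decompressed buffer. The class and field layout (`FlyweightBar`, 16-byte records) are assumptions for illustration, not the actual invesdwin `IFlyweightSerdeProvider` API.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical flyweight record view: one reusable wrapper instead of one
// deserialized heap object per record.
public final class FlyweightBar {
    // assumed fixed record layout: time (8 bytes) + price (8 bytes)
    static final int RECORD_SIZE = 16;

    private ByteBuffer buffer;
    private int offset;

    // reposition the same wrapper instead of allocating a new object per record
    public FlyweightBar wrap(final ByteBuffer buffer, final int recordIndex) {
        this.buffer = buffer;
        this.offset = recordIndex * RECORD_SIZE;
        return this;
    }

    public long getTime() {
        return buffer.getLong(offset); // read lazily from the buffer
    }

    public double getPrice() {
        return buffer.getDouble(offset + 8);
    }

    public static void main(final String[] args) {
        final ByteBuffer buf = ByteBuffer.allocate(2 * RECORD_SIZE).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(0, 1000L).putDouble(8, 1.5);
        buf.putLong(16, 2000L).putDouble(24, 2.5);
        final FlyweightBar bar = new FlyweightBar();
        System.out.println(bar.wrap(buf, 1).getTime()); // 2000
    }
}
```

The trade-off measured above: the wrapper avoids per-record allocation, but every field access pays a buffer read, whereas eagerly deserialized heap objects are cheapest to access repeatedly.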
i9-13900HX: With Bisect in ArrayAllocatorFileBufferCacheResult:

- "None (as before)": 25.15/ms

=> Not much of a difference.
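The "Bisect" idea presumably replaces a linear scan over the cached records with a binary search on the timestamp key. A sketch under that assumption (record layout and method names are illustrative, not the actual ArrayAllocatorFileBufferCacheResult code):

```java
import java.nio.ByteBuffer;

// Binary search over fixed-width records in a buffer, keyed by a leading
// 8-byte timestamp. Layout is an assumption for illustration.
public final class RecordBisect {
    static final int RECORD_SIZE = 16; // assumed: 8-byte timestamp + 8-byte payload

    // returns the index of the last record with timestamp <= key, or -1 if none
    public static int bisect(final ByteBuffer buffer, final int recordCount, final long key) {
        int lo = 0;
        int hi = recordCount - 1;
        int result = -1;
        while (lo <= hi) {
            final int mid = (lo + hi) >>> 1; // overflow-safe midpoint
            final long midTime = buffer.getLong(mid * RECORD_SIZE);
            if (midTime <= key) {
                result = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return result;
    }

    public static void main(final String[] args) {
        final ByteBuffer buf = ByteBuffer.allocate(4 * RECORD_SIZE);
        for (int i = 0; i < 4; i++) {
            buf.putLong(i * RECORD_SIZE, i * 100L); // timestamps 0, 100, 200, 300
        }
        System.out.println(bisect(buf, 4, 250)); // 2 (record with timestamp 200)
    }
}
```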
i9-12900HX: Underlying mmapped-file flyweight without compression:

TimeseriesDbPerformanceTest:

- "None (as before)": 52.09/µs

=> The overhead of FDate creation is too high, so the values are too small and might get deserialized multiple times during lookups. QuestDB is likely faster because its test runs a tight loop around the mmap-buffer and its date values are allocated on the stack instead of the heap there.

PerformanceTraderInnerStrategyTest:

- "None (as before)": 28.55/ms

=> "FlyweightNoCompression" can be slightly faster than "None (as before)", though most likely not worth the extra storage and additional I/O without compression (457.7 MB vs. 1.4 GB on disk with this test). It could still be useful for low-latency applications (HFT) or when memory usage should be minimized. The difference might become more prominent in an optimization run (Workshop5StrategyTest.optimize).

Workshop5StrategyTest.optimize (NoPrecalc):

- "None (as before)": 5675.36/ms

=> Does not seem to make it faster.
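The tight-loop access pattern attributed to QuestDB above can be sketched with plain java.nio: iterate a memory-mapped file directly and read primitives in place, with no per-record heap allocation. The record layout (leading 8-byte timestamp) is an assumption for illustration.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Tight loop over an mmapped file: absolute buffer reads, no objects per record.
public final class MMapTightLoop {
    public static long sumTimestamps(final Path file, final int recordSize) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            final ByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long sum = 0;
            // no deserialization, no allocation inside the loop
            for (int offset = 0; offset + recordSize <= mapped.limit(); offset += recordSize) {
                sum += mapped.getLong(offset); // assumed: first 8 bytes = timestamp
            }
            return sum;
        }
    }

    public static void main(final String[] args) throws IOException {
        final Path file = Files.createTempFile("mmap-demo", ".bin");
        final ByteBuffer records = ByteBuffer.allocate(3 * 16);
        records.putLong(0, 1L).putLong(16, 2L).putLong(32, 3L);
        Files.write(file, records.array());
        System.out.println(sumTimestamps(file, 16)); // 1 + 2 + 3 = 6
        Files.delete(file);
    }
}
```

This mirrors the observation above: the win only materializes when the consumer loops over the mapped buffer directly; wrapping each lookup in FDate objects on the heap eats the advantage.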
Instead of having to deserialize the data and store the objects, we could just keep the decompressed bytes and reference the data with the flyweight pattern. Specialized ISerde implementations should be able to do that.
Drawback: the segment's byte[] array will then stay referenced as long as a flyweight object based on that data still exists.
The text was updated successfully, but these errors were encountered: