resolve performance issue caused by using DoubleArrayCache.getArrayExact #557

Closed
protogenes wants to merge 2 commits

Conversation

@protogenes (Contributor) commented Nov 6, 2022

Displaying live data from data sets with fluctuating size created a lot of GC pressure, because the DoubleArrayCache required a new allocation on almost every call to getArrayExact.

Adjusting the ErrorDataSetRenderer and related classes to call DoubleArrayCache.getArray (using the best-fit strategy) instead removes this overhead.

In our case, the time spent in the cache easily took up 90% of the render time, because the cache list grew quite large.

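For illustration, a minimal sketch of the call-site change described above. It assumes DoubleArrayCache lives in io.fair_acc.dataset.utils and exposes a static getInstance() accessor plus an add(double[]) recycling method (both assumptions on my part; only getArray and getArrayExact are named in this PR), and the helper class and method names are hypothetical:

```java
import io.fair_acc.dataset.utils.DoubleArrayCache;

final class RenderBufferSketch {
    static double[] acquire(final int nDataPoints) {
        // before: an exact-length request misses the cache whenever the data-set size
        // fluctuates, forcing a fresh allocation on almost every render pass
        // return DoubleArrayCache.getInstance().getArrayExact(nDataPoints);

        // after (this PR): a best-fit request reuses any cached array with
        // length >= nDataPoints, so fluctuating sizes no longer defeat the cache
        return DoubleArrayCache.getInstance().getArray(nDataPoints);
    }

    static void release(final double[] buffer) {
        // hand the array back to the cache once rendering is done
        DoubleArrayCache.getInstance().add(buffer);
    }
}
```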
@ennerf (Collaborator) commented Nov 6, 2022

I ran into the same issue a couple years ago and did a little proof of concept with a plugin mechanism for the allocation (see #370 for the discussion).

@protogenes temporarily deployed to coverage November 8, 2022 14:52 (inactive)
codecov bot commented Nov 8, 2022

Codecov Report

Base: 51.74% // Head: 51.70% // Decreases project coverage by -0.03% ⚠️

Coverage data is based on head (2e55436) compared to base (5719d92).
Patch coverage: 55.00% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff              @@
##               main     #557      +/-   ##
============================================
- Coverage     51.74%   51.70%   -0.04%     
+ Complexity     6333     6332       -1     
============================================
  Files           364      364              
  Lines         36792    36817      +25     
  Branches       5991     5996       +5     
============================================
+ Hits          19037    19038       +1     
- Misses        16500    16525      +25     
+ Partials       1255     1254       -1     
Impacted Files Coverage Δ
...rtfx/axes/spi/transforms/DefaultAxisTransform.java 50.00% <0.00%> (ø)
...in/java/io/fair_acc/dataset/utils/AssertUtils.java 17.80% <0.00%> (-3.68%) ⬇️
...tfx/renderer/datareduction/DefaultDataReducer.java 95.97% <100.00%> (ø)
...air_acc/chartfx/renderer/spi/CachedDataPoints.java 62.74% <100.00%> (ø)
...acc/chartfx/renderer/spi/ErrorDataSetRenderer.java 79.15% <100.00%> (ø)
...va/io/fair_acc/chartfx/ui/HiddenSidesPaneSkin.java 44.49% <0.00%> (-0.48%) ⬇️
...fair_acc/chartfx/utils/SimplePerformanceMeter.java 88.70% <0.00%> (+1.61%) ⬆️


@RalphSteinhagen (Member) commented
Thanks @ennerf for finding and linking to the old PR and discussion thread. I guess most of what has been described there still holds true.

The core question is an 80/20-type decision: default/safe behaviour for novice/new users (the 80% use case) vs. extensibility for more custom/special cases (the 20% use case).

We are open to a PR. May I suggest: rather than modifying the existing code (which may perturb other users), it would be nicer to introduce an interface and allow supplying custom array caches.

@protogenes and/or @ennerf would you be up for it?

@protogenes (Contributor, PR author) commented Nov 9, 2022

Thank you for the feedback; #370 didn't pop up during my search.

Let me add my thoughts on the topic:

First, adding an extension point to the various renderer and algorithm implementations requires quite a bit of intricate work to move away from the current global/static cache access. That is not a task I would attempt with my surface-level knowledge of the library.

The performance issue that @ennerf and I encountered is also not solved as long as the new implementations keep the current usage of getArrayExact (and stay true to its name). The new caches should probably only provide get(int length) and add(T) methods, and the default cache implementation would be in the manner of an ExactLengthArrayCache, as opposed to e.g. a MinLengthArrayCache or TieredLengthArrayCache (see the sketch below).
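A rough sketch of what such a minimal cache interface could look like; the interface and class names (ArrayCache, ExactLengthArrayCache) merely follow the naming in this comment and are not existing API:

```java
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

// illustrative interface: implementations differ only in how they select a cached
// array for a requested length (exact length, minimum length, tiered sizes, ...)
interface ArrayCache<T> {
    T get(int length); // return a (possibly recycled) array able to hold `length` elements
    void add(T array); // hand an array back for later reuse
}

// exact-length strategy: only arrays whose length matches exactly are reused,
// mirroring the current getArrayExact behaviour
final class ExactLengthArrayCache implements ArrayCache<double[]> {
    private final Map<Integer, Deque<double[]>> pool = new ConcurrentHashMap<>();

    @Override
    public double[] get(final int length) {
        final Deque<double[]> bucket = pool.get(length);
        final double[] cached = bucket == null ? null : bucket.pollFirst();
        return cached != null ? cached : new double[length];
    }

    @Override
    public void add(final double[] array) {
        pool.computeIfAbsent(array.length, k -> new ConcurrentLinkedDeque<>()).addFirst(array);
    }
}
```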

Regardless of that I'm convinced the current or default implementation of the cache and its usage need to be improved. I don't think anyone would be irked by a faster cache implementation and renderer.
The discussion in #370 didn't give much rationale as to why getArrayExact should be used throughout ErrorDataSetRenderer and related classes.
In my opinion there is only one possible downside to reusing larger arrays:
a chance that a large cached array is stolen by a concurrent request for a shorter array.
This may lead to higher overall allocation, but that effect is strongly limited by the cache's best-fit length selection in normal use cases, and it cannot happen at all for the renderers, which do not run concurrently with each other.

The performance of the current cache also deteriorates because it traverses the whole list on every request, i.e. in O(n). Using a ConcurrentSkipListMap or similar could fix this as well (a sketch follows below). We currently work around it by priming the cache with arrays of the known maximum length.
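As an illustration of the ConcurrentSkipListMap idea, a best-fit cache sketch with O(log n) selection; the class name MinLengthArrayCache again just follows the naming above and is not existing code:

```java
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentSkipListMap;

// best-fit strategy: reuse the smallest cached array that is at least `length` long;
// ceilingEntry() gives O(log n) selection instead of scanning the whole cache list
final class MinLengthArrayCache {
    private final ConcurrentSkipListMap<Integer, Deque<double[]>> pool = new ConcurrentSkipListMap<>();

    public double[] get(final int length) {
        Map.Entry<Integer, Deque<double[]>> entry = pool.ceilingEntry(length);
        while (entry != null) {
            final double[] cached = entry.getValue().pollFirst();
            if (cached != null) {
                return cached; // length >= requested; callers use only the first `length` slots
            }
            entry = pool.higherEntry(entry.getKey()); // bucket drained, try the next larger size
        }
        return new double[length]; // cache miss: allocate a fresh array
    }

    public void add(final double[] array) {
        pool.computeIfAbsent(array.length, k -> new ConcurrentLinkedDeque<>()).addFirst(array);
    }

    // "priming" as mentioned above: pre-populate the cache with arrays of the
    // known maximum length so steady-state rendering never has to allocate
    public void prime(final int maxLength, final int count) {
        for (int i = 0; i < count; i++) {
            add(new double[maxLength]);
        }
    }
}
```

In this sketch, empty buckets are simply left in place and retained memory is unbounded; a production implementation would also want to cap the number of cached arrays.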

@sonarqubecloud (bot) commented

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rating A)
0 Vulnerabilities (rating A)
0 Security Hotspots (rating A)
0 Code Smells (rating A)

No Coverage information No Coverage information
No Duplication information No Duplication information

ennerf added a commit that referenced this pull request Aug 15, 2023
ennerf added a commit that referenced this pull request Aug 16, 2023
ennerf added a commit that referenced this pull request Aug 17, 2023
ennerf added a commit that referenced this pull request Aug 18, 2023
@ennerf mentioned this pull request Aug 18, 2023
ennerf added a commit that referenced this pull request Aug 24, 2023
wirew0rm pushed a commit that referenced this pull request Sep 18, 2023