Add concurrency for reward calculation to scale out #1291
Conversation
On hold for now. There are at least 3 - There needs to be additional safety.
Alternatively, we can go the other way and merge the changeset. That should be free of leveldb writes. The reason it nevertheless appears to work so far and adds up in tests may be that all of the writes are per owner only, so the writes don't clobber the other reads and end up in partial states. It's technically unsynchronized, but in this very specific context it never runs into trouble. Perhaps the closest philosophical analogy is having a global array, reading it from multiple threads, but promising that each thread only ever writes to a particular range that isn't seen by other threads, and is thereby free of races. It works, but it's unsafe and fragile. It may be possible to get away with this for the moment as a short-term solution if these assumptions can be verified further, though it cannot be a permanent solution, as these assumptions can break at any time.
Newer strategy, given that some of the assumptions above still hold true - uses 2 thread pools:
This is now verifiably faster (~24 secs) than a synchronized in-mem changeset merge, as well as manually synchronized flushes. Also note that Boost.Asio internally synchronises the pool dispatch and posts, which eliminates the need for external locks. Sample:
/kind fix
Example of mainnet simulated splits:
Without:
In parallel: