[WIP] New trade stats #4577
Closed
Conversation
Instead of trying to find faulty keys, we stick to the mechanics we already have in place. This way we get fewer false positives. However, we enforce a certain Bisq version format, as parsing and business logic now rely on it.
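As a rough illustration of what "enforcing a certain version format" could look like, here is a minimal sketch that validates a strict "major.minor.patch" string before any parsing logic touches it. The class and method names are hypothetical, not the actual Bisq API.

```java
import java.util.regex.Pattern;

// Hypothetical sketch: reject any version string that does not match a strict
// "major.minor.patch" format, since later parsing relies on exactly three numeric parts.
public class VersionFormat {
    private static final Pattern VERSION_PATTERN = Pattern.compile("\\d+\\.\\d+\\.\\d+");

    public static boolean isValidFormat(String version) {
        return version != null && VERSION_PATTERN.matcher(version).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidFormat("1.4.2")); // true
        System.out.println(isValidFormat("1.4"));   // false: only two parts, parsing would break
    }
}
```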
It turns out that scanning resources does not work reliably enough. Thus, an array of Strings denoting historical Bisq versions it is. Another way to put it: it is an array denoting which data stores are present in the resources. A sketch of the idea follows below.
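A minimal sketch, assuming a hard-coded version list stands in for resource scanning. Each version maps to a versioned data store file name expected to exist in the resources; the versions, store name, and naming scheme below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: derive the expected versioned resource file names from a
// hard-coded array of historical Bisq versions instead of scanning the resources.
public class HistoricalStores {
    private static final String[] HISTORICAL_VERSIONS = {"1.4.0", "1.5.0", "1.6.0"};

    public static List<String> resourceFileNames(String storeName) {
        List<String> fileNames = new ArrayList<>();
        for (String version : HISTORICAL_VERSIONS) {
            // e.g. "TradeStatistics3Store_1.4.0" (naming scheme assumed for illustration)
            fileNames.add(storeName + "_" + version);
        }
        return fileNames;
    }

    public static void main(String[] args) {
        resourceFileNames("TradeStatistics3Store").forEach(System.out::println);
    }
}
```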
If we use the diff sync between seed nodes, we create a race condition: when a seed node gets updated to the new system, it does not sync up properly with the other seed nodes, and that would be fatal. So for the time being, when a seed node asks for data, it uses the "old" big requests with all object keys. That should not be a problem for now, since seed nodes have enough bandwidth.
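The branching described above might look roughly like the following sketch: if the requesting peer is a seed node, fall back to the full key set rather than the reduced diff-style set. All types, names, and parameters here are assumptions for illustration, not the actual Bisq request code.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: seed nodes get the complete key set to avoid the race condition
// while seed nodes are updated one by one; normal peers use the smaller diff-style set.
public class RequestKeySelection {
    static Set<byte[]> keysForRequest(boolean peerIsSeedNode,
                                      Set<byte[]> allKnownKeys,
                                      Set<byte[]> keysSinceLastCheckpoint) {
        return peerIsSeedNode ? allKnownKeys : keysSinceLastCheckpoint;
    }

    public static void main(String[] args) {
        Set<byte[]> all = new HashSet<>();
        Set<byte[]> recent = new HashSet<>();
        System.out.println(keysForRequest(true, all, recent) == all); // true: full request for seed nodes
    }
}
```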
CI has trouble with tests that perform file operations, so these tests have been disabled. They were useful during development and remain useful for testing locally, though.
This test addresses the migration scenario where a user does not upgrade at the first possible occasion (to the first Bisq version that has the new database structure in place) but does so later.
…m/chimp1984/bisq into chimp-reduce_initial_request_size
AppendOnlyDataStoreService does not have a map of its own but aggregates the data from its services.
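A rough sketch (not the actual Bisq classes) of what "aggregates the data from its services" means: the aggregating service holds no map itself and builds one on demand from the maps of the registered store services.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative types only; names do not match the real Bisq interfaces.
interface StoreService<K, V> {
    Map<K, V> getMap();
}

class AggregatingDataStoreService<K, V> {
    private final List<StoreService<K, V>> services;

    AggregatingDataStoreService(List<StoreService<K, V>> services) {
        this.services = services;
    }

    // Combine the maps of all registered services into a fresh map on each call.
    Map<K, V> getMap() {
        Map<K, V> combined = new HashMap<>();
        services.forEach(service -> combined.putAll(service.getMap()));
        return combined;
    }
}
```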
We use a set for the filter, so it is not deterministic which items get truncated.
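A small sketch of the non-determinism mentioned above: truncating a HashSet to a maximum size keeps whichever elements happen to come first in its iteration order, which is not defined. The limit and sample values are made up for the example.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative only: which elements survive the truncation depends on hash
// iteration order, not on insertion order.
public class TruncateSet {
    static <T> Set<T> truncate(Set<T> input, int maxSize) {
        return input.stream()
                .limit(maxSize)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> keys = new HashSet<>(Set.of("a", "b", "c", "d", "e"));
        System.out.println(truncate(keys, 2)); // the surviving two keys are not deterministic
    }
}
```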
If we get logs from Windows users, the missing line breaks make them harder to read.
…-versioned resource file anymore. Instead use readStore(); to create the live store. Add more logs
…esources. Improve logs
Use only min. data set WIP - some modules not applied yet, test not updated, json dump not updated
File size of 74000 objects is about 3.6 MB compared to 16.5 MB with the old data.
chimp1984 added a commit to chimp1984/bisq that referenced this pull request on Oct 1, 2020
This is intended to make it possible to merge the code base before bisq-network#4577 is merged. We can add the code base without using the feature yet. Once we are ready for deployment, we can revert this commit and have the feature activated.
Replaced by #4599
Replace TradeStatistics2 with a new much smaller TradeStatistics3 object.
It converts existing TradeStatistics2 objects to the new data, as well as data received from the network, which gets rebroadcast as new data.
The data store got reduced from 16.5 MB to 3.6 MB. It is also much better for privacy, as it only stores the bare minimum of data and does not link to the offers.
It is still early WIP, with tests and updates in other modules missing. I just wanted to share it early, as it should be deployed together with the initial data request reduction and the persistence redesign.
Depends/based on #4519
Starts at commit 5e09eeb; earlier commits are from #4519.
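To illustrate the "bare minimum of data" idea from the description above, here is a hedged sketch of what a much smaller trade statistics object could look like. The field list is an assumption for illustration and does not claim to match the actual TradeStatistics3 layout.

```java
// Hypothetical sketch: keep only the minimal trade data and drop anything that
// could link the entry back to the offer. Fields are assumed, not the real schema.
public class MinimalTradeStatistics {
    private final String currency;
    private final long price;          // price in the smallest monetary unit
    private final long amount;         // trade amount in satoshis
    private final String paymentMethod;
    private final long date;           // trade date as epoch millis

    public MinimalTradeStatistics(String currency, long price, long amount,
                                  String paymentMethod, long date) {
        this.currency = currency;
        this.price = price;
        this.amount = amount;
        this.paymentMethod = paymentMethod;
        this.date = date;
    }
}
```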