Upgrade the SQL tables to refactor the code in a more optimised way #52
Motivation
The current crawler looks like a quick patch from our initial attempt to shift from BoltDB to PostgreSQL. The logic is inefficient: we fetch PeerInfo one row at a time from the SQL database, which wastes CPU cycles and database round trips.
This PR refactors the crawler to make better use of the power of SQL (which we want to keep, since it makes post-processing the data much easier).
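To illustrate the direction, here is a minimal sketch of batch-loading peers with a single query instead of one query per peer. The table and column names (`peer_info`, `peer_id`, `ip`, `client_version`), the `PeerInfo` fields, and the Postgres driver choice are assumptions for illustration, not the actual schema of this repository.

```go
// Sketch only: load all peers in one round trip instead of one SELECT per peer.
// Table/column names and the PeerInfo shape are assumptions, not the real schema.
package sketch

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver (assumed)
)

type PeerInfo struct {
	PeerID    string
	IP        string
	ClientVer string
}

// LoadAllPeers fetches every stored peer with a single query,
// avoiding the per-peer fetch loop described above.
func LoadAllPeers(ctx context.Context, db *sql.DB) ([]PeerInfo, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT peer_id, ip, client_version FROM peer_info`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var peers []PeerInfo
	for rows.Next() {
		var p PeerInfo
		if err := rows.Scan(&p.PeerID, &p.IP, &p.ClientVer); err != nil {
			return nil, err
		}
		peers = append(peers, p)
	}
	return peers, rows.Err()
}
```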
Tasks
- [x] PeerInfo
- [x] IpInfo
- [x] ConnectionEvents
- [x] Network-related data (Ethereum, IPFS, etc.)
- [ ] Optimise the crawler logic to take advantage of SQL queries
- [ ] Peering strategy
- [ ] Add new Prometheus metrics exporters backed by the SQL database (see the sketch after this list)
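As a rough idea of what a SQL-backed exporter could look like, the sketch below is a custom Prometheus collector that runs one aggregate query per scrape. The metric name `crawler_known_peers` and the `peer_info` table are hypothetical; only the `prometheus/client_golang` collector interface is taken as given.

```go
// Sketch only: a Prometheus collector that reads its value straight from
// Postgres on each scrape. Metric and table names are assumptions.
package sketch

import (
	"database/sql"

	"github.com/prometheus/client_golang/prometheus"
)

// sqlCollector exposes the number of stored peers as a gauge.
type sqlCollector struct {
	db        *sql.DB
	peerCount *prometheus.Desc
}

func NewSQLCollector(db *sql.DB) *sqlCollector {
	return &sqlCollector{
		db: db,
		peerCount: prometheus.NewDesc(
			"crawler_known_peers", // hypothetical metric name
			"Number of peers stored in the peer_info table.",
			nil, nil,
		),
	}
}

func (c *sqlCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.peerCount
}

func (c *sqlCollector) Collect(ch chan<- prometheus.Metric) {
	var n int64
	// One aggregate query per scrape instead of iterating peers in Go.
	if err := c.db.QueryRow(`SELECT COUNT(*) FROM peer_info`).Scan(&n); err != nil {
		return // skip the metric on error; real code should track scrape errors too
	}
	ch <- prometheus.MustNewConstMetric(c.peerCount, prometheus.GaugeValue, float64(n))
}
```

Such a collector would be registered with `prometheus.MustRegister(NewSQLCollector(db))` and served through the usual `promhttp` handler; the point is that aggregation happens in Postgres, not in the crawler process.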