
READY: GigaChannel Community gossip caching #4648

Merged 1 commit on Jul 3, 2019

Conversation

@ichorid (Contributor) commented Jul 3, 2019

This makes the GigaChannel Community cache the result of the SQL query that selects the channel contents to gossip around. The resulting serialized blob is saved for reuse in the Community object: it is sent to 30 peers before a new query is performed. This significantly reduces background DB activity.

Fixes #4636
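For illustration, here is a minimal sketch of the caching idea described above (all names are hypothetical, not Tribler's actual API): the serialized query result is reused until it has been handed out to a fixed number of peers, after which the query is re-run.

```python
class GossipCache:
    """Cache the serialized gossip blob so the expensive SQL query
    runs once per renewal period instead of on every gossip round.
    (Hypothetical sketch; names do not match the real Community code.)"""

    def __init__(self, query_func, renewal_period=30):
        self.query_func = query_func          # runs the (expensive) SQL query
        self.renewal_period = renewal_period  # peers served before re-querying
        self._blob = None
        self._sends_left = 0

    def get_blob(self):
        # Re-run the query only when the cached blob is exhausted.
        if self._blob is None or self._sends_left <= 0:
            self._blob = self.query_func()
            self._sends_left = self.renewal_period
        self._sends_left -= 1
        return self._blob


# Demo: with a renewal period of 3, seven gossip rounds trigger only 3 queries.
query_runs = []

def run_query():
    query_runs.append(1)
    return ("blob-%d" % len(query_runs)).encode()

cache = GossipCache(run_query, renewal_period=3)
blobs = [cache.get_blob() for _ in range(7)]
```

The trade-off is staleness: up to `renewal_period` peers may receive the same (slightly outdated) snapshot, which is acceptable for gossip-style dissemination.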

@ichorid ichorid added this to the V7.3: Gigachannels milestone Jul 3, 2019
@ichorid ichorid marked this pull request as ready for review July 3, 2019 08:52
@ichorid ichorid changed the title GigaChannel Community gossip caching READY: GigaChannel Community gossip caching Jul 3, 2019
@ichorid ichorid requested review from xoriole and qstokkink July 3, 2019 09:10
@xoriole (Contributor) left a comment

Instead of relying on randomness, the number of torrents that can be sent could be varied. For example, a test scenario could be:

  • Say max_entries that fit in one gossip is 4, the renewal period is 3, and Node 0 has 3 torrents (< max_entries) at time t.
  • At time t+1, Node 0 gossips random torrents (all 3 in this case) to Node 1. Node 1 confirms receipt of those 3 torrents.
  • At time t+2, Node 0 adds 3 more torrents, totalling 6, so at least one new random torrent is available for a fresh gossip. But since we are caching the gossip data, the next gossip is served from the cache and is equal to the first one. An equality assertion could go here.
  • At time t+3, the gossip_renewal_period has elapsed, so a new gossip should be sent. In this case, at least one torrent should differ from the previous gossips, which can be checked.

This should always pass. Just a thought.
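The scenario above can be modelled in a few lines. This is a toy sketch with assumed names and constants (MAX_ENTRIES, RENEWAL_PERIOD, Node), not the real community or test code:

```python
import random

MAX_ENTRIES = 4      # torrents per gossip message (assumed)
RENEWAL_PERIOD = 3   # gossip rounds before the cache is refreshed (assumed)

class Node:
    """Toy node that caches its random gossip sample for a few rounds."""

    def __init__(self, torrents):
        self.torrents = list(torrents)
        self._cached = None
        self._rounds = 0

    def gossip(self):
        # Refresh the cached sample only when the renewal period elapses.
        if self._cached is None or self._rounds >= RENEWAL_PERIOD:
            k = min(MAX_ENTRIES, len(self.torrents))
            self._cached = random.sample(self.torrents, k)
            self._rounds = 0
        self._rounds += 1
        return self._cached


node = Node(["t1", "t2", "t3"])      # 3 torrents (< MAX_ENTRIES) at time t
g1 = node.gossip()                   # t+1: all 3 torrents go out
node.torrents += ["t4", "t5", "t6"]  # t+2: 3 more torrents, 6 in total
g2 = node.gossip()                   # served from cache: equal to g1
g3 = node.gossip()                   # third send of the same cached sample
g4 = node.gossip()                   # renewal period reached: fresh sample
```

Because the fresh sample draws 4 of 6 torrents while the cached one held only 3, the final gossip is guaranteed to differ, so the assertion does not depend on the randomness.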

@ichorid (Contributor, Author) commented Jul 3, 2019


Good point! This suggests another solution to me: just delete the old torrents on the sender and add new ones.

@ichorid force-pushed the f_gossip_sql_caching branch from 49fbe43 to 5ade652 on July 3, 2019 14:35
This makes GigaChannel Community cache the results of the SQL query
that selects the channel contents to gossip around. The resulting
blob is saved for reuse in the Community object. The blob will be sent
to 30 peers, and then the new query will be performed. This
significantly reduces the background DB activity.
@ichorid force-pushed the f_gossip_sql_caching branch from 5ade652 to 35bd4e0 on July 3, 2019 14:51
@ichorid requested a review from qstokkink July 3, 2019 14:58
@qstokkink (Contributor) left a comment

Much appreciated 👍

@ichorid ichorid merged commit 154c24d into Tribler:devel Jul 3, 2019
@ichorid ichorid deleted the f_gossip_sql_caching branch July 3, 2019 19:57
Successfully merging this pull request may close these issues.

GigaChannel Community 100% CPU