network [NET-264]: rework node propagation logic #211
Merged
Conversation
harbu
changed the title
Propagation caching revisited
network [NET-264]: Propagation caching revisited
Sep 29, 2021
NET-264 network: remove node buffer + add event handlers for when neighbors are available
An alternative solution to NET-166 in which we get rid of the unwanted node message buffer cache thingy.
harbu
changed the title
network [NET-264]: Propagation caching revisited
network [NET-264]: rework node propagation logic
Oct 14, 2021
teogeb
approved these changes
Oct 14, 2021
Good improvement 👍
timoxley
added a commit
that referenced
this pull request
Oct 15, 2021
* main:
  fix(network): Node: rm unused constructor opts
  network: unit test for TrackerConnector (#250)
  network [NET-264]: rework node propagation logic (#211)
  broker: remove Todo type (#248)
  Update plugins.md
  Update README.md
  Update README.md
  DEVOP-102: run broker Docker build in self hosted runner (#249)
  network: refactor inbound/outboundNodes to neighbors (#247)
timoxley
pushed a commit
that referenced
this pull request
Oct 19, 2021
Motivation
While revisiting NET-264 #16 it came to our attention that the lack of buffering on the Node (logic layer) does cause some real issues. This came up while studying an integration test related to the storage node assignment publish / subscribe flow. Basically, there is no way for a user of `streamr-client` to ensure that neighbors are present when calling only `publish` (and not explicitly subscribing). In response, we came up with the idea of a more purposeful "cache", or message propagation logic (with the ability to effectively turn off "caching" later if desired). It can also be considered good service that a node attempts to propagate a message to at least a certain number of nodes before "giving up", although it does involve more code (and complexity).
Idea
We've had message buffering before, but here I extracted and encapsulated the logic into its own component, attempting to make it clearer how the propagation logic works.
The core idea is that a node will attempt to propagate a message within a certain `ttl` to a total of `minPropagationTargets` neighbors. Only upon the `ttl` expiring, `minPropagationTargets` neighbors having been sent to, or the buffer becoming full, is a node allowed to "give up" on propagating a message.
Implementation
There are three new classes, from top to bottom by usage:
- Class `Propagation` is responsible for message propagation.
- Class `PropagationTaskStore` keeps track of _active_ propagation tasks and allows querying a set of tasks by stream, for example. Class `Propagation` uses this.
- Class `FifoMapWithTtl` is a bounded map with a FIFO replacement policy and a max size. Entries also have a TTL after which they are considered stale and not returned by queries (see the sketch below).
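As a rough illustration of the last item, here is a minimal TypeScript sketch of a bounded FIFO map with per-entry TTL. It is an assumption, not the PR's actual `FifoMapWithTtl`: only the described behavior (max size, FIFO replacement, stale entries hidden from reads) comes from the text above; the field and method names are invented.

```typescript
// Rough sketch of a bounded FIFO map with per-entry TTL; not the PR's
// actual FifoMapWithTtl implementation.
class FifoMapWithTtlSketch<K, V> {
    private readonly entries = new Map<K, { value: V, expiresAt: number }>()

    constructor(
        private readonly maxSize: number,
        private readonly ttlInMs: number
    ) {}

    set(key: K, value: V): void {
        if (!this.entries.has(key) && this.entries.size >= this.maxSize) {
            // A JavaScript Map iterates keys in insertion order, so the
            // first key is the oldest entry: FIFO replacement.
            const oldestKey = this.entries.keys().next().value as K
            this.entries.delete(oldestKey)
        }
        this.entries.set(key, { value, expiresAt: Date.now() + this.ttlInMs })
    }

    get(key: K): V | undefined {
        const entry = this.entries.get(key)
        if (entry === undefined) {
            return undefined
        }
        if (Date.now() > entry.expiresAt) {
            this.entries.delete(key) // stale: treat as absent
            return undefined
        }
        return entry.value
    }
}
```

Leaning on the `Map`'s insertion-order iteration keeps the FIFO eviction trivial, and checking expiry lazily on reads avoids the need for a timer per entry.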