
Caching in message validation #527

Closed · 1 of 4 tasks
Stebalien opened this issue Jul 26, 2024 · 5 comments

Comments

@Stebalien (Member) commented Jul 26, 2024

  • Drop equivocations before validation because they cannot be useful. Unfortunately, we'd need access to equivocation state.
  • Potentially cache valid/invalid messages. Unclear how useful this is, it only helps with rebroadcast.
  • Cache Merkle-tree hashes. (Not an issue in practice.)
  • Split pubsub message IDs into "validation IDs" and "propagation IDs":
    • At the validation layer, use the message hash to deduplicate (make sure this works with rebroadcast).
    • After successful validation, identify the message by the hash of the GPBFT value.
    • Only propagate messages with unique GPBFT values.
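The validation-ID/propagation-ID split described in the last item could be sketched roughly as below. This is a hypothetical illustration, not the f3 implementation: the type and method names are invented, and `sha256` stands in for whatever hash the implementation actually uses.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// validationCache sketches the two-ID dedup scheme: raw-message hashes
// gate validation, while GPBFT-value hashes gate propagation.
type validationCache struct {
	mu        sync.Mutex
	validated map[[32]byte]struct{} // keyed by hash of the raw pubsub message
	values    map[[32]byte]struct{} // keyed by hash of the GPBFT value
}

func newValidationCache() *validationCache {
	return &validationCache{
		validated: map[[32]byte]struct{}{},
		values:    map[[32]byte]struct{}{},
	}
}

// shouldValidate reports whether a raw message still needs validation,
// marking it seen so rebroadcasts of the exact same bytes are skipped.
func (c *validationCache) shouldValidate(raw []byte) bool {
	id := sha256.Sum256(raw)
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.validated[id]; ok {
		return false
	}
	c.validated[id] = struct{}{}
	return true
}

// shouldPropagate reports whether a successfully validated message carries
// a GPBFT value not yet seen; only unique values are forwarded.
func (c *validationCache) shouldPropagate(gpbftValue []byte) bool {
	id := sha256.Sum256(gpbftValue)
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.values[id]; ok {
		return false
	}
	c.values[id] = struct{}{}
	return true
}

func main() {
	c := newValidationCache()
	fmt.Println(c.shouldValidate([]byte("msg-a")))    // true: first time seen
	fmt.Println(c.shouldValidate([]byte("msg-a")))    // false: rebroadcast, skip
	fmt.Println(c.shouldPropagate([]byte("value-1"))) // true: unique value
	fmt.Println(c.shouldPropagate([]byte("value-1"))) // false: duplicate value
}
```

Note that two distinct raw messages carrying the same GPBFT value would both be validated here but only propagated once, which is the point of the split.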
@masih (Member) commented Jul 27, 2024

> Drop equivocations before validation because they cannot be useful.

I think gpbft should do this as an initial check prior to validation.
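Such an initial check could look roughly like the sketch below: before paying for signature verification, drop a message whose sender has already sent a *different* message for the same slot. This is a hypothetical sketch under assumed names; the real check would need access to gpbft's equivocation state, as noted above.

```go
package main

import "fmt"

// msgKey identifies the slot in which a participant may speak once.
// Field names are illustrative, not gpbft's actual types.
type msgKey struct {
	instance, round uint64
	phase           uint8
	sender          string
}

// equivocationFilter remembers the first message digest seen per slot.
type equivocationFilter struct {
	firstSeen map[msgKey]string
}

func newEquivocationFilter() *equivocationFilter {
	return &equivocationFilter{firstSeen: map[msgKey]string{}}
}

// admit reports whether a message should proceed to full validation.
// A second, different digest for the same key is an equivocation and
// cannot be useful, so it is dropped without validating.
func (f *equivocationFilter) admit(k msgKey, digest string) bool {
	prev, ok := f.firstSeen[k]
	if !ok {
		f.firstSeen[k] = digest
		return true
	}
	return prev == digest // identical rebroadcasts still pass
}

func main() {
	f := newEquivocationFilter()
	k := msgKey{instance: 7, round: 0, phase: 1, sender: "peer1"}
	fmt.Println(f.admit(k, "d1")) // true: first message for this slot
	fmt.Println(f.admit(k, "d1")) // true: same digest, a rebroadcast
	fmt.Println(f.admit(k, "d2")) // false: conflicting digest, equivocation
}
```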

> Potentially cache valid/invalid messages. Unclear how useful this is, it only helps with rebroadcast.

Agreed. We should measure time spent on validation on the mainnet before actioning this bad boy.
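Measuring time spent on validation could be as simple as wrapping the validator in a timer and exporting the aggregate. The sketch below is stdlib-only and uses invented names; real code would record into a proper metrics histogram instead of a hand-rolled counter.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// validationTimer accumulates total time and call count for a validator,
// a minimal stand-in for a metrics histogram.
type validationTimer struct {
	totalNanos atomic.Int64
	calls      atomic.Int64
}

// wrap returns a validator that records how long each validation takes.
func (t *validationTimer) wrap(validate func([]byte) error) func([]byte) error {
	return func(msg []byte) error {
		start := time.Now()
		err := validate(msg)
		t.totalNanos.Add(time.Since(start).Nanoseconds())
		t.calls.Add(1)
		return err
	}
}

// mean returns the average observed validation duration.
func (t *validationTimer) mean() time.Duration {
	n := t.calls.Load()
	if n == 0 {
		return 0
	}
	return time.Duration(t.totalNanos.Load() / n)
}

func main() {
	var timer validationTimer
	validate := timer.wrap(func(msg []byte) error {
		time.Sleep(time.Millisecond) // stand-in for signature checks
		return nil
	})
	for i := 0; i < 5; i++ {
		_ = validate([]byte("msg"))
	}
	fmt.Println(timer.calls.Load(), timer.mean() >= time.Millisecond)
}
```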

@rjan90 (Contributor) commented Aug 23, 2024

> Agreed. We should measure time spent on validation on the mainnet before actioning this bad boy.

@masih Do we have some metrics to see if this is worth prioritising before mainnet launch?

@masih (Member) commented Aug 26, 2024

> Potentially cache valid/invalid messages. Unclear how useful this is, it only helps with rebroadcast.

> Agreed. We should measure time spent on validation on the mainnet before actioning this bad boy.

> Do we have some metrics to see if this is worth prioritising before mainnet launch?

Yes. This has been measured, and caching has already been implemented.

@masih (Member) commented Aug 26, 2024

@rjan90 In terms of ticket management, I recommend breaking this issue up into multiple issues captured in this repo, plus an additional issue in the libp2p pubsub repo for the last item (the f3 part would be integrating with the right hooks once the actual feature is implemented in pubsub).

@Stebalien (Member, Author) commented

Broken into:

Labels: none yet
Projects: Status: Done
Development: no branches or pull requests
3 participants