Conversation
Cool stuff!
Co-authored-by: Tomasz Drwięga <[email protected]>
Still missing logic for the Beefy Worker to change the BEEFY validator set when the GRANDPA one changes (and add the corresponding header digest).
Fixed this in
@andresilva @tomusdrw can you please take another look?
primitives/beefy/Cargo.toml
Outdated
@@ -20,7 +25,7 @@ sp-std = { version = "4.0.0", path = "../std", default-features = false }
[dev-dependencies]
hex = "0.4.3"
hex-literal = "0.3"
sp-keystore = { version = "0.12.0", path = "../keystore" }
sp-keystore = { version = "0.12.0", path = "../keystore", default-features = false }
Shouldn't be necessary to disable the `std` feature since unit tests are always built with `std` enabled.
sp-keystore = { version = "0.12.0", path = "../keystore", default-features = false }
sp-keystore = { version = "0.12.0", path = "../keystore" }
done
Sorry for taking so long to review. Overall LGTM, just minor nits.
client/beefy/src/gossip.rs
Outdated
// TODO: right now we're using an object-level rebroadcast gate that will only
// allow **a single message** being rebroadcast every `REBROADCAST_AFTER` minutes.
//
// Should we instead have a per-message/hash rebroadcast cooldown?
IMO we should not make this a closure. The way the gossip state machine currently works is that it will call into `message_allowed` to get a function for validation. When it's time to do a rebroadcast it will pass the returned function through all messages in order to check whether to (re-)propagate. If this isn't a closure then `do_rebroadcast` will have the same value for all messages and thus the comment above won't apply (i.e. we'll rebroadcast all messages since `do_rebroadcast` will stay true).
ok, that makes sense, done
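For readers following along, here is a minimal sketch of the agreed approach: the rebroadcast decision is made once, when the validation function is built, so a single rebroadcast pass re-propagates either all messages or none. Type and field names are illustrative, not the actual gossip validator code.

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

const REBROADCAST_AFTER: Duration = Duration::from_secs(60);

// Illustrative stand-in for the gossip validator discussed above.
struct Gate {
    next_rebroadcast: Mutex<Instant>,
}

impl Gate {
    /// Builds the per-message predicate handed back to the gossip engine.
    fn message_allowed(&self) -> impl FnMut(&[u8]) -> bool {
        // Decide ONCE, outside the returned closure, whether this pass is a
        // rebroadcast pass; every message checked by the closure then sees
        // the same value.
        let do_rebroadcast = {
            let mut next = self.next_rebroadcast.lock().unwrap();
            if Instant::now() >= *next {
                *next = Instant::now() + REBROADCAST_AFTER;
                true
            } else {
                false
            }
        };
        move |_message: &[u8]| {
            // ...per-message validity checks would go here...
            do_rebroadcast
        }
    }
}
```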
let msg = match VoteMessage::<NumberFor<B>, Public, Signature>::decode(&mut data) {
	Ok(vote) => vote,
	Err(_) => return true,
	Err(_) => return false,
Good catch.
client/beefy/src/worker.rs
Outdated
@@ -22,20 +22,22 @@ use codec::{Codec, Decode, Encode};
use futures::{future, FutureExt, StreamExt};
use log::{debug, error, info, log_enabled, trace, warn};
use parking_lot::Mutex;
use tokio::time::{sleep, Duration};
Could you replace this with `futures_timer::Delay`? We are currently only depending directly on tokio for the executor (and some tests).
done
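For reference, the swap is roughly this; the helper function name is hypothetical and only illustrates the API difference.

```rust
use std::time::Duration;
use futures_timer::Delay;

// Hypothetical retry delay; previously this would have been
// `tokio::time::sleep(period).await`.
async fn retry_delay(period: Duration) {
    Delay::new(period).await;
}
```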
client/beefy/Cargo.toml
Outdated
@@ -14,6 +14,7 @@ hex = "0.4.2"
log = "0.4"
parking_lot = "0.12.0"
thiserror = "1.0"
tokio = { version = "1.15", features = ["time"] }
Should be removed if we use `futures_timer` instead.
done
		return
	},
};
if log_enabled!(target: "beefy", log::Level::Debug) {
I would do it regardless. Probably `rustc` is smart enough to generate code that is similar to this.
Let's keep it explicit for now (I'm not sure rustc reasons that far ahead). This will change anyway when we add non-authority nodes support (paritytech/grandpa-bridge-gadget#407).
// Vote if there's now a new vote target.
if let Some(target_number) = self.current_vote_target() {
	self.do_vote(target_number);
Is this needed? The only reason for not having a vote target would be if `rounds` is not initialized; that will happen when we get a finality notification, and we'll also attempt to vote after.
Depending on `min_block_delta`, the vote target can be some not-yet-finalized block. In that case, `current_vote_target` is `None` (see last lines in `fn vote_target`).
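As a simplified sketch of that behaviour (this is not the real `vote_target`, just an illustration of why it can return `None` under the assumption stated above):

```rust
// With a large `min_block_delta`, the next candidate target may not be
// GRANDPA-finalized yet, in which case there is nothing to vote on.
fn vote_target(best_finalized: u64, best_beefy: u64, min_block_delta: u64) -> Option<u64> {
    let candidate = best_beefy + min_block_delta.max(1);
    if candidate > best_finalized {
        None // target not finalized yet
    } else {
        Some(candidate)
    }
}
```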
client/beefy/src/worker.rs
Outdated
/// Wait for BEEFY runtime pallet to be available.
#[cfg(not(test))]
async fn wait_for_runtime_pallet(&mut self) {
	loop {
		let at = BlockId::hash(self.best_grandpa_block_header.hash());
		if let Some(active) = self.client.runtime_api().validator_set(&at).ok().flatten() {
			if active.id() == GENESIS_AUTHORITY_SET_ID {
				// When starting from genesis, there is no session boundary digest.
				// Just initialize `rounds` to Block #1 as BEEFY mandatory block.
				self.init_session_at(active, 1u32.into());
			}
			// In all other cases, we just go without `rounds` initialized, meaning the worker
			// won't vote until it witnesses a session change.
			// Once we'll implement 'initial sync' (catch-up), the worker will be able to start
			// voting right away.
			break
		} else {
			info!(target: "beefy", "Waiting BEEFY pallet to become available...");
			sleep(Duration::from_secs(5)).await;
		}
	}
}
I don't think this will work, as `at` will never change throughout the loop. Can we instead listen to finality notifications until the runtime call works?
block_finality_notifications().take_while(|notif| {
	runtime_api().validator_set(notif.block)
	// ...
})
good catch!
I had meant to do `let at = BlockId::number(client.info().finalized_number);` but your idea is even better since we don't do any sleeping.
What I don't like about using `take_while` is that it consumes the finality stream, so we can't keep using the same stream once we see the pallet available. As a workaround, I just get a new stream that will provide new notifications from here on out (no older notifs). This is incorrect in the node catch-up scenario, but that's not implemented yet anyway and we'll tackle it when we get there.
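A rough sketch of the resulting shape, with the runtime-API check abstracted behind a `pallet_ready` callback; the names and generic parameters are assumptions for illustration, not the actual worker code.

```rust
use futures::{Stream, StreamExt};

// Drive the finality stream instead of polling a fixed block in a sleep loop;
// stop once the BEEFY pallet reports a validator set for a finalized block.
async fn wait_for_pallet<H, S>(finality: &mut S, pallet_ready: impl Fn(&H) -> bool)
where
    S: Stream<Item = H> + Unpin,
{
    while let Some(header) = finality.next().await {
        if pallet_ready(&header) {
            break;
        }
    }
}
```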
let (validators, validator_set_id) = if let Some(rounds) = &self.rounds {
	if !rounds.should_self_vote(&(payload.clone(), target_number)) {
		debug!(target: "beefy", "🥩 Don't double vote for block number: {:?}", target_number);
		return
	}
	(rounds.validators_for(target_number), rounds.validator_set_id_for(target_number))
} else {
	debug!(target: "beefy", "🥩 Missing validator set - can't vote for: {:?}", target_hash);
	return
};
The fact that this implicitly votes with different validator sets is a bit confusing IMO; it also ties in with the logic of `vote_target`. But let's keep it as is for now and refactor in the future once we block until we vote for mandatory blocks.
#[cfg(test)]
// behavior modifiers used in tests
test_res: tests::TestModifiers,
I'm not really fond of mixing test-specific code with the general implementation. In this case it seems that we should instead allow changing the provider of validators (so we can mock it in tests), and also a provider for the payload (so we can create random mmr roots, potentially corrupted). I'm OK with fixing in a follow-up PR.
Opened paritytech/grandpa-bridge-gadget#406 for it
@andresilva addressed all comments, ptal
@@ -29,20 +29,25 @@ use sp_runtime::traits::MaybeDisplay;

#[derive(Default)]
struct RoundTracker {
	self_vote: bool,
Opened paritytech/grandpa-bridge-gadget#405 to not lose this.
client/beefy/src/worker.rs
Outdated
let public_keys = self.key_store.public_keys()?;
let store: BTreeSet<&Public> = public_keys.iter().collect();
let store: BTreeSet<&AuthorityId> = public_keys.iter().collect();

let missing: Vec<_> = store.difference(&active).cloned().collect();
thanks for the explanation, made the code and docs more clear based on it.
I have a couple of dumb questions as a BEEFY newcomer. Overall everything seems fine. Thanks for the great job, Adrian! :)
client/beefy/src/gossip.rs
Outdated
/// This can be called once round is complete so we stop gossiping for it.
pub(crate) fn conclude_round(&self, round: NumberFor<B>) {
	debug!(target: "beefy", "🥩 About to drop gossip round #{}", round);
	self.known_votes.write().remove(round);
Are all rounds guaranteed to be "concluded"? There's a test below, `note_and_drop_round_works`. It "notes" rounds 1, 3, 7 and 10, and at some point 7 is concluded (this method is called). As a result, the btree map in `known_votes` will contain entries for 1, 3 and 10. If 1 and 3 are never concluded, then the map will hold those entries forever => "memory leak". Please correct me if I'm wrong.
Good catch! I was thinking that since the round-set is completely recycled on each new session, there's no problem with leftover map entries (because they are dropped at session end).
But I guess the code would be cleaner if we explicitly removed all rounds older than the concluded one.
Also, the `gossip_validator` rounds do not get recycled, so these are in fact "leaked mem". Will fix.
done
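A minimal sketch of the fix, assuming `known_votes` wraps a `BTreeMap` keyed by round number; field and type names are illustrative, not the actual gossip code.

```rust
use std::collections::BTreeMap;

struct KnownVotes {
    // round number -> hashes of votes already seen for that round
    live: BTreeMap<u64, Vec<[u8; 32]>>,
}

impl KnownVotes {
    /// Drop the concluded round *and* everything older than it, so stale
    /// entries can no longer accumulate.
    fn conclude(&mut self, round: u64) {
        // `split_off` keeps keys >= round + 1 in the returned map, i.e. only
        // rounds strictly newer than the concluded one.
        self.live = self.live.split_off(&(round + 1));
    }
}
```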
client/beefy/src/gossip.rs
Outdated
/// Create new round votes set if not already present.
pub fn insert(&mut self, round: NumberFor<B>) {
	if !self.live.contains_key(&round) {
I'd replace with `self.live.entry(number).or_default();` :) But this version is fine too :)
done
client/beefy/src/round.rs
Outdated
.validators()
.iter()
.map(|authority_id| {
	signatures.iter().find_map(|(id, sig)| {
I was thinking of suggesting changing the type of `RoundTracker::votes` to some map to avoid this loop - iiuc the number of BEEFY validators = number of GRANDPA validators, which is e.g. ~1K on Kusama, and this loop is `N^2`. But now I'm unsure if that's correct.
So I have a question - what if the same validator sends two different signatures? I don't see any checks in `Rounds::add_vote` - is it handled somewhere else? What I'm worried about is that `RoundTracker::votes` will have two different entries for the same validator, and then in `try_conclude` we are just checking whether `votes.len() >= threshold`. Is it possible? Sorry for possibly being dumb here - I'm looking at this code for the first time :)
That's a valid question. Vote validation happens at the gossip level here, so validator public key, beefy payload and validator signature are tied together and validated before accepting the vote. Therefore, the `Rounds` code doesn't need to worry about invalid votes (like the same validator submitting multiple signature votes).
I will add doc comments explaining this better.
Good catch on the `O(n^2)` loop, changed `RoundTracker::votes` to a map to get that down to `O(n)`.
done
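Roughly, the map-based version looks like this; the types are simplified stand-ins, not the actual `RoundTracker` definitions.

```rust
use std::collections::BTreeMap;

type AuthorityId = String; // stand-in for the real BEEFY authority id type
type Signature = Vec<u8>;  // stand-in for the real signature type

#[derive(Default)]
struct RoundTracker {
    votes: BTreeMap<AuthorityId, Signature>,
}

impl RoundTracker {
    /// Keying votes by validator makes the duplicate-vote case explicit and
    /// avoids scanning all existing votes on insertion.
    fn add_vote(&mut self, who: AuthorityId, sig: Signature) -> bool {
        self.votes.insert(who, sig).is_none()
    }

    /// Signatures ordered to match the validator set: one lookup per
    /// validator instead of a nested scan over all votes.
    fn ordered_signatures(&self, validators: &[AuthorityId]) -> Vec<Option<Signature>> {
        validators.iter().map(|id| self.votes.get(id).cloned()).collect()
    }
}
```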
	r.validator_set().clone()
} else {
	// no previous rounds present use new validator set instead (genesis case)
	active.clone()
Please check my thoughts:
- we have some chain that has been started a year ago, so we're definitely not at genesis;
- an authority node is starting and `BeefyWorker::new()` is called, which sets `rounds` to `None`;
- the worker receives a finality notification - i.e. `handle_finality_notification` is called;
- the finalized header starts a new session - i.e. `handle_finality_notification` calls `init_session_at`;
- since `rounds` is still set to `None`, we're falling here and the prev validator set is set to the same (new) set.

I don't know if I'm right and, if I am, whether it may lead to some issues or not. So just asking to recheck it. Thanks :)
Your reasoning is correct, but we plan on addressing the initial sync part after the Lean Beefy milestone - issue for it here: paritytech/grandpa-bridge-gadget#112.
Within the Lean Beefy simplifying assumptions, this happens when starting not at genesis. The end result is that this validator will not be able to vote for the current session (it doesn't have the right validator set), but will be able to vote for future ones.
Simplified BEEFY worker logic based on the invariant that GRANDPA will always finalize 1st block of each new session, meaning BEEFY worker is guaranteed to receive finality notification for the BEEFY mandatory blocks.

Under these conditions the current design is as follows:
- session changes are detected based on BEEFY Digest present in BEEFY mandatory blocks,
- on each new session new `Rounds` of voting is created, with old rounds being dropped (for gossip rounds, last 3 are still alive so votes are still being gossiped),
- after processing finality for a block, the worker votes if a new voting target has become available as a result of said block finality processing,
- incoming votes as well as self-created votes are processed and signed commitments are created for completed BEEFY voting rounds,
- the worker votes if a new voting target becomes available once a round successfully completes.

On worker startup, the current validator set is retrieved from the BEEFY pallet. If it is the genesis validator set, worker starts voting right away considering Block #1 as session start. Otherwise (not genesis), the worker will vote starting with mandatory block of the next session. Later on when we add the BEEFY initial-sync (catch-up) logic, the worker will sync all past mandatory blocks Signed Commitments and will be able to start voting right away.

BEEFY mandatory block is the block with header containing the BEEFY `AuthoritiesChange` Digest, this block is guaranteed to be finalized by GRANDPA. This session-boundary block is signed by the ending-session's validator set. Next blocks will be signed by the new session's validator set. This behavior is consistent with what GRANDPA does as well.

Also drop the limit N on active gossip rounds. In an adversarial network, a bad actor could create and gossip N invalid votes with round numbers larger than the current correct round number. This would lead to votes for correct rounds to no longer be gossiped.

Add unit-tests for all components, including full voter consensus tests.

Signed-off-by: Adrian Catangiu <[email protected]>
Co-authored-by: Tomasz Drwięga <[email protected]>
Co-authored-by: David Salami <Wizdave97>
Description
This PR implements Lean BEEFY as described here.
Simplified BEEFY worker logic based on the invariant that GRANDPA will always finalize 1st block of each new session, meaning BEEFY worker is guaranteed to receive finality notification for the BEEFY mandatory blocks.
Under these conditions the current design is as follows:
- session changes are detected based on BEEFY Digest present in BEEFY mandatory blocks,
- on each new session new `Rounds` of voting is created, with old rounds being dropped (for gossip rounds, last 3 are still alive so votes are still being gossiped),
- after processing finality for a block, the worker votes if a new voting target has become available as a result of said block finality processing,
- incoming votes as well as self-created votes are processed and signed commitments are created for completed BEEFY voting rounds,
- the worker votes if a new voting target becomes available once a round successfully completes.

On worker startup, the current validator set is retrieved from the BEEFY pallet. If it is the genesis validator set, the worker starts voting right away considering Block #1 as session start.
Otherwise (not genesis), the worker will vote starting with the mandatory block of the next session.
Later on when we add the BEEFY initial-sync (catch-up) logic, the worker will sync all past mandatory blocks Signed Commitments and will be able to start voting right away.
Fixes
Fixes paritytech/grandpa-bridge-gadget#16
Fixes paritytech/grandpa-bridge-gadget#159
Fixes paritytech/grandpa-bridge-gadget#162
Fixes paritytech/grandpa-bridge-gadget#182
Fixes paritytech/grandpa-bridge-gadget#254
Fixes paritytech/grandpa-bridge-gadget#256