Filter votes from disabled validators in BackedCandidates in process_inherent_data (#2889)

Backport of #1863 to master

Extend candidate sanitization in paras_inherent by removing backing votes
from disabled validators. See #1592 for more details.

This change is related to the disabling strategy implementation
(#2226).

---------

Co-authored-by: ordian <[email protected]>
Co-authored-by: Maciej <[email protected]>
4 people authored Jan 18, 2024
1 parent f574868 commit f895409
Showing 21 changed files with 506 additions and 64 deletions.
32 changes: 32 additions & 0 deletions polkadot/roadmap/implementers-guide/src/runtime/parainherent.md
@@ -60,3 +60,35 @@ processing it, so the processed inherent data is simply dropped.
This also means that the `enter` function keeps data around for no good reason. This seems acceptable though, as the
size of a block is rather limited. Nevertheless, if we ever wanted to optimize this, we could easily implement an
inherent collector that has two implementations, where one clones and stores the data and the other just passes it on.

## Sanitization

`ParasInherent` sanitizes the input data in its `create_inherent` entry point, while the `enter` entry point expects
already sanitized input data. If unsanitized data is provided, the module generates an error.

Disputes are included in the block with priority, for security reasons. It's important to include as many dispute
votes onchain as possible so that disputes conclude faster and the offenders are punished. However, if there are too
many disputes to include in a block, the dispute set is trimmed so that it respects the maximum block weight.

Dispute data is first deduplicated and sorted by block number (older first) and dispute location (local then remote).
Concluded and ancient disputes (those initiated before the post-conclusion acceptance period) are filtered out. Votes
with invalid signatures or from unknown validators (not found in the active set for the current session) are also
filtered out.
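
The ordering can be pictured with a small standalone sketch (illustrative only; `DisputeEntry` and `DisputeLocation`
are simplified stand-ins, not the runtime types):

```rust
// Simplified model of the ordering described above: older disputes first,
// and for equal block numbers, local disputes before remote ones.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum DisputeLocation {
    Local,
    Remote,
}

#[derive(Clone, Copy)]
struct DisputeEntry {
    block_number: u32,
    location: DisputeLocation,
}

fn sort_disputes(disputes: &mut [DisputeEntry]) {
    disputes.sort_by_key(|d| (d.block_number, d.location));
}
```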

All dispute statements are included in the order described in the previous paragraph until the available block weight
is exhausted. After the dispute data is included, all remaining weight is filled with availability bitfields and
candidates. Bitfields are included with priority, then candidates containing code updates, and finally any other
backed candidates. If there is not enough weight for all backed candidates, they are trimmed by random selection.
Disputes are processed in three separate functions: `deduplicate_and_sort_dispute_data`, `filter_dispute_data` and
`limit_and_sanitize_disputes`.
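
The weight-limiting step follows a simple pattern, sketched below over hypothetical `(item, weight)` pairs rather
than the runtime's `Weight` type:

```rust
// Keep items in their priority order until the weight budget is exhausted.
// The real code works on dispute statement sets, bitfields and candidates
// with proper `Weight` accounting; this only shows the general pattern.
fn fill_up_to_weight<T>(items: Vec<(T, u64)>, max_weight: u64) -> Vec<T> {
    let mut used: u64 = 0;
    let mut included = Vec::new();
    for (item, weight) in items {
        if used.saturating_add(weight) > max_weight {
            break;
        }
        used += weight;
        included.push(item);
    }
    included
}
```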

Availability bitfields are also sanitized by dropping malformed ones, ones containing disputed cores, and ones with
bad signatures. Refer to the `sanitize_bitfields` function for implementation details.
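
A minimal sketch of the checks listed above (hypothetical `SketchBitfield` type and a stubbed signature check; the
real logic lives in `sanitize_bitfields` and may differ in detail):

```rust
use std::collections::BTreeSet;

// Hypothetical stand-in for a signed availability bitfield.
struct SketchBitfield {
    bits: Vec<bool>,
    signature_valid: bool, // stands in for real signature verification
}

fn sanitize_bitfields_sketch(
    bitfields: Vec<SketchBitfield>,
    expected_bits: usize,
    disputed_cores: &BTreeSet<usize>,
) -> Vec<SketchBitfield> {
    bitfields
        .into_iter()
        // drop malformed bitfields (unexpected length)
        .filter(|b| b.bits.len() == expected_bits)
        // drop bitfields that claim availability for a disputed core
        .filter(|b| !b.bits.iter().enumerate().any(|(core, set)| *set && disputed_cores.contains(&core)))
        // drop bitfields with bad signatures
        .filter(|b| b.signature_valid)
        .collect()
}
```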

Backed candidate sanitization removes malformed candidates, candidates with concluded-invalid disputes against them,
and candidates produced by unassigned cores. Furthermore, any backing votes from disabled validators are dropped from
a candidate. This is part of the validator disabling strategy. After filtering the statements from disabled
validators, a backed candidate may end up with fewer votes than `minimum_backing_votes` (a parameter from
`HostConfiguration`). In this case the whole candidate is dropped, as it would otherwise be rejected by
`process_candidates` from the inclusion pallet. All checks related to backed candidates are implemented in
`sanitize_backed_candidates` and `filter_backed_statements_from_disabled_validators`.
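
The post-filtering viability check can be sketched as follows (illustrative; the real check uses
`effective_minimum_backing_votes` from `primitives`, assumed here to be the configured minimum capped at the group
size):

```rust
// A candidate is kept only if enough backing votes survive the filtering.
fn candidate_still_viable(
    votes_after_filtering: usize,
    backing_group_size: usize,
    minimum_backing_votes: u32, // from `HostConfiguration`
) -> bool {
    // Assumption: the effective minimum is the configured value capped at the group size.
    let effective_minimum = (minimum_backing_votes as usize).min(backing_group_size);
    votes_after_filtering >= effective_minimum
}
```
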
4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/assigned_slots/mod.rs
@@ -746,7 +746,9 @@ mod tests {
    type AssignCoretime = ();
}

impl parachains_shared::Config for Test {}
impl parachains_shared::Config for Test {
    type DisabledValidators = ();
}

parameter_types! {
    pub const LeasePeriod: BlockNumber = 3;
4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/integration_tests.rs
@@ -197,7 +197,9 @@ impl configuration::Config for Test {
    type WeightInfo = configuration::TestWeightInfo;
}

impl shared::Config for Test {}
impl shared::Config for Test {
    type DisabledValidators = ();
}

impl origin::Config for Test {}

4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/paras_registrar/mod.rs
@@ -799,7 +799,9 @@ mod tests {
    type MaxFreezes = ConstU32<1>;
}

impl shared::Config for Test {}
impl shared::Config for Test {
    type DisabledValidators = ();
}

impl origin::Config for Test {}

27 changes: 26 additions & 1 deletion polkadot/runtime/parachains/src/mock.rs
@@ -194,7 +194,22 @@ impl crate::configuration::Config for Test {
    type WeightInfo = crate::configuration::TestWeightInfo;
}

impl crate::shared::Config for Test {}
pub struct MockDisabledValidators {}
impl frame_support::traits::DisabledValidators for MockDisabledValidators {
    /// Returns true if the given validator is disabled.
    fn is_disabled(index: u32) -> bool {
        disabled_validators().iter().any(|v| *v == index)
    }

    /// Returns a hardcoded list (`DISABLED_VALIDATORS`) of disabled validators
    fn disabled_validators() -> Vec<u32> {
        disabled_validators()
    }
}

impl crate::shared::Config for Test {
    type DisabledValidators = MockDisabledValidators;
}

impl origin::Config for Test {}

@@ -564,6 +579,8 @@ thread_local! {

    pub static AVAILABILITY_REWARDS: RefCell<HashMap<ValidatorIndex, usize>>
        = RefCell::new(HashMap::new());

    pub static DISABLED_VALIDATORS: RefCell<Vec<u32>> = RefCell::new(vec![]);
}

pub fn backing_rewards() -> HashMap<ValidatorIndex, usize> {
@@ -574,6 +591,10 @@ pub fn availability_rewards() -> HashMap<ValidatorIndex, usize> {
    AVAILABILITY_REWARDS.with(|r| r.borrow().clone())
}

pub fn disabled_validators() -> Vec<u32> {
    DISABLED_VALIDATORS.with(|r| r.borrow().clone())
}

parameter_types! {
    pub static Processed: Vec<(ParaId, UpwardMessage)> = vec![];
}
@@ -713,3 +734,7 @@ pub(crate) fn deregister_parachain(id: ParaId) {
pub(crate) fn try_deregister_parachain(id: ParaId) -> crate::DispatchResult {
    frame_support::storage::transactional::with_storage_layer(|| Paras::schedule_para_cleanup(id))
}

pub(crate) fn set_disabled_validators(disabled: Vec<u32>) {
    DISABLED_VALIDATORS.with(|d| *d.borrow_mut() = disabled)
}
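
For illustration, a test in this module could use the new helper like this (hypothetical snippet, not part of the
diff; it relies only on the items shown above):

```rust
// Mark validators 0 and 2 as disabled for the current test thread; the mock
// then reports them as disabled to the shared pallet.
set_disabled_validators(vec![0, 2]);
assert!(<MockDisabledValidators as frame_support::traits::DisabledValidators>::is_disabled(0));
assert!(!<MockDisabledValidators as frame_support::traits::DisabledValidators>::is_disabled(1));
```
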
181 changes: 160 additions & 21 deletions polkadot/runtime/parachains/src/paras_inherent/mod.rs
@@ -30,7 +30,8 @@ use crate::{
    metrics::METRICS,
    paras,
    scheduler::{self, FreedReason},
    shared, ParaId,
    shared::{self, AllowedRelayParentsTracker},
    ParaId,
};
use bitvec::prelude::BitVec;
use frame_support::{
@@ -42,8 +43,8 @@ use frame_support::{
use frame_system::pallet_prelude::*;
use pallet_babe::{self, ParentBlockRandomness};
use primitives::{
    BackedCandidate, CandidateHash, CandidateReceipt, CheckedDisputeStatementSet,
    CheckedMultiDisputeStatementSet, CoreIndex, DisputeStatementSet,
    effective_minimum_backing_votes, BackedCandidate, CandidateHash, CandidateReceipt,
    CheckedDisputeStatementSet, CheckedMultiDisputeStatementSet, CoreIndex, DisputeStatementSet,
    InherentData as ParachainsInherentData, MultiDisputeStatementSet, ScrapedOnChainVotes,
    SessionIndex, SignedAvailabilityBitfields, SigningContext, UncheckedSignedAvailabilityBitfield,
    UncheckedSignedAvailabilityBitfields, ValidatorId, ValidatorIndex, ValidityAttestation,
@@ -142,6 +143,8 @@ pub mod pallet {
        DisputeStatementsUnsortedOrDuplicates,
        /// A dispute statement was invalid.
        DisputeInvalid,
        /// A candidate was backed by a disabled validator
        BackedByDisabled,
    }

/// Whether the paras inherent was included within this block.
@@ -378,6 +381,7 @@ impl<T: Config> Pallet<T> {
        let bitfields_weight = signed_bitfields_weight::<T>(&bitfields);
        let disputes_weight = multi_dispute_statement_sets_weight::<T>(&disputes);

        // Weight before filtering/sanitization
        let all_weight_before = candidates_weight + bitfields_weight + disputes_weight;

        METRICS.on_before_filter(all_weight_before.ref_time());
@@ -587,17 +591,19 @@ impl<T: Config> Pallet<T> {

METRICS.on_candidates_processed_total(backed_candidates.len() as u64);

let backed_candidates = sanitize_backed_candidates::<T, _>(
backed_candidates,
|candidate_idx: usize,
backed_candidate: &BackedCandidate<<T as frame_system::Config>::Hash>|
-> bool {
let para_id = backed_candidate.descriptor().para_id;
let prev_context = <paras::Pallet<T>>::para_most_recent_context(para_id);
let check_ctx = CandidateCheckContext::<T>::new(prev_context);

// never include a concluded-invalid candidate
current_concluded_invalid_disputes.contains(&backed_candidate.hash()) ||
let SanitizedBackedCandidates { backed_candidates, votes_from_disabled_were_dropped } =
sanitize_backed_candidates::<T, _>(
backed_candidates,
&allowed_relay_parents,
|candidate_idx: usize,
backed_candidate: &BackedCandidate<<T as frame_system::Config>::Hash>|
-> bool {
let para_id = backed_candidate.descriptor().para_id;
let prev_context = <paras::Pallet<T>>::para_most_recent_context(para_id);
let check_ctx = CandidateCheckContext::<T>::new(prev_context);

// never include a concluded-invalid candidate
current_concluded_invalid_disputes.contains(&backed_candidate.hash()) ||
// Instead of checking the candidates with code upgrades twice
// move the checking up here and skip it in the training wheels fallback.
// That way we avoid possible duplicate checks while assuring all
@@ -607,12 +613,19 @@ impl<T: Config> Pallet<T> {
check_ctx
.verify_backed_candidate(&allowed_relay_parents, candidate_idx, backed_candidate)
.is_err()
},
&scheduled,
);
},
&scheduled,
);

METRICS.on_candidates_sanitized(backed_candidates.len() as u64);

        // In `Enter` context (invoked during execution) there should be no backing votes from
        // disabled validators because they should have been filtered out during inherent data
        // preparation (`ProvideInherent` context). Abort in such cases.
        if context == ProcessInherentDataContext::Enter {
            ensure!(!votes_from_disabled_were_dropped, Error::<T>::BackedByDisabled);
        }

// Process backed candidates according to scheduled cores.
let inclusion::ProcessedCandidates::<<HeaderFor<T> as HeaderT>::Hash> {
core_indices: occupied,
@@ -900,7 +913,19 @@ pub(crate) fn sanitize_bitfields<T: crate::inclusion::Config>(
bitfields
}

/// Filter out any candidates that have a concluded invalid dispute.
// Result from `sanitize_backed_candidates`
#[derive(Debug, PartialEq)]
struct SanitizedBackedCandidates<Hash> {
// Sanitized backed candidates. The `Vec` is sorted according to the occupied core index.
backed_candidates: Vec<BackedCandidate<Hash>>,
// Set to true if any votes from disabled validators were dropped from the input.
votes_from_disabled_were_dropped: bool,
}

/// Filter out:
/// 1. any candidates that have a concluded invalid dispute
/// 2. all backing votes from disabled validators
/// 3. any candidates that end up with less than `effective_minimum_backing_votes` backing votes
///
/// `scheduled` follows the same naming scheme as provided in the
/// guide: Currently `free` but might become `occupied`.
@@ -910,15 +935,17 @@ pub(crate) fn sanitize_bitfields<T: crate::inclusion::Config>(
/// `candidate_has_concluded_invalid_dispute` must return `true` if the candidate
/// is disputed, false otherwise. The passed `usize` is the candidate index.
///
/// The returned `Vec` is sorted according to the occupied core index.
/// Returns struct `SanitizedBackedCandidates` where `backed_candidates` are sorted according to the
/// occupied core index.
fn sanitize_backed_candidates<
    T: crate::inclusion::Config,
    F: FnMut(usize, &BackedCandidate<T::Hash>) -> bool,
>(
    mut backed_candidates: Vec<BackedCandidate<T::Hash>>,
    allowed_relay_parents: &AllowedRelayParentsTracker<T::Hash, BlockNumberFor<T>>,
    mut candidate_has_concluded_invalid_dispute_or_is_invalid: F,
    scheduled: &BTreeMap<ParaId, CoreIndex>,
) -> Vec<BackedCandidate<T::Hash>> {
) -> SanitizedBackedCandidates<T::Hash> {
    // Remove any candidates that were concluded invalid.
    // This does not assume sorting.
    backed_candidates.indexed_retain(move |candidate_idx, backed_candidate| {
@@ -936,6 +963,13 @@ fn sanitize_backed_candidates<
        scheduled.get(&desc.para_id).is_some()
    });

    // Filter out backing statements from disabled validators
    let dropped_disabled = filter_backed_statements_from_disabled_validators::<T>(
        &mut backed_candidates,
        &allowed_relay_parents,
        scheduled,
    );

    // Sort the `Vec` last, once there is a guarantee that these
    // `BackedCandidates` references the expected relay chain parent,
    // but more importantly are scheduled for a free core.
@@ -946,7 +980,10 @@ fn sanitize_backed_candidates<
        scheduled[&x.descriptor().para_id].cmp(&scheduled[&y.descriptor().para_id])
    });

    backed_candidates
    SanitizedBackedCandidates {
        backed_candidates,
        votes_from_disabled_were_dropped: dropped_disabled,
    }
}

/// Derive entropy from babe provided per block randomness.
@@ -1029,3 +1066,105 @@ fn limit_and_sanitize_disputes<
        (checked, checked_disputes_weight)
    }
}

// Filters statements from disabled validators in `BackedCandidate`, non-scheduled candidates and
// few more sanity checks. Returns `true` if at least one statement is removed and `false`
// otherwise.
fn filter_backed_statements_from_disabled_validators<T: shared::Config + scheduler::Config>(
    backed_candidates: &mut Vec<BackedCandidate<<T as frame_system::Config>::Hash>>,
    allowed_relay_parents: &AllowedRelayParentsTracker<T::Hash, BlockNumberFor<T>>,
    scheduled: &BTreeMap<ParaId, CoreIndex>,
) -> bool {
    let disabled_validators =
        BTreeSet::<_>::from_iter(shared::Pallet::<T>::disabled_validators().into_iter());

    if disabled_validators.is_empty() {
        // No disabled validators - nothing to do
        return false
    }

    let backed_len_before = backed_candidates.len();

    // Flag which will be returned. Set to `true` if at least one vote is filtered.
    let mut filtered = false;

    let minimum_backing_votes = configuration::Pallet::<T>::config().minimum_backing_votes;

    // Process all backed candidates. `validator_indices` in `BackedCandidates` are indices within
    // the validator group assigned to the parachain. To obtain this group we need:
    // 1. Core index assigned to the parachain which has produced the candidate
    // 2. The relay chain block number of the candidate
    backed_candidates.retain_mut(|bc| {
        // Get `core_idx` assigned to the `para_id` of the candidate
        let core_idx = match scheduled.get(&bc.descriptor().para_id) {
            Some(core_idx) => *core_idx,
            None => {
                log::debug!(target: LOG_TARGET, "Can't get core idx of a backed candidate for para id {:?}. Dropping the candidate.", bc.descriptor().para_id);
                return false
            }
        };

        // Get relay parent block number of the candidate. We need this to get the group index assigned to this core at this block number
        let relay_parent_block_number = match allowed_relay_parents
            .acquire_info(bc.descriptor().relay_parent, None) {
            Some((_, block_num)) => block_num,
            None => {
                log::debug!(target: LOG_TARGET, "Relay parent {:?} for candidate is not in the allowed relay parents. Dropping the candidate.", bc.descriptor().relay_parent);
                return false
            }
        };

        // Get the group index for the core
        let group_idx = match <scheduler::Pallet<T>>::group_assigned_to_core(
            core_idx,
            relay_parent_block_number + One::one(),
        ) {
            Some(group_idx) => group_idx,
            None => {
                log::debug!(target: LOG_TARGET, "Can't get the group index for core idx {:?}. Dropping the candidate.", core_idx);
                return false
            },
        };

        // And finally get the validator group for this group index
        let validator_group = match <scheduler::Pallet<T>>::group_validators(group_idx) {
            Some(validator_group) => validator_group,
            None => {
                log::debug!(target: LOG_TARGET, "Can't get the validators from group {:?}. Dropping the candidate.", group_idx);
                return false
            }
        };

        // Bitmask with the disabled indices within the validator group
        let disabled_indices = BitVec::<u8, bitvec::order::Lsb0>::from_iter(validator_group.iter().map(|idx| disabled_validators.contains(idx)));
        // The indices of statements from disabled validators in `BackedCandidate`. We have to drop these.
        let indices_to_drop = disabled_indices.clone() & &bc.validator_indices;
        // Apply the bitmask to drop the disabled validator from `validator_indices`
        bc.validator_indices &= !disabled_indices;
        // Remove the corresponding votes from `validity_votes`
        for idx in indices_to_drop.iter_ones().rev() {
            bc.validity_votes.remove(idx);
        }

        // If at least one statement was dropped we need to return `true`
        if indices_to_drop.count_ones() > 0 {
            filtered = true;
        }

        // By filtering votes we might render the candidate invalid and cause a failure in
        // [`process_candidates`]. To avoid this we have to perform a sanity check here. If there
        // are not enough backing votes after filtering we will remove the whole candidate.
        if bc.validity_votes.len() < effective_minimum_backing_votes(
            validator_group.len(),
            minimum_backing_votes
        ) {
            return false
        }

        true
    });

    // Also return `true` if a whole candidate was dropped from the set
    filtered || backed_len_before != backed_candidates.len()
}
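
The bitmask manipulation above can be illustrated with a standalone example over hypothetical data (same `bitvec`
operations, plain strings standing in for real validity votes):

```rust
use bitvec::prelude::*;

fn main() {
    // Group of 4 validators; the candidate carries votes from all of them.
    let mut validator_indices = bitvec![u8, Lsb0; 1, 1, 1, 1];
    let mut validity_votes = vec!["vote_0", "vote_1", "vote_2", "vote_3"];

    // Validators at group positions 1 and 3 are disabled.
    let disabled_indices = BitVec::<u8, Lsb0>::from_iter([false, true, false, true]);

    // Which of the candidate's votes come from disabled validators?
    let indices_to_drop = disabled_indices.clone() & &validator_indices;
    // Clear those bits from the backing bitfield ...
    validator_indices &= !disabled_indices;
    // ... and drop the corresponding votes, highest index first so that the
    // remaining indices stay valid while removing.
    for idx in indices_to_drop.iter_ones().rev() {
        validity_votes.remove(idx);
    }

    assert_eq!(validity_votes, vec!["vote_0", "vote_2"]);
    assert_eq!(validator_indices, bitvec![u8, Lsb0; 1, 0, 1, 0]);
}
```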