
Better approval voting parameters #640

Open · 1 of 3 tasks · Tracked by #26 ...

eskimor opened this issue May 9, 2023 · 12 comments

@eskimor (Member) commented May 9, 2023

The existing relayVrfModuloSamples value doesn't make much sense.

Current approval related values on Polkadot:

noShowSlots: 2
nDelayTranches: 89
zerothDelayTrancheWidth: 0
neededApprovals: 30
relayVrfModuloSamples: 40

relayVrfModuloSamples should be far lower. With 200 validators and 40 parachains it should be around 6.

Existing parameters don't make much sense

What a good number for relayVrfModuloSamples is depends on the number of validators and parachains. This parameter would need to be adjusted on any change in those numbers. It would be better to make relayVrfModuloSamples a function of those parameters, with a separate parameter to fine-tune the relationship.
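A minimal sketch of that derivation, assuming the tuning knob is the expected number of tranche-zero checkers per core (the function name and the rounding choice are illustrative, not an actual config API):

fn derive_relay_vrf_modulo_samples(
    expected_tranche_zero: u32, // tuning knob: target tranche-zero checkers per core
    num_cores: u32,
    num_validators: u32,
) -> u32 {
    // Each validator samples this many cores, so a core expects
    // samples * num_validators / num_cores tranche-zero checkers.
    // Solving for samples and rounding to the nearest integer:
    (expected_tranche_zero * num_cores + num_validators / 2) / num_validators
}

// e.g. derive_relay_vrf_modulo_samples(30, 40, 200) == 6, matching the figure above.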

Further tuning

For our threat model it should be sufficient to have around 30 approvals in expectation. Therefore it might be sensible to reduce neededApprovals a bit, to avoid higher tranches kicking in too often to cover for variance. This might help reduce load on approval voting. Other strategies, like increasing the tick value, should be tried first though.

Tasks

  • Pass motion on Kusama and Polkadot to reduce relayVrfModuloSamples to something more sensible.
  • Make sure whenever we are increasing maxValidators or the number of cores we also adjust relayVrfModuloSamples accordingly.
  • Make the second task obsolete by replacing relayVrfModuloSamples config with a tuning knob and derive the actual value based on that and the number of validators and parachains/cores.

@Sophia-Gold
@burdges - expected variance would be interesting.
@sandreim

@sandreim (Contributor) commented:

IMO increasing the tick from 500ms to 1s would help reduce the number of higher tranches kicking in, because of the higher ToF of approval-distribution messages. However, as we scale up, the ToF increases as well, so we would need to bump it even higher, which would eventually increase the approval checking lag. We should implement the other improvements needed to process these messages faster and lower the ToF.

@burdges commented May 10, 2023

noShowSlots = 2 means 12 tranches before you no-show. We should decide whether to adjust this when we go to 1-second tranches.

We have roughly nDelayTranches = delay_tranche_width * num_para_validators - E where E is the expected total tranche zero assignments per core. We want delay_tranche_width = 2.25 or at least strictly between 2 and 3. As delay_tranche_width is quite a sensitive parameter, I'd suggest nDelayTranches be computed from delay_tranche_width, num_cores, num_para_validators, and relayVrfModuloSamples.

We want "zerothDelayTrancheWidth = -1" in the sense that we should modify the code to produce no delay tranche assignments in tranche zero.

We should rerun the simulator more carefully and finally do a closed-form analysis, but neededApprovals: 30 sounds fine for now.

We'll adopt sampling without replacement for the new relay vrf tranche zero scheme, roughly like

fn tranche_zero_assignments(
    relayVrfModuloSamples: u16, // now: number of tranche-zero assignments per validator
    num_cores: u16,
    vrf_io: schnorrkel::vrf::VRFInOut,
) -> Vec<CoreIndex> {
    assert!(relayVrfModuloSamples < num_cores);
    let mut rng = vrf_io.make_rng::<rand_chacha::ChaChaRng>(b"relay_vrf:tranche_zero");
    // Partially shuffle the core list and take the first `relayVrfModuloSamples`
    // entries as this validator's tranche-zero assignments.
    // (CoreIndex here is the Polkadot primitives tuple struct over u32.)
    let mut cores: Vec<CoreIndex> = (0..num_cores as u32).map(CoreIndex).collect();
    let (chosen, _rest) = rand::seq::SliceRandom::partial_shuffle(
        &mut cores[..],
        &mut rng,
        relayVrfModuloSamples as usize,
    );
    chosen.to_vec()
}

In this way, relayVrfModuloSamples has the semantics of num_tranche_zero_assignments_per_validator.

We'd now be sampling without replacement, so we're seeing hypergeometric distributions from the validator's perspective. A validator has { num_cores \choose relayVrfModuloSamples } possible assignment sets and odds p of validating a particular core, where

p = {num_cores - 1 \choose relayVrfModuloSamples - 1} / { num_cores \choose relayVrfModuloSamples }
  = relayVrfModuloSamples / num_cores

Now validators are independent Bernoulli trials from each core's perspective, so a core observes a binomial distribution of checkers, and thus expects E tranche-zero assignments with variance V, where

E = p * num_validators = relayVrfModuloSamples * num_validators / num_cores
V = num_validators * p * (1-p) = E * (1-p)
  = num_validators * relayVrfModuloSamples / num_cores * (1 - relayVrfModuloSamples / num_cores)
sigma = sqrt(V) is the standard deviation

I didn't expect this all to work out so simply.

Assume num_validators=1000 so p = E/1000. If E=30 then p=0.03 and sigma = 5.394. If E=25 then p=0.025 and sigma = 4.937. If E=35 then p=0.035 and sigma = 5.8116. If E=40 then p=0.04 and sigma = 6.19677. All independent of the number of cores.

We'll likely keep the standard deviation sigma between 4.937 and 5.8116. I think the normal approximation might work given num_validators=1000, so the usual intuition for sigma roughly holds: 68.2% of cores get within ±sigma and 95.4% within ±2 sigma.
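A quick numeric check of these figures (a standalone sketch using the example values above; not code from the assignment logic):

fn sigma_for(e: f64, num_validators: f64) -> f64 {
    let p = e / num_validators;              // odds a validator checks a given core
    (num_validators * p * (1.0 - p)).sqrt()  // binomial standard deviation
}

// sigma_for(25.0, 1000.0) ≈ 4.937,  sigma_for(30.0, 1000.0) ≈ 5.394,
// sigma_for(35.0, 1000.0) ≈ 5.8116, sigma_for(40.0, 1000.0) ≈ 6.19677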

We could set relayVrfModuloSamples automatically but under some maximum value too I guess.

@rphmeier (Contributor) commented:

If we are adjusting the approval-checking parameters, we should also take into account a modification that I think will be useful: allowing a dynamic number of parachain blocks to be included within a relay-chain block.

The motivation is that we may want to allow more parachain blocks to be backed than the number of cores. This is desirable in the case that:

  • "extra" blocks correspond to previous missed opportunities, i.e. the total amount of blocks backed for a parachain is limited
  • the maximum amount of parablocks backed within a specific relay chain block is capped, to e.g. 1.5x the number of cores

Especially when we look forward to features like parachains sharing cores, but also just with asynchronous backing, it seems reasonable to allow "missed" blocks to be made up for later, as long as it doesn't overwhelm the relay chain runtime.

@burdges commented May 13, 2023

These parameters would mostly not change if we include more or fewer parachains. We'd still choose neededApprovals, E, and delay_tranche_width for security, so nDelayTranches stays fixed too. We'd adjust num_cores of course, which changes relayVrfModuloSamples, but hardware/ISP specs likely determine some maximum.

@Overkillus (Contributor) commented:

Mostly in reply to @burdges' post above:

  1. nDelayTranches:

With regards to the nDelayTranches formula:

nDelayTranches = delay_tranche_width * num_para_validators - E,
where E is the expected total tranche zero assignments per core.

shouldn't it be:
nDelayTranches = (num_para_validators - E) / delay_tranche_width
so we simply divide the remaining validators between the non-zero tranches?


  2. delay_tranche_width

You state that we want the width to be between 2 and 3 and suggest:

We want delay_tranche_width = 2.25 or at least strictly between 2 and 3.

What's the justification for this (2, 3) range? Our implementers' guide suggests that we operate between 1 and 2:

We require expected checkers per tranche to be less than three because otherwise an adversary with 1/3 stake could force all nodes into checking all blocks. We strongly recommend expected checkers per tranche to be less than two, which helps avoid both accidental and intentional explosions. We also suggest expected checkers per tranche be larger than one, which helps prevent adversaries from predicting when advancing one tranche adds only their own validators.

Is this constraint no longer relevant? If so why?


  3. zerothDelayTrancheWidth

Due to the way relayVRFDelay is implemented, even if zerothDelayTrancheWidth = 0 it still produces some zeroth-tranche assignments (a single tranche's worth of validators). At least based on my current understanding.

That also means we cannot simply set zerothDelayTrancheWidth = -1 and forget about it.

fn relay_vrf_delay_tranche(
	vrf_in_out: &VRFInOut,
	num_delay_tranches: u32,
	zeroth_delay_tranche_width: u32,
) -> DelayTranche {
	let bytes: [u8; 4] = vrf_in_out.make_bytes(approval_types::TRANCHE_RANDOMNESS_CONTEXT);
	
	// interpret as little-endian u32 and reduce by the number of tranches.
	let wide_tranche =
		u32::from_le_bytes(bytes) % (num_delay_tranches + zeroth_delay_tranche_width);
		
	// Consolidate early results to tranche zero so tranche zero is extra wide.
	wide_tranche.saturating_sub(zeroth_delay_tranche_width)
}

From what I know, zerothDelayTrancheWidth would no longer be relevant as a setting. Could we refactor and remove it completely?


  4. relayVrfModuloSamples

In this way, relayVrfModuloSamples has the semantics of num_tranche_zero_assignments_per_validator.

I think it's a good change that gives us more direct control, but considering how later on you focus on varying E:

Assume num_validators=1000 so p = E/1000. If E=30 then p=0.03 and sigma = 5.394. If E=25 then [...]

wouldn't setting E directly be a nicer approach? I feel like controlling E directly makes the most sense, as what we care about is the actual size of those tranches. Then we directly control tranche sizes with:

  • E for the zeroth tranche (assuming we decouple relayVRFDelayTranche from the zeroth tranche)
  • tranche_width for all other tranches

  5. E = neededApprovals

I'll also piggyback and ask: what is the rationale behind E = neededApprovals? Do we have any sources explaining this decision?


  6. Sigma based on E

All the math checks out 👍, plus a graph visualising the different sigmas Jeff used above (figure omitted).

@burdges commented May 19, 2023

shouldn't it be:
nDelayTranches = (num_para_validators - E) / delay_tranche_width
so we simply divide the remaining validators between the non-zero tranches?

Yes, oops.
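For concreteness (illustrative numbers only, using the corrected formula): with num_para_validators = 1000, E = 30, and delay_tranche_width = 2.25, we'd get nDelayTranches = (1000 - 30) / 2.25 ≈ 431.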

What's the justification between this (2,3) range? Our implementers guide suggests that we operate between 1 and 2:

We've discussed smaller delay_tranche_widths before; Alistair likes them somewhat, so maybe this text survived from then. We want delay_tranche_width below 3 as described. We want 2/3 * delay_tranche_width > 1, aka delay_tranche_width > 1.5, but really this still leaves quite high extinction odds.

I picked 2.25 as halfway between 1.5 and 3, but then I ran the simulator using it and other values. I felt like it left low enough extinction odds.

That also means we cannot simply set it to -1 with zerothDelayTrancheWidth = -1 and forget about it.

I put quotes around the -1 because I meant we should remove zerothDelayTrancheWidth entirely and number delay tranches starting from 1.

Yes. It's nice to compute relayVrfModuloSamples from E like I said at the end. E is the parameter we tweak for security concerns, but relayVrfModuloSamples is an integer, while E need not be an integer, although E being an integer is not so bad.

We could adjust the VRF output code for non-integer relayVrfModuloSamples maybe. We'd compute f = floor(relayVrfModuloSamples), sample a bool-like extra = 0,1 with odds relayVrfModuloSamples - f, and then run the above loop for f + extra iterations.

We should also consider a maxRelayVrfModuloSamples which causes an error if E * num_cores / num_validators = relayVrfModuloSamples > maxRelayVrfModuloSamples, or else reduces num_cores somehow if Rob's flexible block space ideas ever permit doing so.

We could compute max_cores = maxRelayVrfModuloSamples * num_validators / E and enforce that num_cores < max_cores, but then also compute relayVrfModuloSamples from num_cores.
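A minimal sketch of that enforcement, under the assumption that maxRelayVrfModuloSamples is a plain configured cap (all names here are illustrative, not actual configuration):

fn check_num_cores(
    num_cores: u32,
    num_validators: u32,
    expected_tranche_zero: u32, // E
    max_samples: u32,           // hypothetical maxRelayVrfModuloSamples
) -> Result<u32, &'static str> {
    // relayVrfModuloSamples = E * num_cores / num_validators, rounded up for safety
    let samples = (expected_tranche_zero * num_cores + num_validators - 1) / num_validators;
    if samples > max_samples {
        Err("num_cores too high for maxRelayVrfModuloSamples")
    } else {
        Ok(samples)
    }
}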

E = neededApprovals is not required, but E gives more security than delay tranches, so E being close to neededApprovals makes sense. I'd prefer E > neededApprovals - sigma I guess.

@Overkillus (Contributor) commented:

We could adjust the VRF output code for non-integer relayVrfModuloSamples maybe. We'd compute f = floor(relayVrfModuloSamples), sample a bool-like extra = 0,1 with odds relayVrfModuloSamples - f, and then run the above loop for f + extra iterations.

I definitely think we should allow relayVrfModuloSamples to be non-integer. We know that in most circumstances we'll be aiming for values of around 3-6, so the extra precision will be very useful.

For instance, with E = 30, num_cores = 46 and num_validators = 296 the computed relayVrfModuloSamples is ≈ 4.66. In that case rounding to an integer makes a very significant difference. If we use the floor function, E = 26 would still give us the same relayVrfModuloSamples value: floor(4.04) = 4 = floor(4.66). This will make adjusting E a very unintuitive process because of the sudden jumps.
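To illustrate the jump (plain arithmetic on the example values above, not actual configuration code):

fn samples_floor(e: u32, num_cores: u32, num_validators: u32) -> u32 {
    // integer division is floor for non-negative values
    e * num_cores / num_validators
}

// With num_cores = 46 and num_validators = 296:
// samples_floor(26, 46, 296) == 4 and samples_floor(30, 46, 296) == 4,
// so raising E from 26 to 30 changes nothing; the jump to 5 only happens at E = 33.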


What is the purpose of setting maxRelayVrfModuloSamples? I'm not sure I see any huge benefits, security or otherwise.

For some potential maxRelayVrfModuloSamples values:

  • the naive case where maxRelayVrfModuloSamples = num_validators, as tranche 0 can contain at most everyone
  • maxRelayVrfModuloSamples = 1/3 * num_validators + 1: in this case tranche 0 will have at least one honest node, so it is certain that no malicious block will ever pass. Further increasing maxRelayVrfModuloSamples wouldn't give any security benefit and would slow down the system, so this could be a boundary as well.

With regards to max_cores, the suggested formula max_cores = maxRelayVrfModuloSamples * num_validators / E with maxRelayVrfModuloSamples substituted would give us:
max_cores = [ 1/3 * num_validators^2 + num_validators ] / E

Firstly, the num_validators^2 term is not too convincing, and based on the current state (num_validators = 297, E = 30) it would give us max_cores = 990. This is definitely too high considering that num_validators = 297.

I assume we should maintain num_validators >= num_cores if needed backing is 1; if it's more than that (currently 2 on Polkadot), the range can be tightened even further to ensure liveness. Then maxValidatorsPerCore indirectly sets the lower core boundary, and num_validators together with min_backers (currently not parametrised) sets the upper core boundary.

Of course having num_validators == num_cores is not ideal, but in my opinion those are hard boundaries that should raise errors.


The above logic can be significantly altered if someone sees a different approach to setting maxRelayVrfModuloSamples.
Also, do I understand correctly that as of now the number of cores is technically totally unbounded?

@burdges commented May 22, 2023

Yeah sure, non-integer relayVrfModuloSamples works fine. It'd look roughly like:

use rand_core::RngCore;

pub struct TrancheZeroSamples { i: u32, r: u32 }

impl TrancheZeroSamples {
    fn compute(
        expected_tranche_zero: u32,  // E
        num_cores: u32,
        num_validators: u32,
    ) -> Self {
        // relayVrfModuloSamples = expected_tranche_zero * num_cores / num_validators,
        // split into an integer part `i` and a fractional part `r` scaled to the u32 range.
        let n = expected_tranche_zero.checked_mul(num_cores).expect("bad parameters!");
        let i = n.checked_div(num_validators).expect("bad parameters!");
        let r = (n % num_validators) * (u32::MAX / num_validators);
        TrancheZeroSamples { i, r }
    }

    fn samples(&self, rng: &mut dyn RngCore) -> u32 {
        // Draw a uniform u32, rejecting u32::MAX, then add one extra sample
        // with probability equal to the fractional part.
        let mut n = u32::MAX;
        while { n = rng.next_u32(); n == u32::MAX } {}
        self.i + if n < self.r { 1 } else { 0 }
    }
}
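For the running example (E = 30, num_cores = 46, num_validators = 296, all illustrative), compute gives i = 4 with a fractional threshold of 196/296 ≈ 0.66, so samples returns 5 about two-thirds of the time and 4 otherwise, averaging ≈ 4.66.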

maxRelayVrfModuloSamples cannot be computed from other parameters. It's a guesstimate based upon our validator and ISP specs. relayVrfModuloSamples says how much work each validator does in tranche zero, so maxRelayVrfModuloSamples sanity checks other parameters. Initially maxRelayVrfModuloSamples=10 or 8 sounds reasonable.

We'd enforce num_cores < maxRelayVrfModuloSamples * num_validators / expected_tranche_zero in parameter updates, I guess. You could always raise maxRelayVrfModuloSamples, but doing so becomes political, as it could mean some validators being forced to buy better hardware or change ISPs.

Yes, we should assume num_validators >= num_cores for now. If this ever changes we'll be quite happy and won't mind searching for all the divide by zeros or whatever. Also num_validators < num_cores * expected_tranche_zero or else we're wasting resources (or have very slow parachains).

Of course the rng here is returned by the VRF, and samples returns num_tranche_zero_assignments_per_validator. Also relayVrfModuloSamples: TrancheZeroSamples, and the above code sets it so that relayVrfModuloSamples = E * num_cores / num_validators.

Sophia-Gold transferred this issue from paritytech/polkadot on Aug 24, 2023
@Polkadot-Forum

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/when-is-the-right-time-to-increase-validator-count-on-polkadot-kusama/685/4

@Polkadot-Forum

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/the-new-polkadot-community-testnet/4956/22

@Polkadot-Forum

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/elastic-scaling/7185/7

@Polkadot-Forum

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/rfc-should-we-launch-a-thousand-cores-program/7604/4
