Allow parachains to place extra data in the availability store #885
Comments
Some motivation for this would be good. Is this the way we are going to implement XCMP?
The motivation is that it would be helpful for implementing multi-level relay chains and other rollups on parachains. @gavofyork has a specific experiment in mind called 'Blitzchain', which will need this, but only in the later phases.
It'll probably be useful for something. We know two designs for multiple relay chains:

Candy flipping (or elf flipping). We divide validators into relay chains of n validators per relay chain with n <= 3f+1 < 2^{k+2} and 2^k <= f+1, so likely f=511 and n=1534 or f=255 and n=765. We add new validators to random relay chains and "churn enough" validators among relay chains based upon the leaving and new validators, so that we can prove that if the global validator set is 80% honest then each relay chain has 2f+1, i.e. 66.667%, honest validators. We need some tricky stuff for XCMP etc., but under this design relay chains can simply trust one another's beefy proofs, which makes it extremely efficient. As a simpler-but-not-really-correct variant, we could simply elect the validator set once and then assign exactly the same controller keys to all t relay chains, so that each controller key can run one validator on each relay chain. This only makes sense if you assume all validator operators would happily run all t of their validators themselves, not just loan out one of their positions because they're too lazy to run an extra validator.

Hierarchical turnstiles. A polkadot-like chain runs as a polkadot parachain, so the parachain checks both the child relay chain's blocks and the child's beefy signatures. We might permit slashing and disputes via messages from polkadot too, although details remain unclear. We obtain polkadot soundness only for the child relay chain, however, not for the child relay chain's own parachains. We make the child's own reversion protocols work by checking the child's beefy, and enforcing its disputes, slashing, etc. Although we lack polkadot soundness for the child relay chain's own parachains, we could enforce balance turnstiles so that unsoundness does not propagate, but arbitrary messages do not work, of course.

I believe @gavofyork's blitzchain ideas work like this hierarchical approach but pass availability data too. It's likely availability improves upon pure turnstiles for some transaction or message types, but the details become quite messy.
A priori, I'd suggest blitzchains initially focus upon doing turnstiles correctly, so it's purely a problem of token standards, warning users, etc. After we've discussed the problem more, we'd later "lightly" involve the parachain team for messaging slashes, disputes, etc. We'd avoid validator-collator networking until someone demonstrates concrete improvements over turnstiles, however. @AlistairStewart has thought more about IBC, bridges, etc., so maybe he already knows cases. All that said, there could exist other reasons for doing this without additional pre-PVF builds and without complicating validator-collator networking.
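As a quick sanity check on the parameter constraints quoted in the candy-flipping sketch above, here is a small standalone Rust snippet that just verifies the two example parameter sets against the stated bounds (this is illustrative arithmetic only, not project code):

```rust
/// Check the validator-partition constraints from the comment:
/// n <= 3f + 1 < 2^(k+2) and 2^k <= f + 1.
fn satisfies_constraints(n: u32, f: u32, k: u32) -> bool {
    let byzantine_bound = 3 * f + 1;
    n <= byzantine_bound
        && u64::from(byzantine_bound) < (1u64 << (k + 2))
        && (1u64 << k) <= u64::from(f + 1)
}

fn main() {
    // The two example parameter sets mentioned above.
    assert!(satisfies_constraints(1534, 511, 9)); // f + 1 = 512 = 2^9, 3f + 1 = 1534 < 2048
    assert!(satisfies_constraints(765, 255, 8));  // f + 1 = 256 = 2^8, 3f + 1 = 766 < 1024
    println!("both parameter sets satisfy the stated bounds");
}
```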
Anyways, what does this really look like? In polkadot, we first do a data availability proof and then an interactive correctness proof under a byzantine threat model. As I said above, we know weaker threat model stories for which availability does not necessarily matter. We also know protocols that only require the data availability proof, like paying polkadot validators for initial super-seeding of contentious data before bittorrent takes over, meaning we want pure data availability chains eventually. In this design, we'd be slaving multiple extra unchecked data-only availability cores to a single parachain, but without involving them directly in approval checking, so maybe we could do this entirely in parachain code without complicating consensus, although likely this requires asynchronous backing on the child relay chain? Alright, I've now argued both that this sounds too complex, and that it's maybe trivial once we have asynchronous backing and unchecked data-only availability cores, so.. ;)
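A minimal sketch of the distinction being drawn here, with names invented purely for illustration (these are not Polkadot types): a parachain's regular core carries a PoV that is both made available and approval-checked, while an extra unchecked data-only core would only carry a blob whose availability is attested.

```rust
/// Hypothetical sketch: how extra unchecked data-only cores slaved to a
/// parachain might sit beside its regular execution core.
enum CoreUse {
    /// Normal parachain core: the PoV is made available and approval-checked.
    Execution { pov_hash: [u8; 32] },
    /// Extra core: the blob is only made available; validators never execute it.
    DataOnly { blob_hash: [u8; 32], blob_len: u32 },
}

/// Only execution cores would feed into approval checking.
fn needs_approval_checking(core: &CoreUse) -> bool {
    matches!(core, CoreUse::Execution { .. })
}

fn main() {
    let cores = [
        CoreUse::Execution { pov_hash: [0u8; 32] },
        CoreUse::DataOnly { blob_hash: [1u8; 32], blob_len: 4096 },
    ];
    // One approval-checked core, one availability-only core.
    assert_eq!(cores.iter().filter(|c| needs_approval_checking(c)).count(), 1);
}
```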
Each candidate descriptor would get another field indicating the hashes and lengths of preimages of extra data to be maintained by the validators of Polkadot. The sum of the lengths would not be allowed to exceed a certain protocol limit.
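As a rough sketch of what such a field could look like, under the assumption that it is a list of (hash, length) commitments with a total-size cap (the names and the limit below are illustrative, not the actual proposal or Polkadot's `CandidateDescriptor`):

```rust
/// Illustrative only: one entry per extra-data preimage the parachain wants
/// the availability system to keep, identified by hash and declared length.
pub struct ExtraDataCommitment {
    pub hash: [u8; 32],
    pub len: u32,
}

/// Hypothetical protocol-level cap on the total declared length (value assumed).
pub const MAX_EXTRA_DATA_BYTES: u64 = 1024 * 1024;

/// The kind of check the relay chain could apply when backing a candidate:
/// the sum of the declared lengths must stay under the protocol limit.
pub fn extra_data_within_limit(entries: &[ExtraDataCommitment]) -> bool {
    entries.iter().map(|e| u64::from(e.len)).sum::<u64>() <= MAX_EXTRA_DATA_BYTES
}
```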
Collators would be responsible for providing the actual data here to backers, and each piece of data would be erasure-coded just as the PoV is and fetched by validators from backers during availability distribution.
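To make the availability step concrete: in the real protocol each such blob would be Reed-Solomon erasure-coded into one chunk per validator, exactly as the PoV is, so that a subset of chunks suffices to reconstruct it. The toy snippet below is only a stand-in for that coding step, showing the per-validator chunking shape without any actual redundancy:

```rust
/// Toy stand-in for erasure coding: split a blob into at most `n_validators`
/// pieces. The real protocol uses Reed-Solomon coding, so a threshold of the
/// chunks is enough to reconstruct the blob; plain splitting shown here has no
/// such redundancy and is for illustration only.
fn chunk_for_validators(blob: &[u8], n_validators: usize) -> Vec<Vec<u8>> {
    let n = n_validators.max(1);
    let chunk_size = (blob.len() + n - 1) / n;
    blob.chunks(chunk_size.max(1)).map(|c| c.to_vec()).collect()
}

fn main() {
    let blob = vec![0u8; 10_000];
    let chunks = chunk_for_validators(&blob, 1_000);
    // Each backer would hand validator i its chunk `chunks[i]` during
    // availability distribution, alongside the PoV chunk it already sends.
    assert!(chunks.len() <= 1_000);
}
```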