Unbalanced and Balanced fungible conformance tests, and fungible fixes #1296

Merged

liamaharon merged 34 commits into master from liam/fungible-conformance-tests-balanced-unbalanced on Jan 15, 2024
Conversation
liamaharon added the T1-FRAME (This PR/Issue is related to core FRAME, the framework.) and T10-tests (This PR/Issue is related to tests.) labels on Aug 30, 2023
muharem reviewed on Sep 25, 2023
This was referenced Sep 25, 2023
@muharem finally got around to addressing your comments. Thanks.
muharem reviewed on Oct 18, 2023

muharem reviewed on Oct 18, 2023
muharem reviewed on Nov 3, 2023
liamaharon changed the title from "fungible conformance tests: Unbalanced and Balanced" to "Unbalanced and Balanced fungible conformance tests, and fungible fixes" on Jan 15, 2024
liamaharon deleted the liam/fungible-conformance-tests-balanced-unbalanced branch on January 15, 2024 13:20
ahmadkaouk pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request on Jan 21, 2024
Unbalanced and Balanced fungible conformance tests, and fungible fixes (paritytech#1296)

Original PR paritytech/substrate#14655

---

Partial paritytech#225

- [x] Adds conformance tests for Unbalanced
- [x] Adds conformance tests for Balanced
- Several minor fixes to fungible default implementations and the Balances pallet
  - [x] `Unbalanced::decrease_balance` can reap account when `Preservation` is `Preserve`
  - [x] `Balanced::pair` can return pairs of imbalances which do not cancel each other out
  - [x] Balances pallet `active_issuance` 'underflow'
- [x] Refactors the conformance test file structure to match the fungible file structure: tests for traits in regular.rs go into a test file named regular.rs, tests for traits in freezes.rs go into a test file named freezes.rs, etc.
- [x] Improve doc comments
- [x] Simplify macros

## Fixes

### `Unbalanced::decrease_balance` can reap account when called with `Preservation::Preserve`

There is a potential issue in the default implementation of `Unbalanced::decrease_balance`. The implementation can delete an account even when it is called with `preservation: Preservation::Preserve`. This seems to contradict the documentation of `Preservation::Preserve`:

```rust
/// The account may not be killed and our provider reference must remain (in the context of
/// tokens, this means that the account may not be dusted).
Preserve,
```

I updated `Unbalanced::decrease_balance` to return `Err(TokenError::BelowMinimum)` when a withdrawal would cause the account to be reaped and `preservation: Preservation::Preserve`.

- [ ] TODO Confirm with @gavofyork that this is correct behavior

Test for this behavior: https://github.com/paritytech/polkadot-sdk/blob/e5c876dd6b59e2b7dbacaa4538cb42c802db3730/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs#L912-L937
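The guard itself boils down to one check before the new balance is written. Below is a minimal, self-contained sketch of that behaviour only — not the actual `frame_support` implementation — with `Preservation` and `TokenError` reduced to toy enums and balances to plain `u64` values:

```rust
// Toy model of the guard described above; the real trait method operates on
// `Self::Balance` for a concrete `AccountId` and takes further options
// (precision, fortitude), which are omitted here.
#[derive(PartialEq)]
enum Preservation {
    Expendable,
    Preserve,
}

#[derive(Debug, PartialEq)]
enum TokenError {
    FundsUnavailable,
    BelowMinimum,
}

/// Decrease `free` by `amount`, refusing to reap the account when the caller
/// asked for `Preservation::Preserve`.
fn decrease_balance(
    free: u64,
    minimum_balance: u64,
    amount: u64,
    preservation: Preservation,
) -> Result<u64, TokenError> {
    let new_free = free.checked_sub(amount).ok_or(TokenError::FundsUnavailable)?;
    // The fix: under `Preserve`, a withdrawal may not push the account below
    // the existential minimum (which would dust/reap it).
    if preservation == Preservation::Preserve && new_free < minimum_balance {
        return Err(TokenError::BelowMinimum);
    }
    Ok(new_free)
}

fn main() {
    // Account with 10 units free and an existential deposit of 5.
    assert_eq!(decrease_balance(10, 5, 3, Preservation::Preserve), Ok(7));
    // Dropping to 2 (< 5) would reap the account, so `Preserve` now errors...
    assert_eq!(
        decrease_balance(10, 5, 8, Preservation::Preserve),
        Err(TokenError::BelowMinimum)
    );
    // ...while `Expendable` still allows it.
    assert_eq!(decrease_balance(10, 5, 8, Preservation::Expendable), Ok(2));
}
```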
### `Balanced::pair` returning non-canceling pairs

`Balanced::pair` is supposed to create a pair of imbalances that cancel each other out. However, this is not the case when the method is called with an amount greater than the total supply.

In the existing default implementation, `Balanced::pair` creates a pair by first rescinding the balance, creating `Debt`, and then issuing the balance, creating `Credit`. When creating `Debt`, if the amount to create exceeds the `total_supply`, then `total_supply` units of `Debt` are created *instead* of `amount` units of `Debt`. This can lead to non-canceling amounts of `Credit` and `Debt` being created. To address this, I create the credit and debt directly in the method instead of calling `issue` and `rescind`.

Test for this behavior: https://github.com/paritytech/polkadot-sdk/blob/e5c876dd6b59e2b7dbacaa4538cb42c802db3730/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs#L1323-L1346

### `Balances` pallet `active_issuance` 'underflow'

This PR resolves an issue in the `Balances` pallet that can lead to odd behavior of `active_issuance`. Currently, the Balances pallet doesn't check whether `InactiveIssuance` remains less than or equal to `TotalIssuance` when supply is deactivated. This allows `InactiveIssuance` to be greater than `TotalIssuance`, which can result in unexpected behavior from the perspective of the fungible API.

`active_issuance` is derived from `TotalIssuance.saturating_sub(InactiveIssuance)`. If an `amount` is deactivated that causes `InactiveIssuance` to become greater than `TotalIssuance`, `active_issuance` will return 0. However, once in that state, reactivating an amount will not increase `active_issuance` by the reactivated `amount` as expected.

Consider this test, where the last assertion would fail due to this issue: https://github.com/paritytech/polkadot-sdk/blob/e5c876dd6b59e2b7dbacaa4538cb42c802db3730/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs#L1036-L1071

To address this, I've modified the `deactivate` function to ensure `InactiveIssuance` never surpasses `TotalIssuance`.

---------

Co-authored-by: Muharem <[email protected]>
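To make the `active_issuance` fix above concrete, here is a standalone toy sketch (not the Balances pallet code) that models `TotalIssuance` and `InactiveIssuance` as plain integers and applies the cap in `deactivate`:

```rust
/// Toy model of issuance tracking as described above.
struct Issuance {
    total: u128,
    inactive: u128,
}

impl Issuance {
    /// `active_issuance` as described above: `total.saturating_sub(inactive)`.
    fn active(&self) -> u128 {
        self.total.saturating_sub(self.inactive)
    }

    /// Deactivate up to `amount`, capping so `inactive` never exceeds `total`.
    fn deactivate(&mut self, amount: u128) {
        self.inactive = self.inactive.saturating_add(amount).min(self.total);
    }

    /// Reactivate `amount`, saturating at zero.
    fn reactivate(&mut self, amount: u128) {
        self.inactive = self.inactive.saturating_sub(amount);
    }
}

fn main() {
    let mut issuance = Issuance { total: 100, inactive: 0 };

    // Without the cap, deactivating 150 would leave inactive = 150 > total,
    // and a later reactivation of 10 would still report active() == 0.
    issuance.deactivate(150);
    assert_eq!(issuance.inactive, 100); // capped at total
    assert_eq!(issuance.active(), 0);

    // With the cap, reactivation behaves as expected again.
    issuance.reactivate(10);
    assert_eq!(issuance.active(), 10);
}
```

Capping inside `deactivate` keeps the saturating subtraction meaningful, so a later `reactivate` moves `active_issuance` by exactly the reactivated amount.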
ahmadkaouk pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request on Jan 29, 2024
Unbalanced and Balanced fungible conformance tests, and fungible fixes (paritytech#1296)

(cherry picked from commit 46090ff)
bgallois pushed a commit to duniter/duniter-polkadot-sdk that referenced this pull request on Mar 25, 2024
Unbalanced and Balanced fungible conformance tests, and fungible fixes (paritytech#1296)
github-merge-queue bot pushed a commit that referenced this pull request on Apr 4, 2024
Part of #226
Related #1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
  - Required creating a new `StakingPotAccountId` struct that implements `TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and `AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of `ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter` instead of `CurrencyAdapter`
- [x] Blocked by #1296, needs the `Unbalanced::decrease_balance` fix
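As an illustration of the `StakingPotAccountId` idea mentioned in the list above (this is not code from that PR): a self-contained sketch with a locally defined stand-in for the `TypedGet` trait and a made-up account-id derivation:

```rust
// Minimal stand-in for the real `TypedGet` trait from the Substrate
// primitives; it just returns a value of an associated `Type`.
trait TypedGet {
    type Type;
    fn get() -> Self::Type;
}

// Hypothetical account-id type and derivation, purely for illustration:
// a real runtime derives the staking pot account from its pot `PalletId`.
type AccountId = [u8; 32];

struct StakingPotAccountId;

impl TypedGet for StakingPotAccountId {
    type Type = AccountId;
    fn get() -> Self::Type {
        // Placeholder derivation: pad a label into a 32-byte account id.
        let mut id = [0u8; 32];
        id[..11].copy_from_slice(b"staking-pot");
        id
    }
}

// A handler along the lines of `ResolveTo<StakingPotAccountId, Balances>` can
// then resolve collected fee credit into whatever account the impl returns.
fn main() {
    println!("staking pot account: {:?}", StakingPotAccountId::get());
}
```

In a real runtime the account would come from the actual pot derivation rather than a padded label, and the struct would be plugged into `ResolveTo` so transaction fees are resolved into that account.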
Ank4n pushed a commit that referenced this pull request on Apr 9, 2024
dharjeezy pushed a commit to dharjeezy/polkadot-sdk that referenced this pull request on Apr 9, 2024
serban300 pushed a commit to serban300/parity-bridges-common that referenced this pull request on Apr 9, 2024
(cherry picked from commit bda4e75ac49786a7246531cf729b25c208cd38e6)
serban300 added a commit to paritytech/parity-bridges-common that referenced this pull request on Apr 9, 2024
* Migrate fee payment from `Currency` to `fungible` (#2292)

Part of paritytech/polkadot-sdk#226
Related paritytech/polkadot-sdk#1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
  - Required creating a new `StakingPotAccountId` struct that implements `TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and `AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of `ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter` instead of `CurrencyAdapter`
- [x] Blocked by paritytech/polkadot-sdk#1296, needs the `Unbalanced::decrease_balance` fix

(cherry picked from commit bda4e75ac49786a7246531cf729b25c208cd38e6)

* Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982)

What does this PR do?

1. Upgrades `trie-db`'s version to the latest release. This release includes, among others, an implementation of `DoubleEndedIterator` for the `TrieDB` struct, allowing iteration both backwards and forwards within the leaves of a trie.
2. Upgrades `trie-bench` to `0.39.0` for compatibility.
3. Upgrades `criterion` to `0.5.1` for compatibility.

Why are these changes needed?

Besides keeping up with the upgrade of `trie-db`, this specifically adds the functionality of iterating backwards over the leaves of a trie with `sp-trie`. In a project we're currently working on, this comes in very handy for verifying a Merkle proof that is the response to a challenge. The challenge is a random hash that (most likely) will not be an existing leaf in the trie, so the challenged user has to provide a Merkle proof of the previous and next existing leaves in the trie that surround the random challenged hash.

Without double-ended iterators, we're forced to iterate until we find the first existing leaf, like so:

```rust
// ************* VERIFIER (RUNTIME) *************
// Verify proof. This generates a partial trie based on the proof and
// checks that the root hash matches the `expected_root`.
let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

// Print all leaf node keys and values.
println!("\nPrinting leaf nodes of partial tree...");
for key in trie.key_iter().unwrap() {
    if key.is_ok() {
        println!("Leaf node key: {:?}", key.clone().unwrap());
        let val = trie.get(&key.unwrap());
        if val.is_ok() {
            println!("Leaf node value: {:?}", val.unwrap());
        } else {
            println!("Leaf node value: None");
        }
    }
}
println!("RECONSTRUCTED TRIE {:#?}", trie);

// Create an iterator over the leaf nodes.
let mut iter = trie.iter().unwrap();

// First element with a value should be the previous existing leaf to the challenged hash.
let mut prev_key = None;
for element in &mut iter {
    if element.is_ok() {
        let (key, _) = element.unwrap();
        prev_key = Some(key);
        break;
    }
}
assert!(prev_key.is_some());

// Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
assert!(prev_key.unwrap() <= challenge_hash.to_vec());

// The next element should exist (meaning there is no other existing leaf between the
// previous and next leaf) and it should be greater than the challenged hash.
let next_key = iter.next().unwrap().unwrap().0;
assert!(next_key >= challenge_hash.to_vec());
```

With double-ended iterators, we can avoid that, like this:

```rust
// ************* VERIFIER (RUNTIME) *************
// Verify proof. This generates a partial trie based on the proof and
// checks that the root hash matches the `expected_root`.
let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

// Print all leaf node keys and values.
println!("\nPrinting leaf nodes of partial tree...");
for key in trie.key_iter().unwrap() {
    if key.is_ok() {
        println!("Leaf node key: {:?}", key.clone().unwrap());
        let val = trie.get(&key.unwrap());
        if val.is_ok() {
            println!("Leaf node value: {:?}", val.unwrap());
        } else {
            println!("Leaf node value: None");
        }
    }
}
// println!("RECONSTRUCTED TRIE {:#?}", trie);
println!("\nChallenged key: {:?}", challenge_hash);

// Create an iterator over the leaf nodes.
let mut double_ended_iter = trie.into_double_ended_iter().unwrap();

// First element with a value should be the previous existing leaf to the challenged hash.
double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;

// Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
println!("Prev key: {:?}", prev_key);
assert!(prev_key <= challenge_hash.to_vec());
println!("Next key: {:?}", next_key);
assert!(next_key >= challenge_hash.to_vec());
```

How were these changes implemented and what do they affect?

All that is needed for this functionality to be exposed is changing the version number of `trie-db` in all the applicable `Cargo.toml`s, and re-exporting some additional structs from `trie-db` in `sp-trie`.

---------

Co-authored-by: Bastian Köcher <[email protected]>

(cherry picked from commit 4e73c0fcd37e4e8c14aeb83b5c9e680981e16079)

* Update polkadot-sdk refs

* Fix Cargo.lock

---------

Co-authored-by: Liam Aharon <[email protected]>
Co-authored-by: Facundo Farall <[email protected]>
serban300 added a commit to paritytech/parity-bridges-common that referenced this pull request on Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request on Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request on Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request on Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request on Apr 10, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request on Apr 10, 2024
bkchr pushed a commit that referenced this pull request on Apr 10, 2024
EgorPopelyaev
pushed a commit
that referenced
this pull request
May 27, 2024
Part of #226
Related #1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
- Required creating a new `StakingPotAccountId` struct that implements `TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and `AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of `ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter` instead of `CurrencyAdapter`
- [x] Blocked by #1296, needs the `Unbalanced::decrease_balance` fix
Original PR paritytech/substrate#14655

Partial #225

- `Unbalanced::decrease_balance` can reap account when `Preservation` is `Preserve`
- `Balanced::pair` can return pairs of imbalances which do not cancel each other out
- `active_issuance` 'underflow'

Fixes

`Unbalanced::decrease_balance` can reap account when called with `Preservation::Preserve`
There is a potential issue in the default implementation of `Unbalanced::decrease_balance`. The implementation can delete an account even when it is called with `preservation: Preservation::Preserve`. This seems to contradict the documentation of `Preservation::Preserve`.

I updated `Unbalanced::decrease_balance` to return `Err(TokenError::BelowMinimum)` when a withdrawal would cause the account to be reaped and `preservation: Preservation::Preserve`.

Test for this behavior:

polkadot-sdk/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs
Lines 912 to 937 in e5c876d
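To make the expected behavior concrete, here is a minimal sketch in the spirit of the conformance test referenced above, not the test itself. The helper name, the generic `T`, the `u64` account id, and the assumption of a non-zero existential deposit are all illustrative; the expected `TokenError::BelowMinimum` error is taken from the description above. It would need to run inside the implementation's test externalities (for example, a Balances pallet mock runtime).

```rust
use frame_support::traits::{
    fungible::{Inspect, Mutate, Unbalanced},
    tokens::{Fortitude, Precision, Preservation},
};
use sp_runtime::TokenError;

/// Hypothetical check: a withdrawal that would reap the account must fail when
/// called with `Preservation::Preserve`, and the balance must stay untouched.
fn decrease_balance_preserve_should_not_reap<T>()
where
    T: Mutate<u64> + Unbalanced<u64>,
{
    let account = 1u64;
    // Assumes the implementation has a non-zero existential deposit.
    let ed = T::minimum_balance();

    // Fund the account with exactly the existential deposit.
    let _ = T::set_balance(&account, ed);

    // Withdrawing the full balance with `Preservation::Preserve` would dust the
    // account, so per this PR's description it should fail with `BelowMinimum`.
    assert_eq!(
        T::decrease_balance(&account, ed, Precision::Exact, Preservation::Preserve, Fortitude::Polite),
        Err(TokenError::BelowMinimum.into()),
    );

    // The account balance must be left untouched.
    assert_eq!(T::balance(&account), ed);
}
```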
`Balanced::pair` returning non-canceling pairs

`Balanced::pair` is supposed to create a pair of imbalances that cancel each other out. However, this is not the case when the method is called with an amount greater than the total supply.

In the existing default implementation, `Balanced::pair` creates a pair by first rescinding the balance, creating `Debt`, and then issuing the balance, creating `Credit`.

When creating `Debt`, if the amount to create exceeds the `total_supply`, `total_supply` units of `Debt` are created instead of `amount` units of `Debt`. This can lead to non-canceling amounts of `Credit` and `Debt` being created.

To address this, I create the credit and debt directly in the method instead of calling `issue` and `rescind`.

Test for this behavior:

polkadot-sdk/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs
Lines 1323 to 1346 in e5c876d
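A plain-integer illustration of why building the pair from `rescind` followed by `issue` cannot cancel out when the amount exceeds the total supply. The `rescind`/`issue` helpers below are simplified stand-ins for the trait methods of the same name, not the real implementations.

```rust
// Rescinding reduces total issuance, saturating at zero, so the debt actually
// created is capped at the current total supply.
fn rescind(total_supply: &mut u64, amount: u64) -> u64 {
    let debt = amount.min(*total_supply);
    *total_supply -= debt;
    debt
}

// Issuing increases total issuance, saturating at the maximum value.
fn issue(total_supply: &mut u64, amount: u64) -> u64 {
    let credit = amount.min(u64::MAX - *total_supply);
    *total_supply += credit;
    credit
}

fn main() {
    let mut total_supply = 100u64;
    let amount = 150u64; // more than the current total supply

    // Old default behavior: rescind first, then issue.
    let debt = rescind(&mut total_supply, amount);
    let credit = issue(&mut total_supply, amount);

    assert_eq!(debt, 100); // capped at the old total supply
    assert_eq!(credit, 150); // the full requested amount
    assert_ne!(debt, credit); // the pair does not cancel out
}
```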
`Balances` pallet `active_issuance` 'underflow'

This PR resolves an issue in the `Balances` pallet that can lead to odd behavior of `active_issuance`.

Currently, the Balances pallet doesn't check that `InactiveIssuance` remains less than or equal to `TotalIssuance` when supply is deactivated. This allows `InactiveIssuance` to be greater than `TotalIssuance`, which can result in unexpected behavior from the perspective of the fungible API.

`active_issuance` is derived from `TotalIssuance.saturating_sub(InactiveIssuance)`.

If an `amount` is deactivated that causes `InactiveIssuance` to become greater than `TotalIssuance`, `active_issuance` will return 0. However, once in that state, reactivating an amount will not increase `active_issuance` by the reactivated `amount` as expected.

Consider this test, where the last assertion would fail due to this issue:

polkadot-sdk/substrate/frame/support/src/traits/tokens/fungible/conformance_tests/regular.rs
Lines 1036 to 1071 in e5c876d

To address this, I've modified the `deactivate` function to ensure `InactiveIssuance` never surpasses `TotalIssuance`.
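A rough numeric sketch of the behavior described above, using plain integers in place of the `TotalIssuance` and `InactiveIssuance` storage items; this illustrates the arithmetic only and is not the pallet's code.

```rust
fn main() {
    let total_issuance: u64 = 100;

    // Without the fix: deactivating more than the total issuance is allowed.
    let mut inactive: u64 = 0;
    inactive += 120;
    // active_issuance = TotalIssuance.saturating_sub(InactiveIssuance) -> 0.
    assert_eq!(total_issuance.saturating_sub(inactive), 0);
    // Reactivating 10 units does not raise active_issuance as expected.
    inactive -= 10;
    assert_eq!(total_issuance.saturating_sub(inactive), 0);

    // With the fix: deactivation is capped so InactiveIssuance <= TotalIssuance.
    let mut inactive_capped: u64 = 0;
    inactive_capped = (inactive_capped + 120).min(total_issuance); // capped at 100
    assert_eq!(total_issuance.saturating_sub(inactive_capped), 0);
    // Now reactivating 10 units increases active_issuance by 10, as expected.
    inactive_capped -= 10;
    assert_eq!(total_issuance.saturating_sub(inactive_capped), 10);
}
```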