ETF Milestone 2: Add ACSS Recovery Capability #2

Closed
wants to merge 1,544 commits into from

Conversation

driemworks

  • Adds the ability for BLS keypairs to be used to run the asynchronous committee secret sharing (ACSS) scheme

BulatSaif and others added 30 commits April 10, 2024 07:53
The plain `people-kusama` and `coretime-kusama` chainspecs were uploaded
at paritytech#3961. Only binaries
with compatible runtime versions can run with a plain chainspec. For
example:

One of the latest master builds fails:

```
docker run paritypr/polkadot-parachain-debug:master-216509db --chain coretime-kusama
...
Error: Service(Client(Storage("wasm call error Other: Exported method GenesisBuilder_get_preset is not found"))
```

Master build from 5 days ago:

```
docker run paritypr/polkadot-parachain-debug:master-68cdb126 --chain coretime-kusama
...
2024-04-08 16:51:02 [Parachain] 🔨 Initializing Genesis block/state (state: 0xc418…889c, header-hash: 0x638c…d050)
2024-04-08 16:51:03 [Relaychain] 🔨 Initializing Genesis block/state (state: 0xb000…ef6b, header-hash: 0xb0a8…dafe)
2024-04-08 16:51:03 [Relaychain] 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
2024-04-08 16:51:03 [Relaychain] 👶 Creating empty BABE epoch changes on what appears to be first startup.
2024-04-08 16:51:03 [Relaychain] 🏷 Local node identity is: 12D3KooWQEp2uPow3FnngGmy9dYQ3qxY1GkmumS5MqBWEQscwTyy
2024-04-08 16:51:03 [Relaychain] 📦 Highest known block at #0
...
```

## Changes:

1. Rename:
```
coretime-kusama.json -> coretime-kusama-genesis.json
people-kusama.json -> people-kusama-genesis.json
```

2. Generate raw chainspec:

```
docker run --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-68cdb126 build-spec --raw --chain /dir/people-kusama-genesis.json > people-kusama.json 
docker run --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-68cdb126 build-spec --raw --chain /dir/coretime-kusama-genesis.json > coretime-kusama.json
```

## Tests:

The new chainspec can run on the latest master build:

```
docker run -it --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-216509db --chain /dir/coretime-kusama.json
...
2024-04-09 16:44:39 [Relaychain] ⚙️ Syncing, target=#22665488 (8 peers), best: #2371 (0x19f8…5f3a), finalized #2048 (0xede6…f879), ⬇ 388.6kiB/s ⬆ 87.0kiB/s
2024-04-09 16:44:39 [Parachain] 💤 Idle (6 peers), best: #0 (0x638c…d050), finalized #0 (0x638c…d050), ⬇ 6.3kiB/s ⬆ 2.8kiB/s
```

The plain chainspec will fail:

```
docker run -it --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-216509db --chain /dir/coretime-kusama-genesis.json
... 
Error: Service(Client(Storage("wasm call error Other: Exported method GenesisBuilder_get_preset is not found")))
```
…ech#3660)

Part of paritytech#3326 

@kianenigma @ggwpez 

polkadot address: 12poSUQPtcF1HUPQGY3zZu2P8emuW9YnsPduA4XG3oCEfJVp

---------

Signed-off-by: Matteo Muraca <[email protected]>
Co-authored-by: ordian <[email protected]>
[Weights
compare](https://weights.tasty.limo/compare?unit=weight&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg%2Fbench_tweaks&path_pattern=substrate%2Fframe%2F**%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs)

Note: the raw weights change does not mean much here, as this PR reduces the
scope of what is benchmarked, so the weights decrease by a good
margin. One should instead print the Schedule using

`cargo test --features runtime-benchmarks bench_print_schedule -- --nocapture`

or follow the instructions from the
[README](https://github.com/paritytech/polkadot-sdk/tree/pg/bench_tweaks/substrate/frame/contracts#schedule)
for looking at the Schedule of a specific runtime.

---------

Co-authored-by: command-bot <>
…tech#4051)

This tiny PR extends the `on_validated_block_announce` log with the bad
PeerID.
It is used to identify whether the peer ID is malicious by correlating with
other logs (i.e. peer-set).

While at it, removed the `\n` from a multiline log, which did not
play well with
[sub-triage-logs](https://github.com/lexnv/sub-triage-logs/tree/master).

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Reuse wasmi Module when validating.
Prepare the code for 0.32 and the addition of Module::new_unchecked
…ch#4072)

Dear team, dear @NachoPal @joepetrowski @bkchr @ggwpez,

This is a retry of paritytech#3957, after merging master as advised!

Many thanks!

**_Milos_**

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
- Update internal logic so that the `storage_base_deposit` does not
include the ED
- Add a v16 migration to update the `ContractInfo` struct with this change

Before:
<img width="820" alt="Screenshot 2024-03-21 at 11 23 29"
src="https://github.com/paritytech/polkadot-sdk/assets/521091/a0a8df0d-e743-42c5-9e16-cf2ec1aa949c">

After:
![Screenshot 2024-03-21 at 11 23
42](https://github.com/paritytech/polkadot-sdk/assets/521091/593235b0-b866-4915-b653-2071d793228b)

---------

Co-authored-by: Cyrill Leutwiler <[email protected]>
Co-authored-by: command-bot <>
…ech#4065)

The same issue, but for av-cores, was fixed in
paritytech#1403.

Signed-off-by: Andrei Sandu <[email protected]>
…ancient (paritytech#4070)

Fixes paritytech#4067

Also adds an early bail-out for the lookahead collator so that we don't
waste time if a `CollatorFn` is not set.

TODO:
- [x] add test.
- [x] Polkadot System Parachain burn-in.

---------

Signed-off-by: Andrei Sandu <[email protected]>
Closes paritytech#4041

Changes:
- Increase cache size and reduce retries.
- Ignore Substrate SE links :(
- Fix broken link.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
paritytech#3630)

This is phase 2 of async backing enablement for the system parachains on
the production networks. ~~It should be merged after
polkadot-fellows/runtimes#228 is enacted. After it is released,~~ all
the system parachain collators should be upgraded, and then we can
proceed with phase 3, which will enable async backing in the runtimes.

UPDATE: Indeed, we don't need to wait for the runtime upgrades to be enacted.
The lookahead collator handles the transition by itself, so we can
upgrade ASAP.

## Scope of changes

Here, we eliminate the dichotomy of having "generic Aura collators" for
the production system parachains and "lookahead Aura collators" for the
testnet system parachains. Now, all the collators are started as
lookahead ones, preserving the logic of transferring from the shell node
to Aura-enabled collators for the asset hubs. This noticeably simplifies
the parachain service logic.
Currently the `subsystem-regression-tests` job fails if the first benchmark
fails and there is no result for the second benchmark. Also, dividing the
job makes the pipeline faster (currently it is the longest job).

cc paritytech/ci_cd#969
cc @AndreiEres

---------

Co-authored-by: Andrei Eres <[email protected]>
Implements the idea from
paritytech#3899
- Removed latencies
- Reduced the number of runs from 50 to 5; according to local runs, that is
quite enough
- The network message is always sent in a spawned task, even if latency is
zero. Without it, CPU time sometimes spikes.
- Removed the `testnet` profile because we probably don't need those
debug additions.

After the local tests I can't say that this brings a significant
improvement in the stability of the results. However, I believe it is
worth trying and looking at the results over time.
Completes the removal of `try-runtime-cli` logic from `polkadot-sdk`.
One more try to make this test robust from a resource perspective.

---------

Signed-off-by: Andrei Sandu <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
Implements polkadot-fellows/RFCs#82
Allows any parachain to have a bidirectional channel with any system
parachain.

Looking for initial feedback before continuing to finish it.

TODOs:
- [x] docs
- [x] benchmarks
- [x] tests

---------

Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
Small adjustments that should make understanding what is going on much
easier for future readers.

Initialization is a bit messy; the very least we should do is add
documentation to make it harder to use wrongly.

I was thinking about calling `request_core_count` right from
`start_sales`, but as explained in the docs, this is not necessarily
what you want.

---------

Co-authored-by: eskimor <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
…4027)

Fixes paritytech#3576

Required by elastic scaling collators.
Deprecates old API: `candidate_pending_availability`.

TODO:
- [x] PRDoc

---------

Signed-off-by: Andrei Sandu <[email protected]>
…transfer types (paritytech#3695)

# Description

Add `transfer_assets_using()` for transferring assets from local chain
to destination chain using explicit XCM transfer types such as:
- `TransferType::LocalReserve`: transfer assets to sovereign account of
destination chain and forward a notification XCM to `dest` to mint and
deposit reserve-based assets to `beneficiary`.
- `TransferType::DestinationReserve`: burn local assets and forward a
notification to `dest` chain to withdraw the reserve assets from this
chain's sovereign account and deposit them to `beneficiary`.
- `TransferType::RemoteReserve(reserve)`: burn local assets, forward XCM
to `reserve` chain to move reserves from this chain's SA to `dest`
chain's SA, and forward another XCM to `dest` to mint and deposit
reserve-based assets to `beneficiary`. Typically the remote `reserve` is
Asset Hub.
- `TransferType::Teleport`: burn local assets and forward XCM to `dest`
chain to mint/teleport assets and deposit them to `beneficiary`.

By default, an asset's reserve is its origin chain. But sometimes we may
want to explicitly use another chain as the reserve (as long as it is allowed
by the runtime's `IsReserve` filter).

This is very helpful for transferring assets with multiple configured
reserves (such as Asset Hub ForeignAssets), when the transfer strictly
depends on the used reserve.

E.g. For transferring Foreign Assets over a bridge, Asset Hub must be
used as the reserve location.
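
For intuition, the four transfer types can be modelled as a small enum. The sketch below is a self-contained, simplified illustration of the flows described above; it is not the actual `xcm-executor` type or the new extrinsic's signature.

```
// Simplified model of the transfer types described above (illustrative only;
// the real type uses XCM `Location`s rather than strings).
#[derive(Debug)]
enum TransferType {
    /// Move assets into `dest`'s sovereign account locally; `dest` mints
    /// reserve-backed derivatives for the beneficiary.
    LocalReserve,
    /// Burn local derivatives; `dest` releases the real assets it holds in
    /// this chain's sovereign account.
    DestinationReserve,
    /// Burn local derivatives; a third chain (typically Asset Hub) moves the
    /// backing assets between sovereign accounts and notifies `dest`.
    RemoteReserve(String),
    /// Burn locally and mint on `dest` (teleport); no reserve involved.
    Teleport,
}

fn describe(transfer: &TransferType) -> String {
    match transfer {
        TransferType::LocalReserve => "reserve = the sending chain".into(),
        TransferType::DestinationReserve => "reserve = the destination chain".into(),
        TransferType::RemoteReserve(reserve) => format!("reserve = a third chain: {reserve}"),
        TransferType::Teleport => "no reserve: burn here, mint on dest".into(),
    }
}

fn main() {
    // E.g. moving a bridged ERC20 between two parachains using Asset Hub
    // as the reserve:
    let transfer = TransferType::RemoteReserve("Asset Hub".to_string());
    println!("{}", describe(&transfer));
}
```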

# Example usage scenarios

## Transfer bridged Ethereum ERC20-tokenX between ecosystem parachains

ERC20-tokenX is registered on AssetHub as a ForeignAsset by the
Polkadot<>Ethereum bridge (Snowbridge). Its asset_id is something like
`(parents:2, (GlobalConsensus(Ethereum), Address(tokenX_contract)))`.
Its _original_ reserve is Ethereum (but we can't use Ethereum as a
reserve in local transfers); however, since tokenX is also registered on
AssetHub as a ForeignAsset, we can use AssetHub as a reserve.

With this PR we can transfer tokenX from ParaA to ParaB while using
AssetHub as a reserve.

## Transfer AssetHub ForeignAssets between parachains

AssetA is created on ParaA but also registered as a foreign asset on Asset
Hub, so Asset Hub can be used as a reserve.

All of the above can be done while still controlling the transfer type
for `fees`, so mixing assets in the same transfer is supported.

# Tests

Added integration tests for showcasing:
- transferring local (not bridged) assets from parachain over bridge
using local Asset Hub reserve,
- transferring foreign assets from parachain to Asset Hub,
- transferring foreign assets from Asset Hub to parachain,
- transferring foreign assets from parachain to parachain using local
Asset Hub reserve.

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: command-bot <>
- Progresses paritytech#3155

### What's inside

A job that will take each of the three
[templates](https://github.com/paritytech/polkadot-sdk/tree/master/templates),
yank them out of the monorepo workspace, and push them to individual
repositories
([1](https://github.com/paritytech/polkadot-sdk-minimal-template),
[2](https://github.com/paritytech/polkadot-sdk-parachain-template),
[3](https://github.com/paritytech/polkadot-sdk-solochain-template)).

In case the build/test does not succeed, a PR such as [this
one](paritytech-stg/polkadot-sdk-solochain-template#2)
gets created instead.

I'm proposing a manual dispatch trigger for now - so we can test and
iterate faster - and will change it to fully automatic, triggered by
releases, later.

The manual trigger looks like this:

<img width="340px"
src="https://github.com/paritytech/polkadot-sdk/assets/12039224/e87e0fda-23a3-4735-9035-af801e8417fc"/>

### How it works

The job replaces dependencies [referenced by
git](https://github.com/paritytech/polkadot-sdk/blob/d733c77ee2d2e8e2d5205c552a5efb2e5b5242c8/templates/minimal/pallets/template/Cargo.toml#L25)
with a reference to released crates using
[psvm](https://github.com/paritytech/psvm).

It creates a new workspace for the template, and adapts what's needed
from the `polkadot-sdk` workspace.

### See the results

The action has been tried out in staging, and the results can be
observed here:

- [minimal
stg](https://github.com/paritytech-stg/polkadot-sdk-minimal-template/)
- [parachain
stg](https://github.com/paritytech-stg/polkadot-sdk-parachain-template/)
- [solochain
stg](https://github.com/paritytech-stg/polkadot-sdk-solochain-template/)

These are based on the `1.9.0` release (using `release-crates-io-v1.9.0`
branch).
1. The `CustomFmtContext::ContextWithFormatFields` enum variant isn't
actually used, so we don't need the enum anymore.

2. We don't do anything with most of the normalized metadata that's
created by calling `event.normalized_metadata();` - the `target` can be
read from `event.metadata().target()` and the level from
`event.metadata().level()` - so let's just call these directly to
simplify things (`event.metadata()` is just a field access under the hood).

Changelog: No functional changes, might run a tad faster with lots of
logging turned on.
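
For readers less familiar with `tracing`, here is a minimal standalone example of reading the target and level straight from an event's static metadata; this is generic `tracing`/`tracing-subscriber` usage, not the substrate logging code itself.

```
use tracing::{info, Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::prelude::*;

struct TargetLevelLayer;

impl<S: Subscriber> Layer<S> for TargetLevelLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        // `event.metadata()` is a cheap field access; target and level can be
        // read directly without building normalized metadata first.
        let meta = event.metadata();
        println!("target={} level={}", meta.target(), meta.level());
    }
}

fn main() {
    tracing_subscriber::registry().with(TargetLevelLayer).init();
    info!("hello");
}
```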

---------

Co-authored-by: Bastian Köcher <[email protected]>
This PR mainly removes `xcm::v3` stuff from `assets-common` to make it
more generic and facilitate the transition to newer XCM versions. Some
of the implementations here used hard-coded `xcm::v3::Location`, but now
it's up to the runtime to configure according to its needs.

Additional/consequent changes:
- the `penpal` runtime now uses `xcm::latest::Location` for `pallet_assets`
as `AssetId`, because we don't care about migrations here
- it considerably simplifies xcm-emulator integration tests, where we no
longer need lots of boilerplate conversions:
      ```
      v3::Location::try_from(...).expect("conversion works")
      ```
- xcm-emulator tests
- split the macro `impl_assets_helpers_for_parachain` into
`impl_assets_helpers_for_parachain` and
`impl_foreign_assets_helpers_for_parachain` (avoids using the hard-coded
`xcm::v3::Location`)
…ytech#4080)

This PR introduces a `BlockHashProvider` into `pallet_mmr::Config`.
This type is used to get the `block_hash` for a given `block_number`, rather
than using `frame_system::Pallet::block_hash` directly.

The `DefaultBlockHashProvider` uses `frame_system::Pallet::block_hash`
to get the `block_hash`.

Closes: paritytech#4062
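
A rough, self-contained sketch of the abstraction; the trait and type names follow the description above, but the concrete types are stand-ins rather than the pallet's exact definitions.

```
// Illustrative sketch: the MMR pallet asks an injected provider for block
// hashes instead of hard-coding `frame_system::Pallet::block_hash`.
trait BlockHashProvider<BlockNumber, Hash> {
    fn block_hash(block_number: BlockNumber) -> Hash;
}

// Default behaviour: delegate to the system pallet (modelled here by a stub).
struct DefaultBlockHashProvider;

impl BlockHashProvider<u32, [u8; 32]> for DefaultBlockHashProvider {
    fn block_hash(block_number: u32) -> [u8; 32] {
        // In the real pallet this would be `frame_system::Pallet::<T>::block_hash`.
        stub_system_block_hash(block_number)
    }
}

fn stub_system_block_hash(_number: u32) -> [u8; 32] {
    [0u8; 32]
}

fn main() {
    // A runtime can plug in a different provider when block hashes should
    // come from somewhere other than frame_system.
    let hash = DefaultBlockHashProvider::block_hash(42);
    println!("{hash:?}");
}
```
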
…ch#3694)

This workflow will automatically add issues related to async backing to
the Parachain team board, updating a custom "meta" field.

Requested by @the-right-joyce
bkchr and others added 19 commits May 17, 2024 11:35
Changes the Ethereum client storage scope to public, so it can be set in
a migration.

When merged, we should backport this to all other release branches:

- [ ] release-crates-io-v1.7.0 - patch release the fellows BridgeHubs
runtimes paritytech#4504
- [ ] release-crates-io-v1.8.0 -
paritytech#4505
- [ ] release-crates-io-v1.9.0 -
paritytech#4506
- [ ] release-crates-io-v1.10.0 -
paritytech#4507
- [ ] release-crates-io-v1.11.0 -
paritytech#4508
- [ ] release-crates-io-v1.12.0 (commit soon)
…ce on the pool account (paritytech#4503)

Addresses paritytech#4440 (will
close once we have this in prod runtimes).
Related: paritytech#2037.

An extra consumer reference is preventing pools from being destroyed. When a
pool is ready to be destroyed, we can safely clear the consumer reference if
there is one. Notably, I only check for one extra consumer reference, since
that is a known bug. Anything more possibly indicates another issue, and we
probably don't want to silently absorb those errors as well.

After this change, pools with an extra consumer reference should be able to
be destroyed normally.
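
A minimal sketch of the idea, written as a hypothetical helper; it assumes `frame_system`'s `consumers`/`dec_consumers` accessors and the "exactly one stray reference" threshold, and is not the pallet's actual code.

```
// Hedged sketch: if the pool account that is being destroyed still carries
// exactly one consumer reference (the known stray one), drop it so the
// account can be reaped. Anything more is left untouched, since that likely
// points at a different problem we don't want to silently absorb.
fn defensively_clear_extra_consumer<T: frame_system::Config>(pool_account: &T::AccountId) {
    if frame_system::Pallet::<T>::consumers(pool_account) == 1 {
        frame_system::Pallet::<T>::dec_consumers(pool_account);
    }
}
```
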
…aritytech#4512)

Replace usage of deprecated
`substrate_rpc_client::ChildStateApi::storage_keys` with
`substrate_rpc_client::ChildStateApi::storage_keys_paged`.

Required for successful scraping of Aleph Zero state.
…ritytech#4514)

As per paritytech#3326, this removes `pallet::getter` macro usage from
`pallet-fast-unstake`. The syntax `StorageItem::<T, I>::get()` should be
used instead.
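
As a quick illustration of the pattern, here is a simplified stand-in pallet (not `pallet-fast-unstake` itself); the storage item name is borrowed for the example, and the getter attribute shown in the comment is the part this kind of PR removes.

```
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::config]
    pub trait Config: frame_system::Config {}

    // Before: a `#[pallet::getter(fn eras_to_check_per_block)]` attribute on
    // this item generated a `Pallet::<T>::eras_to_check_per_block()` getter.
    // After the removal, callers read the storage item directly.
    #[pallet::storage]
    pub type ErasToCheckPerBlock<T: Config> = StorageValue<_, u32, ValueQuery>;

    impl<T: Config> Pallet<T> {
        pub fn current_eras_to_check() -> u32 {
            // Direct storage access replaces the generated getter.
            ErasToCheckPerBlock::<T>::get()
        }
    }
}
```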

cc @muraca

---------

Co-authored-by: Liam Aharon <[email protected]>
…h#4471)

Implements paritytech#4429

Collators only need to maintain the implicit view for the paraid they
are collating on.
In this case, bypass prospective-parachains entirely. It's still useful
to use the `GetMinimumRelayParents` message from prospective-parachains
for validators, because the data is already present there.

This enables us to entirely remove the subsystem from collators, where it
consumed resources needlessly.

Aims to resolve paritytech#4167 

TODO:
- [x] fix unit tests
…ritytech#4533)

closes paritytech/parity-bridges-common#3000

Recently we've changed our bridge configuration for Rococo <> Westend
and our new relayer has started to submit transactions every ~ `30`
seconds. Eventually, it switches itself into a limbo state, where it can't
submit more transactions - all `author_submitAndWatchExtrinsic` calls
are failing with the following error: `ERROR bridge Failed to send
transaction to BridgeHubRococo node: Call(ErrorObject { code:
ServerError(-32006), message: "Too many subscriptions on the
connection", data: Some(RawValue("Exceeded max limit of 1024")) })`.

Some links for those who want to explore:
- the server side (node) has a strict limit on the number of active
subscriptions. It fails to open a new subscription if this limit is hit:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/server/src/middleware/rpc/layer/rpc_service.rs#L122-L132.
The limit is set to `1024` by default;
- internally this limit is a semaphore with `limit` permits:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L461-L485;
- semaphore permit is acquired in the first link;
- the permit is "returned" when the `SubscriptionSink` is dropped:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L310-L325;
- the `SubscriptionSink` is dropped when [this `polkadot-sdk`
function](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L58-L94)
returns. In other words - when the connection is closed, the stream is
finished or internal subscription buffer limit is hit;
- the subscription has an internal buffer, so sending an item consists
of two steps: [reading an item from the underlying
stream](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L125-L141)
and [sending it over the
connection](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L111-L116);
- when the underlying stream is finished, the `inner_pipe_from_stream`
wants to ensure that all items are sent to the subscriber. So it: [waits
until the current send operation
completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L146-L148)
and then [sends all remaining items from the internal
buffer](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L150-L155).
Once that is done, the function returns, the `SubscriptionSink` is
dropped, the semaphore permit is dropped, and we are ready to accept new
subscriptions;
- unfortunately, the code just calls `pending_fut.await.is_err()` to
ensure that [the current send operation
completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L146-L148).
But if there is no current send operation (which is normal), then
`pending_fut` is set to a terminated future and the `await` never
completes. Hence, no return from the function, no drop of
`SubscriptionSink`, no drop of the semaphore permit, and no new
subscriptions allowed (once the number of subscriptions hits the limit).

I've illustrated the issue with a small test - you can verify that if, e.g.,
the stream is initially empty, the
`subscription_is_dropped_when_stream_is_empty` test will hang because
`pipe_from_stream` never exits.
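
A tiny standalone sketch of the failure mode (generic `futures` usage, not the actual `polkadot-sdk` code): a future created with `Fuse::terminated()` never resolves, so awaiting it unconditionally hangs, while checking `is_terminated()` first avoids that.

```
use futures::executor::block_on;
use futures::future::{Fuse, FusedFuture, Ready};

fn main() {
    block_on(async {
        // Models "no send operation currently in flight": a terminated Fuse.
        let pending_fut: Fuse<Ready<()>> = Fuse::terminated();

        // `pending_fut.await` here would stay pending forever - the hang
        // described above. Guarding on `is_terminated()` sidesteps it.
        if !pending_fut.is_terminated() {
            pending_fut.await;
        }
        println!("nothing in flight, nothing to wait for");
    });
}
```
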
…aritytech#4465)

Closes paritytech/parity-bridges-common#2963

See the issue above for the rationale.
I've been thinking about adding similar calls to other pallets, but:
- for the parachains pallet, I haven't been able to think of a case when we
will need that, given how long a referendum takes. I.e. if the storage proof
format changes and we want to unstick the bridge, it'll take a few weeks to
sync a single parachain header, then more weeks for another, and so on.
- for the messages pallet, I made a similar call initially, but it just
changes a storage key (`OutboundLanes` and/or `InboundLanes`), so
there's no logic there and it can simply be done using
`system.set_storage`.

---------

Co-authored-by: command-bot <>
…ritytech#4198)

This PR introduces custom types / substrate wrappers for `Multiaddr`,
`multiaddr::Protocol`, `Multihash`, `ed25519::*` and supplementary types
like errors and iterators.

This is needed to unblock `libp2p` upgrade PR
paritytech#1631 after
paritytech#2944 was merged.
`libp2p` and `litep2p` currently depend on different versions of the
`multiaddr` crate, and the introduction of these "common ground" types is
needed to support independent version upgrades of `multiaddr` and
dependent crates in `libp2p` & `litep2p`.

Beyond the convenience of not tying the versions of the `libp2p` & `litep2p`
dependencies together, it's currently not even possible to keep the `libp2p`
& `litep2p` dependencies updated to the same versions: the `multiaddr` in
`libp2p` depends on `libp2p-identity`, which we can't include as a
dependency of `litep2p`, which has its own `PeerId` type. In the
future, to keep things updated on the `litep2p` side, we will likely need to
fork `multiaddr` and make it use the `litep2p` `PeerId` as the payload of
the `/p2p/...` protocol.

With these changes, common code in substrate uses these custom types,
and the `litep2p` & `libp2p` backends use the corresponding libraries' types.
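
A rough, self-contained illustration of the "common ground" idea; the types and conversions below are stand-ins, not the actual `sc-network-types` API.

```
// Substrate-side wrapper: common networking code only ever sees this PeerId,
// and each backend converts to its own type at the boundary.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct PeerId(Vec<u8>);

// Stand-ins for the backend-specific peer-id types.
mod libp2p_backend {
    pub struct PeerId(pub Vec<u8>);
}
mod litep2p_backend {
    pub struct PeerId(pub Vec<u8>);
}

impl From<PeerId> for libp2p_backend::PeerId {
    fn from(peer: PeerId) -> Self {
        Self(peer.0)
    }
}
impl From<PeerId> for litep2p_backend::PeerId {
    fn from(peer: PeerId) -> Self {
        Self(peer.0)
    }
}

fn main() {
    let common = PeerId(vec![0x12, 0x20]);
    // Only the selected backend converts at its boundary:
    let _for_libp2p: libp2p_backend::PeerId = common.clone().into();
    let _for_litep2p: litep2p_backend::PeerId = common.into();
}
```
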
…h#4538)

This PR backports version bumps and reorganisation of the prdoc files
from the `1.12.0` release branch back to `master`
@juangirini closed this May 22, 2024
@juangirini deleted the tony/dev branch May 22, 2024 14:01