[NPoS] Unlimited number of nominators eligible for Rewards payout #13498
base: master
Conversation
@@ -416,7 +416,7 @@ impl pallet_babe::Config for Runtime {
 	type DisabledValidators = Session;
 	type WeightInfo = ();
 	type MaxAuthorities = MaxAuthorities;
-	type MaxNominators = MaxNominatorRewardedPerValidator;
+	type MaxNominators = MaxExposurePageSize;
This is used for weight calculation when reporting equivocations. The older configuration (changed here) was incorrect, since for slashing the number of nominators exposed can be larger than `MaxNominatorRewardedPerValidator`. `MaxNominatorRewardedPerValidator` only limits the nominators that can be rewarded, but the number of nominators eligible for slashing (per validator) is unbounded (though there is a hard count of 22500 for all nominators). I am not sure of the correct way to solve this and it is most probably out of scope of this PR. Setting this to `MaxExposurePageSize` is strictly the same as before (though now it is more apparent that this configuration is wrong).
helps paritytech/polkadot-sdk#439.
closes paritytech/polkadot-sdk#473.
polkadot companion: paritytech/polkadot#6765.
Original PR: #13059.
Context

Rewards payout is today processed in a single block and limited to `MaxNominatorRewardedPerValidator`. This number is currently 512 on both Kusama and Polkadot.

This PR scales the nominator payout to an unlimited count in a multi-block fashion. Exposures are stored in pages, with each page capped to a certain size (`MaxExposurePageSize`). Starting out, this number would be the same as `MaxNominatorRewardedPerValidator`, but it can eventually be lowered through runtime upgrades to limit the number of rewardable nominators per dispatched call.

The changes in this PR are backward compatible.
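To make the paging model concrete, here is a small self-contained Rust sketch (illustrative types and names only, not the pallet's actual `Exposure`/`ExposurePage` definitions) that splits a validator's nominators into pages of at most `MaxExposurePageSize` entries:

```rust
/// A single nominator backing a validator (illustrative; not the pallet's actual type).
#[derive(Debug, Clone)]
struct IndividualExposure {
    who: String,
    value: u128,
}

/// One page of a validator's exposure: at most `page_size` nominators plus
/// the total stake contributed by that page.
#[derive(Debug)]
struct ExposurePage {
    page_total: u128,
    others: Vec<IndividualExposure>,
}

/// Split the full nominator list into pages of at most `page_size` entries.
fn into_pages(nominators: &[IndividualExposure], page_size: usize) -> Vec<ExposurePage> {
    nominators
        .chunks(page_size)
        .map(|chunk| ExposurePage {
            page_total: chunk.iter().map(|n| n.value).sum(),
            others: chunk.to_vec(),
        })
        .collect()
}

fn main() {
    // 1100 nominators with a page size of 512 yield 3 pages: 512 + 512 + 76.
    let nominators: Vec<IndividualExposure> = (0..1100)
        .map(|i| IndividualExposure { who: format!("nominator-{i}"), value: 10 })
        .collect();
    let pages = into_pages(&nominators, 512);
    assert_eq!(pages.len(), 3);
    assert_eq!(pages.last().unwrap().others.len(), 76);
    println!("pages: {}, last page total: {}", pages.len(), pages.last().unwrap().page_total);
}
```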
Note about the release

This PR introduces a new storage item `MaxExposurePageCount` which defaults to a minimum value of 1 when not set. This means that, functionally, the pallet will continue to work exactly as before after this release. To enable paged rewards payout, `MaxExposurePageCount` can be increased to a bigger number by OpenGov for each runtime separately.

Edit: the `MaxExposurePageCount` limit has since been removed. Nominators are simply paged whenever they exceed the `MaxExposurePageSize` limit per validator.

How payouts would look after this change
Staking exposes two calls: 1) the existing `payout_stakers`, and 2) a new `payout_stakers_by_page`.

payout_stakers

This remains backward compatible, with no signature change. If a validator has multiple pages for a given era, the caller can invoke `payout_stakers` multiple times. The pages are claimed in ascending order and the runtime takes care of preventing double claims.

payout_stakers_by_page

Very similar to `payout_stakers`, but also accepts an extra param `page_index`. An account can choose to pay out rewards only for an explicitly passed `page`.

Let's look at an example scenario.
Given an active validator on Kusama with 1100 nominators, `MaxExposurePageCount` set to 10 and `MaxExposurePageSize` set to 512 for era `e`, the caller would need to call `payout_stakers` 3 times in order to pay out rewards to all nominators:

- `payout_stakers(origin, stash, e)` => pays the first 512 nominators.
- `payout_stakers(origin, stash, e)` => pays the second set of 512 nominators.
- `payout_stakers(origin, stash, e)` => pays the last set of 76 nominators.
- `payout_stakers(origin, stash, e)` => calling it a 4th time returns the error `InvalidPage`.

The above calls can also be replaced by `payout_stakers_by_page`, passing a page explicitly (see the sketch below).
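Roughly, that sequence could be exercised from a pallet-staking mock-runtime test like the sketch below. The mock setup is elided, and `Staking`, `Test`, `RuntimeOrigin`, `Error`, `caller`, `stash` and `e` are assumed to come from the pallet's test mock, so treat this as an illustration of the call surface rather than a runnable test:

```rust
use frame_support::{assert_noop, assert_ok};

// Each payout_stakers call claims the next unclaimed page of era `e` in
// ascending order; the runtime prevents double claims.
assert_ok!(Staking::payout_stakers(RuntimeOrigin::signed(caller), stash, e)); // page 0: first 512
assert_ok!(Staking::payout_stakers(RuntimeOrigin::signed(caller), stash, e)); // page 1: next 512
assert_ok!(Staking::payout_stakers(RuntimeOrigin::signed(caller), stash, e)); // page 2: last 76

// A fourth call fails because all pages for this era are already claimed.
assert_noop!(
    Staking::payout_stakers(RuntimeOrigin::signed(caller), stash, e),
    Error::<Test>::InvalidPage,
);

// Or claim a specific page directly with the new call.
assert_ok!(Staking::payout_stakers_by_page(RuntimeOrigin::signed(caller), stash, e, 1));
```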
Commission note

Validator commission is paid out in chunks across all the pages, where each commission chunk is proportional to the total stake of the page. This implies that the higher a page's total stake, the higher the commission paid out for that page. If all pages of a validator for a single era are paid out, the total commission paid to the validator is the same as it would have been with a non-paged exposure.
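As a rough illustration of that proportionality (plain Rust with made-up numbers; the pallet's real arithmetic uses `Perbill` and era reward points, so this is only a sketch):

```rust
fn main() {
    let total_commission: u128 = 1_000; // validator's commission for the era (illustrative units)
    let page_totals: [u128; 3] = [5_120, 5_120, 760]; // stake per exposure page
    let total_stake: u128 = page_totals.iter().sum();

    // Each page pays a commission chunk proportional to its share of the total stake.
    let chunks: Vec<u128> = page_totals
        .iter()
        .map(|page_total| total_commission * page_total / total_stake)
        .collect();

    // Summing the per-page chunks recovers (up to rounding) the commission a
    // non-paged exposure would have paid.
    assert!(chunks.iter().sum::<u128>() <= total_commission);
    println!("{chunks:?}");
}
```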
Migration Note

Strictly speaking, we did not need to bump our storage version since there is no migration of storage in this PR. But it is still useful to mark a storage upgrade for the following reasons:

- For the next `HistoryDepth` eras, the exposure is incrementally migrated to its corresponding paged storage item.
- The deprecated storage items can only be cleaned up after `HistoryDepth` eras on the current upgraded version (14). At some era `E` such that `E > era_at_which_V14_gets_into_effect + HistoryDepth`, we will upgrade to a version `X` which will clean up/kill the deprecated storage items.

In other words, it is a strict requirement that `Ex - E14 > HistoryDepth`, where

- `Ex` = era at which the deprecated storages are removed from the runtime,
- `E14` = era at which the runtime is upgraded to version 14.
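A tiny worked check of that requirement (the era numbers are hypothetical; only `HistoryDepth = 84` matches the value used in the cleanup note below):

```rust
fn main() {
    let history_depth: u32 = 84; // HistoryDepth on Kusama/Polkadot
    let e14: u32 = 1_000;        // hypothetical era at which the runtime is upgraded to v14
    let ex: u32 = 1_100;         // hypothetical era at which deprecated storages are removed
    // Deprecated items may only be removed once every era within HistoryDepth
    // has been written in the new paged format.
    assert!(ex - e14 > history_depth);
}
```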
Storage Changes

Added

- `ErasStakersOverview`
- `ClaimedRewards`
- `ErasStakersPaged`

Deprecated

The following can be cleaned up after 84 eras. Tracker issue for cleaning up:

- `ErasStakers`.
- `ErasStakersClipped`.
- `StakingLedger.claimed_rewards`, renamed to `StakingLedger.legacy_claimed_rewards`.

Config Changes

- Renamed `MaxNominatorRewardedPerValidator` to `MaxExposurePageSize`.
- New item `MaxExposurePageCount`.
TODO

- `MaxNominatorRewardedPerValidator` -> `MaxExposurePageSize`.

Followup