Fix nothing scheduled on session boundary #1403
@@ -606,7 +606,6 @@ impl<T: Config> AssignmentProvider<BlockNumberFor<T>> for Pallet<T> {
	fn get_provider_config(_core_idx: CoreIndex) -> AssignmentProviderConfig<BlockNumberFor<T>> {
		let config = <configuration::Pallet<T>>::config();
		AssignmentProviderConfig {
-			availability_period: config.paras_availability_period,
No point in having this per assignment provider as the values have been unified.
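The unification the comment describes can be illustrated with a stripped-down sketch in plain Rust. This is not the actual pallet code, and the field names `max_availability_timeouts` and `ttl` are assumptions: the point is only that the availability period lives in the one shared host configuration, so per-provider configs no longer carry a copy of it.

```rust
// Hypothetical, simplified sketch (not the real pallet types): after the
// unification, `availability_period` exists only in the shared configuration.

#[derive(Debug, PartialEq)]
struct HostConfiguration {
    // Shared by all cores; previously duplicated per assignment provider.
    paras_availability_period: u32,
}

#[derive(Debug, PartialEq)]
struct AssignmentProviderConfig {
    // Provider-specific knobs only (names are illustrative assumptions).
    max_availability_timeouts: u32,
    ttl: u32,
}

// The provider now returns only its own settings; callers read the
// availability period from the single global configuration instead.
fn get_provider_config(_config: &HostConfiguration) -> AssignmentProviderConfig {
    AssignmentProviderConfig { max_availability_timeouts: 5, ttl: 10 }
}

fn main() {
    let cfg = HostConfiguration { paras_availability_period: 4 };
    let provider = get_provider_config(&cfg);
    println!("{} {}", cfg.paras_availability_period, provider.ttl); // prints: 4 10
}
```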
		}
/// An entry tracking a paras
#[derive(Clone, Encode, Decode, TypeInfo, PartialEq, RuntimeDebug)]
These types are only used internally, no need to expose them via primitives.
Wonderful. Thanks for refactoring this and reducing the API surface.
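The review thread above is about API surface: types used only inside the runtime crate need not be exported from a primitives crate at all. A generic sketch of that idea (the type and fields here are illustrative assumptions, not the pallet's real definitions):

```rust
// Hypothetical sketch of shrinking an API surface: keep an internal tracking
// type crate-private instead of re-exporting it from a shared primitives crate.
mod scheduler {
    // `pub(crate)` limits visibility to this crate, so downstream crates can
    // no longer depend on the type's shape.
    #[derive(Clone, PartialEq, Debug)]
    pub(crate) struct ParasEntry {
        pub(crate) para_id: u32,
        pub(crate) retries: u32,
    }

    pub(crate) fn new_entry(para_id: u32) -> ParasEntry {
        ParasEntry { para_id, retries: 0 }
    }
}

fn main() {
    // Usable anywhere inside this crate, invisible outside it.
    let e = scheduler::new_entry(7);
    println!("{} {}", e.para_id, e.retries); // prints: 7 0
}
```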
			Error::<T>::InvalidAssignment
		})?;
		let group_vals =
			group_validators(group_idx).ok_or_else(|| Error::<T>::InvalidGroupIndex)?;
Simplification that fell out from changing the type of scheduled.
let time_out_at = |backed_in_number, availability_period| { |
Don't duplicate the timeout logic, but use the same function the scheduler uses, so the two cannot diverge from reality.
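The comment's idea, sketched in plain Rust (the helper name and signature are illustrative assumptions, not the pallet's actual API): both the scheduler and the inclusion code call one shared timeout predicate instead of each computing the deadline independently.

```rust
// Sketch only: a single timeout helper shared between the scheduler and the
// inclusion logic, so the two definitions of "timed out" cannot drift apart.

/// A candidate backed at block `backed_in` times out once at least
/// `availability_period` blocks have elapsed. (Name and semantics are
/// assumptions for illustration.)
fn availability_timed_out(now: u32, backed_in: u32, availability_period: u32) -> bool {
    now.saturating_sub(backed_in) >= availability_period
}

fn main() {
    // Backed at block 10 with a 4-block availability period:
    println!("{}", availability_timed_out(13, 10, 4)); // prints: false
    println!("{}", availability_timed_out(14, 10, 4)); // prints: true
}
```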
		})
		.collect();
// This will overwrite only `Free` cores if the scheduler module is working as intended.
No longer true. Only worked with the `None` hack.
* master: (28 commits)
  - Adds base benchmark for do_tick in broker pallet (#1235)
  - zombienet: use another collator image for the slashing test (#1386)
  - Prevent a fail prdoc check to block (#1433)
  - Fix nothing scheduled on session boundary (#1403)
  - GHW for building and publishing docker images (#1391)
  - pallet asset-conversion additional quote tests (#1371)
  - Remove deprecated `pallet_balances`'s `set_balance_deprecated` and `transfer` dispatchables (#1226)
  - Fix PRdoc check (#1419)
  - Fix the wasm runtime substitute caching bug (#1416)
  - Bump enumn from 0.1.11 to 0.1.12 (#1412)
  - RFC 14: Improve locking mechanism for parachains (#1290)
  - Add PRdoc check (#1408)
  - fmt fixes (#1413)
  - Enforce a decoding limit in MultiAssets (#1395)
  - Remove dynamic dispatch using `Ext` (#1399)
  - Remove redundant calls to `borrow()` (#1393)
  - Get rid of polling in `WarpSync` (#1265)
  - Bump actions/checkout from 3 to 4 (#1398)
  - Bump thiserror from 1.0.47 to 1.0.48 (#1396)
  - Move Relay-Specific Shared Code to One Place (#1193)
  - ...
* Fix scheduled state at session boundaries.
* Cleanup + better docs.
* More cleanup and fixes.
* Remove 12s hack.
* Add dep.
* Make clippy happy

Co-authored-by: eskimor <[email protected]>
The same issue, but for av-cores, was fixed in #1403.

Signed-off-by: Andrei Sandu <[email protected]>
We clear out claim queues at the end of the session. This leads to nodes not seeing anything scheduled for the next block. This is fixed by explicitly updating the claim queues in the `availability_cores` runtime API.

This ended up including a few further fixes and cleanups. The code is still not clean, but it is cleaner.