storagefsm: Trigger input processing when below limits #5801
Conversation
shouldUpdateInput := m.stats.updateSector(cfg, m.minerSectorID(state.SectorNumber), state.State)

// trigger more input processing when we've dipped below max sealing limits
if shouldUpdateInput {
Changing this to `shouldUpdateInput && false` makes `TestBatchDealInput` fail reliably.
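For readers without the full diff: the essence of the change is that `updateSector` now reports when the number of actively sealing sectors drops back below the configured limit, and that signal is what wakes up input processing. Below is a minimal, self-contained sketch of that idea; the names, state tracking, and limit check are simplified stand-ins, not the actual lotus implementation.

```go
package main

import "fmt"

// SectorState mirrors the string-based state names used by the sealing FSM.
type SectorState string

const (
	PreCommit1 SectorState = "PreCommit1"
	Proving    SectorState = "Proving" // treated here as "no longer sealing"
)

// Config holds only the one field relevant to this sketch.
type Config struct {
	MaxSealingSectorsForDeals uint64
}

// SectorStats is a simplified stand-in for m.stats.
type SectorStats struct {
	states map[uint64]SectorState
}

// sealing counts sectors that are still being worked on.
func (s *SectorStats) sealing() uint64 {
	var n uint64
	for _, st := range s.states {
		if st != Proving {
			n++
		}
	}
	return n
}

// updateSector records the new state and reports whether the number of
// sealing sectors has just dipped below the configured limit, in which
// case the caller should trigger more input (deal) processing.
func (s *SectorStats) updateSector(cfg Config, id uint64, st SectorState) bool {
	atLimitBefore := s.sealing() >= cfg.MaxSealingSectorsForDeals
	s.states[id] = st
	belowLimitAfter := s.sealing() < cfg.MaxSealingSectorsForDeals
	return atLimitBefore && belowLimitAfter
}

func main() {
	cfg := Config{MaxSealingSectorsForDeals: 2}
	stats := &SectorStats{states: map[uint64]SectorState{1: PreCommit1, 2: PreCommit1}}

	// Sector 1 finishes sealing; we drop from 2 sealing sectors to 1.
	if shouldUpdateInput := stats.updateSector(cfg, 1, Proving); shouldUpdateInput {
		fmt.Println("below the sealing limit again: trigger more input processing")
	}
}
```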
// update stats early, fsm planner would do that async
m.stats.updateSector(cfg, m.minerSectorID(sid), UndefinedSectorState)
Without this early update we could end up creating a bunch of sectors at once and starting more parallel work than the max sealing sector config allows.
This seems a bit brittle. Can we have an `m.createSector` function that calls `NewSector` and updates the stats?
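One possible shape for such a helper, sketched under the assumption that the surrounding `Sealing` type already has the `sc`, `sealer`, and `stats` fields and the `minerSector`/`minerSectorID` helpers used elsewhere in this package; the exact signature and error handling here are illustrative, not the final implementation.

```go
// createSector allocates a sector number, initializes the sector, and updates
// the sealing stats in one place, so call sites can't forget the early update.
func (m *Sealing) createSector(ctx context.Context, cfg sealiface.Config, sp abi.RegisteredSealProof) (abi.SectorNumber, error) {
	sid, err := m.sc.Next()
	if err != nil {
		return 0, xerrors.Errorf("getting sector number: %w", err)
	}

	if err := m.sealer.NewSector(ctx, m.minerSector(sp, sid)); err != nil {
		return 0, xerrors.Errorf("initializing sector: %w", err)
	}

	// update stats early; the fsm planner would otherwise do this async,
	// which can let more sectors start than the max sealing config allows
	m.stats.updateSector(cfg, m.minerSectorID(sid), UndefinedSectorState)

	return sid, nil
}
```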
Currently, when a miner starts processing more deals in parallel than the configured sealing limits allow, we don't start processing additional deals after the first batch of sectors gets sealed. This PR should fix that case.
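The PR does not show how the `shouldUpdateInput` signal reaches the input side, so the following is only a self-contained toy sketch of the general "nudge input processing when a sealing slot frees up" pattern; the channel, goroutine, and names are illustrative, not lotus code.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// inputKick plays the role of the hypothetical trigger between the
	// sealing FSM and the input (deal-packing) loop.
	inputKick := make(chan struct{}, 1)

	// input loop: waits for a kick, then tries to fill free sealing slots
	// with pending deals.
	go func() {
		for range inputKick {
			fmt.Println("input: sealing slot freed, packing pending deals into a new sector")
		}
	}()

	// planner side: a sector finished sealing and the count dipped below
	// the configured limit, so nudge the input loop without blocking.
	shouldUpdateInput := true
	if shouldUpdateInput {
		select {
		case inputKick <- struct{}{}: // one pending kick is enough
		default:
		}
	}

	time.Sleep(100 * time.Millisecond) // give the goroutine time to run in this toy example
}
```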