Batch ClaimAllocations during ProveCommitAggregate #1304

Merged: 10 commits into master, Jun 5, 2023

Conversation

@alexytsu (Contributor) commented May 30, 2023

Work towards #1278.

TODOs

  • Make existing tests pass with new expectations
  • Fix return values so that a follow-up can support partially successful batches of DealActivation

Batching claim allocations saves both messaging overhead and state-mutation costs by updating the DataCap actor only once. The full stack traces can be found here.

The total gas cost of the ProveCommitAggregate (PCA) message drops by roughly 25%.

Before:

Span[Root, self: {sum=0, none=0}, total: {sum=322,973,688, OnVerifyAggregateSeals=115,956,010, OnBlockLink=101,446,488, wasm_exec=43,387,915, OnBlockOpenBase=31,302,480, OnSyscall=14,112,000, wasm_memory_init=12,504,269, OnMethodInvocation=2,100,000, wasm_memory_grow=760,218, OnActorUpdate=475,000, OnBlockOpenPerByte=382,410, OnGetRandomness=344,160, OnBlockCreate=171,410, OnBlockRead=15,907, OnHashing=9,422, OnValueTransfer=6,000, OnGetActorCodeCid=0, OnBlockStat=0, OnNetworkContext=0, OnSelfBalance=0, OnGetBuiltinActorType=0, OnMessageContext=0, none=0}]

After:

Span[Root, self: {sum=0, none=0}, total: {sum=239,773,152, OnVerifyAggregateSeals=115,956,010, OnBlockLink=51,931,891, wasm_exec=32,281,749, OnBlockOpenBase=22,117,920, OnSyscall=8,526,000, wasm_memory_init=6,265,242, OnMethodInvocation=1,050,000, OnActorUpdate=475,000, wasm_memory_grow=393,216, OnGetRandomness=344,160, OnBlockOpenPerByte=308,670, OnBlockCreate=95,210, OnBlockRead=12,859, OnHashing=9,226, OnValueTransfer=6,000, none=0, OnGetBuiltinActorType=0, OnGetActorCodeCid=0, OnNetworkContext=0, OnBlockStat=0, OnMessageContext=0, OnSelfBalance=0}]
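
To make the batching concrete, here is a minimal sketch of the idea; the type and function names (SectorClaim, the ClaimAllocationsParams fields, batch_claims) are hypothetical stand-ins rather than the exact builtin-actors definitions. The miner flattens all per-sector claims into a single parameter list and sends one ClaimAllocations message, so the verified-registry and DataCap state is loaded and flushed once for the whole aggregate instead of once per sector.

// Hypothetical stand-in types: not the exact builtin-actors definitions.
struct SectorClaim {
    sector: u64,        // sector number the claim applies to
    allocation_id: u64, // verified allocation being claimed
    size: u64,          // claimed space in bytes
}

struct ClaimAllocationsParams {
    claims: Vec<SectorClaim>,
}

// Flatten per-sector claim lists into a single batched parameter, so the miner
// sends one message to the verified registry instead of one per sector.
fn batch_claims(per_sector: Vec<Vec<SectorClaim>>) -> ClaimAllocationsParams {
    ClaimAllocationsParams { claims: per_sector.into_iter().flatten().collect() }
}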

@alexytsu force-pushed the alexytsu/1278-batch-allocation-claim branch from 943c209 to 7cc207e on May 30, 2023 04:31
@alexytsu force-pushed the alexytsu/1278-batch-allocation-claim branch from 7cc207e to b92f4a6 on May 30, 2023 06:50
@alexytsu marked this pull request as ready for review May 30, 2023 09:44
@alexytsu requested a review from anorth May 30, 2023 09:44
actors/miner/src/ext.rs (review thread resolved)
#[serde(with = "bigint_ser")]
pub claimed_space: BigInt,
pub sector: SectorNumber,
Member commented:
(Thinking out loud) I see how we need some change here: when batching across sectors, we need some way to return how much claimed space is associated with each sector, because the caller doesn't know.

The parameters are not grouped by sector, so it makes sense that the result is not grouped by sector either: there could be multiple claims for a single sector.

We don't need to return the sector number though, and there's now some redundancy between the BatchReturn and the list of results. I think the simplest thing would be to drop the BatchReturn and return a vector of claim results that is parallel to (so the same length as) the parameters and contains only the claimed space. The space is zero for failures. The caller can line this up with the parameters to group by sector number.

(A similar but more complicated scheme would keep the batch return but drop the failed results, but then iterating to line up is hard. I don't think we should optimise for the failure case).

@ZenGround0 what do you think here?
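
A minimal sketch of that proposed shape, using simplified stand-in types (claimed_space is a BigInt in the actual actor): the results vector stays parallel to the claim parameters, zero space marks a failed claim, and the caller zips the results against its own inputs to regroup by sector.

use std::collections::BTreeMap;

struct SectorAllocationClaimResult {
    claimed_space: u64, // simplified; the real field is a BigInt
}

struct ClaimAllocationsReturn {
    // results[i] corresponds to the i-th claim parameter; 0 marks a failure.
    results: Vec<SectorAllocationClaimResult>,
}

// The caller already knows which sector each claim parameter belonged to, so it
// can recover claimed space per sector without the callee returning sector numbers.
fn claimed_space_by_sector(
    claim_sectors: &[u64],
    ret: &ClaimAllocationsReturn,
) -> BTreeMap<u64, u64> {
    let mut by_sector = BTreeMap::new();
    for (sector, result) in claim_sectors.iter().zip(&ret.results) {
        *by_sector.entry(*sector).or_insert(0u64) += result.claimed_space;
    }
    by_sector
}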

actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/verifreg/src/lib.rs (outdated, resolved)
actors/verifreg/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/verifreg/src/lib.rs (outdated, resolved)
actors/verifreg/src/lib.rs (outdated, resolved)
@alexytsu force-pushed the alexytsu/1278-batch-allocation-claim branch from df56d14 to 4108592 on May 31, 2023 15:15
@alexytsu requested a review from anorth May 31, 2023 23:45
@anorth (Member) left a comment:

Thanks, this looks pretty good.

I am very reluctant to introduce this duplication of the single/batch calls even temporarily, though. Thanks for attempting to keep this PR small. Can you do the refactor against master first, so that both call sites use a path that will allow batching, and then add the batching?

actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/lib.rs (outdated, resolved)
actors/miner/src/types.rs (outdated, resolved)
ret_gen.add_success();
sector_claims
.push(SectorAllocationClaimResult { claimed_space: claim_alloc.size.0.into() });
Member commented:

This pattern makes me quite nervous. I wish we had try_map in stable.

For now, please explicitly check that sector_claims ends up the right size, and abort (USR_ASSERTION_FAILED) if not.
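
A rough sketch of such a guard, assuming fil_actors_runtime's ActorError::unchecked constructor and fvm_shared's ExitCode::USR_ASSERTION_FAILED; the surrounding identifiers (sector_claims, params.allocations) follow the snippet above, and the exact error constructor in the actor code may differ.

// Abort if the result vector no longer lines up with the claim parameters.
// (Uses fil_actors_runtime::ActorError and fvm_shared::error::ExitCode.)
if sector_claims.len() != params.allocations.len() {
    return Err(ActorError::unchecked(
        ExitCode::USR_ASSERTION_FAILED,
        format!(
            "claim results ({}) do not match claim parameters ({})",
            sector_claims.len(),
            params.allocations.len()
        ),
    ));
}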

@alexytsu (Contributor, author) replied:

I might be missing something here, but wouldn't mapping over params.allocations be sufficient to ensure that the result array stays parallel?

Also, given BatchReturn is no longer part of ClaimAllocationsReturn, I'll remove the ret_gen tracking, though this drops the nuance of specific error codes from the error message (it's already missing from the return value now).

@alexytsu (Contributor, author) commented Jun 1, 2023:
I don't know if this is valid. Is it possible to have a valid but zero-sized claim?

Member replied:
I think map is hard to use if you want to bubble a result with ?, like at 433 (unless you want to deal with a Map<Result<>>). That's why try_map.

This logic of checking for zero size is valid at the moment, because the minimum Allocation size is positive. It's probably safe forever; I'm just nervous.
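
For illustration, a self-contained (non-actor) example of the difference: plain map cannot bubble errors with ?, but collecting an iterator of Results into Result<Vec<_>, _> gives try_map-like behaviour on stable Rust, and the Ok output stays parallel to the input.

// Each input either yields a claimed size or an error; collect() stops at the
// first Err, otherwise the Ok vector holds exactly one entry per input.
fn claim_sizes(claims: &[Option<u64>]) -> Result<Vec<u64>, String> {
    claims
        .iter()
        .map(|c| c.ok_or_else(|| "invalid claim".to_string()))
        .collect()
}

fn main() {
    assert_eq!(claim_sizes(&[Some(32), Some(64)]), Ok(vec![32, 64]));
    assert!(claim_sizes(&[Some(32), None]).is_err());
}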

actors/verifreg/tests/verifreg_actor_test.rs (outdated, resolved)

@alexytsu enabled auto-merge June 5, 2023 00:21
@alexytsu added this pull request to the merge queue Jun 5, 2023
Merged via the queue into master with commit 3825cc1, Jun 5, 2023
@alexytsu deleted the alexytsu/1278-batch-allocation-claim branch June 5, 2023 01:11
@alexytsu mentioned this pull request Jun 8, 2023