Implement verified claim term extension by client #669
Conversation
Force-pushed from 2ced655 to 9594cf4
Codecov Report

@@            Coverage Diff             @@
##      decouple-fil+     #669    +/-   ##
==========================================
  Coverage          ?    84.69%
==========================================
  Files             ?        95
  Lines             ?     19057
  Branches          ?         0
==========================================
  Hits              ?     16141
  Misses            ?      2916
  Partials          ?         0
I really want to get aligned on how we do gas optimization with MapMap, and I have one testing concern.
let mut batch_gen = BatchReturnGen::new(params.terms.len());
rt.transaction(|st: &mut State, rt| {
    let mut st_claims = st.load_claims(rt.store())?;
    // Group consecutive term extensions with the same provider so we can batch update
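For reference, a minimal sketch of the grouping the excerpt above is building toward, using simplified stand-in types rather than the actual actor code: consecutive entries in params.terms that share a provider are collected into one run, so each provider's claims map only needs to be loaded and written back once per run.

// Hypothetical, simplified input type; the real params carry more fields.
struct ClaimTerm {
    provider: u64,
    claim_id: u64,
    new_term_max: i64,
}

// Collect consecutive extensions sharing a provider into one run, so the
// provider's claims map is loaded and written back once per run rather than
// once per claim.
fn group_by_provider(terms: &[ClaimTerm]) -> Vec<(u64, Vec<&ClaimTerm>)> {
    let mut groups: Vec<(u64, Vec<&ClaimTerm>)> = Vec::new();
    for term in terms {
        if let Some((provider, run)) = groups.last_mut() {
            if *provider == term.provider {
                run.push(term);
                continue;
            }
        }
        groups.push((term.provider, vec![term]));
    }
    groups
}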
Were you making this decision with MapMap caching in mind? I agree this saves checking the MapMap's cache, which is something, but it points to redundancy. If the MapMap's cache handling was slow enough that you didn't want to rely on it, then either (1) we should remove that complexity from the data type and accept caller-level complexity like this, or (2) we should improve MapMap caching so that it is actually useful.
I did not have MapMap caching specifically in mind. My motivation was the more abstract idea that batch-updating a H/AMT should be more efficient, because intermediate nodes only need to be computed once even when many values share a path. I am aware that the underlying HAMT doesn't actually implement this yet, but I felt compelled to do it the "right" way anyway, so this code would immediately benefit if that lower-level improvement is made.
It would be possible to improve either MapMap, or just the Map or HAMT, to cache all updates in memory and only serialize once at the end. That would be more extensive and trickier caching than the present approach, but it would remove all such considerations and complexity from caller-level code like this. I'm hesitant to push on it because of the difficulty of getting it right at that layer.
"cache all updates in memory and only serialize once at the end"

Doesn't internal HAMT node caching already solve this problem? I think it does, and I don't think you need the caller complexity added here to achieve it. You already get this with HAMT + MapMap caching.
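For illustration, a minimal sketch of relying on that internal caching, assuming fvm_ipld_hamt's Hamt (new/set/flush) and fvm_ipld_blockstore's MemoryBlockstore roughly as I recall them; the exact constructors and signatures may differ. Repeated sets only touch in-memory nodes, and the store is written once at flush, which is what makes caller-side write grouping mostly redundant.

use fvm_ipld_blockstore::MemoryBlockstore;
use fvm_ipld_hamt::{BytesKey, Hamt};

fn main() {
    let store = MemoryBlockstore::new();
    // u64 values are enough for the sketch; real claims are structs.
    let mut claims: Hamt<_, u64> = Hamt::new(&store);

    // Each set mutates cached in-memory nodes; nothing is serialized yet,
    // even when many keys land in the same subtree.
    for id in 0u64..100 {
        claims.set(BytesKey(id.to_be_bytes().to_vec()), id).unwrap();
    }

    // Intermediate nodes are computed and written to the blockstore once, here.
    let root = claims.flush().unwrap();
    println!("root = {}", root);
}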
Ok. If the HAMT does do that, it's not very obvious and not documented at all. I tried to quickly benchmark this to check, but ran into much pain trying to replace the mock runtime's blockstore with a TrackingBlockstore, which comes from a different repo (😠 see #678). I'll revert it on the principle of simplicity first.
Not surprised if the Rust library has no mention of this, but it used to be a consensus concern before the FVM, so it was documented here: https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0007.md
Force-pushed from 9594cf4 to cd9a58c
This was fairly straightforward.
One possibly controversial decision was to allow extension of claims that have already expired. I couldn't find a good reason to deny it. I think this will harmonise with my intentions to (in the future) allow re-committing a verified piece if the original sector fails within a claim's lifetime. The "failure" in the case I've implemented is the client forgetting to extend their claim term in time, so the sector was forced to drop it and the provider will need to re-seal.
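To make that decision concrete, here is a minimal sketch of the check I have in mind, with invented names and a hypothetical policy constant rather than the actual actor code: the new term is validated against the policy maximum measured from the claim's start, and the fact that the current term has already elapsed is deliberately not grounds for rejection.

// Simplified stand-ins for the actor's claim record and policy.
struct Claim {
    term_start: i64, // epoch at which the claim became active
    term_max: i64,   // current maximum term, relative to term_start
}

// Hypothetical policy ceiling on a claim's term, in epochs.
const MAX_CLAIM_TERM: i64 = 5 * 365 * 2880;

// The new term must grow and stay within policy. An already-elapsed term
// (term_start + term_max < current_epoch) is not a reason to refuse:
// the client may still extend an expired claim.
fn validate_extension(claim: &Claim, new_term_max: i64, _current_epoch: i64) -> Result<(), String> {
    if new_term_max <= claim.term_max {
        return Err("new term must exceed the current term".to_string());
    }
    if new_term_max > MAX_CLAIM_TERM {
        return Err("new term exceeds the policy maximum".to_string());
    }
    Ok(())
}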
Closes #550.