Storage collateral lock-up and recovery #386
Comments
@dignifiedquire for awareness; @anorth, do you have a particular question in here?
Still very much in flux; this will be changed and updated with the upcoming faults updates.
I have a solution concept for the storage collateral lock-up problem. Assume storage collateral per sector is a fixed (per-miner) amount,
This second case incentivises the miner to set a sector expiry date (and hence a collateral lock-up period) that is no shorter than the longest deal; if they try to under-commit, the miner can be immediately penalised by a deal client. The cost of this mechanism is an additional bigint on chain associated with each sector commitment. For deals which are successfully challenged, some additional state must remember this fact until the deal expires, to prevent repeated penalisation by the same client (which is probably needed by any such scheme). Note that the scheme doesn't care about the difference between a fault and a miner declaring a sector "done", though we might want some grace period to allow a miner to recover from a transient fault before being exposed to arbitration.
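For concreteness, here is a minimal sketch of the per-sector state this mechanism would need; the type, field, and method names are illustrative assumptions, not taken from the spec:

```go
// Illustrative sketch only: these type and field names are assumptions,
// not taken from the spec.
package sketch

import "math/big"

// SectorCommitment carries the extra per-sector on-chain data the proposal
// needs: a declared expiry (and hence a collateral lock-up period) plus a
// record of deals already successfully challenged, so the same client
// cannot penalise the miner repeatedly for the same deal.
type SectorCommitment struct {
	SectorID          uint64
	Expiry            uint64          // epoch until which the collateral stays locked
	StorageCollateral *big.Int        // the "additional bigint on chain" per sector
	ChallengedDeals   map[uint64]bool // dealID -> already arbitrated; kept until the deal expires
}

// CanArbitrate reports whether a client whose deal ends at dealExpiry can
// still penalise the miner for under-committing this sector: the sector's
// expiry must fall before the deal's, and the deal must not have been
// successfully challenged already.
func (s *SectorCommitment) CanArbitrate(dealID, dealExpiry uint64) bool {
	if s.ChallengedDeals[dealID] {
		return false // already penalised once for this deal
	}
	return s.Expiry < dealExpiry
}
```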
Also the somewhat-confusing
@sternhenri, how does this solution look? Any comments?
Sorry for the lag. The state-space of faults has sort of blown up, so I find myself a bit confused about what the current status quo is. The solution seems decent to me; my issue is simply that it incentivizes miners not to take on deals, given the extra complexity associated with them (i.e. needing to pledge storage over time upfront). That said, I agree with Alex that there's no other good way to ensure collateral sticks around long enough to be arbitrated (lest we fall back to some sort of on-chain deal system, etc.)...
I think this is done: storage deal collateral is locked up for the duration of the deal and returned to the client and provider at the end of the deal. If a sector is terminated, the client's deal collateral is returned to the client but the provider's collateral is sent to the BurntFundsActor.
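For illustration, a sketch of the settlement behaviour described above, assuming a simple ledger interface and a placeholder burnt-funds address; this is not the actual storage market actor code:

```go
// Illustrative sketch of the settlement behaviour described above; not the
// actual storage market actor code. The Ledger interface and the burnt-funds
// address are assumptions for the example.
package sketch

import "math/big"

type Deal struct {
	ClientCollateral   *big.Int
	ProviderCollateral *big.Int
}

// Ledger stands in for whatever mechanism actually transfers funds on chain.
type Ledger interface {
	Credit(addr string, amount *big.Int)
}

// burntFundsActor is a placeholder for the BurntFundsActor's address.
const burntFundsActor = "<BurntFundsActor address>"

// settleOnExpiry returns both parties' collateral when the deal runs to term.
func settleOnExpiry(l Ledger, d Deal, client, provider string) {
	l.Credit(client, d.ClientCollateral)
	l.Credit(provider, d.ProviderCollateral)
}

// settleOnSectorTermination returns the client's collateral but burns the
// provider's by sending it to the BurntFundsActor.
func settleOnSectorTermination(l Ledger, d Deal, client string) {
	l.Credit(client, d.ClientCollateral)
	l.Credit(burntFundsActor, d.ProviderCollateral)
}
```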
Recent changes have introduced an `owedStorageCollateral` miner actor state variable. It appears to be increased only when a miner experiences a storage fault (i.e. no PoSt by the generation attack time) and claimed by a client in `ArbitrateDeal`.

In the case of a storage fault, it appears that the collateral is locked up forever. This is basically the equivalent of burning the funds, except in the case that a client can claim some of it. As I have raised in #407, burning significant funds for failing to submit a PoSt on time poses a huge operational risk to miners, and I was under the impression that work was being done to change this. In particular, it makes no sense when a miner is proving storage that has no associated deal, and hence no client who could ever make a claim.
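As I read it, the current behaviour amounts to something like the following paraphrase in code, with assumed field and method names; it is not the actual actor implementation:

```go
// Paraphrase of the behaviour described above, with assumed field and
// method names; not the actual miner actor implementation.
package sketch

import "math/big"

type MinerActorState struct {
	OwedStorageCollateral *big.Int // only ever grows, on storage faults
}

// onStorageFault moves a sector's collateral into the "owed" pool when no
// PoSt arrives by the generation attack time.
func (st *MinerActorState) onStorageFault(sectorCollateral *big.Int) {
	st.OwedStorageCollateral.Add(st.OwedStorageCollateral, sectorCollateral)
}

// arbitrateDeal lets a client claim part of the owed pool; whatever is never
// claimed stays here indefinitely, which is effectively the same as burning it.
func (st *MinerActorState) arbitrateDeal(claim *big.Int) *big.Int {
	if claim.Cmp(st.OwedStorageCollateral) > 0 {
		claim = new(big.Int).Set(st.OwedStorageCollateral)
	}
	st.OwedStorageCollateral.Sub(st.OwedStorageCollateral, claim)
	return claim
}
```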
Secondly, storage collateral must be available for clients to claim regardless of how a sector was dropped. Storage collateral must be posted and locked up whenever a sector is committed (the chain can't tell whether it has any deals). If a miner declares a sector "done", a client must still be able to make a claim if their deal period has not yet expired, so collateral must remain pinned even when there is no fault. The length of time that a client can make such a claim should be limited so that the collateral is not locked up forever. For each committed sector, some time after it becomes no longer committed, the associated storage collateral should be released back to the miner.
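To make the requested lifecycle concrete, a minimal sketch of the release rule; the claim-window length and field names are assumptions for illustration, not from the spec:

```go
// Minimal sketch of the requested lock-up rule; the claim window length and
// field names are assumptions for illustration, not from the spec.
package sketch

import "math/big"

// claimWindow is how long (in epochs) a client may still arbitrate after a
// sector stops being committed; the value here is arbitrary.
const claimWindow = 2880

type SectorCollateral struct {
	Amount      *big.Int
	CommittedAt uint64
	DroppedAt   *uint64 // set when the sector faults, expires, or is declared "done"
}

// releasable reports whether the collateral can be returned to the miner:
// only once the sector is no longer committed AND the client claim window
// has passed, regardless of why the sector was dropped.
func (s *SectorCollateral) releasable(now uint64) bool {
	return s.DroppedAt != nil && now >= *s.DroppedAt+claimWindow
}
```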
I'm aware that this is not a simple issue to address, and might require more bookkeeping data than the miner actor currently maintains.
See also #60, #385.