feat: slashing release #679
base: custom-errors
Conversation
Review on alloc/dealloc
Would it be bad / hard / undesirable to enforce this ordering at the checkpoint-history level? I would suggest that if an attempt is made to push an entry with a timestamp earlier than the previous entry's timestamp, either (a) it's simply not allowed, or (b) the new entry has its timestamp modified to match (or be 1 higher? not sure if the ascending ordering you have is strict or non-strict). Perhaps some option (c) could work where you keep the original timestamp but overwrite the intentions of the other entry? I don't know if that would be incompatible with other aspects of the storage model, though. I was initially thinking option (b) was the most logical, but modifying entries before pushing them can be messy, especially if the entry or its contents are used elsewhere (e.g. if the timestamp is emitted in events, it's hard to make sure the event emits the correct timestamp when you have conditional logic for modifying it).
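For concreteness, here is a minimal sketch of option (a), with equal keys folded in as one flavor of option (c). The `CheckpointHistory` library, its `Checkpoint` layout, and the `push` helper are all illustrative assumptions, not the actual storage model in this PR:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

/// Hypothetical sketch: reject pushes whose key is earlier than the
/// latest checkpoint, enforcing non-decreasing ordering at the history
/// level. All names here are illustrative, not this PR's API.
library CheckpointHistory {
    struct Checkpoint {
        uint32 key;    // e.g. a timestamp or block number
        uint224 value;
    }

    error UnorderedCheckpoint(uint32 latestKey, uint32 newKey);

    function push(Checkpoint[] storage self, uint32 key, uint224 value) internal {
        uint256 len = self.length;
        if (len > 0) {
            uint32 latestKey = self[len - 1].key;
            // Earlier keys revert (option (a)); equal keys overwrite the
            // latest entry (a non-strict variant of option (c)).
            if (key < latestKey) revert UnorderedCheckpoint(latestKey, key);
            if (key == latestKey) {
                self[len - 1].value = value;
                return;
            }
        }
        self.push(Checkpoint({key: key, value: value}));
    }
}
```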
Minor slashing comments
* feat: add getMinimumSlashableStake
* fix: only account for deallocation when calculating minimum slashable
* fix: correctly update dsf when increasing delegation
* fix: fix bug where we modify an array while iterating over it
* chore: address review nits
* refactor: minor refactors for codesize
* refactor(nit): function ordering
* fix: only check input lengths when receiving tokens
* refactor: remove callstack indirection when completing a withdrawal
* chore: update comment
```solidity
/// In this window, deallocations still remain slashable by the operatorSet they were allocated to.
uint32 public immutable DEALLOCATION_DELAY;

/// @notice Delay before alloaction delay modifications take effect.
```
typo: "Delay before alloaction delay modifications take effect."
=> "Delay before allocation modifications take effect."

Also, there's some 'edge case' behavior here, right? I recall this not being enforced on deallocations if the operator isn't slashable by the AVS. Can we be any more specific in this definition to reflect that?
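If it helps the docstring discussion, here is a hedged sketch of how that edge case might read in code; `_isOperatorSlashable` and the surrounding shape are assumptions, not the PR's actual API:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Hedged sketch of the edge case raised above: the delay only binds
/// when the operator is actually slashable by the operatorSet. All
/// names here are illustrative.
abstract contract DeallocationDelaySketch {
    uint32 public immutable DEALLOCATION_DELAY;

    constructor(uint32 delay) {
        DEALLOCATION_DELAY = delay;
    }

    function _isOperatorSlashable(address operator, bytes32 operatorSetKey)
        internal view virtual returns (bool);

    function _deallocationEffectBlock(address operator, bytes32 operatorSetKey)
        internal view returns (uint32)
    {
        // If the operatorSet can't slash this operator, there is nothing
        // left to protect, so the deallocation can take effect now.
        if (!_isOperatorSlashable(operator, operatorSetKey)) {
            return uint32(block.number);
        }
        return uint32(block.number) + DEALLOCATION_DELAY;
    }
}
```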
* test(wip): todos
* chore: remove lcov
* test(wip): remaining alm todos
* test: final todos
```solidity
emit AllocationUpdated(
    params.operator,
    operatorSet,
    strategy,
    _addInt128(allocation.currentMagnitude, allocation.pendingDiff),
    allocation.effectBlock
);
```
I found this confusing because `_updateAllocationInfo` also emits the same type of event, but I see that this one is specifically for the now-modified pending deallocation. I think there may be a semantic inconsistency in using this event this way in addition to the other usage -- you may want to consider a separate event for this case, or at least clarify that events of both semantics will be processed smoothly.
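Purely as an illustration of the 'separate event' option -- the event name, fields, and the `OperatorSet` layout here are guesses, not the PR's interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative only: a dedicated event for slash-adjusted pending
/// deallocations, distinct from the general AllocationUpdated event,
/// so indexers don't have to disambiguate two semantics behind one event.
interface IAllocationEventsSketch {
    struct OperatorSet {
        address avs;
        uint32 id;
    }

    event PendingDeallocationSlashed(
        address indexed operator,
        OperatorSet operatorSet,
        address strategy,
        uint64 newPendingMagnitude,
        uint32 effectBlock
    );
}
```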
```solidity
uint64 slashedMagnitude = uint64(uint256(allocation.currentMagnitude).mulWadRoundUp(params.wadToSlash));
uint256 sharesWadSlashed = uint256(slashedMagnitude).divWad(info.maxMagnitude);
```
Should `sharesWadSlashed` also be rounded up? Or is it not rounded up to ensure that delegated shares are always >= the sum of staker shares?
`sharesWadSlashed` is not as important in terms of rounding since it's just passed into the event. The `DM.burnOperatorShares` call has been changed back to use `prevMaxMagnitude` and `newMaxMagnitude`. That being said, yes, we do round down the amount of shares decremented from operatorShares to ensure the property you outlined. Here's the excerpt from the SlashingLib:
```solidity
function calcSlashedAmount(
    uint256 operatorShares,
    uint256 prevMaxMagnitude,
    uint256 newMaxMagnitude
) internal pure returns (uint256) {
    // round up mulDiv so we don't overslash
    return operatorShares - operatorShares.mulDiv(newMaxMagnitude, prevMaxMagnitude, Math.Rounding.Up);
}
```
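A hedged numeric illustration of that rounding direction, mirroring the excerpt with OpenZeppelin 4.x `Math` (the values are invented):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Math} from "@openzeppelin/contracts/utils/math/Math.sol";

/// Remaining shares are rounded UP, so the slashed delta is rounded DOWN,
/// keeping operatorShares >= the sum of staker withdrawable shares.
contract CalcSlashedAmountExample {
    function calcSlashedAmount(
        uint256 operatorShares,
        uint256 prevMaxMagnitude,
        uint256 newMaxMagnitude
    ) public pure returns (uint256) {
        // round up mulDiv so we don't overslash
        return operatorShares - Math.mulDiv(operatorShares, newMaxMagnitude, prevMaxMagnitude, Math.Rounding.Up);
    }

    function example() external pure returns (uint256 slashed) {
        // 100 shares, magnitude drops from 3 to 1 (toy values):
        // ceil(100 * 1 / 3) = 34 shares remain, so 66 are slashed.
        // Rounding down instead would leave 33 and slash 67 -- a dust
        // overslash that could break operatorShares >= staker shares.
        slashed = calcSlashedAmount(100, 3, 1);
        assert(slashed == 66);
    }
}
```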
src/test/mocks/AVSDirectoryMock.sol (outdated)
For the mocks and harnesses, can we cover all the contracts here for both? I think the RewardsCoordinator is missing a mock, and we're missing a bunch of harnesses. This will help with some of the parameter changes downstream.
```solidity
Allocation memory allocation = allocations[operator][operatorSetKey][strategy];

// If we've reached a pending deallocation that isn't completable yet,
// we can stop. Any subsequent modificaitons will also be uncompletable.
```
typo: modificaitons => modifications
```solidity
/// @notice Delay before deallocations are clearable and can be added back into freeMagnitude
/// In this window, deallocations still remain slashable by the operatorSet they were allocated to.
uint32 public immutable DEALLOCATION_DELAY;
```
Probably worth noting that `DEALLOCATION_DELAY` and `ALLOCATION_CONFIGURATION_DELAY` are denominated in blocks, specifically.
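A small illustration of what "in terms of blocks" means in practice; the constructor shape and the one-week figure (50400 blocks at ~12s per mainnet block) are assumptions for the sketch:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative only: both delays count blocks, not seconds, so effect
/// times come from block.number. Names and values are assumptions.
contract BlockDelayExample {
    uint32 public immutable ALLOCATION_CONFIGURATION_DELAY;

    constructor(uint32 configDelay) {
        // e.g. 50400 blocks is roughly one week at ~12s per block
        ALLOCATION_CONFIGURATION_DELAY = configDelay;
    }

    function allocationDelayEffectBlock() external view returns (uint32) {
        // A new allocation delay set now becomes active at this block.
        return uint32(block.number) + ALLOCATION_CONFIGURATION_DELAY;
    }
}
```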
```solidity
// Given the max magnitudes of the operator the staker is now delegated to, calculate the current
// slashing factors to apply to each withdrawal if it is received as shares.
address newOperator = delegatedTo[withdrawal.staker];
uint256[] memory newSlashingFactors = _getSlashingFactors(withdrawal.staker, newOperator, withdrawal.strategies);
```
Bit of an optimization nit, but I think this can be moved inside the `if...else` block where `receiveAsToken = false`.
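A hedged sketch of that reordering; the function and helper names stand in for the real DelegationManager internals and are assumptions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative restructuring of the nit above: compute the new
/// operator's slashing factors only on the receive-as-shares path.
abstract contract CompleteWithdrawalSketch {
    struct Withdrawal {
        address staker;
        address[] strategies;
    }

    mapping(address => address) public delegatedTo;

    function _getSlashingFactors(address staker, address operator, address[] memory strategies)
        internal view virtual returns (uint256[] memory);

    function _withdrawAsTokens(Withdrawal memory withdrawal) internal virtual;
    function _awardSharesToStaker(Withdrawal memory withdrawal, uint256[] memory slashingFactors) internal virtual;

    function _complete(Withdrawal memory withdrawal, bool receiveAsTokens) internal {
        if (receiveAsTokens) {
            // Tokens leave the system; the new operator's factors are unused.
            _withdrawAsTokens(withdrawal);
        } else {
            // Only this branch needs the current slashing factors.
            address newOperator = delegatedTo[withdrawal.staker];
            uint256[] memory newSlashingFactors =
                _getSlashingFactors(withdrawal.staker, newOperator, withdrawal.strategies);
            _awardSharesToStaker(withdrawal, newSlashingFactors);
        }
    }
}
```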
```solidity
delete queuedWithdrawals[withdrawalRoot];
delete pendingWithdrawals[withdrawalRoot];
```
Possible bug: moving this down in the function breaks the CEI pattern, potentially opening a path to reentrancy via a malicious token. A possible attack would look like: (1) initiate a withdrawal containing multiple tokens; (2) once the callstack reaches the malicious token, initiate the same withdrawal a second time via a reentrant call; (3) get 2x tokens.
I highly recommend moving these deletions as far up in this function as possible, particularly before any calls external to the protocol can occur.
It's quite possible there's no actual threat here, but this reordering is strictly safer either way.
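A minimal sketch of the suggested CEI ordering, with illustrative names (the stand-in mappings and helper are assumptions):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

/// Hedged CEI sketch for the concern above: delete the withdrawal
/// records BEFORE any token transfers, so a reentrant call sees the
/// withdrawal as already completed.
abstract contract CEIWithdrawalSketch {
    mapping(bytes32 => bytes32) internal queuedWithdrawals; // stand-in value type
    mapping(bytes32 => bool) internal pendingWithdrawals;

    error WithdrawalNotPending();

    // External calls (token transfers) live behind this hook.
    function _transferTokensOut(bytes32 withdrawalRoot) internal virtual;

    function completeWithdrawal(bytes32 withdrawalRoot) external {
        // Checks
        if (!pendingWithdrawals[withdrawalRoot]) revert WithdrawalNotPending();

        // Effects: clear state before any external call, so re-entering
        // with the same root fails the check above.
        delete queuedWithdrawals[withdrawalRoot];
        delete pendingWithdrawals[withdrawalRoot];

        // Interactions: potentially untrusted token transfers go last.
        _transferTokensOut(withdrawalRoot);
    }
}
```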
We do have `nonReentrant` on all `completeQueuedWithdrawal` interfaces, so this shouldn't be an issue, but I do agree on following CEI more closely.
* refactor: pull beacon chain slashing out of slashing lib
* feat: initial draft for sync slashing
* fix: existing tests
* fix: cumulative shares and fmt
* fix: missing operator key in mapping
* refactor: cumulative scaled shares
* chore: cleanup
* chore: storage report
* fix: rename and beacon strategy
* fix: rebase
* fix: rounding
* test: happy path test cases for burn erc20s
* fix: address comments
* test: timing regression test and comments
* fix: slashable shares in queue amount
* refactor: burn refactor (#897)
  * refactor: remove unused return values from _insert (also removes safe cast)
  * refactor: pull unrelated operations out and condense library method usage
  * test: additional unit test with multiple withdrawals

Co-authored-by: wadealexc <[email protected]>
Co-authored-by: Alex <[email protected]>
* feat: initial draft for sync slashing
* refactor: cumulative scaled shares
* feat: initial draft for sync slashing
* fix: cumulative shares and fmt
* chore: cleanup src/test
* fix: delegation tests
* test: rebased and refactored tests
  * fix: rebase tests
  * test: delegation unit refactoring
  * fix: rounding tests
  * fix: continue fixing delegation tests
* test: include fuzz underflow tests
* fix: tests and rebase
* chore: comment nit
* fix: failing ci
* fix: remove env required
* fix: use envOr
* fix: remove env from CI for being required
* feat: local deploy
* fix: transfer ownership
* fix: comment
* feat: add `AVS` user
* test(wip): slashing integration
* test(wip): slashing integration
* test(wip): slashing integration
* test(wip): slashing integration
* fix: make tracing useful
* test(wip): slashing integration
* fix: toStringWad
* fix: eigenpods
* test(wip): slashing integration
* refactor: revert change
* test(review): changes
* fix: compile
* test(review): changes
* refactor: improve logging
* refactor: review changes
* fix: roll in `modifyAllocations`
* fix: roll in `modifyAllocations`
* refactor: review changes
* refactor: add back pause constants

Co-authored-by: Yash Patil <[email protected]>
* refactor: eigenpod and beacon chain slashing
  * checkpoints are not deleted on completion, saving gas when creating a new checkpoint
* refactor: pull bcsf out of delegationManager
* chore: formatting
* refactor: rename withdrawableRestakedExecutionLayerGwei
  * maintains old interface, only state variable is renamed
  * this is to reduce line length when using this variable
* refactor: remove branching and slashing math from eigenpod
* fix: clean up balance update conditions and ensure shares==0 is handled
* refactor: remove input validation and explicitly document assumptions
* fix: tests and roundup (#901)
* chore: address feedback
* chore: address feedback again
* chore: cleanup EPM withdrawSharesAsTokens

Co-authored-by: Michael Sun <[email protected]>
Updated 11/08/24
Contract Descriptions
DON'T REVIEW THE REWARDS COORDINATOR
AVSDirectory
New Core contract: AllocationManager! (basically the Slasher contract)
* `modifyAllocations` can be called by an operator to configure, for a given operatorSet, the slashable proportion for each Strategy in that operatorSet. Ex: if an AVS operatorSet has stETH and rETH Strategies, as an operator I can allocate 50% of my delegated stETH shares and 25% of my delegated rETH shares to be slashable by this operatorSet. An allocation can be thought of as the slashable proportion defined over the tuple (operator, operatorSet, Strategy). A hedged sketch of this example follows this list.
* `slashOperator`, called by an AVS, slashes an operator for the specific operatorSet they are in, given wadsToSlash (the percentage slashed). For all Strategies in the operatorSet, the operator is slashed a percentage according to how much allocation they have for each of those Strategies. This function slashes from current magnitude as well as queued deallocations. Whatever magnitude is slashed is also decremented from the maxMagnitude for the (operator, Strategy) tuple.
* Each allocation increases an `encumberedMagnitude` value, which can grow up to your maxMagnitude (by default maxMagnitude is 1e18, but it decreases every time you are slashed).
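A hedged sketch of the stETH/rETH example above. The interface, struct fields, and magnitudes-as-wads convention approximate the PR and should be read as assumptions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative only: an operator allocating 50% of delegated stETH
/// magnitude and 25% of rETH magnitude to one operatorSet.
interface IAllocationManagerSketch {
    struct OperatorSet {
        address avs;
        uint32 id;
    }

    struct AllocateParams {
        OperatorSet operatorSet;
        address[] strategies;
        uint64[] newMagnitudes; // wads relative to maxMagnitude (1e18 default)
    }

    function modifyAllocations(AllocateParams[] calldata params) external;
}

contract AllocateExample {
    function allocate(IAllocationManagerSketch alm, address avs, address stETHStrat, address rETHStrat) external {
        address[] memory strategies = new address[](2);
        strategies[0] = stETHStrat;
        strategies[1] = rETHStrat;

        uint64[] memory magnitudes = new uint64[](2);
        magnitudes[0] = 0.5e18;  // 50% of delegated stETH slashable by this set
        magnitudes[1] = 0.25e18; // 25% of delegated rETH slashable by this set

        IAllocationManagerSketch.AllocateParams[] memory params =
            new IAllocationManagerSketch.AllocateParams[](1);
        params[0] = IAllocationManagerSketch.AllocateParams({
            operatorSet: IAllocationManagerSketch.OperatorSet({avs: avs, id: 1}),
            strategies: strategies,
            newMagnitudes: magnitudes
        });
        alm.modifyAllocations(params);
    }
}
```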
StrategyManager/EigenPodManager/DelegationManager - Changes to Deposits/Withdrawals

Shares change in a lot of ways with the introduction of slashing, with much more complex accounting, most of which can be found in SlashingLib.sol and the deposit/withdrawal flows. Below is an explanation of the types of 'shares' in the system.

1. `depositShares`: These are the shares representing the amount of Strategy shares a staker has added to the system, either through deposits in the StrategyManager or positive share increases in the EigenPodManager. Note that these can be compared 1:1 with the shares of the underlying Strategies. If you are delegated and deposit, or you delegate with existing shares, then your operator's shares will increase by the exact amount of these depositShares. The amount you can withdraw, however, depends on `withdrawableShares`; see more below.
   Location: StrategyManager - `stakerDepositShares` mapping; EigenPodManager - `podOwnerDepositShares` mapping

2. `operatorShares`: This is the delegated stake an operator has from all their delegated stakers; these can also be termed 'delegated' shares. An operator's operatorShares is also the sum of all their delegated stakers' withdrawable shares. This is because operatorShares increase on each delegated staker deposit and decrease on withdrawals and slashed shares. The amount decremented on withdrawal is also scaled down depending on whether any slashing has applied to the staker and their depositShares.
   Location: DelegationManager - `operatorShares` mapping

3. `withdrawableShares`: This is the amount of shares a staker can queue for withdrawal. If a staker is not delegated, then `withdrawableShares == depositShares`, because they cannot be slashed. However, if they are delegated to an operator who got slashed while they had depositShares delegated, then their withdrawableShares are less than their depositShares. This value is not in storage but is computed by taking the staker's depositShares and scaling them down depending on whether any slashing has affected their stake.
   Location: DelegationManager - `getWithdrawableShares()` function
function3a.
scaledShares
: You can see this instruct Withdrawal
when queuing a withdrawal. We calculatewithdrawableShares
by some clever scaling factors and accounting but one thing we want to ensure is that withdrawals are still slashable while in the queue. This is done by dividingwithdrawableShares
at queue time by the delegated operator's maxMagnitude, and upon completion multiplying it by the maxMagnitude at the earliest withdrawal completion time. This will account for the proportional amount of shares slashed and decrement from withdrawn shares accordingly.Since we read the maxMagnitude while queuing the withdrawal, this design is more for convenience and optimizing reads.
Note: If the strategy is the beaconChainStrategy this is also accounted for in a similar way.
Location: IDelegationManager -
struct Withdrawal
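A hedged sketch of the queue/complete scaling described in 3a, using plain wad math; the real SlashingLib handles rounding and the beacon-chain factor, which this omits:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative only: the shape of the scaledShares idea, not the PR's code.
contract ScaledSharesSketch {
    uint256 internal constant WAD = 1e18;

    /// At queue time: scale up by the operator's current maxMagnitude.
    function scaleForQueueWithdrawal(uint256 withdrawableShares, uint64 maxMagnitudeAtQueue)
        public pure returns (uint256 scaledShares)
    {
        scaledShares = withdrawableShares * WAD / maxMagnitudeAtQueue;
    }

    /// At completion: scale back down by the maxMagnitude at the earliest
    /// completion time. If the operator was slashed while the withdrawal
    /// sat in the queue, maxMagnitude dropped, and the staker receives
    /// proportionally less.
    function scaleForCompleteWithdrawal(uint256 scaledShares, uint64 maxMagnitudeAtCompletion)
        public pure returns (uint256 sharesOut)
    {
        sharesOut = scaledShares * maxMagnitudeAtCompletion / WAD;
    }
}
```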
Additional Notes:

* The StrategyManager and EigenPodManager are unified behind the `IShareManager` interface.
* `depositScalingFactor` is the k value used in the accounting docs.
* `stakerDepositShares` is s.
* `operatorShares` is op.
* `maxMagnitude` is m, read from the AllocationManager.
* `thirdPartyTransfersForbidden` is removed entirely. This mapping will be deprecated and never read from again.
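Putting those symbols together, the relationship (as I read the accounting docs; treat this as an assumption, not the PR's exact code) is withdrawable = s * k * m:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

uint256 constant WAD = 1e18;

/// Hedged restatement of the symbols above: withdrawable = s * k * m,
/// with k and m as wads. k (depositScalingFactor) is updated on each
/// deposit precisely so this product stays consistent after the
/// operator's maxMagnitude m decreases.
function withdrawableShares(uint256 s, uint256 k, uint256 m) pure returns (uint256) {
    return s * k / WAD * m / WAD;
}
```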
TODOs:

* Deposits/Withdraws of Shares [WIP]
* Allocator Configuration [WIP]