
Added withdrawals with accumulator #354

Closed · wants to merge 11 commits
Conversation

@vbuterin (Contributor) commented Dec 22, 2018

Validators withdraw a minimum of ~27 hours after exiting, with a maximum of 4 withdrawals per epoch. When a validator withdraws, we add a record of the withdrawal into a Merkle accumulator.

Possible extension/simplification: if we have this data structure, we can also reuse it to store the list of validator deposits and exits, removing the need for separate logic for the validator set change hash list. This would slightly decrease the efficiency of light-syncing the validator change set (in the long run by ~1.5x data-wise, since anyone looking for validator set changes would also have to process one withdrawal for each entry+exit, and ~3x computation-wise, because Merkle trees are more expensive than hash chains), but it would simplify consensus code, so it could be worth it. It would also let us get rid of ValidatorRegistryDeltaBlock, get_new_validator_registry_delta_chain_tip, and state.validator_registry_delta_chain_tip.

```python
# Slot created
'slot_created': 'uint64',
# ETH to withdraw
'value': 'uint64'
```
Collaborator: Rename value => balance

Contributor Author: If we do that, we should do the same for DepositData. Should we?

Collaborator: We have three data structures for deposits: Deposit, DepositData and DepositInput. Definitely an opportunity for simplification/cleanup :)

Contributor Author: Maybe leave that for a separate patch at some point down the line? 😄

@JustinDrake (Collaborator):
Should MAX_WITHDRAWALS_PER_EPOCH be MAX_PENALISED_WITHDRAWALS_PER_EPOCH? That is, the withdrawal throttling would apply to EXITED_WITH_PENALTY only. Validators with status EXITED_WITHOUT_PENALTY can withdraw immediately, and therefore we may not need EXITED_WITHOUT_PENALTY.

@vbuterin (Contributor Author):

> Should MAX_WITHDRAWALS_PER_EPOCH be MAX_PENALISED_WITHDRAWALS_PER_EPOCH? That is, the withdrawal throttling would apply to EXITED_WITH_PENALTY only. Validators with status EXITED_WITHOUT_PENALTY can withdraw immediately, and therefore we may not need EXITED_WITHOUT_PENALTY.

No, because the point of the throttling is to give more time for validators to get penalized if they actually did something wrong.

@JustinDrake (Collaborator):

Questions regarding withdrawals:

  1. What happens if, say, a custody challenge is started right after an exit event? Specifically, do we have logic to delay the withdrawal, by how much can the withdrawal be delayed, and is this a DoS vector for anyone looking to withdraw? (IDEA: make custody challenges and/or responses non-interactive with SNARKs once we have a SNARK-friendly hash function.)
  2. I guess we need to be careful to make sure that the beacon chain has enough bandwidth in the worst of conditions to penalise everyone that needs penalising. A conservative assumption is that every 128 slots (2 epochs) there is at least 1 honest validator (see safety argument here). The smallest Max operations per block is 16, which seems to suggest that we may be able to increase MAX_WITHDRAWALS_PER_EPOCH closer to 16/2 = 8. Does that make sense?

@djrtwo (Contributor) commented Dec 23, 2018

For (2), it is important to ensure that we can catch the validators faster than they can withdraw, but it is also important to ensure that, if some sizable cohort of validators plans on committing a slashable offense in the future, we bound the time it would take for them to fully withdraw (hence the weak subjectivity online-time synchrony assumption).

If 1/3 of the validators plan on committing a slashable offense, we want to bound the time in which they could fully withdraw at which point they could freely create a conflicting chain (which couldn't fool online nodes, but could fool nodes that have been offline for at least the time it took for the malicious cohort to withdraw).

At MAX_WITHDRAWALS_PER_EPOCH == 4 and validator_count == 300k (~10M ETH deposited), it would take a 100k cartel ~111 days to fully withdraw, which is close to the 4-month withdrawal synchrony assumption originally in the FFG paper. If we double to 8 per epoch, the time for full withdrawal drops to ~55 days.

Because the time for a sizable cartel to withdraw is a function of the size of the validator set, we also need to consider what this looks like in times of lower participation, say 30k validators (~1M ETH). In that case a 1/3 cartel of 10k validators would take only ~11 days to withdraw, which puts an even stricter online assumption on nodes that want to remain synced.

I think 4 is reasonable. Increasing it by 2x would start to degrade the node syncing experience for more casual consumer users, especially in the early days of low validator participation, by moving the required frequency of coming online into the less-than-a-week range.

Note that to safely remain synced after being offline for longer than the times specified, a user would need to supply their node with some sort of recent information from the network -- a recent finalized hash or the root of the validator set.

EDIT: Oh, just saw this in the PR:

  • 4 withdrawals per epoch means that eg. if a 1/3 subset of 10 million ETH equivocate to cause a safety failure and all withdraw immediately, they will be able to leave in 3333333 / 32 / 4 = 26042 epochs, or 10 million seconds ~= 4 months.

Just note that this number scales linearly with the size of the validator set, so we need to make sure it's still reasonable in the early days.
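The arithmetic in this comment can be reproduced directly. A hedged sketch: SLOTS_PER_EPOCH = 64 and SECONDS_PER_SLOT = 6 are the spec parameters implied by the numbers in the thread (100k / 4 per epoch ≈ 111 days), not values stated here.

```python
SECONDS_PER_SLOT = 6   # slot duration assumed from the thread's numbers
SLOTS_PER_EPOCH = 64   # epoch length assumed from the thread's numbers

def full_withdrawal_days(cartel_size: int, max_withdrawals_per_epoch: int = 4) -> float:
    """Days for a cartel to fully withdraw at a fixed per-epoch withdrawal cap."""
    epochs = cartel_size / max_withdrawals_per_epoch
    return epochs * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 86400
```

With these parameters, full_withdrawal_days(100_000) ≈ 111.1, full_withdrawal_days(100_000, 8) ≈ 55.6, and full_withdrawal_days(10_000) ≈ 11.1, matching the figures above.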

@vbuterin (Contributor Author):

Yeah, I think the MAX_WITHDRAWALS_PER_EPOCH parameter should be set based on how long we want the weak subjectivity period to be, and the limits should be calculated at the end to make sure people can be slashed quickly enough.

> What happens if, say, a custody challenge is started right after an exit event? Specifically, do we have logic to delay the withdrawal, by how much can the withdrawal be delayed, and is this a DoS vector for anyone looking to withdraw? (IDEA: make custody challenges and/or responses non-interactive with SNARKs once we have a SNARK-friendly hash function.)

First of all, we don't necessarily need to decide the final answer right now; it's a phase 1 matter, and we can just add a filter later.

But to actually answer the question: yes, we do need to delay withdrawals if they get challenged, in order to facilitate multi-round challenge games, such as those needed to discover which specific branch a client computed incorrectly if it committed to a bad custody root.

So far the best idea I can come up with is this: while there are extant custody challenges on a validator, the validator does not progress toward withdrawal, and progress resumes as soon as the validator responds. One way to do this is that when a validator has extant custody challenges, we stop storing their minimum exit slot and exit count, and instead store their remaining time until exit and their exit queue position; we leave these unchanged until all extant custody challenges are resolved, and at that point switch back to storing the exit slot and exit count. We don't even need to modify phase 0, as any such information could be stored in the custody challenge objects themselves.
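One way to picture the representation switch described here is the toy class below. Everything in it is invented for illustration (ExitTracker, open_challenge, resolve_all_challenges are not spec names), and it tracks only a single remaining-delay value rather than a full exit-queue position.

```python
from typing import Optional

class ExitTracker:
    """Toy model of a validator's progress toward withdrawal.

    Normally stores an absolute exit slot; while custody challenges are
    extant, it instead freezes the *remaining* delay, so the validator
    makes no progress toward withdrawal until the challenges resolve.
    """
    def __init__(self, exit_slot: int):
        self.exit_slot: Optional[int] = exit_slot
        self.frozen_remaining: Optional[int] = None

    def open_challenge(self, current_slot: int) -> None:
        if self.frozen_remaining is None:
            # switch representation: absolute slot -> remaining delay
            self.frozen_remaining = max(self.exit_slot - current_slot, 0)
            self.exit_slot = None

    def resolve_all_challenges(self, current_slot: int) -> None:
        if self.frozen_remaining is not None:
            # switch back: remaining delay -> absolute slot
            self.exit_slot = current_slot + self.frozen_remaining
            self.frozen_remaining = None

    def withdrawable(self, current_slot: int) -> bool:
        return self.exit_slot is not None and current_slot >= self.exit_slot
```

For example, a validator due to exit at slot 100 that is challenged at slot 40 keeps 60 slots of remaining delay frozen, and if the challenge resolves at slot 90, its exit slot becomes 150.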

@hwwhww (Contributor) commented Dec 23, 2018

Updated diagram (edited): [image: spec diagram]

@hwwhww hwwhww mentioned this pull request Dec 23, 2018
```python
credentials=validator.withdrawal_credentials,
slot=state.slot,
value=state.validator_balances[index],
)
```
@hwwhww (Contributor) commented Dec 25, 2018:

It seems the third parameter of update_merkle_accumulator should be a Hash32 object?

```python
new_accumulator_object = hash_tree_root(
    WithdrawalData(
        credentials=validator.withdrawal_credentials,
        slot=state.slot,
        value=state.validator_balances[index],
    )
)
```

#### `update_merkle_accumulator`

```python
def update_merkle_accumulator(accumulator: List[Hash32],
    ...
```
Contributor: Maybe rename to get_updated_merkle_accumulator?

@@ -943,13 +964,31 @@ def get_beacon_proposer_index(state: BeaconState,

#### `merkle_root`

```diff
-def merkle_root(values):
+def merkle_root(values: List[Hash32]) -> Hash32:
```
Contributor: Maybe rename to get_merkle_root?
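For reference, a self-contained version of merkle_root with the proposed typed signature might look like the sketch below. Assumptions: the input length is a power of two, and sha256 stands in for the spec's hash function, whose actual definition may differ.

```python
from hashlib import sha256
from typing import List

Hash32 = bytes

def hash(data: bytes) -> Hash32:
    return sha256(data).digest()  # stand-in for the spec's hash function

def merkle_root(values: List[Hash32]) -> Hash32:
    """Root of a binary Merkle tree over `values` (length a power of two).

    Array-as-tree layout: o[1] is the root; o[i*2] and o[i*2+1] are the
    children of o[i]; the leaves occupy o[len(values):]."""
    o = [b"\x00" * 32] * len(values) + list(values)
    for i in range(len(values) - 1, 0, -1):
        o[i] = hash(o[i * 2] + o[i * 2 + 1])
    return o[1]
```

For two leaves this reduces to a single pair hash, which makes the layout easy to check by hand.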

@JustinDrake JustinDrake restored the vbuterin-patch-15 branch February 14, 2019 22:54
@djrtwo djrtwo deleted the vbuterin-patch-15 branch May 20, 2020 23:14
4 participants