
Weekly release of dev into master #591

Merged
merged 81 commits into master from dev
Feb 8, 2019
Conversation

@djrtwo (Contributor) commented Feb 8, 2019

Changelog will be available in the tagged release.

JustinDrake and others added 30 commits January 31, 2019 07:55
The current spec is arguably inconsistent: if a set of N values gets chunked into M chunks where M is not an exact power of 2, the chunks between M and next_power_of_2(M) are filled with SSZ_CHUNK_SIZE zero bytes each, but the last data chunk is not padded and could be arbitrarily short (e.g. if the values are 4 bytes and there are 257 of them, they get serialized into eight chunks, where the first four hold 64 values each, the fifth is 4 bytes corresponding to the last value, and the last three chunks are SSZ_CHUNK_SIZE zero bytes). This PR fills every chunk up to exactly SSZ_CHUNK_SIZE bytes for consistency.
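A minimal sketch of that padding rule, assuming the spec's `SSZ_CHUNK_SIZE` constant (value stubbed here for illustration) and a local `next_power_of_2` helper:

```python
SSZ_CHUNK_SIZE = 128  # spec constant; value assumed here for illustration


def next_power_of_2(x: int) -> int:
    # Smallest power of two >= x (1 for x == 0).
    return 1 if x == 0 else 2 ** (x - 1).bit_length()


def chunkify(serialized: bytes) -> list:
    # Split into SSZ_CHUNK_SIZE-sized chunks, zero-padding the final
    # partial chunk so every data chunk is exactly SSZ_CHUNK_SIZE bytes.
    chunks = [
        serialized[i:i + SSZ_CHUNK_SIZE].ljust(SSZ_CHUNK_SIZE, b'\x00')
        for i in range(0, len(serialized), SSZ_CHUNK_SIZE)
    ]
    # Pad the chunk count to the next power of 2 with all-zero chunks.
    chunks += [b'\x00' * SSZ_CHUNK_SIZE] * (next_power_of_2(len(chunks)) - len(chunks))
    return chunks
```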
Clarifies the distinction between "internal" hash roots (may be < 32 bytes for trivial objects) and "external" ones (zero-padded to 32).
This change allows for easier maintenance of the code and the spec by
uncoupling them. Previously, any edit to either document required
synchronizing the other. By adding a reference to the canonical repo for the code,
we avoid having to maintain a duplicate copy here.
Comments for get_next_epoch_committee_count
`shard` -> `attestation.data.shard`
clarify minor eth1_data processing point
Hash_tree_root -> hash_tree_root_internal
hwwhww and others added 13 commits February 8, 2019 05:24
Continue message hash changes on non-bls part of the specs
… data

The reason to do this is that it makes it calculable, from inside an attestation, how many epochs the attestation spans, which is needed for proof-of-custody reasons. It's a relatively small change and so arguably easier to do now than to do as a patch in phase 1.
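A hedged illustration of that calculability, assuming the spec's `slot_to_epoch` helper and a `latest_crosslink` field (with its `epoch`) on the attestation data; the exact field names here are assumptions based on the description above:

```python
def epochs_spanned(attestation_data) -> int:
    # With the latest crosslink (including its epoch) embedded in the
    # attestation data, the span is computable from the attestation alone.
    # Field names are assumptions, not confirmed spec identifiers.
    return slot_to_epoch(attestation_data.slot) - attestation_data.latest_crosslink.epoch
```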

Note that this changes the meaning of latest_crosslink.epoch, from the epoch when the latest crosslink was included to the epoch that the latest crosslink was for. This affects the line:

* `state.latest_crosslinks[shard].epoch > state.validator_registry_update_epoch` for every shard number `shard` in `[(state.current_epoch_start_shard + i) % SHARD_COUNT for i in range(get_current_epoch_committee_count(state))]` (that is, for every shard in the current committees)

But this may actually make it _more_ correct: in the case where >512 shards are processed per epoch, and so a committee from the previous epoch could get finalized in the current epoch, that crosslink would no longer count toward every shard having received a "new" one.
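A minimal sketch of the quoted condition, reusing only the constants, fields, and helpers it names (the wrapper function itself is hypothetical):

```python
def all_current_shards_have_new_crosslinks(state) -> bool:
    # Sketch of the quoted condition: every shard in the current committees
    # must have a crosslink newer than the last validator registry update.
    shards = [
        (state.current_epoch_start_shard + i) % SHARD_COUNT
        for i in range(get_current_epoch_committee_count(state))
    ]
    return all(
        state.latest_crosslinks[shard].epoch > state.validator_registry_update_epoch
        for shard in shards
    )
```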
convert int_to_bytes to little endian
Attestation data contains latest crosslink, not just latest crosslink…
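For the `int_to_bytes` commit above, the little-endian convention amounts to the following generic sketch (the spec defines fixed-size variants such as `int_to_bytes1`, not this exact signature):

```python
def int_to_bytes(n: int, length: int) -> bytes:
    # Little-endian: least significant byte first.
    return n.to_bytes(length, 'little')


assert int_to_bytes(1, 8) == b'\x01' + b'\x00' * 7
```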
@djrtwo djrtwo requested review from JustinDrake, hwwhww and vbuterin and removed request for JustinDrake February 8, 2019 16:43
@hwwhww (Contributor) left a comment:

:shipit:

```python
break

return potential_assignments
```

`get_next_epoch_committee_assignments` (renamed in this PR from `get_next_epoch_crosslink_committees`) should be called at the beginning of each epoch to plan for the next epoch. A validator should always plan for both values of `registry_change` as a possibility unless the validator can concretely eliminate one of the options. Planning for a future shuffling involves noting at which slot one might have to attest and propose, and also which shard one should begin syncing (in phase 1+).
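A hedged usage sketch of that guidance (the `note_*` scheduler hooks are hypothetical):

```python
def plan_next_epoch(state, validator_index):
    # Hedged sketch: at the first slot of epoch N, fetch both possible
    # assignments for epoch N+1 (with and without a registry change) and
    # note the duties each implies.
    for committee, shard, slot, is_proposer in get_next_epoch_committee_assignments(
        state, validator_index
    ):
        note_attestation_duty(slot, shard, committee)  # hypothetical scheduler hook
        if is_proposer:
            note_proposal_duty(slot)  # hypothetical scheduler hook
```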
A Contributor commented:

Can you clarify this bit: "beginning of each epoch to plan for the next epoch"?

Does this mean that I call `get_next_epoch_committee_assignments` at the first slot in epoch N and it returns the assignments for epoch N+1?

The commenter followed up:

Just read over the details at line 347; it is clear there. But it would be helpful to clarify the language here.

```python
validator_index: ValidatorIndex) -> List[Tuple[List[ValidatorIndex], ShardNumber, SlotNumber, bool]]:
    """
    Return a list of the two possible committee assignments for ``validator_index`` at the next epoch.
    Possible committee ``assignment`` is of the form (List[ValidatorIndex], ShardNumber, SlotNumber, bool).
```
A Contributor commented:
Possible? Can we use more assertive language and a structured object?

It would be easier for understanding to have something like this (JSON as example):

```json
{
  "assignments": [{
    "validator_indices": [0, 1, 2],
    "shard": 5,
    "slot": 100,
    "is_proposer": true
  }]
}
```

This is much clearer than:

```
[[0, 1, 2], 5, 100, true]
```
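In Python terms, the structured object being suggested might look like this hypothetical NamedTuple (not part of the spec):

```python
from typing import List, NamedTuple


class CommitteeAssignment(NamedTuple):
    # Named fields replacing the positional (committee, shard, slot, is_proposer) tuple.
    validator_indices: List[int]
    shard: int
    slot: int
    is_proposer: bool


assignment = CommitteeAssignment([0, 1, 2], 5, 100, True)
assert assignment.shard == 5 and assignment.is_proposer
```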

* ``assignment[0]`` is the list of validators in the committee
* ``assignment[1]`` is the shard to which the committee is assigned
* ``assignment[2]`` is the slot at which the committee is assigned
* ``assignment[3]`` is a bool signalling if the validator is expected to propose
A Contributor commented:
All of the validators in the committee are proposers for the same shard at the same slot?

@djrtwo (Author) replied:

The function is for a particular `validator_index`.

The Contributor replied:
Why return a list of validators in the committee? If you see my JSON example above, you can see how this is a confusing response from this method.

@djrtwo (Author) replied:
You need to know the entire list to perform certain actions such as aggregating. Arguably it could be left out here, but it will need to be gathered eventually.

```python
# The validator proposes if it is the designated member of the first
# committee at that slot.
is_proposer = first_committee_at_slot[slot % len(first_committee_at_slot)] == validator_index
assignment += (is_proposer,)

potential_assignments.append(assignment)
break

return potential_assignments
```
@prestonvanloon (Contributor) commented Feb 8, 2019:
Why is it "potential"? Aren't these the assignments?

@djrtwo (Author) replied:

The above section explains that at the start of the current epoch, there are two potential assignments for a validator for the next epoch, depending on whether a validator registry change occurs. This function gives you both.

You are in a superposition of both futures until either the epoch change or you locally figure out whether `registry_change` happens (which you can check by eagerly processing the next epoch transition). In the normal case, you can collapse this superposition about the next epoch within MIN_ATTESTATION_INCLUSION_DELAY slots into the current epoch.

This section is not new; this release just has some slightly modified functionality and bug fixes. We can address these comments and clarifications in a separate issue/PR.
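A hedged sketch of "eagerly processing the next epoch transition" to resolve `registry_change` early (`process_epoch_transition` is an assumed helper name, not a confirmed spec function):

```python
import copy


def compute_registry_change(state) -> bool:
    # Hedged sketch: eagerly run the upcoming epoch transition on a
    # throwaway copy of the state and check whether the validator registry
    # update epoch advances.
    speculative = copy.deepcopy(state)
    process_epoch_transition(speculative)  # hypothetical spec-level helper
    return speculative.validator_registry_update_epoch != state.validator_registry_update_epoch
```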

@vbuterin (Contributor) left a comment:
👍

@djrtwo djrtwo merged commit ab55020 into master Feb 8, 2019