
Simplify sync protocol and update to calculate optimistic heads #2746

Merged · 15 commits · Dec 15, 2021

Conversation

vbuterin
Contributor


1. Simplify `valid_updates` to `best_valid_update` so the `LightClientStore` only needs to store O(1) data
2. Track an optimistic head, by looking for the highest-slot header which passes a safety threshold
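Roughly, the store shape implied by this description, with field names taken from the discussion below (a sketch only; the exact container definition in the spec may differ slightly):

```python
@dataclass
class LightClientStore(object):
    # Header that is "finalized" from the light client's perspective
    finalized_header: BeaconBlockHeader
    # Sync committees corresponding to the finalized header
    current_sync_committee: SyncCommittee
    next_sync_committee: SyncCommittee
    # Best valid update seen so far, applied when the update timeout elapses
    best_valid_update: Optional[LightClientUpdate]
    # Highest-slot header passing the safety threshold (the optimistic head)
    optimistic_header: BeaconBlockHeader
    # Max participation observed in the previous and current calculation periods
    previous_max_active_participants: uint64
    current_max_active_participants: uint64
```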
Collaborator
@dapplion left a comment


Looks great 👍

1. Replace `header` and `finality_header` with `attested_header` (always the header signed by the committee) and `finalized_header` (always the header verified by the Merkle branch)
2. Remove `LightClientSnapshot`, fold its fields into `LightClientStore` for simplicity
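A rough sketch of how the renamed update container could look under this suggestion, reusing the existing Altair field names (the exact fields and branch lengths may differ in the spec):

```python
class LightClientUpdate(Container):
    # Header attested to by the sync committee
    attested_header: BeaconBlockHeader
    # Next sync committee corresponding to the active header, with its Merkle branch
    next_sync_committee: SyncCommittee
    next_sync_committee_branch: Vector[Bytes32, floorlog2(NEXT_SYNC_COMMITTEE_INDEX)]
    # Finalized header, verified by the finality branch below
    finalized_header: BeaconBlockHeader
    finality_branch: Vector[Bytes32, floorlog2(FINALIZED_ROOT_INDEX)]
    # Sync committee aggregate signature over the attested header
    sync_committee_bits: Bitvector[SYNC_COMMITTEE_SIZE]
    sync_committee_signature: BLSSignature
    # Fork version for the aggregate signature
    fork_version: Version
```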
| Name | Value | Notes |
| - | - | - |
| `MIN_SYNC_COMMITTEE_PARTICIPANTS` | `1` | |
| `SAFETY_THRESHOLD_CALCULATION_PERIOD` | `4096` | ~13.6 hours |
Contributor


A full sync committee period is 256 epochs * 32 slots/epoch = 8192 slots. To reliably keep track of `*_period_max_attendance` the client needs to receive multiple updates during each period. If a client fetches an update early in sync committee period N, and then fetches another update late in the next sync committee period N + 1, it may even end up in a situation where both `*_period_max_attendance` values are 0. How was 4096 determined?

Contributor Author


I don't really have a very principled way to choose `SAFETY_THRESHOLD_CALCULATION_PERIOD` yet. As far as I can tell, it's a responsiveness/vulnerability tradeoff. A `SAFETY_THRESHOLD_CALCULATION_PERIOD` of e.g. 1 epoch would mean that if the chain suddenly loses >50% of participants, light clients will only experience a 2-epoch delay, but it also means that an attacker need only eclipse a client for 2 epochs to convince them of anything. Setting `SAFETY_THRESHOLD_CALCULATION_PERIOD = UPDATE_TIMEOUT` (~1 day) pushes safety to the maximum, but at the cost of minimal adaptability.

Though one path we could take is to set `SAFETY_THRESHOLD_CALCULATION_PERIOD = UPDATE_TIMEOUT` and then just assert that any desired faster responsiveness should come from clients implementing custom logic in the safety factor function (e.g. `max // 2` normally but `max // 4` after two epochs of the optimistic head not updating). I'm open to any option here.
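For illustration, a hypothetical client-side variant of the safety-threshold function along these lines; everything except the store fields is made up for this sketch:

```python
SLOTS_PER_EPOCH = 32

def custom_safety_threshold(store: LightClientStore, current_slot: Slot) -> uint64:
    max_participants = max(
        store.previous_max_active_participants,
        store.current_max_active_participants,
    )
    # If the optimistic head has been stuck for two epochs, lower the bar from
    # max // 2 to max // 4 so the client regains responsiveness.
    if current_slot > store.optimistic_header.slot + 2 * SLOTS_PER_EPOCH:
        return max_participants // 4
    return max_participants // 2
```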

Comment on lines 138 to 140
```python
if current_slot % SAFETY_THRESHOLD_PERIOD == 0:
    store.previous_max_active_participants = store.current_max_active_participants
    store.current_max_active_participants = 0
```
Contributor


Should apply_light_client_update also be triggered, in case current_slot > store.finalized_header.slot + UPDATE_TIMEOUT gets fulfilled?
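(For context, a minimal sketch of how the rollover above could sit next to such a timeout-driven forced update, reusing the store fields, constants, and `apply_light_client_update` from this PR; the function name and exact placement are illustrative, not necessarily the spec's.)

```python
def process_slot_for_light_client_store(store: LightClientStore, current_slot: Slot) -> None:
    # Roll the participation counters over at each safety-threshold period boundary.
    if current_slot % SAFETY_THRESHOLD_PERIOD == 0:
        store.previous_max_active_participants = store.current_max_active_participants
        store.current_max_active_participants = 0
    # Force-apply the best pending update once the finalized header is too old.
    if current_slot > store.finalized_header.slot + UPDATE_TIMEOUT and store.best_valid_update is not None:
        apply_light_client_update(store, store.best_valid_update)
        store.best_valid_update = None
```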

Contributor


Also, the condition to update `optimistic_header` may be fulfilled after changes to `*_max_active_participants`.

```python
if update_period == finalized_period + 1:
    store.current_sync_committee = store.next_sync_committee
    store.next_sync_committee = update.next_sync_committee
store.finalized_header = active_header
```
Contributor


If the optimistic_header was older, I guess it should also be updated here (to finalized_header).
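One minimal way to express that suggestion, assuming the store fields from this PR (a sketch, not the spec's wording):

```python
# If finalization overtakes the optimistic head, pull the optimistic head forward too.
if store.finalized_header.slot > store.optimistic_header.slot:
    store.optimistic_header = store.finalized_header
```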

```python
if update_period == finalized_period + 1:
    store.current_sync_committee = store.next_sync_committee
    store.next_sync_committee = update.next_sync_committee
store.finalized_header = active_header
```


Does this mean that it could be the case that `store.finalized_header` is not actually a finalized header when `apply_light_client_update` is called through the update timeout?

Contributor


This was the case in the old version as well, but it was called just `header` there. `finalized_header` here seems to have a different meaning than in other contexts; it's just finalized for the light client (the light client won't revert it anymore). Agree that the naming is suboptimal. Likewise, the `optimistic_header` also seems to have a different meaning from the one discussed as part of the merge effort.


hmm...if this is the intended "finalization" for the light-client, that is not great.

In the case of timeout, why not just go to the network and ask for a committee-changing update? I know that in this spec we have not specified how to get that information. In any implementation, the light client is going to have to be able to ask for historic updates corresponding to some sync committee. If that is available, finalizing by just taking `store.best_valid_update` is not great. I doubt that a real client implementation is going to take this route.

Contributor


If sync committee participation is low, and none of the blocks exceeds the 2/3 majority for a day, there still needs to be a way to proceed though. Not sure how realistic that is for mainnet.


I think that is fine. If that indeed happens once in a blue moon, the light client would stop syncing. The manual fix for a light client operator is to use a newly acquired, trusted starting point. The code owner could also update their client's hard-coded starting point. In a way, these manual interventions should be considered desirable because we have an unexpected level of participation.

However, if that happens a lot, I think that is more of an incentive design issue. We should consider how to fix that at the protocol level.

Contributor Author


Light clients are intended to be able to follow the chain in as similar a way to regular clients as possible. And one of the ethereum staking protocol's core design goals has all along been to have some path to be able to continue making progress under >1/3 offline conditions. So the light client protocol should include some way to do that.

(I'm assuming light clients are going to be used in a lot of contexts, including automated ones, where manual intervention is hard and should be left to resolving 51% attacks)

What is a better alternative to taking store.best_valid_update? The regular ethereum protocol advances during the non-finalization case by using the LMD GHOST fork choice rule, which follows the chain that has the most validators supporting it. store.best_valid_update approximates that. Is there a better approximation?
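For illustration, a sketch of how `store.best_valid_update` can approximate "the chain with the most validators supporting it", assuming the update carries a `sync_committee_bits` bitfield as elsewhere in Altair (the helper name is made up):

```python
def get_active_participants(update: LightClientUpdate) -> uint64:
    # Number of sync-committee members that actually signed this update.
    return sum(update.sync_committee_bits)

# Keep whichever valid update has the most supporting sync-committee participants,
# analogous to LMD GHOST following the heaviest branch.
if (
    store.best_valid_update is None
    or get_active_participants(update) > get_active_participants(store.best_valid_update)
):
    store.best_valid_update = update
```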


@jinfwhuang Dec 13, 2021


I would suggest that the light-client's ability to continue could just rely on the "data source", which is invariably backed by full nodes. The exact meaning of "data source" is not well defined yet, because the networking layer could be the portal network, a LES-like p2p network, or a server-client RPC pairing.

When the light-client experiences a timeout or falls behind the current sync committee, i.e. the incoming updates are not good enough to advance its finalized_header, the client would revert to a skip-sync mode. In skip-sync mode, the client asks the "data source" for an update that would advance its sync committee. A light client does not advance until it somehow finds a way to access "finality". Because finality is guaranteed to be found in some data sources, a light client is only stuck because it couldn't access the correct data sources (i.e. the correct updates).

The guarantee that a light client will find a way to advance should depend on the light client having a way to find the right updates. Again, networking is not defined yet; once it is defined, we can evaluate under what conditions the light-client might not be able to find the appropriate updates.
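A hypothetical sketch of such a skip-sync loop; `data_source` and its `get_best_update` method are illustrative, since the networking layer is not specified yet, and the update-processing call is simplified:

```python
def skip_sync(store: LightClientStore, data_source, current_period: uint64) -> None:
    finalized_period = compute_epoch_at_slot(store.finalized_header.slot) // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
    # Fetch one committee-changing update per skipped period so the store's known
    # sync committees advance step by step until they catch up with the present.
    for period in range(finalized_period, current_period):
        update = data_source.get_best_update(period)  # hypothetical API
        process_light_client_update(store, update)    # call simplified
```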

```python
snapshot_period = compute_epoch_at_slot(snapshot.header.slot) // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
update_period = compute_epoch_at_slot(update.header.slot) // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
assert update_period in (snapshot_period, snapshot_period + 1)
finalized_period = compute_epoch_at_slot(store.finalized_header.slot) // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
```


Does that mean the light-client should fail if there is a skipped period? This seems to be a fairly normal path when a client stops running for a few days.

Contributor


The light-client can request historic LightClientUpdate from the network. It needs at least one update per period to follow along, as it only knows current_sync_committee and next_sync_committee and can only verify LightClientUpdate from those periods.

However, what is still suboptimal is the case where finalized_update is in a different period than attested_update, but this is not a problem introduced by this PR. Another tricky case to tackle is the one where attested_update is in a different period than the committee which signed it, which probably even requires some heuristics to figure out (as this case depends on there being missed slots at the start of an epoch). For now, these edge cases are all ignored, and updates are only accepted if finalized_update, attested_update, and the sync committee signing it all come from the same sync committee period.


Yes, agreed that the updating path will trace the sync-committee linkage. Also agree that this is not an issue raised by this PR.

With regard to the edge case... it could cause some weird behaviors temporarily. For example, apply_light_client_update is called due to timeout. Then, when valid update.finalized_header values arrive, they will get rejected.

Again, this behavior could be better handled if we assume that the light-client can make requests for specific LightClientUpdates when it times out or falls out of sync with the current stream of updates. The sync logic would become a lot cleaner if separated into two sync modes: skip-sync mode and normal sync mode.

@jinfwhuang commented Dec 8, 2021

I will make one big picture comment that is slightly out of scope for this PR. It is relevant because this PR attempts to fix the sync logic when there is a client timeout. It is even more relevant if a light client has to get updates that skip a period.

As it stands right now, the spec does not provide a mechanism to skip-sync for a light-client that has an outdated view of the sync-committees. This is somewhat addressed by the timeout mechanism, but not fully. Furthermore, without an explicit skip-sync mechanism, it is hard to address the cold start problem.

```python
def get_safety_threshold(store: LightClientStore) -> uint64:
    return max(
        store.previous_max_active_participants,
        store.current_max_active_participants,
    ) // 2
```


Is there a reason why the threshold is half of the max(previous, current)? This is just a heuristic check, correct? Can we add a note stating as much?
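For reference, a sketch of where the threshold is used, assuming the `sync_committee_bits` field and store fields from this PR (the placement within the update-processing function is paraphrased, not quoted):

```python
# Track the best participation seen in the current calculation period, then use
# half of the recent maximum as the bar for advancing the optimistic head.
sync_committee_bits = update.sync_committee_bits
store.current_max_active_participants = max(
    store.current_max_active_participants,
    sum(sync_committee_bits),
)
if (
    sum(sync_committee_bits) > get_safety_threshold(store)
    and update.attested_header.slot > store.optimistic_header.slot
):
    store.optimistic_header = update.attested_header
```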

Contributor


Related comment:
#2746 (comment)


got it.

Collaborator
@dapplion left a comment


This PR accomplishes the scope described in its body successfully 👍

Other topics brought up in this PR description should be tackled in new PRs:

  • p2p networking
  • improved data structures

Contributor
@hwwhww left a comment


Well done on the simplification. 👍

Agreed with @dapplion. FYI I'd like to move the file paths in other PRs. Let's merge this PR now and then propose suggestions & add other designs in other PRs.
