
Redefine attestation propagation condition #1706

Merged: 3 commits, Apr 8, 2020
Conversation

paulhauner
Contributor

I've redefined an attestation propagation condition in the networking spec. I assume this was the intention of the spec, keen to hear if I was wrong.

We could expand this to say that attestations from any source (block, local, etc.) will stop propagation. I can't imagine it being much more difficult for clients and perhaps it reduces a little traffic on the subnet?

I've marked this as draft until we confirm the intentions of this statement.
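
For illustration, a minimal sketch of the condition under discussion (names hypothetical, not the spec's wording): forward an attestation only if nothing has yet been seen from that validator for that slot, regardless of how it arrived:

```python
def should_propagate(validator_index: int, slot: int, seen: set) -> bool:
    """Sketch: propagate only the first attestation observed for
    (validator, slot), whether it arrived via gossip, a block, or
    local production."""
    key = (validator_index, slot)
    if key in seen:
        return False  # already seen something from this validator for this slot
    seen.add(key)
    return True
```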

@djrtwo
Contributor

djrtwo commented Apr 6, 2020

Yes, that is the original intended statement.

As we discussed briefly in another chat, we can expand this to be "per epoch" instead of "per slot" to safely reduce the size of the map from V*32 bits to V bits.
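
A rough sketch of the size arithmetic (the validator count V below is an arbitrary example, not from the thread):

```python
SLOTS_PER_EPOCH = 32
V = 100_000  # hypothetical validator count

# Per-slot tracking: one bit per validator for each slot of the epoch.
per_slot_map_bits = V * SLOTS_PER_EPOCH  # V * 32 bits

# Per-epoch tracking: one bit per validator.
per_epoch_map_bits = V  # V bits

print(per_slot_map_bits // per_epoch_map_bits)  # 32x reduction
```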

@paulhauner
Contributor Author

> As we discussed briefly in another chat, we can expand this to be "per epoch" instead of "per slot" to safely reduce the size of the map from V*32 bits to V bits.

Sounds good, I have made this change.

A minor note: due to the gossip clock disparity allowance (and whilst that allowance is less than half a slot), the sizes are V*33 or V*2.

paulhauner marked this pull request as ready for review April 6, 2020 21:45
@paulhauner
Contributor Author

Oh actually it was always V * 2 for an epoch map, since you need a map for the previous epoch too.

If clock disparity ever gets to be > 1/2 a slot duration, then we'd need V * 3.
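
A minimal sketch of the V * 2 arrangement (hypothetical names), assuming clock disparity stays under half a slot: keep one per-validator bitfield for the current epoch and one for the previous epoch, rotating at epoch boundaries:

```python
class TwoEpochSeenCache:
    """V * 2 bits: per-validator 'seen' flags for the previous and current epochs."""

    def __init__(self, validator_count: int, current_epoch: int):
        self.validator_count = validator_count
        self.current_epoch = current_epoch
        self.current = [False] * validator_count   # flags for current_epoch
        self.previous = [False] * validator_count  # flags for current_epoch - 1

    def rotate(self, new_epoch: int) -> None:
        """At an epoch boundary the current map becomes the previous one."""
        if new_epoch <= self.current_epoch:
            return
        if new_epoch == self.current_epoch + 1:
            self.previous = self.current
        else:
            self.previous = [False] * self.validator_count
        self.current = [False] * self.validator_count
        self.current_epoch = new_epoch

    def observe(self, epoch: int, validator_index: int) -> bool:
        """Record and return True if this is the first attestation seen for
        (epoch, validator); epochs outside the two tracked ones are ignored."""
        if epoch == self.current_epoch:
            flags = self.current
        elif epoch == self.current_epoch - 1:
            flags = self.previous
        else:
            return False
        first = not flags[validator_index]
        flags[validator_index] = True
        return first
```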
