Delete all trailing whitespace & add CI check #3977

Open · wants to merge 4 commits into base: `dev`
2 changes: 1 addition & 1 deletion .github/workflows/docs.yml
```diff
@@ -3,7 +3,7 @@ name: Publish docs
 on:
   push:
     branches:
-      - master 
+      - master
 permissions:
   contents: write
 jobs:
```
12 changes: 12 additions & 0 deletions .github/workflows/run-tests.yml
```diff
@@ -69,6 +69,18 @@ jobs:
       - name: Run linter for test generators
         run: make lint_generators
 
+  whitespace:
+    runs-on: [self-hosted-ghr-custom, size-l-x64, profile-consensusSpecs]
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+      - name: Check for trailing whitespace
+        run: |
+          if git grep -n '[[:blank:]]$'; then
+            echo "Trailing whitespace found. Please fix it."
+            exit 1
+          fi
+
   pyspec-tests:
     runs-on: [self-hosted-ghr-custom, size-xl-x64, profile-consensusSpecs]
     needs: [lint,codespell,table_of_contents]
```
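For contributors who trip the new check, the same `[[:blank:]]$` pattern the workflow greps for can be used to strip the offending whitespace locally. The sketch below is illustrative and not part of the PR; it assumes GNU `sed` (whose `-i` takes no backup suffix) and a hypothetical `sample.txt`:

```shell
# Create a file with one offending line (note the trailing spaces).
printf 'clean line\ndirty line   \n' > sample.txt

# Same pattern the CI job uses: reports any line ending in a space or tab.
grep -n '[[:blank:]]$' sample.txt

# Strip trailing spaces and tabs in place (GNU sed).
sed -i 's/[[:blank:]]*$//' sample.txt

# The check now passes: grep finds nothing and exits non-zero.
if grep -q '[[:blank:]]$' sample.txt; then
  echo "Trailing whitespace found. Please fix it."
else
  echo "clean"
fi
```

Across a whole repository the fix generalizes to `git grep -l '[[:blank:]]$' | xargs sed -i 's/[[:blank:]]*$//'`, again assuming GNU sed.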
4 changes: 2 additions & 2 deletions configs/README.md
```diff
@@ -1,7 +1,7 @@
 # Configurations
 
 This directory contains a set of configurations used for testing, testnets, and mainnet.
-A client binary may be compiled for a specific `PRESET_BASE`, 
+A client binary may be compiled for a specific `PRESET_BASE`,
 and then load different configurations around that preset to participate in different networks or tests.
 
 Standard configs:
@@ -24,7 +24,7 @@ In this case, the suffix on the new variable may be removed, and the old variabl
 
 A previous iteration of forking made use of "timelines", but this collides with the definitions used in the spec (variables for special forking slots, etc.), and was not integrated sufficiently in any of the spec tools or implementations.
 Instead, the config essentially doubles as fork definition now, e.g. changing the value for `ALTAIR_FORK_EPOCH` changes the fork.
-
+
 ## Format
 
 Each preset and configuration is a key-value mapping.
```
4 changes: 2 additions & 2 deletions docs/docs/templates/beacon-chain-template.md
```diff
@@ -67,9 +67,9 @@ class CONTAINER_NAME(Container):
 
 ### Block processing
 
-
-
+
+
 
 ## Testing
 
 *Note*: The function `initialize_beacon_state_from_eth1` is modified for pure <FORK_NAME> testing only.
```
2 changes: 1 addition & 1 deletion mkdocs.yml
```diff
@@ -7,7 +7,7 @@ theme:
     - scheme: default
       primary: black
       toggle:
-        icon: material/brightness-7 
+        icon: material/brightness-7
         name: Switch to dark mode
     - scheme: slate
       primary: black
```
2 changes: 1 addition & 1 deletion presets/mainnet/eip6800.yaml
```diff
@@ -2,7 +2,7 @@
 
 # Misc
 # ---------------------------------------------------------------
-# `uint64(2**16)` (= 65,536) 
+# `uint64(2**16)` (= 65,536)
 MAX_STEMS: 65536
 # `uint64(33)`
 MAX_COMMITMENTS_PER_STEM: 33
```
2 changes: 1 addition & 1 deletion presets/minimal/eip6800.yaml
```diff
@@ -2,7 +2,7 @@
 
 # Execution
 # ---------------------------------------------------------------
-# `uint64(2**16)` (= 65,536) 
+# `uint64(2**16)` (= 65,536)
 MAX_STEMS: 65536
 # `uint64(33)`
 MAX_COMMITMENTS_PER_STEM: 33
```
6 changes: 3 additions & 3 deletions specs/_features/custody_game/beacon-chain.md
```diff
@@ -573,7 +573,7 @@ def process_custody_slashing(state: BeaconState, signed_custody_slashing: Signed
 
     # Any signed custody-slashing should result in at least one slashing.
     # If the custody bits are valid, then the claim itself is slashed.
-    malefactor = state.validators[custody_slashing.malefactor_index] 
+    malefactor = state.validators[custody_slashing.malefactor_index]
     whistleblower = state.validators[custody_slashing.whistleblower_index]
     domain = get_domain(state, DOMAIN_CUSTODY_BIT_SLASHING, get_current_epoch(state))
     signing_root = compute_signing_root(custody_slashing, domain)
@@ -596,7 +596,7 @@ def process_custody_slashing(state: BeaconState, signed_custody_slashing: Signed
     # Verify existence and participation of claimed malefactor
     attesters = get_attesting_indices(state, attestation)
     assert custody_slashing.malefactor_index in attesters
-
+
     # Verify the malefactor custody key
     epoch_to_sign = get_randao_epoch_for_custody_period(
         get_custody_period_for_validator(custody_slashing.malefactor_index, attestation.data.target.epoch),
@@ -619,7 +619,7 @@ def process_custody_slashing(state: BeaconState, signed_custody_slashing: Signed
         for attester_index in attesters:
             if attester_index != custody_slashing.malefactor_index:
                 increase_balance(state, attester_index, whistleblower_reward)
-        # No special whisteblower reward: it is expected to be an attester. Others are free to slash too however. 
+        # No special whisteblower reward: it is expected to be an attester. Others are free to slash too however.
     else:
         # The claim was false, the custody bit was correct. Slash the whistleblower that induced this work.
         slash_validator(state, custody_slashing.whistleblower_index)
```
16 changes: 8 additions & 8 deletions specs/_features/das/p2p-interface.md
```diff
@@ -33,7 +33,7 @@
 ## Introduction
 
 For an introduction about DAS itself, see [the DAS participation spec](sampling.md#data-availability-sampling).
-This is not a pre-requisite for the network layer, but will give you valuable context. 
+This is not a pre-requisite for the network layer, but will give you valuable context.
 
 For sampling, all nodes need to query for `k` random samples each slot.
 
@@ -55,13 +55,13 @@ The push model does not aim to serve "historical" queries (anything older than t
 Historical queries are still required for the unhappy case, where messages are not pushed quick enough,
 and missing samples are not reconstructed by other nodes on the horizontal subnet quick enough.
 
-The main challenge in supporting historical queries is to target the right nodes, 
+The main challenge in supporting historical queries is to target the right nodes,
 without concentrating too many requests on a single node, or breaking the network/consensus identity separation.
 
 ## DAS Subnets
 
 On a high level, the push-model roles are divided into:
-- Sources: create blobs of shard block data, and transformed into many tiny samples. 
+- Sources: create blobs of shard block data, and transformed into many tiny samples.
 - Sinks: continuously look for samples
 
 At full operation, the network has one proposer, per shard, per slot.
@@ -93,15 +93,15 @@ Peers on the horizontal subnet are expected to at least perform regular propagat
 Nodes on this same subnet can replicate the sampling efficiently (including a proof for each sample),
 and distribute it to any vertical networks that are available to them.
 
-Since the messages are content-addressed (instead of origin-stamped), 
-multiple publishers of the same samples on a vertical subnet do not hurt performance, 
+Since the messages are content-addressed (instead of origin-stamped),
+multiple publishers of the same samples on a vertical subnet do not hurt performance,
 but actually improve it by shortcutting regular propagation on the vertical subnet, and thus lowering the latency to a sample.
 
 
 ### Vertical subnets
 
 Vertical subnets propagate the samples to every peer that is interested.
-These interests are randomly sampled and rotate quickly: although not perfect, 
+These interests are randomly sampled and rotate quickly: although not perfect,
 sufficient to avoid any significant amount of nodes from being 100% predictable.
 
 As soon as a sample is missing after the expected propagation time window,
@@ -166,7 +166,7 @@ The [DAS participation spec](sampling.md#horizontal-subnets) outlines when and w
 
 #### Vertical subnets: `das_sample_{subnet_index}`
 
-Shard blob samples can be verified with just a 48 byte KZG proof (commitment quotient polynomial), 
+Shard blob samples can be verified with just a 48 byte KZG proof (commitment quotient polynomial),
 against the commitment to blob polynomial, specific to that `(shard, slot)` key.
 
 The following validations MUST pass before forwarding the `sample` on the vertical subnet.
@@ -192,7 +192,7 @@ This is to serve other peers that may have missed it.
 
 To pull samples from nodes, in case of network instability when samples are unavailable, a new query method is added to the Req-Resp domain.
 
-This builds on top of the protocol identification and encoding spec which was introduced in [the Phase0 network spec](../../phase0/p2p-interface.md). 
+This builds on top of the protocol identification and encoding spec which was introduced in [the Phase0 network spec](../../phase0/p2p-interface.md).
 
 Note that DAS networking uses a different protocol prefix: `/eth2/das/req`
```
8 changes: 4 additions & 4 deletions specs/_features/eip7594/fork-choice.md
````diff
@@ -25,10 +25,10 @@ This is the modification of the fork choice accompanying EIP-7594.
 ```python
 def is_data_available(beacon_block_root: Root) -> bool:
     # `retrieve_column_sidecars` is implementation and context dependent, replacing
-    # `retrieve_blobs_and_proofs`. For the given block root, it returns all column 
-    # sidecars to sample, or raises an exception if they are not available. 
-    # The p2p network does not guarantee sidecar retrieval outside of 
-    # `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` epochs. 
+    # `retrieve_blobs_and_proofs`. For the given block root, it returns all column
+    # sidecars to sample, or raises an exception if they are not available.
+    # The p2p network does not guarantee sidecar retrieval outside of
+    # `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` epochs.
     column_sidecars = retrieve_column_sidecars(beacon_block_root)
     return all(
         verify_data_column_sidecar(column_sidecar)
````
2 changes: 1 addition & 1 deletion specs/_features/eip7594/peer-sampling.md
```diff
@@ -1,4 +1,4 @@
-# EIP-7594 -- Peer Sampling 
+# EIP-7594 -- Peer Sampling
 
 **Notice**: This document is a work-in-progress for researchers and implementers.
 
```