
Commit

publish v0.14.0 (#148)
* replace v0.13.0 with v0.14.0

* add prysm support link

* update charon cli reference

* add versioned_docs/version-v0.14.0

* replace docker-compose with docker compose

* Update docs/int/quickstart/quickstart-alone.md

Co-authored-by: Oisín Kyne <[email protected]>

* Update docs/charon/charon-cli-reference.md

Co-authored-by: Oisín Kyne <[email protected]>

* fix create cluster command

* add link to guide for --withdrawal-address flag

---------

Co-authored-by: Oisín Kyne <[email protected]>
xenowits and OisinKyne authored Mar 10, 2023
1 parent 2b11a14 commit f8587e0
Showing 46 changed files with 3,253 additions and 131 deletions.
134 changes: 42 additions & 92 deletions docs/charon/charon-cli-reference.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/dvl/intro.md
Original file line number Diff line number Diff line change
@@ -11,7 +11,7 @@ In order to activate an Ethereum validator, 32 ETH must be deposited into the of

The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.

To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://bia.launchpad.obol.tech/).
To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://goerli.launchpad.obol.tech/).

## Getting started

2 changes: 1 addition & 1 deletion docs/int/Overview.md
@@ -13,7 +13,7 @@ Similar to how roll-up technology laid the foundation for L2 scaling implementat

The Obol Network consists of four core public goods:

- The [Distributed Validator Launchpad](../dvl/intro), a [User Interface](https://bia.launchpad.obol.tech/) for bootstrapping Distributed Validators
- The [Distributed Validator Launchpad](../dvl/intro), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
- [Charon](../charon/intro), a middleware client that enables validators to run in a fault-tolerant, distributed manner
- [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
- [Obol Testnets](../testnet.md), a set of on-going public incentivized testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
10 changes: 5 additions & 5 deletions docs/int/faq/errors.mdx
@@ -8,15 +8,15 @@ description: Errors & Resolutions
All operators should try to restart their nodes and should check if they are on the latest stable version before attempting any other configuration change, as we are still in beta and frequently releasing fixes. You can restart and update with the following commands:

```
docker-compose down
docker compose down
git pull
docker-compose up
docker compose up
```

You can check your logs using

```
docker-compose logs
docker compose logs
```
<details open className="details">
<summary>
@@ -344,15 +344,15 @@ docker-compose logs
</details>
<details className="details">
<summary>
<h4 id="running-docker-compose-up-error"> I see a lot of errors after running <code>docker-compose up</code>
<h4 id="running-docker-compose-up-error"> I see a lot of errors after running <code>docker compose up</code>
</h4>
</summary> This is because both geth and lighthouse start syncing, which causes connectivity issues among the containers. Simply let the containers run for a while; you won't observe frequent errors once geth finishes syncing. You can also add a second beacon node endpoint, such as Infura, by adding a comma-separated API URL to the end of <code>CHARON_BEACON_NODE_ENDPOINTS</code> in [docker-compose.yml](./docker-compose.yml#84).
</details>
<details className="details">
<summary>
<h4 id="loki-plugin-not-found-error"> How do I fix the <code>plugin "loki" not found</code> error?
</h4>
</summary> If you get the following error when calling `docker-compose up`:<br/>
</summary> If you get the following error when calling `docker compose up`:<br/>
<code>Error response from daemon: error looking up logging plugin loki: plugin "loki" not found</code>.<br/>Then it probably means that the Loki docker driver isn't installed. In that case, run the following command to install loki:<br/>
<code>docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions </code>
</details>
4 changes: 2 additions & 2 deletions docs/int/faq/general.md
@@ -59,7 +59,7 @@ By the way, the more operators, the longer the DKG, but don't worry, there is no

## Debugging Errors in Logs

You can check if the containers on your node are outputting errors by running `docker-compose logs` on a machine with a running cluster.
You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.

Diagnose some common errors and view their resolutions [here](./errors.mdx).

@@ -80,7 +80,7 @@ cd charon-distributed-validator-node
nano bootnode/docker-compose.yml
docker-compose -f bootnode/docker-compose.yml up
docker compose -f bootnode/docker-compose.yml up
```

Test whether the bootnode is publicly accessible. This should return an ENR:
4 changes: 2 additions & 2 deletions docs/int/quickstart/group/quickstart-group-leader-creator.md
@@ -52,7 +52,7 @@ Before starting the cluster creation, you will need to collect one Ethereum addr
cd charon-distributed-validator-node

# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 create enr
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.0 create enr
```

You should expect to see a console output like
@@ -78,7 +78,7 @@ Please make sure to create a backup of the private key at `.charon/charon-enr-pr
You will prepare the configuration file for the distributed key generation ceremony using the launchpad.
1. Go to the [DV Launchpad](https://bia.launchpad.obol.tech)
1. Go to the [DV Launchpad](https://goerli.launchpad.obol.tech)
2. Connect your wallet
![Connect your Wallet](/img/Guide01.png)
20 changes: 10 additions & 10 deletions docs/int/quickstart/group/quickstart-group-operator.md
@@ -35,7 +35,7 @@ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
cd charon-distributed-validator-node

# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 create enr
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.0 create enr
```

You should expect to see a console output like
@@ -112,7 +112,7 @@ With the DKG ceremony over, the last phase before activation is to prepare your

Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every person's home internet, and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster isn't going to work very resiliently if all charon nodes cannot talk directly to one another and instead need to have an intermediary node forwarding traffic to them.
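A quick way to sanity-check the port forwarding described above (a sketch only — `nc` availability and the placeholder host are assumptions; substitute your own public IP or DNS name):

```shell
# Probe the public p2p port (3610) from a machine outside your network.
# 203.0.113.10 is a documentation placeholder address, not a real host.
HOST=203.0.113.10
PORT=3610
if nc -z -w 5 "$HOST" "$PORT"; then
  echo "port open"
else
  echo "port closed or filtered"
fi
```

Run this from outside your local network if possible, since some home routers do not support NAT hairpinning and can make the port appear closed from inside the LAN.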

**Caution**: If you manually update `docker-compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It'd be best not to manually update as `lighthouse` checkpoint-syncs so the syncing doesn't take much time.
**Caution**: If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It'd be best not to manually update as `lighthouse` checkpoint-syncs so the syncing doesn't take much time.

**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For ex: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.

@@ -121,7 +121,7 @@ Before completing these instructions, you should assign a static local IP addres
rm -r ./data/lighthouse
# Spin up a Distributed Validator Node with a Validator Client
docker-compose up
docker compose up
# Open Grafana dashboard
open http://localhost:3000/d/singlenode/
@@ -140,7 +140,7 @@ If at any point you need to turn off your node, you can run:

```
# Shut down the currently running distributed validator node
docker-compose down
docker compose down
```

## Step 5. Activate the deposit data
@@ -214,7 +214,7 @@ A threshold of operators in the cluster need to perform this task to exit a vali
- `compose-voluntary-exit.yml` is configured with `--epoch=112260`, which is the latest Bellatrix fork on Prater.
- If the Charon cluster is running on a different chain, **ALL** operators must update `--epoch` to the same latest fork version returned by `curl $BEACON_NODE/eth/v1/config/fork_schedule`.
- Run the command to submit this node's partially signed voluntary exit:
- `docker-compose -f compose-voluntary-exit.yml up`
- `docker compose -f compose-voluntary-exit.yml up`
- Confirm the logs: `Exit for validator XXXXX submitted`
- Exit the container: `Ctrl-C`
- The charon metric `core_parsigdb_exit_total` will be incremented each time a voluntary exit partial signature is received, either from this node or from peers.
@@ -238,7 +238,7 @@ There are some additional compose files in this repository, `compose-debug.yml`,

- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
```
docker-compose -f docker-compose.yml -f compose-debug.yml up
docker compose -f docker-compose.yml -f compose-debug.yml up
```

- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
@@ -247,16 +247,16 @@
```
cp docker-compose.override.yml.sample docker-compose.override.yml
# Tweak docker-compose.override.yml and then run docker-compose up
docker-compose up
# Tweak docker-compose.override.yml and then run docker compose up
docker compose up
```

- You can also run all these compose files together. This is desirable when you want to use both the features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
```
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
```

- To run [mev-boost](https://boost.flashbots.net/), run:
```
docker-compose -f docker-compose.yml -f mevboost-compose.yml up
docker compose -f docker-compose.yml -f mevboost-compose.yml up
```
14 changes: 8 additions & 6 deletions docs/int/quickstart/quickstart-alone.md
@@ -41,12 +41,13 @@ Run the following command:

```sh
# Create a distributed validator cluster
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 create cluster --withdrawal-address="0x000000000000000000000000000000000000dead" --nodes 6 --threshold 5
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.0 create cluster --name="mycluster" --withdrawal-addresses="0x000000000000000000000000000000000000dead" --fee-recipient-addresses="0x000000000000000000000000000000000000dead" --nodes 6 --threshold 5
```
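The `--threshold 5` flag above corresponds to the byzantine-fault-tolerance bound for six nodes. As a hedged sketch (the exact formula charon uses should be confirmed against the CLI reference), the threshold for `n` nodes can be derived as:

```shell
# BFT-style threshold: t = n - floor((n - 1) / 3).
# For nodes=6 this evaluates to 5, matching the flags above.
nodes=6
threshold=$(( nodes - (nodes - 1) / 3 ))
echo "$threshold"
```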

This command will create a subdirectory `.charon/cluster`. In it are six folders, one for each charon node created. Each folder contains partial private keys that together make up the distributed validator described in `.charon/cluster/cluster-lock.json`.
This command will create a subdirectory `.charon/cluster`. In it are six folders, one for each charon node created. Each folder contains partial private keys that together make up the distributed validator described in `.charon/cluster/cluster-lock.json`. Note
that charon versions prior to `v0.14.0` had a single `--withdrawal-address` flag which was changed to the `--withdrawal-addresses` flag in the [v0.14.0 release](https://github.com/ObolNetwork/charon/releases/tag/v0.14.0).

This guide will launch all six charon clients in separate containers along with an execution client and consensus client. To distribute your cluster physically, copy each directory with one (or several) private keys within it to the other machines you want to use. Consider using the single node [docker-compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected.
This guide will launch all six charon clients in separate containers along with an execution client and consensus client. To distribute your cluster physically, copy each directory with one (or several) private keys within it to the other machines you want to use. Consider using the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected.
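Distributing the directories could look like the following dry-run sketch, which only prints the copy commands. The `operatorN.example.com` hostnames are placeholders, and the `node0`…`node5` directory names assume charon's default output layout — verify both against your own setup before running anything.

```shell
# Print (rather than run) one scp command per node directory.
for i in 0 1 2 3 4 5; do
  echo scp -r ".charon/cluster/node$i" "operator$i.example.com:~/charon-distributed-validator-node/.charon"
done
```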

### Distributed Validator Cluster

@@ -69,7 +70,7 @@ Run this command from each machine containing private keys to start your cluster

```sh
# Start the distributed validator cluster
docker-compose up --build
docker compose up --build
```
Check the monitoring dashboard and see if things look all right

@@ -115,7 +116,7 @@ A threshold of nodes in the cluster need to perform this task to exit a validato
- `compose-voluntary-exit.yml` is configured with `--epoch=112260`, which is the latest Bellatrix fork on Prater.
- If the Charon cluster is running on a different chain, **ALL** operators must update `--epoch` to the same latest fork version returned by `curl $BEACON_NODE/eth/v1/config/fork_schedule`.
- Run the command to submit this node's partially signed voluntary exit:
- `docker-compose -f compose-voluntary-exit.yml up`
- `docker compose -f compose-voluntary-exit.yml up`
- Confirm the logs: `Exit for validator XXXXX submitted`
- Exit the container: `Ctrl-C`
- The charon metric `core_parsigdb_exit_total` will be incremented each time a voluntary exit partial signature is received, either from this node or from peers.
@@ -136,7 +137,8 @@ which needs a prysm beacon node to work alongside a REST based beacon node. Here
docker compose -f docker-compose.yml -f compose-prysm.yml -f docker-compose.override.yml up --build
```
Note: Support for prysm VCs with is in experimental phase as prysm doesn't provide complete support of REST API compatible validator client.
Note: Support for prysm validator clients is in an experimental phase as prysm doesn't provide [complete support](https://github.com/prysmaticlabs/prysm/issues/11580)
for running their validator client on a beacon node REST API.

## Feedback

22 changes: 11 additions & 11 deletions docs/int/quickstart/quickstart-cli.md
@@ -32,7 +32,7 @@ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
cd charon-distributed-validator-node

# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 create enr
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.0 create enr
```

You should expect to see a console output like
@@ -59,7 +59,7 @@ Finally, share your ENR with the leader or creator so that he/she can proceed to

3. Run the `charon create dkg` command that generates DKG cluster-definition.json file.
```
docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.13.0 create dkg
docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.14.0 create dkg
```

This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in a cluster.
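For reference, the `.env.create_dkg` file passed with `--env-file` above might look like the following sketch. The variable names assume charon's convention of mapping each CLI flag to a `CHARON_`-prefixed, upper-cased environment variable, and every value shown is a placeholder — consult the charon CLI reference for the authoritative flag list:

```
# Hypothetical .env.create_dkg — all values are placeholders.
CHARON_NAME=mycluster
CHARON_NUM_VALIDATORS=1
CHARON_FEE_RECIPIENT_ADDRESSES=0x000000000000000000000000000000000000dead
CHARON_WITHDRAWAL_ADDRESSES=0x000000000000000000000000000000000000dead
CHARON_OPERATOR_ENRS=enr:-HW4QB...,enr:-HW4QC...
```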
@@ -72,7 +72,7 @@ Every cluster member then participates in the DKG ceremony. For Charon v1, this

```
# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 dkg
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.0 dkg
```

>This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ&ab_channel=ObolNetwork).
@@ -106,7 +106,7 @@ Before completing these instructions, you should assign a static local IP addres
rm -r ./data/lighthouse
# Spin up a Distributed Validator Node with a Validator Client
docker-compose up
docker compose up
# Open Grafana dashboard
open http://localhost:3000/d/singlenode/
@@ -125,7 +125,7 @@ If at any point you need to turn off your node, you can run:

```
# Shut down the currently running distributed validator node
docker-compose down
docker compose down
```

## Step 5. Activate the deposit data
@@ -199,7 +199,7 @@ A threshold of operators in the cluster need to perform this task to exit a vali
- `compose-voluntary-exit.yml` is configured with `--epoch=112260`, which is the latest Bellatrix fork on Prater.
- If the Charon cluster is running on a different chain, **ALL** operators must update `--epoch` to the same latest fork version returned by `curl $BEACON_NODE/eth/v1/config/fork_schedule`.
- Run the command to submit this node's partially signed voluntary exit:
- `docker-compose -f compose-voluntary-exit.yml up`
- `docker compose -f compose-voluntary-exit.yml up`
- Confirm the logs: `Exit for validator XXXXX submitted`
- Exit the container: `Ctrl-C`
- The charon metric `core_parsigdb_exit_total` will be incremented each time a voluntary exit partial signature is received, either from this node or from peers.
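One way to watch that counter is to scrape charon's local metrics endpoint (a sketch — `localhost:3620` assumes the repo's default monitoring port mapping; check your `docker-compose.yml`):

```shell
# Print the current value of the exit partial-signature counter.
# The port is an assumption; adjust it to your node's monitoring address.
curl -s http://localhost:3620/metrics | grep core_parsigdb_exit_total \
  || echo "metrics endpoint not reachable"
```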
@@ -214,7 +214,7 @@ The above steps should get you running a distributed validator cluster. The foll

### Docker power users

This section of the readme is intended for the "docker power users", i.e., for the ones who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
This section of the readme is intended for the "docker power users", i.e., for the ones who are familiar with working with `docker compose` and want to have more flexibility and power to change the default configuration.

We use the "Multiple Compose File" feature, which provides a powerful way to override any configuration in `docker-compose.yml` without modifying git-checked-in files, since editing those results in conflicts when upgrading this repo.
See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
@@ -223,7 +223,7 @@ There are two additional files in this repository, `compose-debug.yml` and `dock

- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
```
docker-compose -f docker-compose.yml -f compose-debug.yml up
docker compose -f docker-compose.yml -f compose-debug.yml up
```

- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
@@ -232,11 +232,11 @@
```
cp docker-compose.override.yml.sample docker-compose.override.yml
# Tweak docker-compose.override.yml and then run docker-compose up
docker-compose up
# Tweak docker-compose.override.yml and then run docker compose up
docker compose up
```
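As a concrete sketch of such an override (the `charon` service name and the extra port mapping are assumptions — align them with the services actually defined in your `docker-compose.yml`):

```shell
# Write a minimal override file that exposes one extra port.
cat > docker-compose.override.yml <<'EOF'
services:
  charon:
    ports:
      - "3620:3620"
EOF
# Verify the override was written as expected.
grep -q "3620:3620" docker-compose.override.yml && echo "override written"
```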

- You can also run all these compose files together. This is desirable when you want to use both the features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
```
docker-compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
```

