feat: public network deployments #10089

Merged: 5 commits, Nov 21, 2024
Changes from 4 commits
10 changes: 4 additions & 6 deletions spartan/aztec-network/files/config/config-prover-env.sh
@@ -1,11 +1,9 @@
-#!/bin/sh
+#!/bin/bash
 set -eu
 
-alias aztec='node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js'

Collaborator Author: alias stopped working. I have no idea why.
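A likely explanation (an editor's guess, not confirmed in the thread): bash only expands aliases in non-interactive scripts when `shopt -s expand_aliases` is set, so an alias defined in a script body is parsed but silently ignored. A shell function avoids the problem. A minimal sketch, with made-up names:

```shell
#!/bin/bash
# An alias defined inside a non-interactive bash script is not expanded,
# so invoking it fails with "command not found".
alias greet='echo hello'
greet 2>/dev/null || echo "alias failed"   # → alias failed

# A function works as expected and forwards its arguments:
greet_fn() { echo "hello $1"; }
greet_fn world                             # → hello world
```

In this PR, a function wrapping the `node … index.js` invocation would likely have worked too, which may be why the scripts simply inline the full command instead.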

 # Pass the bootnode url as an argument
 # Ask the bootnode for l1 contract addresses
-output=$(aztec get-node-info -u $1)
+output=$(node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js get-node-info -u $1)

echo "$output"

@@ -22,7 +20,7 @@ governance_proposer_address=$(echo "$output" | grep -oP 'GovernanceProposer Addr
governance_address=$(echo "$output" | grep -oP 'Governance Address: \K0x[a-fA-F0-9]{40}')

# Write the addresses to a file in the shared volume
-cat <<EOF > /shared/contracts.env
+cat <<EOF > /shared/contracts/contracts.env
export BOOTSTRAP_NODES=$boot_node_enr
export ROLLUP_CONTRACT_ADDRESS=$rollup_address
export REGISTRY_CONTRACT_ADDRESS=$registry_address
@@ -36,4 +34,4 @@ export GOVERNANCE_PROPOSER_CONTRACT_ADDRESS=$governance_proposer_address
export GOVERNANCE_CONTRACT_ADDRESS=$governance_address
EOF

-cat /shared/contracts.env
+cat /shared/contracts/contracts.env
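The `grep -oP` calls in these scripts rely on the Perl-compatible `\K` operator, which discards everything matched so far so that only the address itself is printed (GNU grep only; BSD/macOS grep lacks `-P`). A small illustration with a made-up address:

```shell
# Sample node-info output line (address invented for illustration):
output="Rollup Address: 0x1234567890abcdef1234567890abcdef12345678"

# \K resets the start of the reported match, so only the 0x… address remains:
rollup_address=$(echo "$output" | grep -oP 'Rollup Address: \K0x[a-fA-F0-9]{40}')
echo "$rollup_address"   # → 0x1234567890abcdef1234567890abcdef12345678
```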
9 changes: 4 additions & 5 deletions spartan/aztec-network/files/config/config-validator-env.sh
@@ -1,11 +1,10 @@
-#!/bin/sh
+#!/bin/bash
 set -eu
 
-alias aztec='node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js'

 # Pass the bootnode url as an argument
 # Ask the bootnode for l1 contract addresses
-output=$(aztec get-node-info -u $1)
+output=$(node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js get-node-info -u $1)

echo "$output"

@@ -28,7 +27,7 @@ private_key=$(jq -r ".[$INDEX]" /app/config/keys.json)


# Write the addresses to a file in the shared volume
-cat <<EOF > /shared/contracts.env
+cat <<EOF > /shared/contracts/contracts.env
export BOOTSTRAP_NODES=$boot_node_enr
export ROLLUP_CONTRACT_ADDRESS=$rollup_address
export REGISTRY_CONTRACT_ADDRESS=$registry_address
@@ -45,4 +44,4 @@ export L1_PRIVATE_KEY=$private_key
export SEQ_PUBLISHER_PRIVATE_KEY=$private_key
EOF

-cat /shared/contracts.env
+cat /shared/contracts/contracts.env
11 changes: 5 additions & 6 deletions spartan/aztec-network/files/config/deploy-l1-contracts.sh
@@ -1,9 +1,8 @@
-#!/bin/sh
+#!/bin/bash
 set -exu
 
 CHAIN_ID=$1
 
-alias aztec='node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js'

# Use default account, it is funded on our dev machine
export PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
@@ -12,9 +11,9 @@ export PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4
output=""
# if INIT_VALIDATORS is true, then we need to pass the validators flag to the deploy-l1-contracts command
if [ "$INIT_VALIDATORS" = "true" ]; then
-output=$(aztec deploy-l1-contracts --validators $2 --l1-chain-id $CHAIN_ID)
+output=$(node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js deploy-l1-contracts --validators $2 --l1-chain-id $CHAIN_ID)
 else
-output=$(aztec deploy-l1-contracts --l1-chain-id $CHAIN_ID)
+output=$(node --no-warnings /usr/src/yarn-project/aztec/dest/bin/index.js deploy-l1-contracts --l1-chain-id $CHAIN_ID)
fi

echo "$output"
@@ -32,7 +31,7 @@ governance_proposer_address=$(echo "$output" | grep -oP 'GovernanceProposer Addr
governance_address=$(echo "$output" | grep -oP 'Governance Address: \K0x[a-fA-F0-9]{40}')

# Write the addresses to a file in the shared volume
-cat <<EOF > /shared/contracts.env
+cat <<EOF > /shared/contracts/contracts.env
export ROLLUP_CONTRACT_ADDRESS=$rollup_address
export REGISTRY_CONTRACT_ADDRESS=$registry_address
export INBOX_CONTRACT_ADDRESS=$inbox_address
@@ -45,4 +44,4 @@ export GOVERNANCE_PROPOSER_CONTRACT_ADDRESS=$governance_proposer_address
export GOVERNANCE_CONTRACT_ADDRESS=$governance_address
EOF

-cat /shared/contracts.env
+cat /shared/contracts/contracts.env
39 changes: 39 additions & 0 deletions spartan/aztec-network/files/config/setup-p2p-addresses.sh
@@ -0,0 +1,39 @@
#!/bin/sh

POD_NAME=$(echo $HOSTNAME)

if [ "${NETWORK_PUBLIC}" = "true" ]; then
# First try treating HOSTNAME as a pod name
NODE_NAME=$(kubectl get pod $POD_NAME -n ${NAMESPACE} -o jsonpath='{.spec.nodeName}' 2>/dev/null)

# If that fails, HOSTNAME might be the node name itself
if [ $? -ne 0 ]; then
echo "Could not find pod $POD_NAME, assuming $POD_NAME is the node name"
NODE_NAME=$POD_NAME
fi

EXTERNAL_IP=$(kubectl get node $NODE_NAME -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')
Collaborator Author: Warning. Our AWS nodes do not have external IP addresses. So our prod deployments will likely move to GCP.


if [ -z "$EXTERNAL_IP" ]; then
echo "Warning: Could not find ExternalIP, falling back to InternalIP"
EXTERNAL_IP=$(kubectl get node $NODE_NAME -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
fi

TCP_ADDR="${EXTERNAL_IP}:${P2P_TCP_PORT}"
UDP_ADDR="${EXTERNAL_IP}:${P2P_UDP_PORT}"

else
# Get pod IP for non-public networks
POD_IP=$(hostname -i)
TCP_ADDR="${POD_IP}:${P2P_TCP_PORT}"
UDP_ADDR="${POD_IP}:${P2P_UDP_PORT}"
fi

# Write addresses to file for sourcing
echo "export P2P_TCP_ANNOUNCE_ADDR=${TCP_ADDR}" > /shared/p2p/p2p-addresses
echo "export P2P_TCP_LISTEN_ADDR=0.0.0.0:${P2P_TCP_PORT}" >> /shared/p2p/p2p-addresses
echo "export P2P_UDP_ANNOUNCE_ADDR=${UDP_ADDR}" >> /shared/p2p/p2p-addresses
echo "export P2P_UDP_LISTEN_ADDR=0.0.0.0:${P2P_UDP_PORT}" >> /shared/p2p/p2p-addresses

echo "P2P addresses configured:"
cat /shared/p2p/p2p-addresses
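For illustration, with a hypothetical external IP of 203.0.113.10 and ports 40400 (TCP) / 40500 (UDP) — all three values invented here, not taken from the chart — the generated `/shared/p2p/p2p-addresses` file would contain:

```shell
export P2P_TCP_ANNOUNCE_ADDR=203.0.113.10:40400
export P2P_TCP_LISTEN_ADDR=0.0.0.0:40400
export P2P_UDP_ANNOUNCE_ADDR=203.0.113.10:40500
export P2P_UDP_LISTEN_ADDR=0.0.0.0:40500
```

Containers then `source` this file so the node announces its externally reachable address while listening on all interfaces.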
88 changes: 88 additions & 0 deletions spartan/aztec-network/files/config/setup-service-addresses.sh
Collaborator Author: The whole purpose of this file is to get the proper hostname of various services. It is complicated because we have 3 cases:

  1. An "external" host is provided (e.g. we have sepolia for L1, or an external boot node)
  2. We're running a public network, so we need the underlying/physical node's IP
  3. We're running an internal network, so we use the k8s service name

@@ -0,0 +1,88 @@
#!/bin/bash

set -ex

# Function to get pod and node details
get_service_address() {
local SERVICE_LABEL=$1
local PORT=$2
local MAX_RETRIES=30
local RETRY_INTERVAL=2
local attempt=1

# Get pod name
while [ $attempt -le $MAX_RETRIES ]; do
POD_NAME=$(kubectl get pods -n ${NAMESPACE} -l app=${SERVICE_LABEL} -o jsonpath='{.items[0].metadata.name}')
if [ -n "$POD_NAME" ]; then
break
fi
echo "Attempt $attempt: Waiting for ${SERVICE_LABEL} pod to be available..." >&2
sleep $RETRY_INTERVAL
attempt=$((attempt + 1))
done

if [ -z "$POD_NAME" ]; then
echo "Error: Failed to get ${SERVICE_LABEL} pod name after $MAX_RETRIES attempts" >&2
return 1
fi
echo "Pod name: [${POD_NAME}]" >&2

# Get node name
attempt=1
NODE_NAME=""
while [ $attempt -le $MAX_RETRIES ]; do
NODE_NAME=$(kubectl get pod ${POD_NAME} -n ${NAMESPACE} -o jsonpath='{.spec.nodeName}')
if [ -n "$NODE_NAME" ]; then
break
fi
echo "Attempt $attempt: Waiting for node name to be available..." >&2
sleep $RETRY_INTERVAL
attempt=$((attempt + 1))
done

if [ -z "$NODE_NAME" ]; then
echo "Error: Failed to get node name after $MAX_RETRIES attempts" >&2
return 1
fi
echo "Node name: ${NODE_NAME}" >&2

# Get the node's external IP
NODE_IP=$(kubectl get node ${NODE_NAME} -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')
echo "Node IP: ${NODE_IP}" >&2
echo "http://${NODE_IP}:${PORT}"
}

# Configure Ethereum address
if [ "${ETHEREUM_EXTERNAL_HOST}" != "" ]; then
ETHEREUM_ADDR="${ETHEREUM_EXTERNAL_HOST}"
elif [ "${NETWORK_PUBLIC}" = "true" ]; then
ETHEREUM_ADDR=$(get_service_address "ethereum" "${ETHEREUM_PORT}")
else
ETHEREUM_ADDR="http://${SERVICE_NAME}-ethereum.${NAMESPACE}:${ETHEREUM_PORT}"
fi

# Configure Boot Node address
if [ "${BOOT_NODE_EXTERNAL_HOST}" != "" ]; then
BOOT_NODE_ADDR="${BOOT_NODE_EXTERNAL_HOST}"
elif [ "${NETWORK_PUBLIC}" = "true" ]; then
BOOT_NODE_ADDR=$(get_service_address "boot-node" "${BOOT_NODE_PORT}")
else
BOOT_NODE_ADDR="http://${SERVICE_NAME}-boot-node.${NAMESPACE}:${BOOT_NODE_PORT}"
fi

# Configure Prover Node address
if [ "${PROVER_NODE_EXTERNAL_HOST}" != "" ]; then
PROVER_NODE_ADDR="${PROVER_NODE_EXTERNAL_HOST}"
elif [ "${NETWORK_PUBLIC}" = "true" ]; then
PROVER_NODE_ADDR=$(get_service_address "prover-node" "${PROVER_NODE_PORT}")
else
PROVER_NODE_ADDR="http://${SERVICE_NAME}-prover-node.${NAMESPACE}:${PROVER_NODE_PORT}"
fi


# Write addresses to file for sourcing
echo "export ETHEREUM_HOST=${ETHEREUM_ADDR}" >> /shared/config/service-addresses
echo "export BOOT_NODE_HOST=${BOOT_NODE_ADDR}" >> /shared/config/service-addresses
echo "export PROVER_NODE_HOST=${PROVER_NODE_ADDR}" >> /shared/config/service-addresses
echo "Addresses configured:"
cat /shared/config/service-addresses
108 changes: 88 additions & 20 deletions spartan/aztec-network/templates/_helpers.tpl
@@ -50,37 +50,19 @@ app.kubernetes.io/name: {{ include "aztec-network.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

-{{- define "aztec-network.ethereumHost" -}}
-{{- if .Values.ethereum.externalHost -}}
-http://{{ .Values.ethereum.externalHost }}:{{ .Values.ethereum.service.port }}
-{{- else -}}
-http://{{ include "aztec-network.fullname" . }}-ethereum.{{ .Release.Namespace }}:{{ .Values.ethereum.service.port }}
-{{- end -}}
-{{- end -}}
-
-
 {{- define "aztec-network.pxeUrl" -}}
-{{- if .Values.pxe.externalHost -}}
-http://{{ .Values.pxe.externalHost }}:{{ .Values.pxe.service.port }}
-{{- else -}}
-http://{{ include "aztec-network.fullname" . }}-pxe.{{ .Release.Namespace }}:{{ .Values.pxe.service.port }}
-{{- end -}}
+http://{{ include "aztec-network.fullname" . }}-pxe.{{ .Release.Namespace }}:{{ .Values.pxe.service.nodePort }}
 {{- end -}}
 
-{{- define "aztec-network.bootNodeUrl" -}}
-{{- if .Values.bootNode.externalTcpHost -}}
-http://{{ .Values.bootNode.externalTcpHost }}:{{ .Values.bootNode.service.nodePort }}
-{{- else -}}
-http://{{ include "aztec-network.fullname" . }}-boot-node-0.{{ include "aztec-network.fullname" . }}-boot-node.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.bootNode.service.nodePort }}
-{{- end -}}
-{{- end -}}
-
-{{- define "aztec-network.validatorUrl" -}}
-{{- if .Values.validator.externalTcpHost -}}
-http://{{ .Values.validator.externalTcpHost }}:{{ .Values.validator.service.nodePort }}
-{{- else -}}
-http://{{ include "aztec-network.fullname" . }}-validator.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.validator.service.nodePort }}
-{{- end -}}
-{{- end -}}

{{- define "aztec-network.metricsHost" -}}
http://{{ include "aztec-network.fullname" . }}-metrics.{{ .Release.Namespace }}
@@ -123,3 +105,89 @@ http://{{ include "aztec-network.fullname" . }}-metrics.{{ .Release.Namespace }}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
P2P Setup Container
*/}}
{{- define "aztec-network.p2pSetupContainer" -}}
- name: setup-p2p-addresses
image: bitnami/kubectl
command:
- /bin/sh
- -c
- |
cp /scripts/setup-p2p-addresses.sh /tmp/setup-p2p-addresses.sh && \
chmod +x /tmp/setup-p2p-addresses.sh && \
/tmp/setup-p2p-addresses.sh
env:
- name: NETWORK_PUBLIC
value: "{{ .Values.network.public }}"
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: P2P_TCP_PORT
value: "{{ .Values.validator.service.p2pTcpPort }}"
- name: P2P_UDP_PORT
value: "{{ .Values.validator.service.p2pUdpPort }}"
volumeMounts:
- name: scripts
mountPath: /scripts
- name: p2p-addresses
mountPath: /shared/p2p
{{- end -}}

{{/*
Service Address Setup Container
*/}}
{{- define "aztec-network.serviceAddressSetupContainer" -}}
- name: setup-service-addresses
image: bitnami/kubectl
command:
- /bin/bash
- -c
- |
cp /scripts/setup-service-addresses.sh /tmp/setup-service-addresses.sh && \
chmod +x /tmp/setup-service-addresses.sh && \
/tmp/setup-service-addresses.sh
env:
- name: NETWORK_PUBLIC
value: "{{ .Values.network.public }}"
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: ETHEREUM_EXTERNAL_HOST
value: "{{ .Values.ethereum.externalHost }}"
- name: ETHEREUM_PORT
value: "{{ .Values.ethereum.service.port }}"
- name: BOOT_NODE_EXTERNAL_HOST
value: "{{ .Values.bootNode.externalHost }}"
- name: BOOT_NODE_PORT
value: "{{ .Values.bootNode.service.nodePort }}"
- name: PROVER_NODE_EXTERNAL_HOST
value: "{{ .Values.proverNode.externalHost }}"
- name: PROVER_NODE_PORT
value: "{{ .Values.proverNode.service.nodePort }}"
- name: SERVICE_NAME
value: {{ include "aztec-network.fullname" . }}
volumeMounts:
- name: scripts
mountPath: /scripts
- name: config
mountPath: /shared/config
{{- end -}}

{{/*
Anti-affinity when running in public network mode
*/}}
{{- define "aztec-network.publicAntiAffinity" -}}
affinity:
podAntiAffinity:
Member: This will restrict the number of validators we can run, right? IIRC we mentioned the cluster having 10 nodes, so we can have a max of 10 services running?

Collaborator Author: Exactly, thanks, I meant to call that out in the description of the PR. Will update it.

Will also update the values.yaml to call this out explicitly.

requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- validator
- boot-node
- prover
topologyKey: "kubernetes.io/hostname"
{{- end -}}