fix docker build #338

Merged · 22 commits merged on Nov 22, 2024
Changes from 18 commits
3 changes: 3 additions & 0 deletions .github/workflows/ci.yml
@@ -180,6 +180,9 @@ jobs:
if: steps.cache-cbrotli.outputs.cache-hit != 'true'
run: ./scripts/build-brotli.sh -w -d

- name: Install solidity dependencies
run: cd contracts && yarn install && forge install

- name: Build
run: make build -j

131 changes: 66 additions & 65 deletions .github/workflows/espresso-docker.yml
@@ -16,6 +16,7 @@ run-name: Docker build CI triggered from @${{ github.actor }} of ${{ github.head
on:
workflow_dispatch:
merge_group:
pull_request:
push:
branches:
- master
@@ -35,20 +36,19 @@ jobs:
strategy:
matrix:
platform: [linux/amd64, linux/arm64]
include:
- platform: linux/amd64
runs-on: ubuntu-latest
- platform: linux/arm64
runs-on: buildjet-4vcpu-ubuntu-2204-arm
# Don't run arm build on PRs, "exclude:" is processed before "include:",
# so we avoid using `include:`.
exclude:
- platform: ${{ github.event_name == 'pull_request' && 'linux/arm64' }}

runs-on: ${{ matrix.runs-on }}
runs-on: ${{ matrix.platform == 'linux/amd64' && 'ubuntu-latest' || 'buildjet-4vcpu-ubuntu-2204-arm' }}

steps:
- uses: cargo-bins/cargo-binstall@main
if: matrix.runs-on == 'ubuntu-latest'
if: ${{ runner.arch != 'ARM64' }}

- name: Make more disk space available on public runner
if: matrix.runs-on == 'ubuntu-latest'
if: ${{ runner.arch != 'ARM64' }}
run: |
# rmz seems to be faster at deleting files than rm
cargo binstall -y rmz
@@ -158,60 +158,61 @@ jobs:
#
# For documentation refer to
# https://docs.docker.com/build/ci/github-actions/multi-platform/#distribute-build-across-multiple-runners
merge_into_multiplatform_images:
needs:
- docker_build
strategy:
matrix:
target: [nitro-node, nitro-node-dev]
include:
- target: nitro-node
image: ghcr.io/espressosystems/nitro-espresso-integration/nitro-node
- target: nitro-node-dev
image: ghcr.io/espressosystems/nitro-espresso-integration/nitro-node-dev

runs-on: ubuntu-latest
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Login to Github Container Repo
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Remove digests dir
run: |
rm -rf "${{ runner.temp }}/digests"

- name: Download digests
uses: actions/download-artifact@v3
with:
name: "${{ matrix.target }}-digests"
path: "${{ runner.temp }}/digests"

- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ matrix.image }}

- name: Create manifest list and push
working-directory: "${{ runner.temp }}/digests"
run: |
# Count the number of files in the directory
file_count=$(find . -type f | wc -l)

if [ "$file_count" -ne 2 ]; then
echo "Should have exactly 2 digests to combine, something went wrong"
ls -lah
exit 1
fi

docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ matrix.image }}@sha256:%s ' *)
- name: Inspect image
run: |
docker buildx imagetools inspect ${{ matrix.image }}:${{ steps.meta.outputs.version }}
# Only building for AMD64

Review comment (Author):
@Sneh1999 why comment this out? The arm image is used by anyone developing on a mac.

Review comment (Author):
So I tried re-enabling it, but now I'm thinking we may not need the part that extracts the wavm binary anymore, as long as we always use the upstream one.

# merge_into_multiplatform_images:
# needs:
# - docker_build
# strategy:
# matrix:
# target: [nitro-node, nitro-node-dev]
# include:
# - target: nitro-node
# image: ghcr.io/espressosystems/nitro-espresso-integration/nitro-node
# - target: nitro-node-dev
# image: ghcr.io/espressosystems/nitro-espresso-integration/nitro-node-dev

# runs-on: ubuntu-latest
# steps:
# - name: Set up Docker Buildx
# uses: docker/setup-buildx-action@v3

# - name: Login to Github Container Repo
# uses: docker/login-action@v2
# with:
# registry: ghcr.io
# username: ${{ github.repository_owner }}
# password: ${{ secrets.GITHUB_TOKEN }}

# - name: Remove digests dir
# run: |
# rm -rf "${{ runner.temp }}/digests"

# - name: Download digests
# uses: actions/download-artifact@v3
# with:
# name: "${{ matrix.target }}-digests"
# path: "${{ runner.temp }}/digests"

# - name: Docker meta
# id: meta
# uses: docker/metadata-action@v5
# with:
# images: ${{ matrix.image }}

# - name: Create manifest list and push
# working-directory: "${{ runner.temp }}/digests"
# run: |
# # Count the number of files in the directory
# file_count=$(find . -type f | wc -l)

# if [ "$file_count" -ne 2 ]; then
# echo "Should have exactly 2 digests to combine, something went wrong"
# ls -lah
# exit 1
# fi

# docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
# $(printf '${{ matrix.image }}@sha256:%s ' *)
# - name: Inspect image
# run: |
# docker buildx imagetools inspect ${{ matrix.image }}:${{ steps.meta.outputs.version }}
3 changes: 3 additions & 0 deletions .github/workflows/espresso-e2e.yml
@@ -124,6 +124,9 @@ jobs:
if: steps.cache-cbrotli.outputs.cache-hit != 'true'
run: ./scripts/build-brotli.sh -w -d

- name: Install solidity dependencies
run: cd contracts && yarn install && forge install

- name: Build
run: make build build-replay-env -j

2 changes: 2 additions & 0 deletions Dockerfile
@@ -119,6 +119,8 @@ COPY scripts/build-brotli.sh scripts/
COPY brotli brotli
RUN apt-get update && apt-get install -y cmake
RUN NITRO_BUILD_IGNORE_TIMESTAMPS=1 make build-prover-header

RUN apt-get install -y libssl-dev pkg-config
RUN NITRO_BUILD_IGNORE_TIMESTAMPS=1 make build-espresso-crypto-lib

FROM scratch AS prover-header-export
3 changes: 1 addition & 2 deletions Makefile
@@ -571,8 +571,7 @@ contracts/test/prover/proofs/%.json: $(arbitrator_cases)/%.wasm $(prover_bin)

.make/solidity: $(DEP_PREDICATE) safe-smart-account/contracts/*/*.sol safe-smart-account/contracts/*.sol contracts/src/*/*.sol .make/yarndeps $(ORDER_ONLY_PREDICATE) .make
yarn --cwd safe-smart-account build
yarn --cwd contracts build
yarn --cwd contracts build:forge:yul
yarn --cwd contracts build:all
@touch $@

.make/yarndeps: $(DEP_PREDICATE) contracts/package.json contracts/yarn.lock $(ORDER_ONLY_PREDICATE) .make
68 changes: 37 additions & 31 deletions arbnode/batch_poster.go
@@ -85,7 +85,8 @@ var (
const (
batchPosterSimpleRedisLockKey = "node.batch-poster.redis-lock.simple-lock-key"

sequencerBatchPostMethodName = "addSequencerL2BatchFromOrigin0"
oldSequencerBatchPostMethodName = "addSequencerL2BatchFromOrigin1"
newSequencerBatchPostMethodName = "addSequencerL2BatchFromOrigin"
sequencerBatchPostWithBlobsMethodName = "addSequencerL2BatchFromBlobs"
)

@@ -620,7 +621,6 @@ func (b *BatchPoster) submitEspressoTransactionPos(pos arbutil.MessageIndex) err

// Store the pos in the database to be used later to submit the message
// to hotshot for finalization.
log.Info("submitting pos", "pos", pos)
err = b.streamer.SubmitEspressoTransactionPos(pos, b.streamer.db.NewBatch())
if err != nil {
log.Error("failed to submit espresso transaction pos", "pos", pos, "err", err)
@@ -1088,27 +1088,20 @@ func (b *BatchPoster) encodeAddBatch(
delayedMsg uint64,
use4844 bool,
) ([]byte, []kzg4844.Blob, error) {
methodName := sequencerBatchPostMethodName
if use4844 {
methodName = sequencerBatchPostWithBlobsMethodName
}
method, ok := b.seqInboxABI.Methods[methodName]
if !ok {
return nil, nil, errors.New("failed to find add batch method")
}

var calldata []byte
var kzgBlobs []kzg4844.Blob
fullCalldata := make([]byte, 0)
var err error
var userData []byte
if use4844 {
method, ok := b.seqInboxABI.Methods[sequencerBatchPostWithBlobsMethodName]
if !ok {
return nil, nil, errors.New("failed to find add batch method")
}
kzgBlobs, err = blobs.EncodeBlobs(l2MessageData)
if err != nil {
return nil, nil, fmt.Errorf("failed to encode blobs: %w", err)
}
_, blobHashes, err := blobs.ComputeCommitmentsAndHashes(kzgBlobs)
if err != nil {
return nil, nil, fmt.Errorf("failed to compute blob hashes: %w", err)
}
// EIP4844 transactions to the sequencer inbox will not use transaction calldata for L2 info.
calldata, err = method.Inputs.Pack(
seqNum,
@@ -1118,46 +1111,58 @@ new(big.Int).SetUint64(uint64(newMsgNum)),
new(big.Int).SetUint64(uint64(newMsgNum)),
)
if err != nil {
return nil, nil, fmt.Errorf("failed to pack calldata: %w", err)
return nil, nil, fmt.Errorf("failed to pack calldata for eip-4844: %w", err)
}
fullCalldata = append(fullCalldata, method.ID...)
fullCalldata = append(fullCalldata, calldata...)
} else {
// Initially construct the calldata using the oldSequencerBatchPostMethodName method.
// This lets us obtain the attestation quote over the hash of that data.
method, ok := b.seqInboxABI.Methods[oldSequencerBatchPostMethodName]
if !ok {
return nil, nil, errors.New("failed to find add batch method")
}
// userData has blobHashes along with other calldata for EIP-4844 transactions
userData, err = method.Inputs.Pack(
calldata, err = method.Inputs.Pack(
seqNum,
l2MessageData,
new(big.Int).SetUint64(delayedMsg),
b.config().gasRefunder,
new(big.Int).SetUint64(uint64(prevMsgNum)),
new(big.Int).SetUint64(uint64(newMsgNum)),
blobHashes,
)

if err != nil {
return nil, nil, fmt.Errorf("failed to pack user data: %w", err)
return nil, nil, fmt.Errorf("failed to pack calldata without attestation quote: %w", err)
}
_, err = b.getAttestationQuote(userData)

attestationQuote, err := b.getAttestationQuote(calldata)
if err != nil {
return nil, nil, fmt.Errorf("failed to get attestation quote: %w", err)
}
} else {

// construct the calldata with attestation quote
method, ok = b.seqInboxABI.Methods[newSequencerBatchPostMethodName]
if !ok {
return nil, nil, errors.New("failed to find add batch method")
}

calldata, err = method.Inputs.Pack(
seqNum,
l2MessageData,
new(big.Int).SetUint64(delayedMsg),
b.config().gasRefunder,
new(big.Int).SetUint64(uint64(prevMsgNum)),
new(big.Int).SetUint64(uint64(newMsgNum)),
attestationQuote,
)

if err != nil {
return nil, nil, fmt.Errorf("failed to pack calldata: %w", err)
}

_, err = b.getAttestationQuote(calldata)
if err != nil {
return nil, nil, fmt.Errorf("failed to get attestation quote: %w", err)
return nil, nil, fmt.Errorf("failed to pack calldata with attestation quote: %w", err)
}
fullCalldata = append([]byte{}, method.ID...)
fullCalldata = append(fullCalldata, calldata...)
}
// TODO: when contract is updated add attestationQuote to the calldata
fullCalldata := append([]byte{}, method.ID...)
fullCalldata = append(fullCalldata, calldata...)

return fullCalldata, kzgBlobs, nil
}

@@ -1647,6 +1652,7 @@ func (b *BatchPoster) maybePostSequencerBatch(ctx context.Context) (bool, error)
return false, fmt.Errorf("produced %v blobs for batch but a block can only hold %v (compressed batch was %v bytes long)", len(kzgBlobs), params.MaxBlobGasPerBlock/params.BlobTxBlobGasPerBlob, len(sequencerMsg))
}
accessList := b.accessList(batchPosition.NextSeqNum, b.building.segments.delayedMsg)

// On restart, we may be trying to estimate gas for a batch whose successor has
// already made it into pending state, if not latest state.
// In that case, we might get a revert with `DelayedBackwards()`.
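
A note on the encodeAddBatch change above: for the non-4844 path the arguments are now packed twice, once with the old method so an attestation quote can be produced over that encoding, then again with the new method that carries the quote as an extra argument. The sketch below illustrates that flow only; packOld, packNew, attest, and methodID are hypothetical stand-ins for the real ABI packing and b.getAttestationQuote, not the actual nitro code.

package main

import (
	"crypto/sha256"
	"fmt"
)

// packOld stands in for packing the batch arguments with
// oldSequencerBatchPostMethodName (no attestation-quote argument).
func packOld(seqNum uint64, l2MessageData []byte, delayedMsg uint64) []byte {
	return append([]byte(fmt.Sprintf("old:%d:%d:", seqNum, delayedMsg)), l2MessageData...)
}

// packNew stands in for packing with newSequencerBatchPostMethodName, which
// additionally takes the attestation quote.
func packNew(seqNum uint64, l2MessageData []byte, delayedMsg uint64, quote []byte) []byte {
	out := append([]byte(fmt.Sprintf("new:%d:%d:", seqNum, delayedMsg)), l2MessageData...)
	return append(out, quote...)
}

// attest stands in for b.getAttestationQuote: it produces a quote bound to the
// calldata that was packed without it. Here it is just a hash.
func attest(calldata []byte) ([]byte, error) {
	sum := sha256.Sum256(calldata)
	return sum[:], nil
}

// methodID stands in for the 4-byte selector of the new inbox method.
var methodID = []byte{0xde, 0xad, 0xbe, 0xef}

func encodeAddBatchSketch(seqNum uint64, l2MessageData []byte, delayedMsg uint64) ([]byte, error) {
	// 1. Pack the arguments without the quote so the quote can cover them.
	withoutQuote := packOld(seqNum, l2MessageData, delayedMsg)

	// 2. Obtain the attestation quote over that encoding.
	quote, err := attest(withoutQuote)
	if err != nil {
		return nil, fmt.Errorf("failed to get attestation quote: %w", err)
	}

	// 3. Re-pack with the new method, passing the quote as the extra argument.
	withQuote := packNew(seqNum, l2MessageData, delayedMsg, quote)

	// 4. Prepend the method selector to form the full calldata.
	full := append([]byte{}, methodID...)
	return append(full, withQuote...), nil
}

func main() {
	calldata, err := encodeAddBatchSketch(42, []byte("batch payload"), 7)
	if err != nil {
		panic(err)
	}
	fmt.Printf("full calldata: %x\n", calldata)
}

As the diff's comments indicate, the quote is computed over the old-method encoding and then embedded in the new-method encoding, so it commits to the batch arguments themselves.
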
3 changes: 2 additions & 1 deletion arbnode/node.go
@@ -51,7 +51,7 @@ import (
"github.com/offchainlabs/nitro/wsbroadcastserver"
)

func GenerateRollupConfig(prod bool, wasmModuleRoot common.Hash, rollupOwner common.Address, chainConfig *params.ChainConfig, serializedChainConfig []byte, loserStakeEscrow common.Address) rollupgen.Config {
func GenerateRollupConfig(prod bool, wasmModuleRoot common.Hash, rollupOwner common.Address, chainConfig *params.ChainConfig, serializedChainConfig []byte, loserStakeEscrow common.Address, espressoTEEVerifier common.Address) rollupgen.Config {
var confirmPeriod uint64
if prod {
confirmPeriod = 45818
@@ -67,6 +67,7 @@ func GenerateRollupConfig(prod bool, wasmModuleRoot common.Hash, rollupOwner com
Owner: rollupOwner,
LoserStakeEscrow: loserStakeEscrow,
ChainId: chainConfig.ChainID,
EspressoTEEVerifier: espressoTEEVerifier,
// TODO could the ChainConfig be just []byte?
ChainConfig: string(serializedChainConfig),
SequencerInboxMaxTimeVariation: rollupgen.ISequencerInboxMaxTimeVariation{
2 changes: 1 addition & 1 deletion arbnode/sequencer_inbox.go
@@ -45,7 +45,7 @@ func init() {
}
batchDeliveredID = sequencerBridgeABI.Events["SequencerBatchDelivered"].ID
sequencerBatchDataABI = sequencerBridgeABI.Events[sequencerBatchDataEvent]
addSequencerL2BatchFromOriginCallABI = sequencerBridgeABI.Methods["addSequencerL2BatchFromOrigin0"]
addSequencerL2BatchFromOriginCallABI = sequencerBridgeABI.Methods["addSequencerL2BatchFromOrigin"]
}

type SequencerInbox struct {
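
On the lookup-key change above: go-ethereum's abi package disambiguates overloaded Solidity functions by appending numeric suffixes to all but the first parsed overload, which is presumably where the old "addSequencerL2BatchFromOrigin0" key came from. A toy example (an illustrative ABI, not the real SequencerInbox interface) showing that behavior:

package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

// toyABI declares two overloads of the same function name.
const toyABI = `[
  {"type":"function","name":"addSequencerL2BatchFromOrigin","inputs":[{"name":"seqNum","type":"uint256"}],"outputs":[]},
  {"type":"function","name":"addSequencerL2BatchFromOrigin","inputs":[{"name":"seqNum","type":"uint256"},{"name":"data","type":"bytes"}],"outputs":[]}
]`

func main() {
	parsed, err := abi.JSON(strings.NewReader(toyABI))
	if err != nil {
		panic(err)
	}
	// The first overload keeps the bare name; later ones get "0", "1", ...
	for name, m := range parsed.Methods {
		fmt.Printf("%s takes %d inputs\n", name, len(m.Inputs))
	}
	// Defensive lookup after the rename: fail fast on an ABI mismatch.
	if _, ok := parsed.Methods["addSequencerL2BatchFromOrigin"]; !ok {
		panic("ABI is missing addSequencerL2BatchFromOrigin")
	}
}

If the updated contract exposes only one method with this name, the bare key is the correct lookup, and an existence check like the one above fails fast if the bundled ABI and the deployed contract disagree.
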
3 changes: 3 additions & 0 deletions ci_skip_tests
@@ -46,6 +46,9 @@ TestTwoNodesLong
# These tests are failing with celestia integration
TestEmptyCliConfig
TestChallengeToTooFar
TestLyingSequencer
TestLyingSequencerLocalDAS
TestStylusOpcodeTraceEquivalence

# These tests are specific to Espresso and we have a dedicated
# CI workflow for them. See: .github/workflows/espresso-e2e.yml
8 changes: 7 additions & 1 deletion cmd/deploy/deploy.go
@@ -44,6 +44,7 @@ func main() {
deployAccount := flag.String("l1DeployAccount", "", "l1 seq account to use (default is first account in keystore)")
ownerAddressString := flag.String("ownerAddress", "", "the rollup owner's address")
sequencerAddressString := flag.String("sequencerAddress", "", "the sequencer's address")
espressoTEEVerifierAddressString := flag.String("espressoTEEVerifierAddress", "", "the address of the espressoTEEVerifier contract")
batchPostersString := flag.String("batchPosters", "", "the comma separated array of addresses of batch posters. Defaults to sequencer address")
batchPosterManagerAddressString := flag.String("batchPosterManger", "", "the batch poster manager's address. Defaults to owner address")
nativeTokenAddressString := flag.String("nativeTokenAddress", "0x0000000000000000000000000000000000000000", "address of the ERC20 token which is used as native L2 currency")
@@ -97,6 +98,11 @@ func main() {
if !common.IsHexAddress(*sequencerAddressString) && len(*sequencerAddressString) > 0 {
panic("specified sequencer address is invalid")
}

esperssoTEEVerifierAddress := common.HexToAddress(*espressoTEEVerifierAddressString)
if !common.IsHexAddress(esperssoTEEVerifierAddress.String()) {
panic("specified espressoTEEVerifier address is invalid")
}
sequencerAddress := common.HexToAddress(*sequencerAddressString)

if !common.IsHexAddress(*ownerAddressString) {
@@ -187,7 +193,7 @@ func main() {
batchPosters,
batchPosterManagerAddress,
*authorizevalidators,
arbnode.GenerateRollupConfig(*prod, moduleRoot, ownerAddress, &chainConfig, chainConfigJson, loserEscrowAddress),
arbnode.GenerateRollupConfig(*prod, moduleRoot, ownerAddress, &chainConfig, chainConfigJson, loserEscrowAddress, esperssoTEEVerifierAddress),
nativeToken,
maxDataSize,
true,
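
A note on the new espressoTEEVerifierAddress flag handling above: common.HexToAddress never returns an error, and IsHexAddress on the string form of an already-converted Address always passes, so a check intended to reject malformed input has to run on the raw flag value. A minimal, self-contained sketch (a hypothetical standalone program, not the deploy tool itself):

package main

import (
	"flag"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Hypothetical flag mirroring the new deploy option.
	teeVerifier := flag.String("espressoTEEVerifierAddress", "", "address of the EspressoTEEVerifier contract")
	flag.Parse()

	// Validate the raw user input: HexToAddress never fails, so this check is
	// the only place a typo or empty value can be caught.
	if !common.IsHexAddress(*teeVerifier) {
		panic("specified espressoTEEVerifier address is invalid")
	}
	addr := common.HexToAddress(*teeVerifier)
	fmt.Println("using EspressoTEEVerifier at", addr.Hex())
}

The sequencer and owner address flags earlier in the same file already validate the raw string before converting it, so this simply follows the existing pattern.
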
2 changes: 1 addition & 1 deletion contracts
Submodule contracts updated 71 files
+19 −0 .env.sample.goerli
+19 −5 .github/workflows/audit-ci.yml
+83 −16 .github/workflows/contract-tests.yml
+3 −0 .gitmodules
+1 −11 audit-ci.jsonc
+8 −0 deploy/SequencerInboxStubCreator.js
+4 −3 foundry.toml
+24 −22 hardhat.config.ts
+1 −0 lib/automata-dcap-attestation
+1 −1 lib/forge-std
+4 −6 package.json
+8 −0 remappings.txt
+1 −0 scripts/config.ts.example
+6 −0 scripts/createERC20Rollup.ts
+11 −1 scripts/createEthRollup.ts
+30 −0 scripts/deployEspressoTEEVerifier.ts
+1 −17 scripts/deployment.ts
+102 −0 scripts/deploymentCelestiaReuseExisting.ts
+12 −63 scripts/deploymentUtils.ts
+11 −1 scripts/local-deployment/deployCreatorAndCreateRollup.ts
+0 −90 scripts/printMetadataHashes.ts
+26 −10 scripts/rollupCreation.ts
+76 −0 src/bridge/EspressoTEEVerifier.sol
+32 −0 src/bridge/ISequencerInbox.sol
+111 −71 src/bridge/SequencerInbox.sol
+167 −0 src/celestia/BlobstreamVerifier.sol
+349 −0 src/celestia/DAVerifier.sol
+44 −0 src/celestia/IBlobstreamX.sol
+8 −0 src/celestia/lib/Constants.sol
+15 −0 src/celestia/lib/DataRootTuple.sol
+19 −0 src/celestia/lib/IDAOracle.sol
+27 −0 src/celestia/lib/tree/Constants.sol
+40 −0 src/celestia/lib/tree/Types.sol
+86 −0 src/celestia/lib/tree/Utils.sol
+12 −0 src/celestia/lib/tree/binary/BinaryMerkleProof.sol
+172 −0 src/celestia/lib/tree/binary/BinaryMerkleTree.sol
+23 −0 src/celestia/lib/tree/binary/TreeHasher.sol
+14 −0 src/celestia/lib/tree/namespace/NamespaceMerkleMultiproof.sol
+14 −0 src/celestia/lib/tree/namespace/NamespaceMerkleProof.sol
+409 −0 src/celestia/lib/tree/namespace/NamespaceMerkleTree.sol
+29 −0 src/celestia/lib/tree/namespace/NamespaceNode.sol
+83 −0 src/celestia/lib/tree/namespace/TreeHasher.sol
+1 −1 src/chain/CacheManager.sol
+6 −0 src/libraries/Error.sol
+0 −2 src/mocks/BridgeUnproxied.sol
+17 −0 src/mocks/EspressoTEEVerifier.sol
+83 −0 src/mocks/MockBlobstream.sol
+488 −0 src/mocks/OneStepProverHostIoCelestiaMock.sol
+3 −1 src/mocks/SequencerInboxStub.sol
+2 −1 src/mocks/Simple.sol
+52 −18 src/osp/OneStepProverHostIo.sol
+1 −1 src/precompiles/ArbWasm.sol
+20 −5 src/rollup/BridgeCreator.sol
+2 −0 src/rollup/Config.sol
+10 −12 src/rollup/RollupAdminLogic.sol
+20 −23 src/rollup/RollupCreator.sol
+34 −23 test/contract/arbRollup.spec.ts
+27 −10 test/contract/sequencerInbox.spec.4844.ts
+32 −9 test/contract/sequencerInboxForceInclude.spec.ts
+1 −1 test/contract/toolkit4844.ts
+400 −375 test/e2e/orbitChain.ts
+9 −3 test/foundry/BridgeCreator.t.sol
+30 −12 test/foundry/ChallengeManager.t.sol
+64 −0 test/foundry/EspressoTEEVerifier.t.sol
+19 −10 test/foundry/RollupCreator.t.sol
+92 −18 test/foundry/SequencerInbox.t.sol
+ test/foundry/configs/attestation.bin
+ test/foundry/configs/incorrect_attestation_quote.bin
+1 −0 test/foundry/configs/tcbinfo.json
+1 −0 test/foundry/configs/tee_identity.json
+29 −26 yarn.lock