feat: Run block-proving jobs in parallel by forking world-state (#7655)
# Goal

We want to be able to kick off more than one `prove(blocknumber)` job in
the prover-node at the same time.

We currently cannot do it because the prover-node has a single
world-state, and building a proof modifies world-state. In particular,
preparing the inputs for base rollups modifies state trees, and
finalising the block modifies the archive tree.

# Why?

This will be needed for the proving integration contest, in case blocks are
generated faster than we can prove them.

It may still be useful once we move to epoch proving and
sequencer-prover coordination, since the same prover could be picked for
generating proofs for two consecutive epochs.

# How?

## The easy way

The easiest approach is to keep everything as-is and clone the world
state before kicking off each job. Eventually, once we implement [Phil's
world-state](AztecProtocol/engineering-designs#9),
we can use writeable world-state snapshots for this. **This is what
we're doing in this PR.**
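Concretely, the cloning approach can be sketched as follows. This is a minimal, self-contained TypeScript sketch in which `WorldState`, `fork`, and `prove` are illustrative stand-ins, not the actual Aztec classes:

```typescript
// Stand-in world state: a map of tree roots that proving mutates.
// All names here are illustrative, not the actual Aztec API.
class WorldState {
  constructor(private trees: Map<string, number> = new Map([['archive', 0]])) {}

  // Cloning gives each proving job its own writeable copy.
  fork(): WorldState {
    return new WorldState(new Map(this.trees));
  }

  advanceArchive(): number {
    const next = (this.trees.get('archive') ?? 0) + 1;
    this.trees.set('archive', next);
    return next;
  }

  archiveRoot(): number {
    return this.trees.get('archive') ?? 0;
  }
}

// Each prove(blockNumber) job works on its own fork, so jobs can run
// concurrently without clobbering the canonical state.
async function prove(canonical: WorldState, blockNumber: number): Promise<number> {
  const fork = canonical.fork();
  // ...prepare base-rollup inputs, update state trees, finalise the block...
  return fork.advanceArchive();
}

// Kick off two proving jobs in parallel; the canonical state is untouched.
const canonical = new WorldState();
void Promise.all([prove(canonical, 1), prove(canonical, 2)]);
```

The key property is that each job mutates only its fork, which is exactly what forking the world-state (or, later, writeable snapshots) buys us.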

## The not-so-easy way

Another approach is to decouple input-generation from proving. Today the
prover-orchestrator is responsible for computing all inputs, but this is
not strictly needed. We can have one component that generates all
inputs, modifies world-state, and outputs a graph of proving jobs (as
Mitch commented
[here](https://aztecprotocol.slack.com/archives/C04BTJAA694/p1722195399887399?thread_ts=1722195378.794149&cid=C04BTJAA694)).
A second component can then orchestrate the execution of proving jobs
based solely on their dependencies.
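That orchestration component could look roughly like this — a self-contained sketch of a dependency-driven job executor. `ProvingJob` and `executeGraph` are hypothetical names, and a real job graph would carry circuit inputs and proofs rather than empty `run` callbacks:

```typescript
type ProvingJob = { id: string; deps: string[]; run: () => Promise<void> };

// Runs each job as soon as all of its dependencies have completed, so
// independent jobs (e.g. base rollups) execute concurrently.
// Assumes the graph is acyclic; a cycle would deadlock.
async function executeGraph(jobs: ProvingJob[]): Promise<string[]> {
  const byId = new Map(jobs.map(j => [j.id, j]));
  const completionOrder: string[] = [];
  const started = new Map<string, Promise<void>>();

  const runJob = (id: string): Promise<void> => {
    if (!started.has(id)) {
      const job = byId.get(id)!;
      started.set(
        id,
        Promise.all(job.deps.map(runJob)).then(async () => {
          await job.run();
          completionOrder.push(job.id);
        }),
      );
    }
    return started.get(id)!;
  };

  await Promise.all(jobs.map(j => runJob(j.id)));
  return completionOrder;
}

// Usage: a diamond-shaped graph of two base rollups feeding a merge and a root.
executeGraph([
  { id: 'root', deps: ['merge'], run: async () => {} },
  { id: 'merge', deps: ['base-0', 'base-1'], run: async () => {} },
  { id: 'base-0', deps: [], run: async () => {} },
  { id: 'base-1', deps: [], run: async () => {} },
]).then(order => console.log(order)); // 'root' always completes last
```

The input-generation component would emit such a graph after mutating world-state; the executor needs no knowledge of trees at all.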

Note that this new component would be a good fit for generating a block
in the sequencer, which today runs an orchestrator without proving
enabled to get to a block header.

It's unclear whether this component should run everything serially (like
[the old block
builder](https://aztecprotocol.slack.com/archives/C04BTJAA694/p1722195399887399?thread_ts=1722195378.794149&cid=C04BTJAA694)
did), or whether it makes more sense to fan out circuit simulation jobs
to workers (like the proving orchestrator can do now).
spalladino authored Aug 7, 2024
1 parent f0f28fc commit d3c8237
Showing 34 changed files with 495 additions and 370 deletions.
3 changes: 2 additions & 1 deletion yarn-project/aztec-node/src/aztec-node/server.ts
@@ -169,7 +169,7 @@ export class AztecNodeService implements AztecNode {

const simulationProvider = await createSimulationProvider(config, log);

const prover = await createProverClient(config, worldStateSynchronizer, archiver, telemetry);
const prover = await createProverClient(config, telemetry);

if (!prover && !config.disableSequencer) {
throw new Error("Can't start a sequencer without a prover");
@@ -742,6 +742,7 @@ export class AztecNodeService implements AztecNode {
this.telemetry,
);
const processor = publicProcessorFactory.create(prevHeader, newGlobalVariables);

// REFACTOR: Consider merging ProcessReturnValues into ProcessedTx
const [processedTxs, failedTxs, returns] = await processor.process([tx]);
// REFACTOR: Consider returning the error/revert rather than throwing
1 change: 1 addition & 0 deletions yarn-project/circuit-types/src/interfaces/index.ts
@@ -10,3 +10,4 @@ export * from './block-prover.js';
export * from './server_circuit_prover.js';
export * from './private_kernel_prover.js';
export * from './tx-provider.js';
export * from './merkle_tree_operations.js';
@@ -1,14 +1,56 @@
import { type L2Block, type MerkleTreeId, type SiblingPath } from '@aztec/circuit-types';
import { type Fr, type Header, type NullifierLeafPreimage, type StateReference } from '@aztec/circuits.js';
import { createDebugLogger } from '@aztec/foundation/log';
import { type IndexedTreeLeafPreimage } from '@aztec/foundation/trees';
import { type AppendOnlyTree, type BatchInsertionResult, type IndexedTree } from '@aztec/merkle-tree';

import { type L2Block } from '../l2_block.js';
import { type MerkleTreeId } from '../merkle_tree_id.js';
import { type SiblingPath } from '../sibling_path/sibling_path.js';

/**
* Type alias for the IDs of indexed trees (nullifier and public data trees).
*/
export type IndexedTreeId = MerkleTreeId.NULLIFIER_TREE | MerkleTreeId.PUBLIC_DATA_TREE;

/**
* All of the data returned during batch insertion.
*/
export interface LowLeafWitnessData<N extends number> {
/**
* Preimage of the low nullifier that proves non membership.
*/
leafPreimage: IndexedTreeLeafPreimage;
/**
* Sibling path to prove membership of low nullifier.
*/
siblingPath: SiblingPath<N>;
/**
* The index of the low nullifier.
*/
index: bigint;
}

/**
* The result of a batch insertion in an indexed merkle tree.
*/
export interface BatchInsertionResult<TreeHeight extends number, SubtreeSiblingPathHeight extends number> {
/**
* Data for the leaves to be updated when inserting the new ones.
*/
lowLeavesWitnessData?: LowLeafWitnessData<TreeHeight>[];
/**
* Sibling path "pointing to" where the new subtree should be inserted into the tree.
*/
newSubtreeSiblingPath: SiblingPath<SubtreeSiblingPathHeight>;
/**
* The new leaves being inserted in high-to-low order. This order corresponds to the order of the low leaf witnesses.
*/
sortedNewLeaves: Buffer[];
/**
* The indexes mapping the sorted new leaves back to their original positions.
*/
sortedNewLeavesIndexes: number[];
}

/**
* Defines tree information.
*/
@@ -32,14 +74,6 @@ export interface TreeInfo {
depth: number;
}

export type MerkleTreeMap = {
[MerkleTreeId.NULLIFIER_TREE]: IndexedTree;
[MerkleTreeId.NOTE_HASH_TREE]: AppendOnlyTree<Fr>;
[MerkleTreeId.PUBLIC_DATA_TREE]: IndexedTree;
[MerkleTreeId.L1_TO_L2_MESSAGE_TREE]: AppendOnlyTree<Fr>;
[MerkleTreeId.ARCHIVE]: AppendOnlyTree<Fr>;
};

type LeafTypes = {
[MerkleTreeId.NULLIFIER_TREE]: Buffer;
[MerkleTreeId.NOTE_HASH_TREE]: Fr;
6 changes: 5 additions & 1 deletion yarn-project/circuit-types/src/interfaces/prover-client.ts
@@ -2,6 +2,7 @@ import { type TxHash } from '@aztec/circuit-types';
import { type Fr } from '@aztec/circuits.js';

import { type BlockProver } from './block-prover.js';
import { type MerkleTreeOperations } from './merkle_tree_operations.js';
import { type ProvingJobSource } from './proving-job.js';

/**
@@ -29,8 +30,11 @@ export type ProverConfig = {
/**
* The interface to the prover client.
* Provides the ability to generate proofs and build rollups.
* TODO(palla/prover-node): Rename this interface
*/
export interface ProverClient extends BlockProver {
export interface ProverClient {
createBlockProver(db: MerkleTreeOperations): BlockProver;

start(): Promise<void>;

stop(): Promise<void>;
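The shape of the new interface — one long-lived `ProverClient`, many per-job `BlockProver`s, each bound to its own `MerkleTreeOperations` view — can be sketched with stub types. These are stand-ins, not the actual `@aztec/circuit-types` interfaces:

```typescript
// Stand-in types; the real interfaces live in @aztec/circuit-types.
interface MerkleTreeOperations {
  treeRoot(): string;
}

interface BlockProver {
  db: MerkleTreeOperations;
}

class StubProverClient {
  // Each call binds a fresh prover to its own (typically forked) db view,
  // so multiple blocks can be proven concurrently.
  createBlockProver(db: MerkleTreeOperations): BlockProver {
    return { db };
  }
}

const client = new StubProverClient();
const proverA = client.createBlockProver({ treeRoot: () => 'fork-a-root' });
const proverB = client.createBlockProver({ treeRoot: () => 'fork-b-root' });
// proverA and proverB hold independent views and never share mutable trees.
```

This is the refactor that lets the prover-node keep a single client while running one `BlockProver` per in-flight block proof.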
@@ -68,15 +68,36 @@ export class L2BlockDownloader {
/**
* Repeatedly queries the block source and adds the received blocks to the block queue.
* Stops when no further blocks are received.
* @param targetBlockNumber - Optional block number to stop at.
* @param onlyProven - Optional override of the default "proven" setting.
* @returns The total number of blocks added to the block queue.
*/
private async collectBlocks() {
private async collectBlocks(targetBlockNumber?: number, onlyProven?: boolean) {
let totalBlocks = 0;
while (true) {
const blocks = await this.l2BlockSource.getBlocks(this.from, 10, this.proven);
// If we have a target and have reached it, return
if (targetBlockNumber !== undefined && this.from > targetBlockNumber) {
log.verbose(`Reached target block number ${targetBlockNumber}`);
return totalBlocks;
}

// If we have a target, then request at most the number of blocks to get to it
const limit = targetBlockNumber !== undefined ? Math.min(targetBlockNumber - this.from + 1, 10) : 10;
const proven = onlyProven === undefined ? this.proven : onlyProven;

// Hit the archiver for blocks
const blocks = await this.l2BlockSource.getBlocks(this.from, limit, proven);

// If there are no more blocks, return
if (!blocks.length) {
return totalBlocks;
}

log.verbose(
`Received ${blocks.length} blocks from archiver after querying from ${this.from} limit ${limit} (proven ${proven})`,
);

// Push new blocks into the queue and loop
await this.semaphore.acquire();
this.blockQueue.put(blocks);
this.from += blocks.length;
@@ -116,9 +137,13 @@ export class L2BlockDownloader {

/**
* Forces an immediate request for blocks.
* Repeatedly queries the block source and adds the received blocks to the block queue.
* Stops when no further blocks are received.
* @param targetBlockNumber - Optional block number to stop at.
* @param onlyProven - Optional override of the default "proven" setting.
* @returns A promise that fulfills once the poll is complete
*/
public pollImmediate(): Promise<number> {
return this.jobQueue.put(() => this.collectBlocks());
public pollImmediate(targetBlockNumber?: number, onlyProven?: boolean): Promise<number> {
return this.jobQueue.put(() => this.collectBlocks(targetBlockNumber, onlyProven));
}
}
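The termination and batching rules added to `collectBlocks` can be factored into a small pure helper and checked in isolation. A sketch, where `nextBatchLimit` is a hypothetical helper name and 10 is the batch size used in the diff above:

```typescript
const BATCH_SIZE = 10;

// Returns how many blocks to request starting at `from`, or 0 when the
// target has already been passed (mirroring collectBlocks' early return).
function nextBatchLimit(from: number, targetBlockNumber?: number): number {
  if (targetBlockNumber !== undefined && from > targetBlockNumber) {
    return 0;
  }
  return targetBlockNumber !== undefined
    ? Math.min(targetBlockNumber - from + 1, BATCH_SIZE)
    : BATCH_SIZE;
}

console.log(nextBatchLimit(5, 7)); // 3: blocks 5, 6, 7
console.log(nextBatchLimit(5));    // 10: no target, full batch
console.log(nextBatchLimit(8, 7)); // 0: already past the target
```

The `+ 1` makes the target inclusive, which is why the downloader stops only once `this.from` has moved strictly past `targetBlockNumber`.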
@@ -3,6 +3,7 @@ import { getConfigEnvVars } from '@aztec/aztec-node';
import { AztecAddress, Body, Fr, GlobalVariables, type L2Block, createDebugLogger, mockTx } from '@aztec/aztec.js';
// eslint-disable-next-line no-restricted-imports
import {
type BlockProver,
PROVING_STATUS,
type ProcessedTx,
makeEmptyProcessedTx as makeEmptyProcessedTxFromHistoricalTreeRoots,
@@ -82,6 +83,7 @@ describe('L1Publisher integration', () => {

let builder: TxProver;
let builderDb: MerkleTrees;
let prover: BlockProver;

// The header of the last block
let prevHeader: Header;
@@ -138,7 +140,8 @@ describe('L1Publisher integration', () => {
};
const worldStateSynchronizer = new ServerWorldStateSynchronizer(tmpStore, builderDb, blockSource, worldStateConfig);
await worldStateSynchronizer.start();
builder = await TxProver.new(config, worldStateSynchronizer, blockSource, new NoopTelemetryClient());
builder = await TxProver.new(config, new NoopTelemetryClient());
prover = builder.createBlockProver(builderDb.asLatest());

publisher = getL1Publisher(
{
@@ -285,9 +288,9 @@ describe('L1Publisher integration', () => {
};

const buildBlock = async (globalVariables: GlobalVariables, txs: ProcessedTx[], l1ToL2Messages: Fr[]) => {
const blockTicket = await builder.startNewBlock(txs.length, globalVariables, l1ToL2Messages);
const blockTicket = await prover.startNewBlock(txs.length, globalVariables, l1ToL2Messages);
for (const tx of txs) {
await builder.addNewTx(tx);
await prover.addNewTx(tx);
}
return blockTicket;
};
@@ -360,7 +363,7 @@ describe('L1Publisher integration', () => {
const ticket = await buildBlock(globalVariables, txs, currentL1ToL2Messages);
const result = await ticket.provingPromise;
expect(result.status).toBe(PROVING_STATUS.SUCCESS);
const blockResult = await builder.finaliseBlock();
const blockResult = await prover.finaliseBlock();
const block = blockResult.block;
prevHeader = block.header;
blockSource.getL1ToL2Messages.mockResolvedValueOnce(currentL1ToL2Messages);
@@ -450,10 +453,10 @@ describe('L1Publisher integration', () => {
GasFees.empty(),
);
const blockTicket = await buildBlock(globalVariables, txs, l1ToL2Messages);
await builder.setBlockCompleted();
await prover.setBlockCompleted();
const result = await blockTicket.provingPromise;
expect(result.status).toBe(PROVING_STATUS.SUCCESS);
const blockResult = await builder.finaliseBlock();
const blockResult = await prover.finaliseBlock();
const block = blockResult.block;
prevHeader = block.header;
blockSource.getL1ToL2Messages.mockResolvedValueOnce(l1ToL2Messages);
29 changes: 15 additions & 14 deletions yarn-project/end-to-end/src/e2e_prover_node.test.ts
@@ -15,7 +15,7 @@ import {
sleep,
} from '@aztec/aztec.js';
import { StatefulTestContract, TestContract } from '@aztec/noir-contracts.js';
import { type ProverNode, createProverNode } from '@aztec/prover-node';
import { createProverNode } from '@aztec/prover-node';
import { type SequencerClientConfig } from '@aztec/sequencer-client';

import { sendL1ToL2Message } from './fixtures/l1_to_l2_messaging.js';
@@ -107,20 +107,12 @@ describe('e2e_prover_node', () => {
ctx = await snapshotManager.setup();
});

const prove = async (proverNode: ProverNode, blockNumber: number) => {
logger.info(`Proving block ${blockNumber}`);
await proverNode.prove(blockNumber, blockNumber);

logger.info(`Proof submitted. Awaiting aztec node to sync...`);
await retryUntil(async () => (await ctx.aztecNode.getProvenBlockNumber()) === blockNumber, 'block-1', 10, 1);
expect(await ctx.aztecNode.getProvenBlockNumber()).toEqual(blockNumber);
};

it('submits three blocks, then prover proves the first two', async () => {
// Check everything went well during setup and txs were mined in two different blocks
const [txReceipt1, txReceipt2, txReceipt3] = txReceipts;
const firstBlock = txReceipt1.blockNumber!;
expect(txReceipt2.blockNumber).toEqual(firstBlock + 1);
const secondBlock = firstBlock + 1;
expect(txReceipt2.blockNumber).toEqual(secondBlock);
expect(txReceipt3.blockNumber).toEqual(firstBlock + 2);
expect(await contract.methods.get_public_value(recipient).simulate()).toEqual(20n);
expect(await contract.methods.summed_values(recipient).simulate()).toEqual(10n);
@@ -141,9 +133,18 @@
const archiver = ctx.aztecNode.getBlockSource() as Archiver;
const proverNode = await createProverNode(proverConfig, { aztecNodeTxProvider: ctx.aztecNode, archiver });

// Prove the first two blocks
await prove(proverNode, firstBlock);
await prove(proverNode, firstBlock + 1);
// Prove the first two blocks simultaneously
logger.info(`Starting proof for first block #${firstBlock}`);
await proverNode.startProof(firstBlock, firstBlock);
logger.info(`Starting proof for second block #${secondBlock}`);
await proverNode.startProof(secondBlock, secondBlock);

// Confirm that we cannot go back to prove an old one
await expect(proverNode.startProof(firstBlock, firstBlock)).rejects.toThrow(/behind the current world state/i);

// Await until proofs get submitted
await retryUntil(async () => (await ctx.aztecNode.getProvenBlockNumber()) === secondBlock, 'proven', 10, 1);
expect(await ctx.aztecNode.getProvenBlockNumber()).toEqual(secondBlock);

// Check that the prover id made it to the emitted event
const { publicClient, l1ContractAddresses } = ctx.deployL1ContractsValues;
5 changes: 5 additions & 0 deletions yarn-project/kv-store/src/interfaces/store.ts
@@ -58,4 +58,9 @@ export interface AztecKVStore {
* Clears the store
*/
clear(): Promise<void>;

/**
* Forks the store.
*/
fork(): Promise<AztecKVStore>;
}
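The contract this method implies — writes to a fork must never leak back into the parent — can be illustrated with an in-memory stand-in. `InMemoryKVStore` is hypothetical; the real interface returns a `Promise` and is backed by LMDB:

```typescript
// In-memory stand-in for AztecKVStore: fork() deep-copies the data so
// the fork and the parent evolve independently afterwards.
class InMemoryKVStore {
  constructor(private data: Map<string, string> = new Map()) {}

  set(key: string, value: string): void {
    this.data.set(key, value);
  }

  get(key: string): string | undefined {
    return this.data.get(key);
  }

  fork(): InMemoryKVStore {
    return new InMemoryKVStore(new Map(this.data));
  }
}

const store = new InMemoryKVStore();
store.set('singleton', 'foo');
const fork = store.fork();
fork.set('singleton', 'bar');
// The parent still reads 'foo'; only the fork reads 'bar'.
```

The LMDB test added in this PR (`store.test.ts` below) asserts exactly this isolation property for both persistent and ephemeral stores.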
29 changes: 29 additions & 0 deletions yarn-project/kv-store/src/lmdb/store.test.ts
@@ -0,0 +1,29 @@
import { mkdtemp } from 'fs/promises';
import { tmpdir } from 'os';
import { join } from 'path';

import { AztecLmdbStore } from './store.js';

describe('AztecLmdbStore', () => {
const itForks = async (store: AztecLmdbStore) => {
const singleton = store.openSingleton('singleton');
await singleton.set('foo');

const forkedStore = await store.fork();
const forkedSingleton = forkedStore.openSingleton('singleton');
expect(forkedSingleton.get()).toEqual('foo');
await forkedSingleton.set('bar');
expect(singleton.get()).toEqual('foo');
};

it('forks a persistent store', async () => {
const path = join(await mkdtemp(join(tmpdir(), 'aztec-store-test-')), 'main.mdb');
const store = AztecLmdbStore.open(path, false);
await itForks(store);
});

it('forks an ephemeral store', async () => {
const store = AztecLmdbStore.open(undefined, true);
await itForks(store);
});
});
23 changes: 17 additions & 6 deletions yarn-project/kv-store/src/lmdb/store.ts
@@ -1,6 +1,9 @@
import { createDebugLogger } from '@aztec/foundation/log';

import { mkdtemp } from 'fs/promises';
import { type Database, type Key, type RootDatabase, open } from 'lmdb';
import { tmpdir } from 'os';
import { join } from 'path';

import { type AztecArray } from '../interfaces/array.js';
import { type AztecCounter } from '../interfaces/counter.js';
@@ -22,7 +25,7 @@ export class AztecLmdbStore implements AztecKVStore {
#data: Database<unknown, Key>;
#multiMapData: Database<unknown, Key>;

constructor(rootDb: RootDatabase) {
constructor(rootDb: RootDatabase, public readonly isEphemeral: boolean) {
this.#rootDb = rootDb;

// big bucket to store all the data
@@ -57,11 +60,19 @@
log = createDebugLogger('aztec:kv-store:lmdb'),
): AztecLmdbStore {
log.info(`Opening LMDB database at ${path || 'temporary location'}`);
const rootDb = open({
path,
noSync: ephemeral,
});
return new AztecLmdbStore(rootDb);
const rootDb = open({ path, noSync: ephemeral });
return new AztecLmdbStore(rootDb, ephemeral);
}

/**
* Forks the current DB into a new DB by backing it up to a temporary location and opening a new lmdb db.
* @returns A new AztecLmdbStore.
*/
async fork() {
const forkPath = join(await mkdtemp(join(tmpdir(), 'aztec-store-fork-')), 'root.mdb');
await this.#rootDb.backup(forkPath, false);
const forkDb = open(forkPath, { noSync: this.isEphemeral });
return new AztecLmdbStore(forkDb, this.isEphemeral);
}

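The `fork` implementation above follows a backup-then-open pattern: copy the database into a fresh temporary directory, then open an independent handle on the copy. The same pattern can be sketched with plain files and Node built-ins (no LMDB involved; the `.mdb` filename is purely illustrative):

```typescript
import { copyFileSync, mkdtempSync, readFileSync, writeFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// "Fork" a file-backed store by copying it into a fresh temp dir,
// mirroring AztecLmdbStore.fork's backup-then-open approach.
function forkFile(sourcePath: string): string {
  const forkPath = join(mkdtempSync(join(tmpdir(), 'store-fork-')), 'root.mdb');
  copyFileSync(sourcePath, forkPath);
  return forkPath;
}

const original = join(mkdtempSync(join(tmpdir(), 'store-main-')), 'root.mdb');
writeFileSync(original, 'archive-root=0');

const forked = forkFile(original);
writeFileSync(forked, 'archive-root=1'); // mutate the fork only

console.log(readFileSync(original, 'utf8')); // archive-root=0
console.log(readFileSync(forked, 'utf8'));   // archive-root=1
```

A file copy is coarser than LMDB's `backup` (which snapshots a live environment consistently), but the isolation guarantee afterwards is the same: two independent stores that no longer share state.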