feat: Update to consensus 0.1.0-rc.4 (BFT-486) #2475

Merged
merged 4 commits on Jul 24, 2024
40 changes: 20 additions & 20 deletions Cargo.lock

Some generated files are not rendered by default.

22 changes: 11 additions & 11 deletions Cargo.toml
@@ -79,7 +79,7 @@ members = [
"core/tests/vm-benchmark/harness",

# Parts of prover workspace that are needed for Core workspace
"prover/crates/lib/prover_dal"
"prover/crates/lib/prover_dal",
]
resolver = "2"

@@ -209,16 +209,16 @@ zk_evm_1_4_1 = { package = "zk_evm", version = "0.141.0" }
zk_evm_1_5_0 = { package = "zk_evm", version = "0.150.0" }

# Consensus dependencies.
-zksync_concurrency = "=0.1.0-rc.2"
-zksync_consensus_bft = "=0.1.0-rc.2"
-zksync_consensus_crypto = "=0.1.0-rc.2"
-zksync_consensus_executor = "=0.1.0-rc.2"
-zksync_consensus_network = "=0.1.0-rc.2"
-zksync_consensus_roles = "=0.1.0-rc.2"
-zksync_consensus_storage = "=0.1.0-rc.2"
-zksync_consensus_utils = "=0.1.0-rc.2"
-zksync_protobuf = "=0.1.0-rc.2"
-zksync_protobuf_build = "=0.1.0-rc.2"
+zksync_concurrency = "=0.1.0-rc.4"
+zksync_consensus_bft = "=0.1.0-rc.4"
+zksync_consensus_crypto = "=0.1.0-rc.4"
+zksync_consensus_executor = "=0.1.0-rc.4"
+zksync_consensus_network = "=0.1.0-rc.4"
+zksync_consensus_roles = "=0.1.0-rc.4"
+zksync_consensus_storage = "=0.1.0-rc.4"
+zksync_consensus_utils = "=0.1.0-rc.4"
+zksync_protobuf = "=0.1.0-rc.4"
+zksync_protobuf_build = "=0.1.0-rc.4"

# "Local" dependencies
zksync_multivm = { version = "0.1.0", path = "core/lib/multivm" }
14 changes: 0 additions & 14 deletions core/node/consensus/src/storage/connection.rs
@@ -411,20 +411,6 @@ impl<'a> Connection<'a> {
.await
.context("get_last_batch_number()")?;

-let last = if let Some(last) = last {
-// For now it would be unexpected if we couldn't retrieve the payloads
-// for the `last` batch number, as an L1 batch is only created if we
-// have all the L2 miniblocks for it.
-Some(
-self.get_batch(ctx, last)
-.await
-.context("get_batch()")?
-.context("last batch not available")?,
-)
-} else {
-None
-};

Ok(BatchStoreState {
first: first
.map(|n| attester::BatchNumber(n.0 as u64))
40 changes: 21 additions & 19 deletions core/node/consensus/src/storage/store.rs
@@ -446,42 +446,44 @@ impl storage::PersistentBatchStore for Store {
self.batches_persisted.clone()
}

-/// Get the earliest L1 batch number which has to be (re)signed by a node.
-///
-/// Ideally we would make this decision by looking up the last batch submitted to L1,
-/// and so it might require a quorum of attesters to sign a certificate for it.
+/// Get the earliest L1 batch number which has to be signed by attesters.
async fn earliest_batch_number_to_sign(
&self,
ctx: &ctx::Ctx,
) -> ctx::Result<Option<attester::BatchNumber>> {
// This is the rough roadmap of how this logic will evolve:
// 1. Make best effort at gossiping and collecting votes; the `BatchVotes` in consensus only considers the last vote per attesters.
// Still, we can re-sign more than the last batch, anticipating step 2.
-// 2. Change `BatchVotes` to handle multiple pending batch numbers, anticipating that batch intervals might decrease dramatically.
-// 3. Ask the Main Node what is the earliest batch number that it still expects votes for (ie. what is the last submission + 1).
-// 4. Look at L1 to figure out what is the last submssion, and sign after that.
+// 2. Ask the Main Node what is the earliest batch number that it still expects votes for (ie. what is the last submission + 1).
+// 3. Change `BatchVotes` to handle multiple pending batch numbers, anticipating that batch intervals might decrease dramatically.
+// 4. Once QC is required to submit to L1, Look at L1 to figure out what is the last submission, and sign after that.

-// Originally this method returned all unsigned batch numbers by doing a DAL query, but we decided it shoudl be okay and cheap
+// Originally this method returned all unsigned batch numbers by doing a DAL query, but we decided it should be okay and cheap
// to resend signatures for already signed batches, and we don't have to worry about skipping them. Because of that, we also
// didn't think it makes sense to query the database for the earliest unsigned batch *after* the submission, because we might
// as well just re-sign everything. Until we have a way to argue about the "last submission" we just re-sign the last 10 to
// try to produce as many QCs as the voting register allows, within reason.

-let Some(last_batch_number) = self.last_batch(ctx).await? else {
-return Ok(None);
-};
-Ok(Some(attester::BatchNumber(
-last_batch_number.0.saturating_sub(10),
-)))
-}
+// The latest decision is not to store batches with gaps between in the database *of the main node*.
+// Once we have an API to serve to external nodes the earliest number the main node wants them to sign,
+// we can get rid of this method: on the main node we can sign from what `last_batch_qc` returns, and
+// while external nodes we can go from whatever the API returned.

-/// Get the highest L1 batch number from storage.
-async fn last_batch(&self, ctx: &ctx::Ctx) -> ctx::Result<Option<attester::BatchNumber>> {
-self.conn(ctx)
+const NUM_BATCHES_TO_SIGN: u64 = 10;

+let Some(last_batch_number) = self
+.conn(ctx)
.await?
.get_last_batch_number(ctx)
.await
-.wrap("get_last_batch_number")
+.wrap("get_last_batch_number")?
+else {
+return Ok(None);
+};

+Ok(Some(attester::BatchNumber(
+last_batch_number.0.saturating_sub(NUM_BATCHES_TO_SIGN),
+)))
}

/// Get the L1 batch QC from storage with the highest number.
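For context, the behavior retained here is a fixed re-signing window over the most recent batches. A minimal standalone sketch of that arithmetic (not the trait implementation itself; only the window size of 10 is taken from the diff):

```rust
// Standalone illustration of the windowing in `earliest_batch_number_to_sign`:
// re-sign the last NUM_BATCHES_TO_SIGN batches, with `saturating_sub`
// clamping the result at batch 0 when fewer batches exist yet.
const NUM_BATCHES_TO_SIGN: u64 = 10;

fn earliest_to_sign(last_batch_number: u64) -> u64 {
    last_batch_number.saturating_sub(NUM_BATCHES_TO_SIGN)
}

fn main() {
    assert_eq!(earliest_to_sign(100), 90); // re-sign batches 90 through 100
    assert_eq!(earliest_to_sign(3), 0); // near genesis, start from batch 0
}
```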
14 changes: 11 additions & 3 deletions core/node/consensus/src/tests.rs
@@ -120,17 +120,25 @@ async fn test_connection_get_batch(from_snapshot: bool, version: ProtocolVersion
let batches = conn.batches_range(ctx).await?;
let last = batches.last.expect("last is set");
let (min, max) = conn
-.get_l2_block_range_of_l1_batch(ctx, last.number)
+.get_l2_block_range_of_l1_batch(ctx, last)
.await?
.unwrap();

+let last_batch = conn
+.get_batch(ctx, last)
+.await?
+.expect("last batch can be retrieved");

assert_eq!(
-last.payloads.len(),
+last_batch.payloads.len(),
(max.0 - min.0) as usize,
"all block payloads present"
);

-let first_payload = last.payloads.first().expect("last batch has payloads");
+let first_payload = last_batch
+.payloads
+.first()
+.expect("last batch has payloads");

let want_payload = conn.payload(ctx, min).await?.expect("payload is in the DB");
let want_payload = want_payload.encode();
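The updated test reflects the storage-side change: the batch range query now yields only batch numbers, and payloads are fetched separately with `get_batch`. A self-contained mock of that split, for illustration only (every type and method below is a stand-in, not the crate's actual API):

```rust
// Mock of the two-step access pattern exercised by the test:
// 1) a cheap query returns only the last batch number,
// 2) the full batch (with payloads) is loaded on demand.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct BatchNumber(u64);

struct Batch {
    payloads: Vec<Vec<u8>>,
}

struct Store {
    batches: Vec<Batch>,
}

impl Store {
    /// Returns only the last batch number, without loading any payloads.
    fn last_batch_number(&self) -> Option<BatchNumber> {
        if self.batches.is_empty() {
            None
        } else {
            Some(BatchNumber(self.batches.len() as u64 - 1))
        }
    }

    /// Loads the full batch, including its block payloads.
    fn get_batch(&self, n: BatchNumber) -> Option<&Batch> {
        self.batches.get(n.0 as usize)
    }
}

fn main() {
    let store = Store {
        batches: vec![Batch { payloads: vec![vec![1], vec![2]] }],
    };
    let last = store.last_batch_number().expect("last is set");
    let batch = store.get_batch(last).expect("last batch can be retrieved");
    assert_eq!(batch.payloads.len(), 2);
}
```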
28 changes: 14 additions & 14 deletions prover/Cargo.lock

Some generated files are not rendered by default.
