Merge pull request #367 from lambdaclass/eigen-client-extra-main
Eigen client extra merge main
gianbelinche authored Dec 6, 2024
2 parents 0d01df7 + 2100256 commit 83c3e13
Showing 43 changed files with 409 additions and 296 deletions.
15 changes: 1 addition & 14 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 0 additions & 1 deletion Cargo.toml
@@ -79,7 +79,6 @@ members = [
     # Test infrastructure
     "core/tests/loadnext",
     "core/tests/vm-benchmark",
-    "core/lib/bin_metadata",
 ]
 resolver = "2"
1 change: 0 additions & 1 deletion core/bin/external_node/Cargo.toml
@@ -29,7 +29,6 @@ zksync_health_check.workspace = true
 zksync_web3_decl.workspace = true
 zksync_types.workspace = true
 zksync_block_reverter.workspace = true
-zksync_shared_metrics.workspace = true
 zksync_node_genesis.workspace = true
 zksync_node_fee_model.workspace = true
 zksync_node_db_pruner.workspace = true
3 changes: 0 additions & 3 deletions core/bin/external_node/src/metrics/framework.rs
@@ -5,7 +5,6 @@ use zksync_node_framework::{
     implementations::resources::pools::{MasterPool, PoolResource},
     FromContext, IntoContext, StopReceiver, Task, TaskId, WiringError, WiringLayer,
 };
-use zksync_shared_metrics::{GIT_METRICS, RUST_METRICS};
 use zksync_types::{L1ChainId, L2ChainId, SLChainId};

 use super::EN_METRICS;
@@ -39,8 +38,6 @@ impl WiringLayer for ExternalNodeMetricsLayer {
     }

     async fn wire(self, input: Self::Input) -> Result<Self::Output, WiringError> {
-        RUST_METRICS.initialize();
-        GIT_METRICS.initialize();
         EN_METRICS.observe_config(
             self.l1_chain_id,
             self.sl_chain_id,
9 changes: 4 additions & 5 deletions core/bin/snapshots_creator/README.md
@@ -43,9 +43,8 @@ repository root. The storage location can be configured using the object store c
 filesystem, or Google Cloud Storage (GCS). Beware that for end-to-end testing of snapshot recovery, changes applied to
 the main node configuration must be reflected in the external node configuration.

-Creating a snapshot is a part of the [snapshot recovery integration test]. You can run the test using
-`yarn recovery-test snapshot-recovery-test`. It requires the main node to be launched with a command like
-`zk server --components api,tree,eth,state_keeper,commitment_generator`.
+Creating a snapshot is a part of the [snapshot recovery integration test]. You can run the test using `yarn recovery-test snapshot-recovery-test`.
+It requires the main node to be launched with a command like `zk server --components api,tree,eth,state_keeper,commitment_generator`.

## Snapshots format

@@ -59,8 +58,8 @@ Each snapshot consists of three types of data (see [`snapshots.rs`] for exact de
   enumeration index; both are used to restore the contents of the `initial_writes` table. Chunking storage logs is
   motivated by their parallel generation; each chunk corresponds to a distinct non-overlapping range of hashed storage
   keys. (This should be considered an implementation detail for the purposes of snapshot recovery; recovery must not
-  rely on any particular key distribution among chunks.) Stored as gzipped Protobuf messages in an [object store]; each
-  chunk is a separate object.
+  rely on any particular key distribution among chunks.) Stored as gzipped Protobuf messages in an [object store]; each chunk
+  is a separate object.
 - **Factory dependencies:** All bytecodes deployed on L2 at the time the snapshot is made. Stored as a single gzipped
   Protobuf message in an object store.

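For illustration, a minimal sketch of decoding one storage-log chunk in the format described above. The `SnapshotStorageLogsChunk` message here is a hypothetical stand-in for the real definitions in [`snapshots.rs`]; only the gzipped-Protobuf-in-object-store layout comes from the README:

```rust
use std::io::Read;

use flate2::read::GzDecoder;
use prost::Message;

// Hypothetical message type; the actual schema lives in `snapshots.rs`.
#[derive(Clone, PartialEq, prost::Message)]
struct SnapshotStorageLogsChunk {
    #[prost(bytes = "vec", repeated, tag = "1")]
    storage_logs: Vec<Vec<u8>>,
}

/// Gunzips and decodes a single chunk object fetched from the object store.
fn decode_chunk(gzipped: &[u8]) -> anyhow::Result<SnapshotStorageLogsChunk> {
    let mut decompressed = Vec::new();
    GzDecoder::new(gzipped).read_to_end(&mut decompressed)?;
    Ok(SnapshotStorageLogsChunk::decode(decompressed.as_slice())?)
}
```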
18 changes: 0 additions & 18 deletions core/lib/bin_metadata/Cargo.toml

This file was deleted.

2 changes: 1 addition & 1 deletion core/lib/config/src/configs/api.rs
@@ -243,7 +243,7 @@ impl Web3JsonRpcConfig {
             pubsub_polling_interval: Some(200),
             max_nonce_ahead: 50,
             gas_price_scale_factor: 1.2,
-            estimate_gas_scale_factor: 1.2,
+            estimate_gas_scale_factor: 1.5,
             estimate_gas_acceptable_overestimation: 1000,
             estimate_gas_optimize_search: false,
             max_tx_size: 1000000,
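For context on the changed default: `estimate_gas_scale_factor` multiplies the raw gas estimate before the API returns it, adding client-side safety headroom. Roughly (illustrative arithmetic, not the actual implementation):

```rust
// Illustrative only: a raw estimate of 1_000_000 gas was returned as
// 1_200_000 under the old default (1.2) and becomes 1_500_000 under 1.5.
fn scaled_gas_estimate(raw_estimate: u64, estimate_gas_scale_factor: f64) -> u64 {
    (raw_estimate as f64 * estimate_gas_scale_factor).round() as u64
}
```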

Some generated files are not rendered by default.

8 changes: 4 additions & 4 deletions core/lib/dal/src/system_dal.rs
@@ -1,6 +1,6 @@
 use std::{collections::HashMap, time::Duration};

-use chrono::DateTime;
+use chrono::{DateTime, Utc};
 use serde::{Deserialize, Serialize};
 use zksync_db_connection::{connection::Connection, error::DalResult, instrument::InstrumentExt};

@@ -14,11 +14,11 @@ pub(crate) struct TableSize {
     pub total_size: u64,
 }

-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct DatabaseMigration {
     pub version: i64,
     pub description: String,
-    pub installed_on: DateTime<chrono::Utc>,
+    pub installed_on: DateTime<Utc>,
     pub success: bool,
     pub checksum: String,
     pub execution_time: Duration,
@@ -118,7 +118,7 @@ impl SystemDal<'_, '_> {
             installed_on: row.installed_on,
             success: row.success,
             checksum: hex::encode(row.checksum),
-            execution_time: Duration::from_millis(u64::try_from(row.execution_time).unwrap_or(0)),
+            execution_time: Duration::from_nanos(u64::try_from(row.execution_time).unwrap_or(0)),
         })
     }
 }
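The last hunk is a unit fix: the migration `execution_time` value is stored in nanoseconds, so reading it with `Duration::from_millis` overstated durations by a factor of a million. A minimal sketch of the corrected conversion (assuming the nanosecond column semantics that the fix implies):

```rust
use std::time::Duration;

// Raw BIGINT value from the migrations table; negative values (which should
// not occur) are clamped to zero, matching the `unwrap_or(0)` in the diff.
fn migration_execution_time(raw: i64) -> Duration {
    Duration::from_nanos(u64::try_from(raw).unwrap_or(0))
}
```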
16 changes: 10 additions & 6 deletions core/lib/dal/src/tee_proof_generation_dal.rs
@@ -66,10 +66,16 @@ impl TeeProofGenerationDal<'_, '_> {
         let min_batch_number = i64::from(min_batch_number.0);
         let mut transaction = self.storage.start_transaction().await?;

-        // Lock rows in the proof_generation_details table to prevent race conditions. The
-        // tee_proof_generation_details table does not have corresponding entries yet if this is the
-        // first time the query is invoked for a batch. Locking rows in proof_generation_details
-        // ensures that two different TEE prover instances will not try to prove the same batch.
+        // Lock the entire tee_proof_generation_details table in EXCLUSIVE mode to prevent race
+        // conditions. Locking the table ensures that two different TEE prover instances will not
+        // try to prove the same batch.
+        sqlx::query("LOCK TABLE tee_proof_generation_details IN EXCLUSIVE MODE")
+            .instrument("lock_batch_for_proving#lock_table")
+            .execute(&mut transaction)
+            .await?;
+
+        // The tee_proof_generation_details table does not have corresponding entries yet if this is
+        // the first time the query is invoked for a batch.
         let batch_number = sqlx::query!(
             r#"
             SELECT
@@ -95,8 +101,6 @@
                 )
             )
             LIMIT 1
-            FOR UPDATE OF p
-            SKIP LOCKED
             "#,
             tee_type.to_string(),
             TeeProofGenerationJobStatus::PickedByProver.to_string(),
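A condensed sketch of the new locking pattern: a table-level `EXCLUSIVE` lock taken inside the transaction replaces the row-level `FOR UPDATE ... SKIP LOCKED` clause. The query shape below is simplified and the selected column name is assumed; only the `LOCK TABLE` statement itself comes from the diff:

```rust
use sqlx::{Postgres, Transaction};

/// EXCLUSIVE mode blocks concurrent writers (plain SELECTs still pass), so two
/// prover instances cannot pick the same batch; the lock is released when the
/// enclosing transaction commits or rolls back.
async fn lock_and_pick_batch(
    transaction: &mut Transaction<'_, Postgres>,
) -> sqlx::Result<Option<i64>> {
    sqlx::query("LOCK TABLE tee_proof_generation_details IN EXCLUSIVE MODE")
        .execute(&mut **transaction)
        .await?;
    // With the table lock held, row-level locks are no longer needed here.
    let row: Option<(i64,)> =
        sqlx::query_as("SELECT l1_batch_number FROM proof_generation_details LIMIT 1")
            .fetch_optional(&mut **transaction)
            .await?;
    Ok(row.map(|(n,)| n))
}
```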
1 change: 0 additions & 1 deletion core/lib/health_check/Cargo.toml
@@ -20,7 +20,6 @@ serde_json.workspace = true
 thiserror.workspace = true
 tokio = { workspace = true, features = ["sync", "time"] }
 tracing.workspace = true
-zksync_bin_metadata.workspace = true

 [dev-dependencies]
 assert_matches.workspace = true
21 changes: 0 additions & 21 deletions core/lib/health_check/src/binary.rs

This file was deleted.

19 changes: 14 additions & 5 deletions core/lib/health_check/src/lib.rs
@@ -11,12 +11,9 @@ pub use async_trait::async_trait;
 use futures::future;
 use serde::Serialize;
 use tokio::sync::watch;
-use zksync_bin_metadata::BIN_METADATA;

-use self::metrics::{CheckResult, METRICS};
-use crate::metrics::AppHealthCheckConfig;
+use crate::metrics::{AppHealthCheckConfig, CheckResult, METRICS};

-mod binary;
 mod metrics;

 #[cfg(test)]
@@ -114,6 +111,8 @@ pub struct AppHealthCheck {

 #[derive(Debug, Clone)]
 struct AppHealthCheckInner {
+    /// Application-level health details.
+    app_details: Option<serde_json::Value>,
     components: Vec<Arc<dyn CheckHealth>>,
     slow_time_limit: Duration,
     hard_time_limit: Duration,
@@ -136,6 +135,7 @@ impl AppHealthCheck {

         let inner = AppHealthCheckInner {
             components: Vec::default(),
+            app_details: None,
             slow_time_limit,
             hard_time_limit,
         };
@@ -181,6 +181,13 @@
         }
     }

+    /// Sets app-level health details. They can include build info etc.
+    pub fn set_details(&self, details: impl Serialize) {
+        let details = serde_json::to_value(details).expect("failed serializing app details");
+        let mut inner = self.inner.lock().expect("`AppHealthCheck` is poisoned");
+        inner.app_details = Some(details);
+    }
+
     /// Inserts health check for a component.
     ///
     /// # Errors
@@ -220,6 +227,7 @@
         // Clone `inner` so that we don't hold a lock for them across a wait point.
         let AppHealthCheckInner {
             components,
+            app_details,
             slow_time_limit,
             hard_time_limit,
         } = self
@@ -238,7 +246,8 @@
             .map(|health| health.status)
             .max_by_key(|status| status.priority_for_aggregation())
             .unwrap_or(HealthStatus::Ready);
-        let inner = Health::with_details(aggregated_status.into(), BIN_METADATA);
+        let mut inner = Health::from(aggregated_status);
+        inner.details = app_details.clone();

         let health = AppHealth { inner, components };
         if !health.inner.status.is_healthy() {
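A usage sketch for the new `set_details` hook, which takes over the role of the removed `BIN_METADATA` constant (the `BuildInfo` payload here is illustrative, not part of the crate):

```rust
use serde::Serialize;
use zksync_health_check::AppHealthCheck;

// Illustrative payload; any `Serialize` value works.
#[derive(Serialize)]
struct BuildInfo {
    version: &'static str,
    git_branch: &'static str,
}

fn expose_build_info(app_health: &AppHealthCheck) {
    // Shows up under the aggregated status as app-level `details`.
    app_health.set_details(BuildInfo {
        version: env!("CARGO_PKG_VERSION"),
        git_branch: "main", // in practice supplied by the build system
    });
}
```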
1 change: 1 addition & 0 deletions core/lib/health_check/src/tests.rs
@@ -82,6 +82,7 @@ async fn aggregating_health_checks() {
     let (first_check, first_updater) = ReactiveHealthCheck::new("first");
     let (second_check, second_updater) = ReactiveHealthCheck::new("second");
     let inner = AppHealthCheckInner {
+        app_details: None,
         components: vec![Arc::new(first_check), Arc::new(second_check)],
         slow_time_limit: AppHealthCheck::DEFAULT_SLOW_TIME_LIMIT,
         hard_time_limit: AppHealthCheck::DEFAULT_HARD_TIME_LIMIT,
6 changes: 3 additions & 3 deletions core/lib/merkle_tree/README.md
@@ -1,8 +1,8 @@
 # Merkle Tree

-Binary Merkle tree implementation based on amortized radix-16 Merkle tree (AR16MT) described in the [Jellyfish Merkle
-tree] white paper. Unlike Jellyfish Merkle tree, our construction uses vanilla binary tree hashing algorithm to make it
-easier for the circuit creation. The depth of the tree is 256, and Blake2 is used as the hashing function.
+Binary Merkle tree implementation based on amortized radix-16 Merkle tree (AR16MT) described in the [Jellyfish
+Merkle tree] white paper. Unlike Jellyfish Merkle tree, our construction uses vanilla binary tree hashing algorithm to
+make it easier for the circuit creation. The depth of the tree is 256, and Blake2 is used as the hashing function.

 ## Snapshot tests
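For the hashing scheme the README describes, a hedged sketch of vanilla binary-tree node hashing with Blake2 (the Blake2s-256 variant is assumed here; the real tree's exact variant and domain separation may differ):

```rust
use blake2::{Blake2s256, Digest};

// Parent node hash = H(left child || right child), applied up 256 levels.
fn hash_branch(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Blake2s256::new();
    hasher.update(left);
    hasher.update(right);
    hasher.finalize().into()
}
```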
11 changes: 11 additions & 0 deletions core/lib/test_contracts/contracts/transfer/ERC20.sol
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: UNLICENSED
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
+
+contract TestERC20 is ERC20("Test", "TEST") {
+    constructor(uint256 _toMint) {
+        _mint(msg.sender, _toMint);
+    }
+}
7 changes: 7 additions & 0 deletions core/lib/test_contracts/src/contracts.rs
@@ -171,6 +171,13 @@ impl TestContract {
         &CONTRACT
     }

+    /// Returns a test ERC20 token implementation.
+    pub fn test_erc20() -> &'static Self {
+        static CONTRACT: Lazy<TestContract> =
+            Lazy::new(|| TestContract::new(raw::transfer::TestERC20));
+        &CONTRACT
+    }
+
     /// Returns a mock version of `ContractDeployer`.
     pub fn mock_deployer() -> &'static Self {
         static CONTRACT: Lazy<TestContract> =
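A hedged usage sketch for the new accessor (assuming the crate is exposed as `zksync_test_contracts`; only `TestContract::test_erc20()` itself comes from this diff):

```rust
use zksync_test_contracts::TestContract;

// The accessor follows the crate's `static Lazy` pattern: the contract is
// built on first use, cached, and handed out as a `'static` reference.
fn erc20_contract() -> &'static TestContract {
    TestContract::test_erc20()
}
```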
2 changes: 1 addition & 1 deletion core/node/api_server/src/tx_sender/mod.rs
@@ -588,7 +588,7 @@ impl TxSender {
     }

     // For now, both L1 gas price and pubdata price are scaled with the same coefficient
-    async fn scaled_batch_fee_input(&self) -> anyhow::Result<BatchFeeInput> {
+    pub(crate) async fn scaled_batch_fee_input(&self) -> anyhow::Result<BatchFeeInput> {
         self.0
             .batch_fee_input_provider
             .get_batch_fee_input_scaled(
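As the comment in this hunk notes, both fee components are scaled by one coefficient; schematically (illustrative, not the provider's actual code):

```rust
// Both the L1 gas price and the pubdata price get the same multiplier.
fn scale_fee_components(l1_gas_price: u64, fair_pubdata_price: u64, scale: f64) -> (u64, u64) {
    (
        (l1_gas_price as f64 * scale) as u64,
        (fair_pubdata_price as f64 * scale) as u64,
    )
}
```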
(Diff truncated; remaining changed files not shown.)
