
reconfiguration with dkg #10328

Closed · wants to merge 17 commits into from

Conversation

@zjma (Contributor) commented Sep 30, 2023

Context

To generate on-chain randomness, the current validators will together run a DKG protocol right before starting a new epoch. The DKG protocol needs the exact validator set (address + voting power) of the new epoch to be determined before it starts, and it can take a while. While a DKG is running, validator set changes and reconfiguration requests must be handled carefully.

Already done

None.

In this PR

  • Add feature flag: RECONFIGURE_WITH_DKG.
  • A governance proposal should start a slow reconfiguration.
  • When the current epoch expires and there is no slow reconfiguration in progress, block prologue should start a slow reconfiguration.
  • Triggering a slow reconfiguration should fail if another one is in progress.
  • The following on-chain configs will have 2 versions: one for the current epoch and one for the next epoch; the switch happens at the epoch boundary. (A sketch of this buffering pattern follows the list.)
    • ConsensusConfig
    • ExecutionConfig
    • GasScheduleV2
    • Features
    • Version
  • When DKG finishes/exceeds its time limit, block prologue should update the "current" version of every on-chain config and emit NewEpochEvent.
  • Reject any on-chain config change if a slow reconfiguration is in progress.
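
As background for the 2-version behavior above, here is a minimal sketch of what such a next-epoch buffer module could look like. It is illustrative only: the module name follows the std::config_for_next_epoch references later in this thread, but the error handling and exact signatures are assumptions, not the PR's actual code.

module std::config_for_next_epoch {
    use std::error;
    use std::signer;

    /// Illustrative error code.
    const ENOT_FRAMEWORK: u64 = 1;

    /// Holds the value a config of type `C` will take at the next epoch boundary.
    struct PendingConfig<C: store> has key {
        payload: C,
    }

    /// Buffer a new value of `C` under 0x1, overwriting any previously buffered one.
    public fun upsert<C: drop + store>(account: &signer, payload: C) acquires PendingConfig {
        assert!(signer::address_of(account) == @std, error::permission_denied(ENOT_FRAMEWORK));
        if (exists<PendingConfig<C>>(@std)) {
            borrow_global_mut<PendingConfig<C>>(@std).payload = payload;
        } else {
            move_to(account, PendingConfig { payload });
        }
    }

    /// True if a value of `C` is waiting to be applied at the next epoch boundary.
    public fun pending_exists<C: store>(): bool {
        exists<PendingConfig<C>>(@std)
    }

    /// Take the buffered value out; intended to be called only at the epoch boundary.
    public fun extract<C: store>(): C acquires PendingConfig {
        let PendingConfig { payload } = move_from<PendingConfig<C>>(@std);
        payload
    }
}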

Next steps

  • Identify the remaining on-chain configs that need 2 versions.
  • Update prover tests.
  • Adapt aptos-release-builder to the slow reconfiguration and the on-chain config behavior changes.
  • Release proposal/migration plan.

@zjma zjma changed the title [DKG] reconfigure refactoring 1 [DKG] validator set change locking Oct 2, 2023
@@ -365,7 +365,7 @@ module aptos_framework::genesis {
validator.consensus_pubkey,
validator.proof_of_possession,
);
- stake::update_network_and_fullnode_addresses(
+ stake::force_update_network_and_fullnode_addresses(
Contributor:
Why do we need this for genesis?

Contributor Author (@zjma):
Some smoke tests set this flag (disable validator set change), which made genesis fail...

@zjma (Contributor Author) commented Oct 3, 2023

Alternatives discussed

We need 2 modes of reconfiguration (a block prologue sketch follows the breakdown below).

  • Fast reconfiguration. This should be used by governance proposals for on-chain config changes (e.g. consensus/execution config changes). High-level breakdown:
    • Update epoch counter and reset epoch timer.
    • Send NewEpochEvent to validators.
  • Slow reconfiguration with DKG (2-phase). This should be used in epoch expiry handling.
    • (Phase 0)
      • Compute the next validator set, lock it.
      • Notify validators to start DKG.
    • (Phase 1: Once DKG finishes/times out)
      • Update validator set (apply pending join/leave requests, distribute rewards).
      • Update epoch counter and reset epoch timer.
      • Send NewEpochEvent to validators.
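
A rough sketch of how the block prologue could dispatch between the two phases of slow reconfiguration. All helper names here (dkg::in_progress, dkg::has_result, dkg::timed_out, reconfiguration::epoch_expired, reconfiguration::start_slow_reconfiguration, reconfiguration::finish_slow_reconfiguration) are hypothetical stand-ins for illustration, not the functions in this PR:

// Hypothetical sketch inside a reconfiguration-like module; not the actual block prologue changes.
fun epoch_check(vm: &signer, timestamp_secs: u64) {
    if (dkg::in_progress()) {
        // Phase 1: once the DKG finishes or exceeds its time limit, apply the locked
        // validator set and the buffered on-chain configs, bump the epoch counter,
        // reset the epoch timer, and emit NewEpochEvent.
        if (dkg::has_result() || dkg::timed_out(timestamp_secs)) {
            reconfiguration::finish_slow_reconfiguration(vm);
        };
    } else if (reconfiguration::epoch_expired(timestamp_secs)) {
        // Phase 0: compute and lock the next validator set, then notify validators
        // to start the DKG for the coming epoch.
        reconfiguration::start_slow_reconfiguration(vm);
    };
}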

The key behavior changes to implement:

  • When the current epoch expires, block prologue should trigger a slow reconfiguration.
  • When DKG finishes/times out, block prologue should trigger phase 1 of the slow reconfiguration.
  • A separate slow reconfiguration triggered during DKG should be rejected.
  • A fast reconfiguration invoked during DKG should be executed and cause the in-progress DKG to be abandoned.
  • When starting a new epoch, a validator should do the following.
    • If the DKG output for the new epoch is available, use it for randomness generation.
    • If the DKG output for the new epoch is not available, but the one for the old epoch is, and the validator set doesn't change, use the old one for randomness generation.
    • Otherwise (it must be an edge case, e.g. the DKG timed out, or disaster recovery was handled by an urgent validator set change + fast reconfiguration), continue the new epoch without randomness.
  • Regular validator set change requests (join/leave validator set, add/withdraw stake) during a DKG should either be rejected (implemented in this PR) or queued up.

This PR starts with the last bullet item.
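
As a concrete illustration of the rejection option, the guard in the stake entry points could look roughly like this. The query reconfiguration::slow_reconfiguration_in_progress() and the error code are assumptions for illustration, not the names used in the PR:

// Hypothetical guard shape inside stake.move; names are illustrative.
const ERECONFIGURATION_IN_PROGRESS: u64 = 1;

public entry fun join_validator_set(operator: &signer, pool_address: address) {
    // Reject regular validator set changes while a slow reconfiguration (DKG) is running.
    assert!(
        !reconfiguration::slow_reconfiguration_in_progress(), // hypothetical query
        error::invalid_state(ERECONFIGURATION_IN_PROGRESS)
    );
    // ... existing join_validator_set logic would continue unchanged ...
}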

@zjma zjma changed the title [DKG] validator set change locking [DKG] reconfiguration update Oct 4, 2023
@zjma zjma added the CICD:run-e2e-tests label (when this label is present, GitHub Actions will run all land-blocking e2e tests from the PR) Oct 10, 2023
@zjma zjma marked this pull request as ready for review October 10, 2023 03:29
writer,
"aptos_governance::reconfigure(&framework_signer);"
);

Contributor Author (@zjma):

In this approach, consensus_config::set no longer calls reconfigure, so I think the proposal script should call reconfigure explicitly. (Same below for some other configs.)

@zekun000 @movekevin @sherry-x could you confirm if these release builder updates are also needed?
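
For context, a generated proposal under the new behavior would look roughly like the following script. The use of aptos_governance::resolve and the placeholder config bytes are assumptions for illustration; the actual output of aptos-release-builder may differ:

script {
    use aptos_framework::aptos_governance;
    use aptos_framework::consensus_config;

    fun main(proposal_id: u64) {
        let framework_signer = aptos_governance::resolve(proposal_id, @aptos_framework);
        // With the new design, set() only buffers the config for the next epoch...
        consensus_config::set(&framework_signer, x"0123"); // placeholder config bytes
        // ...so the release builder appends an explicit reconfigure call.
        aptos_governance::reconfigure(&framework_signer);
    }
}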

gas_schedule::on_new_epoch(account);
std::version::on_new_epoch(account);
features::on_new_epoch(account);
// TODO: complete the list.
Contributor Author (@zjma):

I think the list should contain the on-chain states that may be accessed directly from Rust, and these are the ones I have identified. Anything I missed?

@zekun000 @movekevin


system_addresses::assert_aptos_framework(account);
assert!(vector::length(&config) > 0, error::invalid_argument(EINVALID_CONFIG));
std::config_for_next_epoch::upsert<ConsensusConfig>(account, ConsensusConfig {config});
}
Contributor Author (@zjma):

Today, consensus_config::set() calls reconfiguration::reconfigure().
But in our new design, consensus_config::set(X) writes X to a buffer; then at the end of DKG, reconfiguration calls a new func consensus_config::on_new_epoch() to apply X.
This means we can't gate consensus_config::set() behavior behind a feature flag; otherwise compilation fails with a circular dependency.

But I think if we also update aptos-release-builder accordingly to append an aptos_governance::reconfigure() call to the proposal script, things should be fine?
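
Putting the pieces together, the module shape being described is roughly the following sketch. The friend declaration and the no-argument on_new_epoch are assumptions based on this thread, not the final code:

module aptos_framework::consensus_config {
    use std::config_for_next_epoch;
    use std::error;
    use std::vector;
    use aptos_framework::system_addresses;

    friend aptos_framework::reconfiguration;

    const EINVALID_CONFIG: u64 = 1;

    struct ConsensusConfig has drop, key, store {
        config: vector<u8>,
    }

    /// New behavior: buffer the config for the next epoch instead of reconfiguring now.
    public fun set(account: &signer, config: vector<u8>) {
        system_addresses::assert_aptos_framework(account);
        assert!(vector::length(&config) > 0, error::invalid_argument(EINVALID_CONFIG));
        config_for_next_epoch::upsert<ConsensusConfig>(account, ConsensusConfig { config });
    }

    /// Applied by reconfiguration at the epoch boundary (end of DKG).
    public(friend) fun on_new_epoch() acquires ConsensusConfig {
        if (config_for_next_epoch::pending_exists<ConsensusConfig>()) {
            let new_config = config_for_next_epoch::extract<ConsensusConfig>();
            *borrow_global_mut<ConsensusConfig>(@aptos_framework) = new_config;
        };
    }
}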


let features = config_for_next_epoch::extract<Features>();
*borrow_global_mut<Features>(@std) = features;
}
}
Contributor Author (@zjma):

std::features::Features is a critical on-chain state that I think has to have a pending-change buffer (and therefore an on_new_epoch function).
What's special is that it's defined in the move-stdlib package (while reconfiguration.move and all the other configs I've identified so far are in the aptos-framework package). For all the other configs' on_new_epoch(), we can protect the function by making it public(friend) and declaring reconfiguration.move a friend.
But for Features, this protection is not available.

The workaround I'm thinking of (sketched after this list):

  • Leave this function public.
  • Introduce a flag struct ExtractPermit.
  • Any system txn that may invoke reconfigure should put a 0x1::ExtractPermit in place before the invocation, and remove it after.
  • Inside config_for_next_epoch::extract, do the work only if the 0x1::ExtractPermit exists.
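
A sketch of that workaround, written as additions to the config_for_next_epoch sketch earlier in this thread (error codes and helper names are illustrative):

// Additions to std::config_for_next_epoch for the ExtractPermit workaround (sketch only).
const ENO_EXTRACT_PERMIT: u64 = 2;

/// Flag placed under 0x1 by a system txn right before it invokes reconfigure.
struct ExtractPermit has drop, key {}

public fun grant_extract_permit(framework: &signer) {
    assert!(signer::address_of(framework) == @std, error::permission_denied(ENOT_FRAMEWORK));
    move_to(framework, ExtractPermit {});
}

public fun revoke_extract_permit(framework: &signer) acquires ExtractPermit {
    let ExtractPermit {} = move_from<ExtractPermit>(signer::address_of(framework));
}

/// extract stays public, but only does the work while the permit exists under 0x1.
public fun extract<C: store>(): C acquires PendingConfig {
    assert!(exists<ExtractPermit>(@std), error::permission_denied(ENO_EXTRACT_PERMIT));
    let PendingConfig { payload } = move_from<PendingConfig<C>>(@std);
    payload
}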

@zjma (Contributor Author) Oct 11, 2023:

Potential alternative (sketched after this list):

  • Have a clone of reconfigure() that takes a signer: reconfigure_v2(account: &signer).
  • Update all reconfigure callers to use reconfigure_v2.
  • Have features::on_new_epoch(account: &signer) and assert the signer is 0x0/0x1.
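
A minimal sketch of that alternative for features.move, with the 0x0/0x1 check spelled out (the error code is a hypothetical value, and the fragment assumes the buffer helpers sketched earlier in this thread):

// Sketch of the signer-gated alternative in std::features.
const ENOT_VM_OR_FRAMEWORK: u64 = 3; // hypothetical error code

public fun on_new_epoch(account: &signer) acquires Features {
    let addr = signer::address_of(account);
    // Only the VM (0x0) or the framework (0x1) may apply buffered feature flags.
    assert!(addr == @0x0 || addr == @std, error::permission_denied(ENOT_VM_OR_FRAMEWORK));
    if (config_for_next_epoch::pending_exists<Features>()) {
        let features = config_for_next_epoch::extract<Features>();
        *borrow_global_mut<Features>(@std) = features;
    };
}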

struct UpsertLock has copy, drop, key {
seq_num: u64,
locked: bool,
}
Contributor Author (@zjma):

This comes from the requested behavior: when a slow reconfiguration is in progress, on-chain config changes should be rejected.

Ideally we'd define a struct T to represent the lock, put a 0x1::T when starting DKG, and remove it when finishing. But a reconfiguration can be triggered by either 0x1 (proposal) or 0x0 (epoch expiry), and 0x0 can't put things under 0x1. Two solutions I can imagine (option 2 is sketched after this list):

  1. Define struct SomeLock { locked: bool } and run an initialization script to publish 0x1::SomeLock. Later, both 0x0 and 0x1 can read/write the locked field.
  2. (currently implemented in the PR) Have 0x0 and 0x1 each maintain a struct under their own address, with a seq_num so we know which was written last. When reading the lock state, use the most recent write.
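
A sketch of how option 2 could read the lock state on top of the UpsertLock struct above (helper names are illustrative, not the PR's actual functions):

// Sketch: both 0x0 and 0x1 keep an UpsertLock; the larger seq_num is the latest write.
fun lock_state(addr: address): (u64, bool) acquires UpsertLock {
    if (exists<UpsertLock>(addr)) {
        let lock = borrow_global<UpsertLock>(addr);
        (lock.seq_num, lock.locked)
    } else {
        (0, false)
    }
}

public fun upserts_locked(): bool acquires UpsertLock {
    let (seq_0, locked_0) = lock_state(@0x0);
    let (seq_1, locked_1) = lock_state(@0x1);
    // Whoever wrote last (higher seq_num) determines the current lock state.
    if (seq_0 >= seq_1) { locked_0 } else { locked_1 }
}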


@github-actions

✅ Forge suite realistic_env_max_load success on 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b

two traffics test: inner traffic : committed: 7789 txn/s, latency: 5042 ms, (p50: 4800 ms, p90: 6000 ms, p99: 10500 ms), latency samples: 3365080
two traffics test : committed: 100 txn/s, latency: 2243 ms, (p50: 2100 ms, p90: 2700 ms, p99: 5200 ms), latency samples: 1760
Latency breakdown for phase 0: ["QsBatchToPos: max: 0.210, avg: 0.200", "QsPosToProposal: max: 0.177, avg: 0.160", "ConsensusProposalToOrdered: max: 0.654, avg: 0.617", "ConsensusOrderedToCommit: max: 0.548, avg: 0.511", "ConsensusProposalToCommit: max: 1.182, avg: 1.128"]
Max round gap was 1 [limit 4] at version 1458961. Max no progress secs was 4.22366 [limit 10] at version 1458961.
Test Ok

@github-actions

❌ Forge suite framework_upgrade failure on aptos-node-v1.5.1 ==> 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b

Compatibility test results for aptos-node-v1.5.1 ==> 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b (PR)
Upgrade the nodes to version: 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b
Test Failed: API error: Unknown error error sending request for url (http://aptos-node-3-validator.forge-framework-upgrade-pr-10328.svc:8080/v1/estimate_gas_price): error trying to connect: dns error: failed to lookup address information: Name or service not known

Stack backtrace:
   0: aptos_release_builder::validate::execute_release::{{closure}}
             at ./aptos-move/aptos-release-builder/src/validate.rs:433:22
      aptos_release_builder::validate::validate_config_and_generate_release::{{closure}}
             at ./aptos-move/aptos-release-builder/src/validate.rs:495:6
      aptos_release_builder::validate::validate_config::{{closure}}
             at ./aptos-move/aptos-release-builder/src/validate.rs:481:80
      tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/park.rs:283:63
      tokio::runtime::coop::with_budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:107:5
      tokio::runtime::coop::budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:73:5
      tokio::runtime::park::CachedParkThread::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/park.rs:283:31
   1: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/blocking.rs:66:9
      tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/mod.rs:87:13
      tokio::runtime::context::runtime::enter_runtime
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/runtime.rs:65:16
   2: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/mod.rs:86:9
      tokio::runtime::runtime::Runtime::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/runtime.rs:313:50
   3: <aptos_testcases::framework_upgrade::FrameworkUpgrade as aptos_forge::interface::network::NetworkTest>::run
             at ./testsuite/testcases/src/framework_upgrade.rs:97:9
   4: aptos_forge::runner::Forge<F>::run::{{closure}}
             at ./testsuite/forge/src/runner.rs:598:42
      aptos_forge::runner::run_test
             at ./testsuite/forge/src/runner.rs:666:11
      aptos_forge::runner::Forge<F>::run
             at ./testsuite/forge/src/runner.rs:598:30
   5: forge::run_forge
             at ./testsuite/forge-cli/src/main.rs:414:11
      forge::main
             at ./testsuite/forge-cli/src/main.rs:340:21
   6: core::ops::function::FnOnce::call_once
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ops/function.rs:250:5
      std::sys_common::backtrace::__rust_begin_short_backtrace
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/sys_common/backtrace.rs:135:18
   7: std::rt::lang_start::{{closure}}
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:166:18
   8: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ops/function.rs:284:13
      std::panicking::try::do_call
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:500:40
      std::panicking::try
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:464:19
      std::panic::catch_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panic.rs:142:14
      std::rt::lang_start_internal::{{closure}}
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:148:48
      std::panicking::try::do_call
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:500:40
      std::panicking::try
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:464:19
      std::panic::catch_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panic.rs:142:14
      std::rt::lang_start_internal
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:148:20
   9: main
  10: __libc_start_main
  11: _start
Trailing Log Lines:
      std::panicking::try
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:464:19
      std::panic::catch_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panic.rs:142:14
      std::rt::lang_start_internal
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:148:20
   9: main
  10: __libc_start_main
  11: _start


Swarm logs can be found here: See fgi output for more information.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ApiError: namespaces "forge-framework-upgrade-pr-10328" not found: NotFound (ErrorResponse { status: "Failure", message: "namespaces \"forge-framework-upgrade-pr-10328\" not found", reason: "NotFound", code: 404 })

Caused by:
    namespaces "forge-framework-upgrade-pr-10328" not found: NotFound

Stack backtrace:
   0: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/result.rs:1961:27
      aptos_forge::backend::k8s::cluster_helper::delete_k8s_cluster::{{closure}}
             at ./testsuite/forge/src/backend/k8s/cluster_helper.rs:289:13
   1: aptos_forge::backend::k8s::cluster_helper::uninstall_testnet_resources::{{closure}}
             at ./testsuite/forge/src/backend/k8s/cluster_helper.rs:399:48
   2: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/park.rs:283:63
      tokio::runtime::coop::with_budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:107:5
      tokio::runtime::coop::budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:73:5
      tokio::runtime::park::CachedParkThread::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/park.rs:283:31
   3: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/blocking.rs:66:9
      tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/mod.rs:87:13
      tokio::runtime::context::runtime::enter_runtime
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/runtime.rs:65:16
   4: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/mod.rs:86:9
      tokio::runtime::runtime::Runtime::block_on
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/runtime.rs:313:50
   5: <aptos_forge::backend::k8s::swarm::K8sSwarm as core::ops::drop::Drop>::drop
             at ./testsuite/forge/src/backend/k8s/swarm.rs:674:13
   6: core::ptr::drop_in_place<aptos_forge::backend::k8s::swarm::K8sSwarm>
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ptr/mod.rs:497:1
   7: core::ptr::drop_in_place<alloc::boxed::Box<dyn aptos_forge::interface::swarm::Swarm>>
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ptr/mod.rs:497:1
   8: aptos_forge::runner::Forge<F>::run
             at ./testsuite/forge/src/runner.rs:611:9
   9: forge::run_forge
             at ./testsuite/forge-cli/src/main.rs:414:11
      forge::main
             at ./testsuite/forge-cli/src/main.rs:340:21
  10: core::ops::function::FnOnce::call_once
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ops/function.rs:250:5
      std::sys_common::backtrace::__rust_begin_short_backtrace
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/sys_common/backtrace.rs:135:18
  11: std::rt::lang_start::{{closure}}
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:166:18
  12: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ops/function.rs:284:13
      std::panicking::try::do_call
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:500:40
      std::panicking::try
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:464:19
      std::panic::catch_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panic.rs:142:14
      std::rt::lang_start_internal::{{closure}}
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:148:48
      std::panicking::try::do_call
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:500:40
      std::panicking::try
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:464:19
      std::panic::catch_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panic.rs:142:14
      std::rt::lang_start_internal
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/rt.rs:148:20
  13: main
  14: __libc_start_main
  15: _start', testsuite/forge/src/backend/k8s/swarm.rs:676:18
stack backtrace:
   0: rust_begin_unwind
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/std/src/panicking.rs:593:5
   1: core::panicking::panic_fmt
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/panicking.rs:67:14
   2: core::result::unwrap_failed
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/result.rs:1651:5
   3: core::result::Result<T,E>::unwrap
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/result.rs:1076:23
   4: <aptos_forge::backend::k8s::swarm::K8sSwarm as core::ops::drop::Drop>::drop
             at ./testsuite/forge/src/backend/k8s/swarm.rs:674:13
   5: core::ptr::drop_in_place<aptos_forge::backend::k8s::swarm::K8sSwarm>
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ptr/mod.rs:497:1
   6: core::ptr::drop_in_place<alloc::boxed::Box<dyn aptos_forge::interface::swarm::Swarm>>
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ptr/mod.rs:497:1
   7: aptos_forge::runner::Forge<F>::run
             at ./testsuite/forge/src/runner.rs:611:9
   8: forge::run_forge
             at ./testsuite/forge-cli/src/main.rs:414:11
   9: forge::main
             at ./testsuite/forge-cli/src/main.rs:340:21
  10: core::ops::function::FnOnce::call_once
             at /rustc/d5c2e9c342b358556da91d61ed4133f6f50fc0c3/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Debugging output:


@github-actions

✅ Forge suite compat success on aptos-node-v1.6.2 ==> 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b

Compatibility test results for aptos-node-v1.6.2 ==> 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b (PR)
1. Check liveness of validators at old version: aptos-node-v1.6.2
compatibility::simple-validator-upgrade::liveness-check : committed: 4442 txn/s, latency: 7125 ms, (p50: 6500 ms, p90: 10200 ms, p99: 17100 ms), latency samples: 182160
2. Upgrading first Validator to new version: 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b
compatibility::simple-validator-upgrade::single-validator-upgrade : committed: 1642 txn/s, latency: 17923 ms, (p50: 19500 ms, p90: 23900 ms, p99: 24800 ms), latency samples: 85420
3. Upgrading rest of first batch to new version: 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b
compatibility::simple-validator-upgrade::half-validator-upgrade : committed: 1760 txn/s, latency: 16256 ms, (p50: 19100 ms, p90: 22200 ms, p99: 22700 ms), latency samples: 91540
4. upgrading second batch to new version: 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b
compatibility::simple-validator-upgrade::rest-validator-upgrade : committed: 3627 txn/s, latency: 8849 ms, (p50: 9600 ms, p90: 12600 ms, p99: 12900 ms), latency samples: 145080
5. check swarm health
Compatibility test for aptos-node-v1.6.2 ==> 0c7f6ce51a4a8f522da9c6c54521a25d23ab8e0b passed
Test Ok

github-actions bot commented Dec 3, 2023

This issue is stale because it has been open 45 days with no activity. Remove the stale label, comment or push a commit - otherwise this will be closed in 15 days.

Labels: CICD:run-e2e-tests, Stale