chore: fix all typos #1626

Merged 5 commits on Oct 16, 2024
28 changes: 25 additions & 3 deletions .github/workflows/pr.yml
@@ -131,7 +131,13 @@ jobs:

examples:
name: Examples
runs-on: [runs-on, runner=64cpu-linux-x64, spot=false, "run-id=${{ github.run_id }}"]
runs-on:
[
runs-on,
runner=64cpu-linux-x64,
spot=false,
"run-id=${{ github.run_id }}",
]
env:
CARGO_NET_GIT_FETCH_WITH_CLI: "true"
steps:
@@ -266,7 +272,7 @@ jobs:

# low-memory:
# name: Low Memory
# strategy:
# strategy:
# matrix:
# mem_limit: [16, 32, 64]
# runs-on:
@@ -289,7 +295,7 @@ jobs:
# - name: Install SP1 toolchain
# run: |
# curl -L https://sp1.succinct.xyz | bash
# ~/.sp1/bin/sp1up
# ~/.sp1/bin/sp1up
# ~/.sp1/bin/cargo-prove prove --version

# - name: Install SP1 CLI
@@ -382,3 +388,19 @@ jobs:
# AWS_SUBNET_ID: "${{ secrets.AWS_SUBNET_ID }}"
# AWS_SG_ID: "${{ secrets.AWS_SG_ID }}"
# GH_PAT: "${{ secrets.GH_PAT }}"

typos:
name: Spell Check
runs-on: ubuntu-latest
steps:
- name: Checkout Actions Repository
uses: actions/checkout@v4

- name: Check all typos
uses: crate-ci/typos@master
with:
write_changes: true

- uses: getsentry/action-git-diff-suggestions@main
with:
message: typos
5 changes: 5 additions & 0 deletions Cargo.toml
@@ -132,3 +132,8 @@ p3-bn254-fr = "0.1.4-succinct"
# p3-uni-stark = { path = "../Plonky3/uni-stark" }
# p3-maybe-rayon = { path = "../Plonky3/maybe-rayon" }
# p3-bn254-fr = { path = "../Plonky3/bn254-fr" }

[workspace.metadata.typos]
# TODO: Fix in next version since CommitCommitedValuesDigest is retained since it's present in constraints.json
default.extend-ignore-re = ["Jo-Philipp Wich", "SubEIN", "DivEIN", "CommitCommitedValuesDigest"]
default.extend-ignore-words-re = ["(?i)groth", "TRE"]
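As a rough illustration of what the `extend-ignore-words-re` patterns above do (this is a Python stand-in, not the Rust regex engine `typos` actually uses, and the exact word-splitting rules here are simplified): the checker splits identifiers into words, and any word fully matching an ignore pattern is skipped, so `(?i)groth` keeps the `Groth` in names like `Groth16` from being flagged.

```python
import re

# Hypothetical mirror of the `extend-ignore-words-re` list in the config above.
IGNORE_WORDS_RE = [re.compile(r"(?i)groth"), re.compile(r"TRE")]

def is_ignored(word: str) -> bool:
    """Return True if a split-out word fully matches any ignore pattern."""
    return any(p.fullmatch(word) for p in IGNORE_WORDS_RE)
```

With this sketch, `is_ignored("Groth")` and `is_ignored("groth")` hold (the `(?i)` flag makes the match case-insensitive), while an ordinary word like `proof` is still checked.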
2 changes: 1 addition & 1 deletion book/introduction.md
@@ -16,7 +16,7 @@ SP1 has undergone multiple audits from leading ZK security firms and is currentl

## The future of ZK is writing normal code

Zero-knowledge proofs (ZKPs) are one of the most critical technologies to blockchain scaling, interoperability and privacy. But, historically building ZKP systems was extrememly complicated--requiring large teams with specialized cryptography expertise and taking years to go to production.
Zero-knowledge proofs (ZKPs) are one of the most critical technologies to blockchain scaling, interoperability and privacy. But, historically building ZKP systems was extremely complicated--requiring large teams with specialized cryptography expertise and taking years to go to production.

SP1 provides a performant, general-purpose zkVM that enables **any developer** to use ZKPs by writing normal code (in Rust), and get cheap and fast proofs. SP1 will enable ZKPs to become mainstream, introducing a new era of verifiability for all of blockchain infrastructure and beyond.

4 changes: 2 additions & 2 deletions book/writing-programs/inputs-and-outputs.md
@@ -34,7 +34,7 @@ sp1_zkvm::io::commit::<u64>(&b);
sp1_zkvm::io::commit::<String>(&c);
```

Note that `T` must implement the `Serialize` and `Deserialize` trait. If you want to write bytes directly, you can also use `sp1_zkvm::io::write_slice` method:
Note that `T` must implement the `Serialize` and `Deserialize` trait. If you want to write bytes directly, you can also use `sp1_zkvm::io::commit_slice` method:

```rust,noplayground
let mut my_slice = [0_u8; 32];
@@ -46,7 +46,7 @@ sp1_zkvm::io::commit_slice(&my_slice);
Typically, you can implement the `Serialize` and `Deserialize` traits using a simple derive macro on a struct.

```rust,noplayground
use serde::{Serialize, de::Deserialize};
use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
struct MyStruct {
7 changes: 3 additions & 4 deletions book/writing-programs/proof-aggregation.md
@@ -11,9 +11,9 @@ Note that to verify an SP1 proof inside SP1, you must generate a "compressed" SP

### When to use aggregation

Note that by itself, SP1 can already prove arbitarily large programs by chunking the program's execution into multiple "shards" (contiguous batches of cycles) and generating proofs for each shard in parallel, and then recursively aggregating the proofs. Thus, aggregation is generally **not necessary** for most use-cases, as SP1's proving for large programs is already parallelized. However, aggregation can be useful for aggregating computations that require more than the zkVM's limited (~2GB) memory or for aggregating multiple SP1 proofs from different parties into a single proof to save on onchain verification costs.
Note that by itself, SP1 can already prove arbitrarily large programs by chunking the program's execution into multiple "shards" (contiguous batches of cycles) and generating proofs for each shard in parallel, and then recursively aggregating the proofs. Thus, aggregation is generally **not necessary** for most use-cases, as SP1's proving for large programs is already parallelized. However, aggregation can be useful for aggregating computations that require more than the zkVM's limited (~2GB) memory or for aggregating multiple SP1 proofs from different parties into a single proof to save on onchain verification costs.

## Verifying Proofs inside the zkVM
## Verifying Proofs inside the zkVM

To verify a proof inside the zkVM, you can use the `sp1_zkvm::lib::verify::verify_proof` function.

@@ -48,12 +48,11 @@ let input_proof = client
let mut stdin = SP1Stdin::new();
stdin.write_proof(input_proof, input_vk);

// Generate a proof that will recusively verify / aggregate the input proof.
// Generate a proof that will recursively verify / aggregate the input proof.
let aggregation_proof = client
.prove(&aggregation_pk, stdin)
.compressed()
.run()
.expect("proving failed");

```

2 changes: 1 addition & 1 deletion crates/cli/src/commands/build_toolchain.rs
@@ -10,7 +10,7 @@ pub struct BuildToolchainCmd {}

impl BuildToolchainCmd {
pub fn run(&self) -> Result<()> {
// Get enviroment variables.
// Get environment variables.
let github_access_token = std::env::var("GITHUB_ACCESS_TOKEN");
let build_dir = std::env::var("SP1_BUILD_DIR");

8 changes: 4 additions & 4 deletions crates/cli/src/commands/trace.rs
@@ -86,7 +86,7 @@ fn strip_hash(name_with_hash: &str) -> String {
result
}

fn print_intruction_counts(
fn print_instruction_counts(
first_header: &str,
count_vec: Vec<(String, usize)>,
top_n: usize,
@@ -377,7 +377,7 @@ impl TraceCmd {
println!("\n\nTotal instructions in trace: {}", total_lines);
if !no_stack_counts {
println!("\n\n Instruction counts considering call graph");
print_intruction_counts(
print_instruction_counts(
"Function Name",
raw_counts,
top_n,
@@ -391,7 +391,7 @@ impl TraceCmd {
raw_counts.sort_by(|a, b| b.1.cmp(&a.1));
if !no_raw_counts {
println!("\n\n Instruction counts ignoring call graph");
print_intruction_counts(
print_instruction_counts(
"Function Name",
raw_counts,
top_n,
@@ -421,7 +421,7 @@ impl TraceCmd {
raw_counts.sort_by(|a, b| b.1.cmp(&a.1));
if let Some(f) = function_name {
println!("\n\n Stack patterns for function '{f}' ");
print_intruction_counts("Function Stack", raw_counts, top_n, strip_hashes, None);
print_instruction_counts("Function Stack", raw_counts, top_n, strip_hashes, None);
}
Ok(())
}
2 changes: 1 addition & 1 deletion crates/core/executor/src/context.rs
@@ -11,7 +11,7 @@ use crate::{
/// Context to run a program inside SP1.
#[derive(Clone, Default)]
pub struct SP1Context<'a> {
/// The registry of hooks invokable from inside SP1.
/// The registry of hooks invocable from inside SP1.
///
/// Note: `None` denotes the default list of hooks.
pub hook_registry: Option<HookRegistry<'a>>,
2 changes: 1 addition & 1 deletion crates/core/executor/src/events/alu.rs
@@ -10,7 +10,7 @@ use super::{create_alu_lookups, LookupId};
/// shard, opcode, operands, and other relevant information.
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct AluEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
4 changes: 2 additions & 2 deletions crates/core/executor/src/events/precompiles/ec.rs
@@ -47,7 +47,7 @@ pub struct EllipticCurveAddEvent {
/// This event is emitted when an elliptic curve doubling operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct EllipticCurveDoubleEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
@@ -68,7 +68,7 @@ pub struct EllipticCurveDoubleEvent {
/// This event is emitted when an elliptic curve point decompression operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct EllipticCurveDecompressEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
2 changes: 1 addition & 1 deletion crates/core/executor/src/events/precompiles/edwards.rs
@@ -11,7 +11,7 @@ use crate::events::{
/// This event is emitted when an edwards decompression operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct EdDecompressEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
@@ -12,7 +12,7 @@ pub(crate) const STATE_SIZE: usize = 25;
/// This event is emitted when a keccak-256 permutation operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct KeccakPermuteEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
@@ -10,7 +10,7 @@ use crate::events::{
/// This event is emitted when a SHA-256 compress operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct ShaCompressEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
@@ -10,7 +10,7 @@ use crate::events::{
/// This event is emitted when a SHA-256 extend operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct ShaExtendEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
2 changes: 1 addition & 1 deletion crates/core/executor/src/events/precompiles/uint256.rs
@@ -10,7 +10,7 @@ use crate::events::{
/// This event is emitted when a uint256 mul operation is performed.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct Uint256MulEvent {
/// The lookup identifer.
/// The lookup identifier.
pub lookup_id: LookupId,
/// The shard number.
pub shard: u32,
2 changes: 1 addition & 1 deletion crates/core/executor/src/executor.rs
@@ -52,7 +52,7 @@ pub struct Executor<'a> {
/// The maximum size of each shard.
pub shard_size: u32,

/// The maximimum number of shards to execute at once.
/// The maximum number of shards to execute at once.
pub shard_batch_size: u32,

/// The maximum number of cycles for a syscall.
2 changes: 1 addition & 1 deletion crates/core/executor/src/opcode.rs
@@ -6,7 +6,7 @@ use enum_map::Enum;
use p3_field::Field;
use serde::{Deserialize, Serialize};

/// An opcode (short for "operation code") specifies the operation to be perfomed by the processor.
/// An opcode (short for "operation code") specifies the operation to be performed by the processor.
///
/// In the context of the RISC-V ISA, an opcode specifies which operation (i.e., addition,
/// subtraction, multiplication, etc.) to perform on up to three operands such as registers,
2 changes: 1 addition & 1 deletion crates/core/executor/src/state.rs
@@ -29,7 +29,7 @@ pub struct ExecutionState {
/// + timestamp that each memory address was accessed.
pub memory: PagedMemory<MemoryRecord>,

/// The global clock keeps track of how many instrutions have been executed through all shards.
/// The global clock keeps track of how many instructions have been executed through all shards.
pub global_clk: u64,

/// The clock increments by 4 (possibly more in syscalls) for each instruction that has been
4 changes: 2 additions & 2 deletions crates/core/machine/CHANGELOG.md
@@ -210,7 +210,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- implement `isEqualWordOperation` and use it in `DivRemChip` ([#103](https://github.com/succinctlabs/sp1/pull/103))
- Implement `MSB` byte lookup op and use it in ALU tables ([#100](https://github.com/succinctlabs/sp1/pull/100))
- `IsZero` Operation ([#92](https://github.com/succinctlabs/sp1/pull/92))
- sha256 compress contraints ([#94](https://github.com/succinctlabs/sp1/pull/94))
- sha256 compress constraints ([#94](https://github.com/succinctlabs/sp1/pull/94))
- add4 operations ([#91](https://github.com/succinctlabs/sp1/pull/91))
- tracing, profiling, benchmarking ([#99](https://github.com/succinctlabs/sp1/pull/99))
- fix all cargo tests + add ci + rename curta to succinct ([#97](https://github.com/succinctlabs/sp1/pull/97))
@@ -370,7 +370,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Merge branch 'dev' into john/fix-main-regression
- fix program and permutation trace exports ([#887](https://github.com/succinctlabs/sp1/pull/887))
- refactor derive, serialize bounds ([#869](https://github.com/succinctlabs/sp1/pull/869))
- increase byte lookup channes ([#876](https://github.com/succinctlabs/sp1/pull/876))
- increase byte lookup channels ([#876](https://github.com/succinctlabs/sp1/pull/876))
- constraint selectors when is_real zero ([#873](https://github.com/succinctlabs/sp1/pull/873))
- state_mem validity ([#871](https://github.com/succinctlabs/sp1/pull/871))
- fixes ([#821](https://github.com/succinctlabs/sp1/pull/821))
4 changes: 2 additions & 2 deletions crates/core/machine/src/alu/add_sub/mod.rs
@@ -26,7 +26,7 @@ pub const NUM_ADD_SUB_COLS: usize = size_of::<AddSubCols<u8>>();

/// A chip that implements addition for the opcode ADD and SUB.
///
/// SUB is basically an ADD with a re-arrangment of the operands and result.
/// SUB is basically an ADD with a re-arrangement of the operands and result.
/// E.g. given the standard ALU op variable name and positioning of `a` = `b` OP `c`,
/// `a` = `b` + `c` should be verified for ADD, and `b` = `a` + `c` (e.g. `a` = `b` - `c`)
/// should be verified for SUB.
@@ -205,7 +205,7 @@ where
local.is_add + local.is_sub,
);

// Receive the arguments. There are seperate receives for ADD and SUB.
// Receive the arguments. There are separate receives for ADD and SUB.
// For add, `add_operation.value` is `a`, `operand_1` is `b`, and `operand_2` is `c`.
builder.receive_alu(
Opcode::ADD.as_field::<AB::F>(),
2 changes: 1 addition & 1 deletion crates/core/machine/src/bytes/columns.rs
@@ -49,6 +49,6 @@ pub struct BytePreprocessedCols<T> {
#[derive(Debug, Clone, Copy, AlignedBorrow)]
#[repr(C)]
pub struct ByteMultCols<T> {
/// The multiplicites of each byte operation.
/// The multiplicities of each byte operation.
pub multiplicities: [T; NUM_BYTE_OPS],
}
6 changes: 3 additions & 3 deletions crates/core/machine/src/cpu/air/memory.rs
@@ -206,7 +206,7 @@ impl CpuChip {

// Get the memory offset flags.
self.eval_offset_value_flags(builder, memory_columns, local);
// Compute the offset_is_zero flag. The other offset flags are already contrained by the
// Compute the offset_is_zero flag. The other offset flags are already constrained by the
// method `eval_memory_address_and_access`, which is called in
// `eval_memory_address_and_access`.
let offset_is_zero = AB::Expr::one()
@@ -271,7 +271,7 @@
) {
let mem_val = *memory_columns.memory_access.value();

// Compute the offset_is_zero flag. The other offset flags are already contrained by the
// Compute the offset_is_zero flag. The other offset flags are already constrained by the
// method `eval_memory_address_and_access`, which is called in
// `eval_memory_address_and_access`.
let offset_is_zero = AB::Expr::one()
@@ -286,7 +286,7 @@
+ mem_val[3] * memory_columns.offset_is_three;
let byte_value = Word::extend_expr::<AB>(mem_byte.clone());

// When the instruciton is LB or LBU, just use the lower byte.
// When the instruction is LB or LBU, just use the lower byte.
builder
.when(local.selectors.is_lb + local.selectors.is_lbu)
.assert_word_eq(byte_value, local.unsigned_mem_val.map(|x| x.into()));
2 changes: 1 addition & 1 deletion crates/core/machine/src/cpu/air/mod.rs
@@ -243,7 +243,7 @@ impl CpuChip {
/// Constraints related to the shard and clk.
///
/// This method ensures that all of the shard values are the same and that the clk starts at 0
/// and is transitioned apporpriately. It will also check that shard values are within 16 bits
/// and is transitioned appropriately. It will also check that shard values are within 16 bits
/// and clk values are within 24 bits. Those range checks are needed for the memory access
/// timestamp check, which assumes those values are within 2^24. See
/// [`MemoryAirBuilder::verify_mem_access_ts`].
2 changes: 1 addition & 1 deletion crates/core/machine/src/memory/columns.rs
@@ -45,7 +45,7 @@ pub struct MemoryAccessCols<T> {
/// timestamp.
pub diff_16bit_limb: T,

/// This column is the most signficant 8 bit limb of current access timestamp - prev access
/// This column is the most significant 8 bit limb of current access timestamp - prev access
/// timestamp.
pub diff_8bit_limb: T,
}
10 changes: 5 additions & 5 deletions crates/core/machine/src/memory/global.rs
@@ -178,10 +178,10 @@ pub struct MemoryInitCols<T> {
/// A witness to assert whether or not the previous address is zero.
pub is_prev_addr_zero: IsZeroOperation<T>,

/// Auxilary column, equal to `(1 - is_prev_addr_zero.result) * is_first_row`.
/// Auxiliary column, equal to `(1 - is_prev_addr_zero.result) * is_first_row`.
pub is_first_comp: T,

/// A flag to inidicate the last non-padded address. An auxiliary column needed for degree 3.
/// A flag to indicate the last non-padded address. An auxiliary column needed for degree 3.
pub is_last_addr: T,
}

@@ -240,7 +240,7 @@
);

// Assertion for increasing address. We need to make two types of less-than assertions,
// first we ned to assert that the addr < addr' when the next row is real. Then we need to
// first we need to assert that the addr < addr' when the next row is real. Then we need to
// make assertions with regards to public values.
//
// If the chip is a `MemoryInit`:
@@ -322,7 +322,7 @@
// Constraints related to register %x0.

// Register %x0 should always be 0. See 2.6 Load and Store Instruction on
// P.18 of the RISC-V spec. To ensure that, we will constain that the value is zero
// P.18 of the RISC-V spec. To ensure that, we will constrain that the value is zero
// whenever the `is_first_comp` flag is set to zero as well. This guarantees that the
// presence of this flag asserts the initialization/finalization of %x0 to zero.
//
@@ -334,7 +334,7 @@
}

// Make assertions for the final value. We need to connect the final valid address to the
// correspinding `last_addr` value.
// corresponding `last_addr` value.
let last_addr_bits = match self.kind {
MemoryChipType::Initialize => &public_values.last_init_addr_bits,
MemoryChipType::Finalize => &public_values.last_finalize_addr_bits,