chore: Added cryptography section to yellow paper #3647
@@ -0,0 +1,8 @@
{
  "label": "Cryptography",
  "position": 10,
  "link": {
    "type": "generated-index",
    "description": "Aztec cryptography tech stack"
  }
}

@@ -0,0 +1,181 @@

# Honk targets and win conditions

## Introduction & context

Aztec's cryptography tech stack and its associated implementation form an open-ended project with potential for many enhancements, optimisations and scope creep.

This document is designed to definitively answer the following questions:

1. What are the metrics we care about when measuring our cryptography components?
2. What are the minimum satisfiable values for these metrics?
3. What are the aspirational values for these metrics?

# Important Metrics

The following is a list of the relevant properties that affect the performance of the Aztec network:

* Size of a user transaction (in kb)
* Time to generate a user transaction proof
* Memory required to generate a user transaction proof
* Time to generate an Aztec Virtual Machine proof
* Memory required to generate an Aztec Virtual Machine proof
* Time to compute a 2x2 rollup proof
* Memory required to compute a 2x2 rollup proof

<!-- We can break these properties down into metrics linked to specific cryptographic components:

* Size of Goblin Plonk proofs
* Size of Honk proofs
* Honk prover time
* Goblin Plonk prover time
* Protogalaxy recursion -->

"MVP" = minimum standards that we can go to main-net with. | ||
|
||
Note: gb = gigabytes (not gigabits, gibibits or gibibytes)

| metric | how to measure | MVP (10tps) | ideal (100tps) |
| --- | --- | --- | --- |
| proof size | total size of a user tx incl. goblin plonk proofs | 80kb | 8kb |
| prover time | a baseline "medium complexity" transaction (in web browser); full description further down | 1 min | 10 seconds |
| verifier time | how long it takes the verifier to check a proof (incl. grumpkin IPA MSMs) | 20ms | 1ms |
| client memory consumption | fold 2^19 circuits into an accumulator an arbitrary number of times | 4gb | 1gb |
| size of the kernel circuit | number of gates | 2^17 | 2^15 |
| Aztec Virtual Machine prover time | 1 million VM step circuit | 60 seconds | 6 seconds |
| Aztec Virtual Machine memory consumption | 1 million VM step circuit | 128gb | 16gb |
| 2x2 rollup proving time | 1 2x2 rollup proof | 7.4 seconds | 0.74 seconds |
| 2x2 rollup memory consumption | 1 2x2 rollup proof | 128gb | 16gb |

To come up with the above estimates, we are targeting 10 transactions per second for the MVP and 100 tps for the "ideal" case. We assume both block producers and rollup Provers have access to 128-core machines with 128gb of RAM. Additionally, we assume that the various processes required to produce a block consume the following:

| process | percent of block production time allocated to process |
| --- | --- |
| transaction validation | 10% |
| block building (tx simulation) | 20% |
| public VM proof construction time | 20% |
| rollup prover time | 40% |
| UltraPlonk proof compression time | 10% |

These are very rough estimates that could use further evaluation and validation!

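To make these percentages concrete, here is a minimal sketch (our own illustration, not part of the spec; the `block_time_budget` helper and the example block size are assumptions) that converts the budget table into wall-clock time per process:

```python
# Sketch: given a block of 2**d transactions at 10 tps, split the block
# production time according to the budget table above.
# The process names and percentages are taken from the table; everything
# else is an illustrative assumption.

BUDGET = {
    "transaction validation": 0.10,
    "block building (tx simulation)": 0.20,
    "public VM proof construction": 0.20,
    "rollup prover time": 0.40,
    "UltraPlonk proof compression": 0.10,
}

def block_time_budget(block_size: int, tps: float = 10.0) -> dict:
    """Seconds available to each process for one block at the given tps."""
    block_time = block_size / tps  # seconds to produce one block
    return {process: share * block_time for process, share in BUDGET.items()}

for process, seconds in block_time_budget(2**11).items():
    print(f"{process}: {seconds:.1f}s")
```
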
### Proof size

The MVP targets a tx throughput of 10 tx per second.

Each Aztec node (not sequencer/prover, just a regular node that is sending transactions) needs to download `10*proof_size` bytes of data per second to keep track of the mempool. However, this is the *best case* scenario.

More practically, the data throughput of a p2p network will be less than the bandwidth of participants due to network coordination costs.
As a rough heuristic, we assume that network bandwidth will be 10% of p2p user bandwidth.
NOTE: can we find some high-quality information about p2p network throughput relative to the data consumed by p2p node operators?

As a result, the MVP data throughput could scale up to `100 * proof_size` bytes of data per second.

For an MVP we wish to target a maximum bandwidth of 8MB per second (i.e. a good broadband connection). This gives us a network bandwidth of 0.8MB/s.

This sets the proof size limit to 819.2 kb per second across 10 transactions => roughly 82 kilobytes of data per transaction.

As a rough estimate, we can assume the non-proof tx data will be irrelevant compared to 82kb, so we target a proof size of $80$ kilobytes for the MVP.

To support 100 transactions per second we would require a proof size of $8$ kilobytes.

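The proof-size arithmetic above can be sanity-checked with a short sketch; the bandwidth figure, the 10% efficiency heuristic and the 10 tps target are the assumptions already stated in this section:

```python
# Sketch: derive the per-transaction proof size budget from the bandwidth
# assumptions above (8MB/s user bandwidth, 10% p2p efficiency, 10 tps).
# All constants are assumptions from the surrounding text, not measurements.

USER_BANDWIDTH_KB_S = 8 * 1024  # 8MB/s expressed in kb/s
P2P_EFFICIENCY = 0.10           # network throughput as a fraction of user bandwidth
TPS = 10                        # MVP target transactions per second

network_bandwidth_kb_s = USER_BANDWIDTH_KB_S * P2P_EFFICIENCY  # 819.2 kb/s
proof_size_budget_kb = network_bandwidth_kb_s / TPS            # ~82 kb per tx

print(f"network bandwidth: {network_bandwidth_kb_s:.1f} kb/s")
print(f"proof size budget: {proof_size_budget_kb:.1f} kb per transaction")
```
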
### Prover time

The critical UX factor. To measure prover time for a transaction, we must first define a baseline transaction we wish to measure and the execution environment of the Prover.

As we build and refine our MVP, we want to avoid optimising for the best-case scenario (i.e. the most basic tx type, a token transfer). Instead we want to ensure that transactions of "moderate" complexity are possible with consumer hardware.

As a north star, we consider a private swap and transpose it into an Aztec contract.

To perform a private swap, the following must occur:

1. Validate the user's account contract (1 kernel call)
2. Call a swap contract (1 kernel call)
3. The swap contract will initiate `transfer` calls on two token contracts (2 kernel calls)
4. A fee must be paid via our fee abstraction spec (1 kernel call)
5. A final "cleanup" proof is generated that evaluates state reads and processes the queues that have been constructed by previous kernel circuits (1 kernel call + 1 function call; the cleanup proof)

In total we have 6 kernel calls and 6 function calls.

We can further abstract the above by making the following assumptions:

1. The kernel circuit is $2^{17}$ constraints
2. The average number of constraints per function call is $2^{17}$ constraints, but the first function called has $2^{19}$ constraints

Defining the first function to cost $2^{19}$ constraints is a conservative assumption, because the kernel circuit can support functions with a max of $2^{19}$ constraints. We want to ensure that our benchmarks (and possible optimisations) capture the "heavy function" case and that we don't just optimise for lightweight functions.

#### Summary of what we are measuring to capture Prover time

1. A mock kernel circuit has a size of $2^{17}$ constraints and folds *two* Honk instances into an accumulator (the prev. kernel and the function being called)
2. The Prover must prove 5 mock function circuit proofs of size $2^{17}$ and one mock function proof of size $2^{19}$
3. The Prover must iteratively prove 6 mock kernel circuit proofs

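As a rough illustration of the total prover workload this benchmark implies, the following sketch sums the constraint counts listed above (treating summed constraints as a proxy for prover work is our own simplification, not a claim about Honk's actual cost model):

```python
# Sketch: total constraints proved in the baseline "private swap" benchmark.
# Circuit sizes come from the assumptions above; summed constraints as a
# proxy for prover work is a simplification for illustration only.

KERNEL_SIZE = 2**17          # mock kernel circuit constraints
SMALL_FUNCTION_SIZE = 2**17  # typical function circuit constraints
LARGE_FUNCTION_SIZE = 2**19  # first (heaviest) function circuit constraints

function_circuits = [LARGE_FUNCTION_SIZE] + [SMALL_FUNCTION_SIZE] * 5
kernel_circuits = [KERNEL_SIZE] * 6

total = sum(function_circuits) + sum(kernel_circuits)
print(f"total constraints proved: {total} (~2^{total.bit_length() - 1})")
```
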
#### Execution environment

For the MVP we can assume the user has reasonable hardware. For this purpose we use a 2-year-old MacBook with 16gb RAM. The proof must be generated in a web browser.

#### Performance targets

For an MVP, we target a 1 minute proof generation time. This is a substantial amount of time to ask a user to wait, and we are measuring on good hardware.

In an ideal world, a 10 second proof generation time would be much better for UX.

### Verifier time

This matters because verifying a transaction is effectively free work being performed by sequencers and network nodes that propagate txns to the mempool. If verification time becomes too large it opens up potential DDoS attacks.

If we reserve 10% of the block production time for verifying user proofs, at 10 transactions per second this gives us 0.01s per transaction, i.e. 10ms per proof.

If the block producer has access to more than one physical machine that they can use to parallelise verification, we can extend the maximum tolerable verification time. For an MVP that requires 20ms to verify each proof, each block producer would require at least 2 physical machines to successfully build blocks.

100tps with one physical machine would require a verification time of 1ms per proof.

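A minimal sketch of this budget arithmetic, assuming the 10% verification reservation above and treating extra machines as a linear scaling of the budget (the `max_verification_time_ms` helper is hypothetical):

```python
# Sketch: maximum tolerable verification time per proof, given the share of
# block production time reserved for verification, the tps target, and the
# number of machines verifying in parallel (all assumptions from the text).

def max_verification_time_ms(tps: float, reserved_share: float = 0.10,
                             machines: int = 1) -> float:
    per_tx_budget_s = reserved_share / tps * machines
    return per_tx_budget_s * 1000

print(max_verification_time_ms(tps=10))              # 10ms (MVP, 1 machine)
print(max_verification_time_ms(tps=10, machines=2))  # 20ms (MVP, 2 machines)
print(max_verification_time_ms(tps=100))             # 1ms  (ideal, 1 machine)
```
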
### Memory consumption

This is *critical*. Users can tolerate slow proofs, but if Honk consumes too much memory, a user cannot make a proof at all.

Safari on iPhone will purge tabs that consume more than 1gb of RAM. The WASM memory cap is 4gb, which defines the upper limit for an MVP.

### Kernel circuit size

Not a critical metric, but the prover time + prover memory metrics are predicated on a kernel circuit costing about $2^{17}$ constraints!

### AVM Prover time

Our goal is to hit main-net with a network that can support 10 transactions per second. We need to estimate how many VM computation steps will be needed per transaction to determine the required speed of the VM Prover. The following uses very conservative estimates, as these quantities are difficult to predict.

An Ethereum block consists of approximately 1,000 transactions, with a block gas limit of roughly 10 million gas. Basic computational steps in the Ethereum Virtual Machine consume 3 gas. If the entire block gas limit were consumed by basic computation steps (not true, but let's assume for a moment), this implies that 1,000 transactions consume 3.33 million computation steps, i.e. 10 transactions per second would require roughly 33,000 steps per second and 3,330 steps per transaction.

An AVM circuit with 1 million steps can therefore accommodate approximately 300 "typical" transactions. If we budget 20% of the block time to constructing AVM public function proofs, proof construction time must therefore be approximately 6 seconds to be able to prove all AVM programs in a block and achieve 10 tps.

However, with device parallelisation these numbers can be increased substantially. Assuming the Prover network has access to 10 machines, this budget scales to 60 seconds.

Note: this measurement assumes we can evaluate multiple public VM function calls in a single VM execution trace.

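The step-count arithmetic above can be reproduced with a short sketch; the Ethereum block figures and the 20% AVM budget are the assumptions stated in this section:

```python
# Sketch: reproduce the AVM prover-time budget from the Ethereum-derived
# assumptions above (1,000 txs per ~10M-gas block, 3 gas per basic EVM step).

BLOCK_GAS = 10_000_000
TXS_PER_ETH_BLOCK = 1_000
GAS_PER_STEP = 3
TPS = 10
AVM_CIRCUIT_STEPS = 1_000_000
AVM_TIME_SHARE = 0.20  # fraction of block time budgeted to AVM proving

steps_per_tx = BLOCK_GAS / GAS_PER_STEP / TXS_PER_ETH_BLOCK   # ~3,333
txs_per_avm_circuit = AVM_CIRCUIT_STEPS / steps_per_tx        # ~300
block_time_covered_s = txs_per_avm_circuit / TPS              # ~30s of txs
proving_budget_s = block_time_covered_s * AVM_TIME_SHARE      # ~6s

print(f"steps per tx:        {steps_per_tx:.0f}")
print(f"txs per 1M-step AVM: {txs_per_avm_circuit:.0f}")
print(f"proving budget:      {proving_budget_s:.1f}s per AVM proof")
```
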
### AVM Memory consumption

A large AWS instance can provide 128gb of memory, which puts an upper limit on AVM RAM consumption. Ideally, consumer-grade hardware (i.e. 16gb) can be used to generate AVM proofs.

### 2x2 rollup proving time

For a rollup block containing $2^d$ transactions, we need to compute 2x2 rollup proofs across $d$ layers (i.e. $2^{d-1}$ 2x2 proofs, followed by $2^{d-2}$ proofs, followed by... etc. down to requiring 1 2x2 proof). To hit 10tps, we must produce 1 block in $\frac{2^d}{10}$ seconds.

Note: this excludes network coordination costs, latency costs, block construction costs, public VM proof construction costs (which must be computed before the 2x2 rollup proofs), and the cost to compute the final UltraPlonk proof.

To accommodate the above costs, we assume we can budget 40% of block production time towards making proofs. Given these constraints, the following table describes maximum allowable proof construction times for a selection of block sizes.

| block size | number of successive 2x2 rollup proofs | number of parallel Prover machines required for base layer proofs | time required to construct a rollup proof |
| --- | --- | --- | --- |
| $1,024$ | $10$ | $512$ | 4.1s |
| $2,048$ | $11$ | $1,024$ | 7.4s |
| $4,096$ | $12$ | $2,048$ | 13.6s |
| $8,192$ | $13$ | $4,096$ | 25.2s |
| $16,384$ | $14$ | $8,192$ | 46.8s |

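For reference, a sketch that reproduces the table (up to rounding), assuming 10 tps, the 40% proving budget above, and an even split of the per-block proving budget across the $d$ sequential layers:

```python
# Sketch: reproduce the rollup-proving-time table above. Assumes 10 tps,
# a 40% proving budget, and the per-block budget split evenly across the
# d sequential layers of 2x2 rollup proofs. Matches the table up to rounding.

TPS = 10
PROVING_BUDGET = 0.40

def max_rollup_proof_time_s(d: int) -> float:
    block_time_s = 2**d / TPS  # seconds available to produce one block
    return PROVING_BUDGET * block_time_s / d

for d in range(10, 15):
    block_size = 2**d
    base_layer_machines = 2**(d - 1)  # one machine per base-layer 2x2 proof
    print(f"{block_size:>6} txs | {d} layers | {base_layer_machines:>5} machines "
          f"| {max_rollup_proof_time_s(d):.1f}s per proof")
```
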
We must also define the maximum number of physical machines we can reasonably expect to be constructing proofs across the Prover network. If we assume that $1,024$ machines are available, this caps the MVP proof construction time at 7.4 seconds.

Supporting a proof construction time of 4.1s would enable us to reduce the minimum hardware requirements for the Prover network to 512 physical machines.

### 2x2 rollup memory consumption

Same rationale as the public VM proof construction time.