# JAM Service for Validating Ethereum Optimistic Rollups #127

## Conversation


@sourabhniyogi commented Oct 26, 2024

JAM's rollup host function can be extended from Polkadot rollups to non-Polkadot rollups. We outline the design of one of a few JAM services capable of securing Ethereum optimistic rollups, such as OP Stack and ArbOS. This will set up rollup users to benefit from the validity, DA, finality, and high throughput of JAM services, alongside JAM's anticipated messaging services. These JAM services will enable rollup operators to choose JAM/Polkadot over Ethereum or other rollup service providers, or in addition to them, to improve user experience and security. A precise design for one basic service's DA refine-accumulate function is outlined, using tools already available.

Update: I'll rework this significantly based on rapid expert feedback in November.

sourabhniyogi and others added 9 commits October 9, 2023 04:35
Polkadot+Kusama should support the possibility of having up to 10-30% of its blockspace weight allocatable to EVM and WASM smart contracts. This is not for Polkadot to be yet another EVM chain, but specifically to:
1. support the use of Polkadot's Data Availability resources by non-Substrate [largely EVM] L2 stacks, which will bring in additional demand for DOT and, through those networks, new users and developers
2. (assuming CoreJam Work Packages have some direct relation to EVM contract + WASM contract *interpretation*) support Polkadot 2.0 experimentation in its transformation into a "map reduce" computer, and bring in new classes of CoreJam+CorePlay developers addressing synchronous/asynchronous composability
This proposal adapts the CoreJam architecture to EVM L2s, specifically utilizing OP Stack + Solidity instead of Polkadot/Substrate.

Almost all CoreJam concepts are retained, but CoreJam's Rust relay chain interfaces are replaced with Solidity/EVM contracts + OP Stack's Golang, situated in a "system chain".
JAM's *rollup host* function can be extended from Polkadot rollups to non-Polkadot rollups. We outline the design of a JAM service capable of securing Ethereum *optimistic* rollups, such as OP Stack and ArbOS. This service transforms optimistic rollups into cynical rollups, allowing users to benefit from the finality and high throughput of JAM, alongside JAM's anticipated messaging service. This JAM service's competitive advantage enables rollup operators to choose JAM/Polkadot over Ethereum or other rollup providers. A precise design for the service's refine-accumulate function is outlined, using tools already available.
The service maintains a summary of fully validated blocks of each chain in three service storage keys:
* the first block number $a$ validated with data fully available
* the last block number $b$ validated with data fully available
* a window parameter $w$, modeling the maximum $b-a$
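
To make this bookkeeping concrete, here is a minimal sketch in Rust, assuming contiguous block arrival; the type and field names are illustrative assumptions, not part of any JAM host interface:

```rust
/// Illustrative stand-in for the three service storage keys; a real
/// service would read and write these through JAM's storage host calls.
#[derive(Default)]
struct ValidationWindow {
    a: Option<u64>, // first block number validated with data fully available
    b: Option<u64>, // last block number validated with data fully available
    w: u64,         // window parameter: maximum allowed b - a
}

impl ValidationWindow {
    /// Record block `n` as validated with data fully available. Blocks
    /// must arrive contiguously, and the front of the window slides
    /// forward so that b - a never exceeds w.
    fn record_validated(&mut self, n: u64) {
        match (self.a, self.b) {
            (None, None) => {
                self.a = Some(n);
                self.b = Some(n);
            }
            (Some(a), Some(b)) => {
                // A gap would break the "fully available" guarantee
                // for the whole interval.
                assert_eq!(n, b + 1, "blocks must be contiguous");
                self.b = Some(n);
                if n - a > self.w {
                    self.a = Some(n - self.w);
                }
            }
            _ => unreachable!("a and b are always set together"),
        }
    }
}
```
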
@rphmeier (Contributor)

it'd be better to handle forks than to focus on a single "chain". The reason is that finality on Ethereum is slow: if you want to guarantee that a rollup batch won't be reorged out of the Ethereum chain, you would really have to wait for Ethereum's finality. Finality on Polkadot is fast and there is an excess of cores. So it's better to validate all batches from all forks on Ethereum, and get that information to the unfinalized Ethereum chain as fast as possible.

@sourabhniyogi (Author) commented Oct 26, 2024

I assume you mean the case where an optimistic-on-Ethereum/cynical-on-Polkadot rollup uses JAM to prove finality on both ETH+JAM, correct? Clearly, this "grandfathering" is a critically important use case for the CEXes-on-OP-Stack cases.

The "simple" solution I have had is that ETH L2 finality is based on Polkadot/JAM finality with the refinement context anchor block based on JAM alone. But if we get the Ethereum finality into JAM chain state (how exactly is not clear?), the historical_lookup / anchor block could include both ETH + Polkadot/JAM finality, and refine would be based on that.

The proof that an L2 has been finalized in JAM alone comes out via C3/Beta, which makes it over to Ethereum slowly. With work packages validating against all of ETH's unfinalized forks preemptively, and then a signal coming in from ETH as to which fork was finalized (how exactly is not clear, same question as above), we have a faster way to get the same answer of "finalized on both". There is an abundance of cores, so yes, the preemptive work package submission is super sensible. Thank you for this idea.

But I'm not sure how Ethereum finality gets into JAM chain state... what is the ideal plan?

@rphmeier (Contributor) commented Oct 28, 2024

> I assume you mean the case where an optimistic-on-Ethereum/cynical-on-Polkadot rollup uses JAM to prove finality on both ETH+JAM, correct?

No, I think you are confusing validity and finality. Finality is about consensus ordering of operations. Validity is what it sounds like. If you want to make Polkadot cores useful for Ethereum rollups, then Polkadot provides validity and Ethereum provides consensus ordering. So the ideal plan for handling Ethereum finality is to just sidestep it entirely.

My broader point is that for Polkadot to treat a rollup as a "chain", you'd either need permissioned sequencers or to follow the ordering of operations on the ETH chain, which is slow. To be clear, I think it's a bad idea to bake in any reliance on Ethereum block ordering and finality for a service that is fundamentally about providing validity guarantees on rollup blocks.

It seems conceptually simpler to just have a service for validating rollup blocks, and to have the rollup machinery itself organize them into a chain. Separate those concerns. That's basically how ZK rollups work: the ZK prover doesn't care about where the block lies in the rollup's chain, just that a specific block is valid. Expose a simple set of APIs to prove validity of your block on Polkadot and then find a way to bridge over a certificate of that validity.

Another bonus to focusing on blocks as opposed to chains is that rollup sequencers can submit blocks to Polkadot before they've even landed on the Ethereum chain at all. That's an important pipelining step. If there are competing sequencers they can all just submit their block to Polkadot in parallel.
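
As a sketch of the block-centric API shape described above (every name here is hypothetical; JAM specifies no such interface):

```rust
/// Hypothetical certificate that a single rollup block was validated
/// on Polkadot, suitable for bridging to Ethereum or elsewhere.
pub struct ValidityCertificate {
    pub block_hash: [u8; 32],      // hash of the rollup block in question
    pub accumulate_root: [u8; 32], // commitment exported via BEEFY
}

pub struct InvalidBlock;

/// Hypothetical service interface: prove one block valid, with no
/// reference to where it sits in any chain. Ordering and fork choice
/// are left to the rollup machinery itself, higher up the stack.
pub trait BlockValidityService {
    fn prove_block(
        &mut self,
        block: Vec<u8>,   // full block data, submitted by any sequencer
        witness: Vec<u8>, // state proofs needed to re-execute the block
    ) -> Result<ValidityCertificate, InvalidBlock>;
}
```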

@sourabhniyogi (Author)

Alright, closing this for now, thank you so much for the insights!


#### Hashing

Currently, preimages are specified to use the Blake2b hash, while Ethereum rollup block hashes utilize Keccak256.
Contributor

This seems like an application-level concern. The service can commit to keccak-256 hashed data trivially.
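
For illustration, a minimal sketch of such an application-level commitment, assuming the `sha3` crate; inside a PVM guest a no_std keccak implementation would be needed instead:

```rust
use sha3::{Digest, Keccak256};

/// Compute the keccak-256 commitment for a rollup block's raw payload,
/// matching the hash the rollup itself uses for block identity,
/// independently of the Blake2b hashing JAM uses for preimages.
fn keccak_commitment(block_payload: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update(block_payload);
    hasher.finalize().into()
}
```
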

Author

Alright -- I'll adjust in a Polkadot-first way, but it could be done in an ETH L2 ORU-first way just as well.


#### CoreTime

Instead of ETH, rollups would require DOT for CoreTime to secure their rollup. However, rollups are not locked into JAM and may freely enter and exit the JAM ecosystem since work packages do not need to start at genesis.
Contributor

This also seems like an application-level concern.

Author

It is... but it's like ss58 prefixes... can we get it right for both Polkadot and ORUs on day 1 instead of day 1000? How would you address this concern early?


| JAM Refine | Content |
| --- | --- |
| *Work Package* $p_n \in \mathbb{P}$ | Data submitted by ORU operator for validation of blocks $i$ through $j$ |
Contributor

it's important to support partial validation in a WP, i.e. to spread the validation of a rollup batch across multiple WPs. The trend of rollups is to have large sequencers and produce heavy blocks. With the current formulation, you'd only support rollups whose batches fit into a single WP.

Note that WPs will require state tree proofs and will likely be bounded more by data than by compute.
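
One hedged sketch of what splitting a batch could look like: each work item carries a sub-range of one block's transactions plus the intermediate state roots it claims to connect. All fields here are hypothetical:

```rust
/// Hypothetical work-item payload for partial validation: a rollup
/// batch is spread across many of these, each re-executing only the
/// transactions [tx_start, tx_end) of a single block.
struct PartialValidationItem {
    block_hash: [u8; 32],      // rollup block being validated
    tx_start: u32,             // first transaction index in this item
    tx_end: u32,               // one past the last transaction index
    pre_state_root: [u8; 32],  // claimed state root before tx_start
    post_state_root: [u8; 32], // claimed state root after tx_end - 1
    witness: Vec<u8>,          // state-tree proofs for this range only
}
```

Items would chain through their intermediate roots, so the batch is valid only if every item verifies and the roots line up end to end.
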

Author

Will give it a shot, thank you!

@bkchr (Contributor) commented Oct 26, 2024

Hey @sourabhniyogi, thank you for opening this RFC. Right now I don't see any need for this RFC. JAM is not finished and thus service implementations are not yet really speccable. Also, JAM is built with no service built in, and certainly with the idea of having more services than just the parachain service.

Generally, having some discussion around the implementation details still makes sense, to be prepared. However, I'm not sure an RFC is the best format for this right now.

mfornos commented Oct 26, 2024

How do you prevent the inclusion of a completely fabricated block with valid but fake storage proofs? Since there's no validation of the signed transactions that legitimately alter the state, nothing prevents someone from submitting a forged block.

What am I missing? 😄

Tomen commented Oct 26, 2024

The number of the RFC should be changed to RFC-127

sourabhniyogi commented Oct 26, 2024

> How do you prevent the inclusion of a completely fabricated block with valid but fake storage proofs? Since there's no validation of the signed transactions that legitimately alter the state, nothing prevents someone from submitting a forged block.
>
> What am I missing? 😄

It's not enough to have valid storage proofs against the state root -- you need a full contiguous set of blocks to be 100.0000% available in the L1 DA. The ORU CAN submit a forged block with valid but fake storage proofs, but because these fake blocks are also available in the JAM L1 DA (just as the calldata is there for a month on the ETH L1), anyone in the community can validate on their own by pulling from the "official" JAM L1 DA. If there is absolutely no one running community validator nodes (even though ORUs have just a single centralized sequencer run by a centralized team), the situation is dire and everyone is gaslightable (like the govts sometimes do ;) We certainly cannot have all the rollup state on the JAM L1 to validate state transitions, but we can have the last month of blocks to support the community in keeping the centralized rollup in check.
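
As a toy illustration of that community check (the lookup functions are stubs, and keccak-256 block hashing is an assumption carried over from the Hashing section):

```rust
use sha3::{Digest, Keccak256};

/// Hypothetical community audit: pull every block in the validated
/// window [a, b] from JAM DA and confirm its keccak-256 hash matches
/// the hash the service committed to for that height.
fn audit_window(
    a: u64,
    b: u64,
    fetch_from_da: impl Fn(u64) -> Option<Vec<u8>>, // JAM DA lookup (stub)
    committed_hash: impl Fn(u64) -> [u8; 32],       // service storage (stub)
) -> bool {
    (a..=b).all(|n| match fetch_from_da(n) {
        // A missing block breaks the 100% availability claim outright.
        None => false,
        Some(payload) => {
            let h: [u8; 32] = Keccak256::digest(&payload).into();
            h == committed_hash(n)
        }
    })
}
```
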

@sourabhniyogi (Author)

> Hey @sourabhniyogi, thank you for opening this RFC. Right now I don't see any need for this RFC. JAM is not finished and thus service implementations are not yet really speccable. Also, JAM is built with no service built in, and certainly with the idea of having more services than just the parachain service.
>
> Generally, having some discussion around the implementation details still makes sense, to be prepared. However, I'm not sure an RFC is the best format for this right now.

I'm also not sure of the status of Polkadot rollup services (of which there would be only one, managed by the fellows by charter, worthy of speccing) vs non-Polkadot rollup services (of which there are only so many rollup ecosystems, with two dominant approaches: optimistic and zk rollups). Given the rollup reactor focus, I think it's a good question for the fellows/W3F wizards as to which services are speccable and worthy of an RFC vs training material, and for whom. There are a dozen (possibly as many as two dozen) teams who can build/test non-Polkadot services in their JAM testnet (beyond tiny ones like Fib[n] = Fib[n-1]+Fib[n-2] or the bootstrap service), just like ours. Whereas the parachain service is going to be a complex endeavor (!), the one here is going to be, I guess, two orders of magnitude smaller for a PoC. You can see how simple it is, and poke at it as a super expert at all of it.

Thank you for posting this tutorial; I read it like a JAM origin story (Safrole > BABE, this is how Polkadot validation really works). Whereas some JAM implementers know Polkadot validation inside out like you, the opposite extreme also exists: people coming at it fresh solely from the GP. It could be that their freshman-ness enables them to build all-new JAM services that also fit JAM's rollup reactor vision. I think experienced fellows have an important role in shepherding the freshmen. Having other rollup host services, even if none of them are "official" from the fellows, could support innovation. This is possible now in a way that was not realistic a year ago when we first learned about CoreJam #31 -- right?

burdges commented Oct 27, 2024

Afaik an RFC seems pointless here. We should clean up the off-chain messaging story, probably with some PRs to ISMP based upon Alistair's concerns. We'll want off-chain messaging RFCs eventually.

> How do you prevent the inclusion of a completely fabricated block with valid but fake storage proofs? Since there's no validation of the signed transactions that legitimately alter the state, nothing prevents someone from submitting a forged block.

You need a parachain that puts OR blocks into parablocks aka WPs. All parablocks/WPs would be checked using ELVES, the rollup protocol deployed since 2021 for parachains. This makes the OR blocks valid within the Polkadot threat model. All other known rollups are insecure and/or high-latency in comparison, so they cannot easily participate in messaging on Polkadot.

Aside from validity, you must also check the OR's commitments on ETH using the BridgeHub parachain, as well as the OR's internal consensus, like any other bridge. You'll need forks like @rphmeier says if the OR lacks some real consensus.

Afaik Hyperbridge envisions doing this using parachains. Afaik JAM adds nothing here, but PolkaVM contracts help.

Also, BridgeHub should communicate bare consensus, so state roots and validator sets, not messages. You'd prove the internal state of the OR's contract on ETH using this.

Also, an OR (hyper)bridge parachain should itself only communicate bare consensus too, like BridgeHub. Almost identical off-chain messaging pallets could make ETH and OR bridges look native to each parachain. And projects could alter these pallets, and the Solidity contracts, if they wanted slightly different messaging, like timelocks or ordering, or wanted integration with existing contracts on ETH.

mfornos commented Oct 28, 2024

> How do you prevent the inclusion of a completely fabricated block with valid but fake storage proofs? Since there's no validation of the signed transactions that legitimately alter the state, nothing prevents someone from submitting a forged block.
> What am I missing? 😄
>
> It's not enough to have valid storage proofs against the state root -- you need a full contiguous set of blocks to be 100.0000% available in the L1 DA. The ORU CAN submit a forged block with valid but fake storage proofs, but because these fake blocks are also available in the JAM L1 DA (just as the calldata is there for a month on the ETH L1), anyone in the community can validate on their own by pulling from the "official" JAM L1 DA. If there is absolutely no one running community validator nodes (even though ORUs have just a single centralized sequencer run by a centralized team), the situation is dire and everyone is gaslightable (like the govts sometimes do ;) We certainly cannot have all the rollup state on the JAM L1 to validate state transitions, but we can have the last month of blocks to support the community in keeping the centralized rollup in check.

So, either zk rollups or a parachain with EVM support if you want security. Otherwise, if you're using JAM as a DA layer with consensus (à la Celestia), then it makes no sense to "verify" anything, only that a block hash was included at a point in time, whether it is valid or not.
Maybe the use case could then be generalized to secure timestamping (or sequencing, if time is too hard) with data availability, instead of "securing optimistic rollups", which is not the case here.

@sourabhniyogi sourabhniyogi deleted the support-non-substrate-l2 branch October 29, 2024 02:24
AlistairStewart commented Oct 29, 2024

While this isn't ready to be an RFC yet, it is an important conversation. @sourabhniyogi, will you be opening a discussion somewhere else? What you wrote in this draft has value.

For Ethereum optimistic rollups, even those whose operators don't want to use Polkadot, someone might still want to post blocks to Polkadot for cynical validation. Ethereum L2s have bad interoperability, and if we can show that there will be no revert (because the blocks are correct, so there can be no fraud proof), then you can bridge sooner using Polkadot than using Ethereum alone.

For this we'd want to check that the blocks we have are those posted on Ethereum, whether committed to in the EVM or posted to blobs. We'd want to use the Ethereum state root from the bridge to check which blocks are final on Ethereum. But we'd probably want to post such blocks ahead of Ethereum's finality, as @rphmeier suggested. We also might want to mirror blocks posted to other DA services like Celestia. But all of these extra checks work on top of the basic infrastructure outlined here.

It's up to the service/parachain itself whether one can post for free something committed to elsewhere, because this advances the state and is good for everyone, or else people can just pay to have their blocks validated before we have any idea whether they will be canonical or not. Doing things like following LMD Casper votes block by block, which Snowbridge doesn't do right now (nor does Hyperbridge), should be enough to reduce the imported Ethereum block rate to one every 12 seconds, even if we don't know it is final. Our DA, even before JAM, is superior to Ethereum's blobs and most DA services, so we should be able to include on Polkadot everything that appears on these.

@sourabhniyogi (Author)
> While this isn't ready to be an RFC yet, it is an important conversation. @sourabhniyogi, will you be opening a discussion somewhere else? What you wrote in this draft has value.
>
> For Ethereum optimistic rollups, even those whose operators don't want to use Polkadot, someone might still want to post blocks to Polkadot for cynical validation. Ethereum L2s have bad interoperability, and if we can show that there will be no revert (because the blocks are correct, so there can be no fraud proof), then you can bridge sooner using Polkadot than using Ethereum alone.
>
> For this we'd want to check that the blocks we have are those posted on Ethereum, whether committed to in the EVM or posted to blobs. We'd want to use the Ethereum state root from the bridge to check which blocks are final on Ethereum. But we'd probably want to post such blocks ahead of Ethereum's finality, as @rphmeier suggested. We also might want to mirror blocks posted to other DA services like Celestia. But all of these extra checks work on top of the basic infrastructure outlined here.
>
> It's up to the service/parachain itself whether one can post for free something committed to elsewhere, because this advances the state and is good for everyone, or else people can just pay to have their blocks validated before we have any idea whether they will be canonical or not. Doing things like following LMD Casper votes block by block, which Snowbridge doesn't do right now (nor does Hyperbridge), should be enough to reduce the imported Ethereum block rate to one every 12 seconds, even if we don't know it is final. Our DA, even before JAM, is superior to Ethereum's blobs and most DA services, so we should be able to include on Polkadot everything that appears on these.

@AlistairStewart @rphmeier @burdges @mfornos [all]

Given JAM's mission as a rollup host, the question I got your answers to was:

Question: What JAM Services should we build to support ETH ORUs vs ZKRUs?

(It seems safe to assume that the market of non-Polkadot rollups is dominated by ETH ORUs, followed by ETH ZKRUs.)

Answer: The JAM Services worth building are:
(A) DA for ORUs/ZKRUs [aggressively receiving all blocks, even if non-canonical]
(B) messaging for ORUs/ZKRUs
(C) validation for ZKRUs (with a "validate_state_transition" API) 
(D) ETH Light Client for ORU/ZKRUs (for both finalized [matching Snowfork] and unfinalized, pursuing all forks) 

For each of these, freshmen implementing JAM can outline refine-accumulate code and make it code complete in JAM Testnets and apply it to:

  • ORUs: (A) + (B) + (D)
  • ZKRUs: (A) + (B) + (C) + (D)

Everyone can fork their own variant for their rollup ecosystem from these PoC services and make it production-worthy.

There is NO route for (C) for ORUs that makes ORUs into CRUs and eliminates fraud proofs/challenge windows.  If that's wrong, we should discover what the route is, because it changes the impact by 3x-10x, since ETH ORUs dominate over ETH ZKRUs. 
The importance of (A)+(D) getting unfinalized/non-canonical blocks "on all forks" was not obvious until you all made it clear. Thank you for this! We have enough to do some implementation work in Nov/Dec before reopening this again.

Concerning who can build PoCs of the above: 

  • (A)+(D): very easy for freshmen non-fellow JAM implementers to do DA + ETH Light Client 
  • (B): very hard for XCM/XCMP-centric work, but not if it's generic blob passing 
  • (C): easy for freshmen to set up the stub, then it needs tech/bus dev 

By "freshmen" I mean JAM implementers who are completing M1+M2 and not insiders charged with keeping Polkadot/Substrate engineering in order, managing system chains, building the parachain and messaging service for Polkadot etc.  This is a new species who can do useful work in 2025 with just a little direction.   

Do you agree/disagree with the above? Do you see any way to eliminate the challenge period of ORUs?

rphmeier commented Oct 29, 2024

> While this isn't ready to be an RFC yet, it is an important conversation. @sourabhniyogi, will you be opening a discussion somewhere else? What you wrote in this draft has value.

I agree with @AlistairStewart. Thanks for starting the conversation.

> There is NO route for (C) for ORUs that makes ORUs into CRUs and eliminates fraud proofs/challenge windows. If that's wrong, we should discover what the route is, because it changes the impact by 3x-10x, since ETH ORUs dominate over ETH ZKRUs.

I believe that using Polkadot to validate ORU blocks is totally feasible. But the benefit requires a more holistic view. Infrastructure like bridges which rely on the ORU can use the fact that it was validated on Polkadot to know that no fraud has occurred, regardless of whether the ORU was built with Polkadot validation in mind. As such, it gives bridges and users a credible guarantee that the ORU won't be reverted. It can be permissionlessly applied on top of any existing or future ORU to make it a better product. Does it eliminate logical challenge windows? No. But it eliminates them in practice. You only need a one-way Polkadot->X bridge to achieve this benefit.

> The JAM Services worth building are...

I agree with (A) and would amend (C) based on my above comment.

(B) seems like something you don't need a separate service for and something you can naturally get with (C). You can do it both ways, but moving higher up the stack can be beneficial to avoid opinionation at the service level. If one rollup can receive notifications about another rollup's validity, then it stands to reason that it can gather messages from the state of the remote rollup trivially. Bridging has always been something that you get for 'free' with fork-choice & validity conditions. Validity can come from Polkadot, and fork-choice is more easily evaluated higher up the stack with more information.

I don't see any concrete benefit to doing (D) as a service rather than a chain or smart contract. More importantly, it doesn't seem to be on the critical path of the technology tree, so far as validating rollups is concerned.

I also don't think we need to bake in much in the way of explicit dependency on state tree proofs. There are many kinds of state trees, which means adding explicit support at the service level for all the major ones. Also, state tree proofs could be compressed with ZK fairly trivially (repeated hashing is very ZK friendly), which would obviate the need for state tree proofs to land on Polkadot at all. Supporting major approaches is more important to handle in SDKs than within the service level.

My final thought is that it's best to 'keep it simple' at the service level and focus on what ELVES is good at: validating work packages. There are dozens of well-funded teams out there building infrastructure for ZKRUs and ORUs. Make it simple to just swap out ZK for ELVES or tack on ELVES to ORUs, and mobilize the broader rollup community to integrate and handle the bulk of the tech lift. It sounds like you are approaching this with the aim of building a full-fledged tech stack for messaging/bridging/fork-choice. Validity is the important 'wedge' here, not all the other stuff.

mfornos commented Oct 29, 2024

I absolutely agree that 'validity is the wedge here.'

Still, it’s only half the story when it comes to mobilizing the rollup community, in my opinion. The two-way peg is another major aspect to consider for adoption, how it will perform compared to an "enshrined" rollup for example. At the end of the day, in many cases, you want to tap in some native assets.

burdges commented Oct 29, 2024

> Given JAM's mission as a rollup host

We designed Polkadot to be a "pessimistic execution" or "cut-and-choose" rollup. Afaik JAM does not change anything there.

> Answer: The JAM Services worth building are: ..

We're happy if people pay Polkadot for (A) and (C), and maybe that's profitable short-term, but actually they're both pointless longer-term.

We have availability only after approval checkers run reconstruction. After reconstruction, our 2 seconds of execution should cost less in USD than this bandwidth. It follows that (A) saves you nothing over Polkadot. If (A) ever becomes cost-effective, then our validator specs are too low.

As written, (C) is ambiguous: any parachain/service could validate a ZKRU proof itself, unless the proof gets very large (à la STARKs). More likely, what you meant is:

After Polkadot validates a parablock, then yes, someone could produce SNARK proofs of that block. We've already proven the parablocks correct, but this SNARK improves the threat model in bridges or whatever, for anyone who doesn't trust Polkadot. I'd guesstimate those SNARKs cost over 100 million USD per year, assuming 6-second blocktimes and execution times around some fraction of a second, so way fewer transactions than our 2 seconds. Also, folks who do not trust Polkadot-scale Byzantine assumptions rarely trust trusted setups either, so there is a very fine line here.

It's absolutely fine if someone wants to spend all that money, but they'll eventually give up and become a parachain, because why spend 100 million USD per year on redundant work? Anyone like this has a lot of money, so we should quickly try to talk them into spending more of their heaps of cash on us, or on applications, instead of on AWS or whatever.

Anyways, both (A) and (C) represent people who do not yet understand what Polkadot is. I'll honestly be surprised if we ever get willing clients like that, but..

The brilliance of Hyperbridge-like plans is that the ORs don't have to be willing. We can simply steal them and their ecosystem. ;)

As for (B), there is no messaging for us without proving in ELVES, so afaik (B) means basically what this RFC and Hyperbridge are doing. It's fine, but we can deploy messaging before we write RFCs. :)

Anyways..

We only know one-ish "roll up" that makes sense, namely the one we're already doing, plus mild variations. All those variations are primarily off-chain changes, so JAM changes nothing there.

sourabhniyogi commented Oct 30, 2024

> I believe that using Polkadot to validate ORU blocks is totally feasible.

Alright @rphmeier, thank you for the encouraging advice. Basically, the only design I have for this is below. I thought it was insane until this week, when @gavofyork suggested we could have Ethereum as a JAM Service (!!) and part 5: The Purge suggested "We could choose RISC-V ... to be the new 'official Ethereum VM' and then force-convert all EVM contracts into the new-VM code that interprets the logic of the original code (by compiling or interpreting it)." So ok, maybe it's NOT insane to actually map EVM opcodes onto PolkaVM opcodes in an ORU validation refine operation.

For an ORU block "Proof of Validity" in JAM:

  1. ORU work package submission. For any block ${\bf B}$, the work package submitted by the ORU has to provide witness proofs for
    (a) any storage reads required against ${\bf H}_r$ of the prior state of ${\bf B}$
    (b) any storage writes required against ${\bf H}_r$ of the posterior state of ${\bf B}$
    With some luck, we should be able to use trace_replayBlockTransactions or a similar trace_block to figure out the reads and writes for every single account/contract address, then tweak OP Stack/ArbOS code to get what we need. All these prior and posterior state proofs would be submitted in a work package along with the block's full transactions.
  2. Refine. The ORU Validation Service's PVM refine (taking any block, whether finalized or not) would:
    (a) set up "simulated" storage from the witness proofs of 1(a), and verify the witness proofs of 1(a) + 1(b)
    (b) interpret every single EVM opcode call [and the native transfers], using 2(a) for reads, with the "simulated" storage holding intermediate values and finalizing into storage values
    (c) after processing all the EVM bytecode and halting, require that 100% of the finalized values in simulated storage be in 1:1 correspondence with the verified state witness proofs of 1(b). No exceptions! (*)
  3. Accumulate. The verification result for the ORU block is passed to accumulate to be stored in JAM state. A solicit is issued for the block, and any available block with a verified result is marked as such. The interval of blocks that are both (a) available in DA and (b) validated in refine is stored in the ORU's service storage, with the last header hash of the interval stored in the accumulateRoot for C3/Beta BEEFY certification by external systems. This is routed out through BridgeHub.

(*) Not sure if the exhaustive "must be in 1:1 correspondence" approach is strictly required, or whether a sampling approach (using JAM's entropy state $\eta_3$ or something similar) would suffice to make the ORU's work package smaller, and the refine work thus reduced to fit within JAM gas limits.
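
Below is a compressed sketch of steps 1-3 as code. Every type and function is hypothetical; the witness format, block encoding, and EVM interpreter are stubs, so this only shows the shape of the check:

```rust
use std::collections::HashMap;

// Hypothetical shapes for the work-package contents of step 1; the real
// witness format and block encoding are still to be designed.
struct Witness {
    key: [u8; 32],       // storage slot
    value: Vec<u8>,      // claimed value at that slot
    proof: Vec<Vec<u8>>, // state-tree proof nodes
}

struct OruWorkPackage {
    block: Vec<u8>,           // full transaction data of block B
    prior_root: [u8; 32],     // H_r of the prior state of B
    posterior_root: [u8; 32], // H_r of the posterior state of B
    reads: Vec<Witness>,      // 1(a): storage reads against the prior root
    writes: Vec<Witness>,     // 1(b): storage writes against the posterior root
}

// Stubs standing in for real components: a state-proof verifier and an
// EVM interpreter running inside PVM refine.
fn verify_witness(_root: [u8; 32], _w: &Witness) -> Result<(), ()> {
    Ok(()) // placeholder
}
fn execute_evm_block(_block: &[u8], _sim: &mut HashMap<[u8; 32], Vec<u8>>) -> Result<(), ()> {
    Ok(()) // 2(b): interpret every EVM opcode, reading/writing `sim`
}

/// Steps 2(a)-(c): verify witnesses, replay the block, and require the
/// simulated posterior storage to match 1(b) exactly.
fn refine(wp: &OruWorkPackage) -> Result<[u8; 32], ()> {
    // 2(a): simulated storage seeded from verified read witnesses.
    let mut sim: HashMap<[u8; 32], Vec<u8>> = HashMap::new();
    for w in &wp.reads {
        verify_witness(wp.prior_root, w)?;
        sim.insert(w.key, w.value.clone());
    }
    for w in &wp.writes {
        verify_witness(wp.posterior_root, w)?;
    }
    // 2(b): re-execute the block against the simulated storage.
    execute_evm_block(&wp.block, &mut sim)?;
    // 2(c): every finalized value must match the posterior witnesses.
    for w in &wp.writes {
        if sim.get(&w.key) != Some(&w.value) {
            return Err(()); // mismatch: the block is invalid
        }
    }
    // Step 3: this result is what accumulate would record in JAM state.
    Ok(wp.posterior_root)
}
```
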

The basic approach seems to be the kind where you can get 99.9x% of it in order in less than a couple of man-months of sweaty work, and then the last .0x% takes two man-years because host functions to do KZG aren't there, or something like that. Test cases exist in the O(10^8-10^9) range in the end, though, so the last .0x% is just knocking out EVM opcodes that fail validation.

How would you approach it differently/better or simplify?

burdges commented Oct 30, 2024

Accumulate should do nothing beyond collecting state roots, like in parachains and most every other JAM service, except elastic scaling (which could be done that way too).

Refine aka the PVF checks everything: validity, a BridgeHub proof of commitments, and rollup consensus. You might lower latency or handle forks if you split these slightly, but that's unnecessarily complex. There is never much reason to waste resources doing anything outside of refine. Availability is built into ELVES, should never be alterable by users in JAM, etc.

There are no new host functions because BridgeHub already does the BLS12-381 verifications. BridgeHub would be more efficient eventually with #113 (comment), but that's orthogonal.

@sourabhniyogi (Author)

> a BridgeHub proof of commitments, and rollup consensus.

Thank you for making these high-level design constraints clear.

For "BridgeHub proof of commitments" I am only aware of the Snowfork components of basically finalized checkpoint proofs coming in from Ethereum, and Beefy commitments going out from Polkadot (which are basically accumulateRoots from work packages in MMRs), probably signed every X blocks and aggregated in BridgeHub -- can you detail the full scope of what you mean by "BridgeHub proof of commitments"?

It seems you have Hyperbridge + ISMP designs in mind, and there seems to be a tradeoff debate to be had between
(a) "let's get the validation of blocks certified out ASAP [handle all forks, keep the validation API simple]" (from @rphmeier) vs
(b) "You might lower latency or handle forks if you split these slightly [validity from BridgeHub proofs of commitments], but that's unnecessarily complex" (from you, @burdges).
I doubt this tradeoff debate is essential to resolve in 2024, but resolving the messaging details against the background question of "Who is in charge of aggregation of BLS keys in JAM, and how?" would be terrific.

For building PoCs of useful JAM services, since we don't know anything about how JAM services can send/receive from BridgeHub (and it will take the bulk of 2025 to happen, I guess?), a JAM service of
(D) ETH Light Client
seems like a good stub for what you mean by "BridgeHub proof of commitments", as well as a real-life Ordered Accumulation case that can actually matter.

For "roll up consensus", I'll catch up on OP Stack vs ArbOS present vs future and attempt to do something about it.

Thank you again!

@rphmeier (Contributor)

@sourabhniyogi

That formulation looks fine to me. I'd recommend having a look at the semantics of block execution in popular rollup stacks before committing to anything granular, since I think they do more than just transaction processing, i.e. they have a runtime!

I wouldn't worry about the overhead of emulating EVM with RISC-V much. Bandwidth and SSD latencies during block authorship impact blockchain performance much more than compute.

I expect that the 'right' version of this service should look extremely similar to the parachains service, except without a focus on work packages actually being chains. To be honest, it is literally the only thing I think needs to change in the execution model.
