JAM Service for Validating Ethereum Optimistic Rollups #127
Conversation
Polkadot+Kusama should support the possibility of having up to 10-30% of its blockspace weight allocatable to EVM and WASM smart contracts. This is not for Polkadot to become yet another EVM chain, but specifically to: 1. support the use of Polkadot's Data Availability resources by non-Substrate [largely EVM] L2 stacks, which will bring in additional demand for DOT and, through those networks, new users and developers; 2. (assuming CoreJam work packages have some direct relation to EVM contract + WASM contract *interpretation*) support Polkadot 2.0 experimentation in its transformation into a "map reduce" computer and bring in new classes of CoreJam+CorePlay developers addressing synchronous/asynchronous composability.
This proposal adapts the CoreJam architecture to EVM L2s, specifically utilizing OP Stack + Solidity instead of Polkadot/Substrate. Almost all CoreJam concepts are retained, but CoreJam's Rust relay chain interfaces are replaced with Solidity/EVM Contracts + OP Stack's Golang, situated in a "system chain".
JAM's *rollup host* function can be extended from Polkadot rollups to non-Polkadot rollups. We outline the design of a JAM service capable of securing Ethereum *optimistic* rollups, such as OP Stack and ArbOS. This service transforms optimistic rollups into cynical rollups, allowing users to benefit from the finality and high throughput of JAM, alongside JAM's anticipated messaging service. This competitive advantage enables rollup operators to choose JAM/Polkadot over Ethereum or other rollup providers. A precise design for the service's refine-accumulate function is outlined, using tools already available.
The service maintains a summary of fully validated blocks of each chain in 3 service storage keys:
* the first block number $a$ validated with data fully available
* the last block number $b$ validated with data fully available
* a window parameter $w$, modeling the maximum $b-a$
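A minimal sketch of how these three keys might behave, in Rust; the struct name, field widths, and the `can_extend` rule are illustrative assumptions, not part of the draft:

```rust
/// Hypothetical per-chain summary held in the three service storage keys.
/// Field names and integer widths are assumptions for illustration.
struct ChainSummary {
    first_validated: u64, // first block number `a` validated, data fully available
    last_validated: u64,  // last block number `b` validated, data fully available
    window: u64,          // window parameter `w`, bounding `b - a`
}

impl ChainSummary {
    /// A candidate block `n` extends the summary only if it is contiguous
    /// with `b` and keeps the validated span within the window.
    /// Assumes the invariant `first_validated <= last_validated`.
    fn can_extend(&self, n: u64) -> bool {
        n == self.last_validated + 1 && n - self.first_validated <= self.window
    }
}
```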
It'd be better to handle forks than to focus on a single "chain". The reason is that finality on Ethereum is slow: if you want to guarantee that a rollup batch won't be reorged out of the Ethereum chain, you would really have to wait for Ethereum's finality. Finality on Polkadot is fast and there is an excess of cores. So it's better to validate all batches from all forks on Ethereum, and get that information to the unfinalized Ethereum chain as fast as possible.
I assume you mean the case where an optimistic-on-Ethereum/cynical-on-Polkadot rollup uses JAM to prove finality on both ETH+JAM, correct? Clearly, this "grandfathering" is a critically important use case for the CEXes-on-OP-Stack cases.

The "simple" solution I have had is that ETH L2 finality is based on Polkadot/JAM finality, with the refinement context anchor block based on JAM alone. But if we get Ethereum finality into JAM chain state (how exactly is not clear?), the `historical_lookup` / anchor block could include both ETH + Polkadot/JAM finality, and refine would be based on that.

The proof that an L2 has been finalized in JAM alone comes out via C3/Beta, which makes it over to Ethereum slowly. With work packages validating against all of ETH's unfinalized forks preemptively, and then a signal coming in from ETH as to which fork was finalized (how exactly is not clear, same question as above), we have a faster way to get the same answer of "finalized on both". There is an abundance of cores, so yes, the preemptive work package submission is super sensible. Thank you for this idea.

But I'm not sure how Ethereum finality gets into JAM chain state though... what is the ideal plan?
> I assume you mean the case where an optimistic-on-Ethereum/cynical-on-Polkadot rollup uses JAM to prove finality on both ETH+JAM, correct?
No, I think you are confusing validity and finality. Finality is about consensus ordering of operations. Validity is what it sounds like. If you want to make Polkadot cores useful for Ethereum rollups, then Polkadot provides validity and Ethereum provides consensus ordering. So the ideal plan for handling Ethereum finality is to just sidestep it entirely.
My broader point is that for Polkadot to treat a rollup as a "chain", you'd either need permissioned sequencers or to follow the ordering of operations on the ETH chain, which is slow. To be clear, I think it's a bad idea to bake in any reliance on Ethereum block ordering and finality for a service that is fundamentally about providing validity guarantees on rollup blocks.
It seems conceptually simpler to just have a service for validating rollup blocks, and to have the rollup machinery itself organize them into a chain. Separate those concerns. That's basically how ZK rollups work: the ZK prover doesn't care about where the block lies in the rollup's chain, just that a specific block is valid. Expose a simple set of APIs to prove validity of your block on Polkadot and then find a way to bridge over a certificate of that validity.
Another bonus to focusing on blocks as opposed to chains is that rollup sequencers can submit blocks to Polkadot before they've even landed on the Ethereum chain at all. That's an important pipelining step. If there are competing sequencers they can all just submit their block to Polkadot in parallel.
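A sketch of the block-centric API suggested here, under the assumption that the service proves validity of individual blocks and leaves chain assembly to the rollup machinery; all types and names below are hypothetical:

```rust
/// Hypothetical input for per-block validation: nothing here says where
/// (or whether) the block lands in the rollup's chain on Ethereum.
pub struct RollupBlock {
    pub pre_state_root: [u8; 32],  // state the block executes against
    pub block_data: Vec<u8>,       // full block contents (transactions etc.)
    pub post_state_root: [u8; 32], // post-state claimed by the sequencer
}

/// Hypothetical output: a certificate that can be bridged to Ethereum
/// (or anywhere else) once the rollup machinery picks a canonical chain.
pub struct ValidityCertificate {
    pub block_hash: [u8; 32],
    pub post_state_root: [u8; 32],
}

pub trait BlockValidity {
    /// Re-execute `block` and certify it, or return None if invalid.
    fn prove_valid(&self, block: &RollupBlock) -> Option<ValidityCertificate>;
}
```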
Alright, closing this for now, thank you so much for the insights!
#### Hashing
Currently, preimages are specified to use the Blake2b hash, while Ethereum rollup block hashes utilize Keccak256.
This seems like an application-level concern. The service can commit to keccak-256 hashed data trivially.
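For illustration, a sketch of how a service payload can carry Keccak-256 commitments regardless of the Blake2b preimage keying, using the `sha3` crate; the function names are assumptions:

```rust
use sha3::{Digest, Keccak256};

/// The host keys preimages by Blake2b, but the service interprets the
/// preimage bytes itself, so it can embed Keccak-256 commitments freely.
fn keccak256(data: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update(data);
    hasher.finalize().into()
}

/// Ethereum block hashes are Keccak-256 of the RLP-encoded header, so a
/// work item can commit to a rollup block with the native Ethereum hash.
fn rollup_block_commitment(rlp_encoded_header: &[u8]) -> [u8; 32] {
    keccak256(rlp_encoded_header)
}
```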
Alright -- I'll adjust in a Polkadot-first way, but it could be done in an ETH-L2-ORU-first way just as well.
#### CoreTime
Instead of ETH, rollups would require DOT for CoreTime to secure their rollup. However, rollups are not locked into JAM and may freely enter and exit the JAM ecosystem since work packages do not need to start at genesis.
This also seems like an application-level concern.
It is... but it's like ss58 prefixes... can we get it right for both Polkadot and ORUs on day 1 instead of day 1000? How would you address this concern early?
| JAM Refine | Content |
| --- | --- |
| *Work Package* $p_n \in \mathbb{P}$ | Data submitted by ORU operator for validation of blocks $i$ through $j$ |
It's important to support partial validation in a WP, i.e. to spread the validation of a rollup batch across multiple WPs. The trend of rollups is to have large sequencers and produce heavy blocks. With the current formulation, you'd only support rollups whose batches fit into a single WP.
Note that WPs will require state tree proofs and will likely be bounded more by data than by compute.
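A hedged sketch of what a partial-validation work item could carry so that a heavy batch can be split across WPs; every field name here is an assumption:

```rust
/// Hypothetical work item validating one slice of a rollup batch.
/// Slices compose: slice k's `post_root` must equal slice k+1's `pre_root`,
/// which accumulate (or a later WP) can check cheaply.
struct PartialValidationItem {
    batch_hash: [u8; 32],               // which batch this slice belongs to
    block_range: core::ops::Range<u64>, // blocks (or txs) covered by this WP
    pre_root: [u8; 32],                 // state root before this slice
    post_root: [u8; 32],                // claimed state root after this slice
    state_proofs: Vec<u8>,              // witnesses needed for just this slice
}
```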
Will give it a shot, thank you!
Hey @sourabhniyogi, thank you for opening this RFC. Right now I don't see any need for this RFC. JAM is not finished, and thus service implementations are not yet really specable. Also, JAM is built with no service built in, and certainly with the idea of having more services than just the parachain service. Generally, having some discussion around the implementation details etc. still makes sense to be prepared. However, I'm not sure an RFC is the best format for this right now.
How do you prevent the inclusion of a completely fabricated block with valid but fake storage proofs? Since there's no validation of the signed transactions that legitimately alter the state, nothing prevents someone from submitting a forged block. What am I missing? 😄
The number of the RFC should be changed to RFC-127.
It's not enough to have valid storage proofs against the state root -- you need a full contiguous set of blocks to be 100.0000% available in the L1 DA. The ORU CAN submit a forged block with valid but fake storage proofs, but because these fake blocks are also available in JAM DA (just as the calldata is there for a month in ETH L1), anyone in the community can validate on their own by pulling from the "official" JAM DA. If there is absolutely no one running community validator nodes (even though ORUs have just a single centralized sequencer run by a centralized team), the situation is dire and everyone is gaslightable (like the govts sometimes do ;) We certainly cannot have all the rollup state on JAM L1 to validate state transitions, but we can have the last month of blocks to support the community in keeping the centralized rollup in check.
Also not sure of the status of Polkadot rollup services (of which there would only be one, managed by the fellows by charter, worthy of specing) vs non-Polkadot rollup services (of which there are only so many rollup ecosystems, with 2 dominant approaches: optimistic and zk rollups). Given the rollup reactor focus, I think it's a good question for the fellows/w3f wizards as to what services are specable and worthy of an RFC vs training material, and for whom. There are a dozen (possibly as many as 2 dozen) teams who can build/test non-Polkadot services in their JAM testnet (beyond tiny ones like Fib[n] = Fib[n-1]+Fib[n-2] or the bootstrap service) just like ours -- whereas the parachain service is going to be a complex endeavor (!), the one here is going to be, I guess, 2 orders of magnitude smaller for a PoC. You can see how simple it is, and poke at it as a super expert at all of it. Thank you for posting this tutorial, I read it like a JAM origin story (Safrole > Babe, this is how Polkadot validation really works) -- whereas some JAM implementers know Polkadot validation inside out like you, the opposite extreme of people coming at it fresh solely from GP also exists. It could be that their freshman-ness enables them to build all-new JAM services that also fit JAM's rollup reactor vision. I think experienced fellows have an important role in shepherding the freshmen. Having other rollup host services, even if none of them are "official" from the fellows, could support innovation. This is possible now in a way that was not realistic a year ago when we first learned about CoreJam #31 -- right?
Afaik an RFC seems pointless here. We should clean up the off-chain messaging story, probably with some PRs to ISMP based upon Alistair's concerns. We'll want off-chain messaging RFCs eventually.
You need a parachain that puts OR blocks into parablocks aka WPs. All parablocks/WPs would be checked using the ELVES roll-up protocol deployed since 2021 for parachains. This makes the OR blocks valid within the Polkadot threat model. All other known roll-ups are insecure and/or high latency in comparison, so they cannot easily participate in messaging on Polkadot.

Aside from validity, you must also check the ORs' commitments on ETH using the BridgeHub parachain, as well as the ORs' internal consensus, like any other bridge. You'll need forks like @rphmeier says if the OR lacks some real consensus. Afaik Hyperbridge envisions doing this using parachains. Afaik JAM adds nothing here, but PolkaVM contracts help.

Also, BridgeHub should communicate bare consensus, so state roots and validator sets, not messages. You'd prove the internal state of the OR's contract on ETH using this. An OR (hyper)bridge parachain should itself only communicate bare consensus too, like BridgeHub. Almost identical off-chain messaging pallets could make ETH and OR bridges look native to each parachain. And projects could alter these pallets, and the Solidity contracts, if they wanted slightly different messaging, like timelocks or ordering, or wanted integration with existing contracts on ETH.
So: either zk rollups, or a parachain with EVM support, if you want security. Otherwise, if you're using JAM as a DA layer with consensus (à la Celestia), then it makes no sense to "verify" anything beyond the fact that a block hash was included at a point in time, be it valid or not.
While this isn't ready to be an RFC yet, it is an important conversation. @sourabhniyogi, will you be opening a discussion somewhere else? What you wrote in this draft has value.

For Ethereum optimistic rollups, even those whose operators don't want to use Polkadot, someone might still want to post blocks to Polkadot for cynical validation. Ethereum L2s have bad interoperability, and if we can show that there will not be a revert, because the blocks are correct and there can be no fraud proof, then you can bridge sooner using Polkadot than using Ethereum alone. For this we'd want to check that the blocks we have are those posted on Ethereum, whether committed to in the EVM or posted to their blobs. We'd want to use the Ethereum state root from the bridge to check which blocks are final on Ethereum. But we'd probably want to post such blocks ahead of Ethereum's finality, as @rphmeier suggested. We also might want to mirror blocks posted to other DA services like Celestia. But all of these extra checks work on top of the basic infrastructure outlined here.

It's up to the service/parachain itself whether one can post for free something committed to elsewhere, because this advances the state and is good for everyone, or else people can just pay to have their blocks validated before we have any idea whether they will be canonical or not. Doing things like following LMD Casper votes block by block, which Snowbridge doesn't do right now (nor does Hyperbridge), should be enough to reduce the Ethereum imported block rate down to one every 12 seconds, even if we don't know it is final. Our DA, even before JAM, is superior to Ethereum's blobs and most DA services, so we should be able to include everything that appears on these on Polkadot.
@AlistairStewart @rphmeier @burdges @mfornos [all] Given JAM's mission as a rollup host, the question I sought your answers to was:

Question: What JAM Services should we build to support ETH ORUs vs ZKRUs? (It's safe to assume that the market of non-Polkadot rollups is dominated by ETH ORUs, followed by ETH ZKRUs.)

Answer: The JAM Services worth building are: For each of these, freshmen implementing JAM can outline refine-accumulate code, make it code complete in JAM testnets, and apply it to:
Everyone can fork their own variant for their rollup ecosystem from these PoC services and make it production-worthy. There is NO route for (C) for ORUs that makes ORUs into CRUs and eliminates fraud proofs/challenge windows. If that's wrong, we should discover what the route is, because it changes the impact by 3x-10x, since ETH ORUs dominate over ETH ZKRUs. Concerning who can build PoCs of the above:
By "freshmen" I mean JAM implementers who are completing M1+M2 and not insiders charged with keeping Polkadot/Substrate engineering in order, managing system chains, building the parachain and messaging service for Polkadot etc. This is a new species who can do useful work in 2025 with just a little direction. Do you agree/disagree with the above? Do you see any way to eliminate the challenge period of ORUs? |
I agree with @AlistairStewart. Thanks for starting the conversation.
I believe that using Polkadot to validate ORU blocks is totally feasible. But the benefit requires a more holistic view. Infrastructure like bridges which rely on the ORU can use the fact that it was validated on Polkadot to know that no fraud has occurred, regardless of whether the ORU was built with Polkadot validation in mind. As such, it gives bridges and users a credible guarantee that the ORU won't be reverted. It can be permissionlessly applied on top of any current or existing ORU to make it a better product. Does it eliminate logical challenge windows? No. But it eliminates them in practice. You only need a one-way Polkadot->X bridge to achieve this benefit.
I agree with (A) and would amend (C) based on my above comment. (B) seems like something you don't need a separate service for and something you can naturally get with (C). You can do it both ways, but moving higher up the stack can be beneficial to avoid opinionation at the service level. If one rollup can receive notifications about another rollup's validity, then it stands to reason that it can gather messages from the state of the remote rollup trivially. Bridging has always been something that you get for 'free' with fork-choice & validity conditions. Validity can come from Polkadot, and fork-choice is more easily evaluated higher up the stack with more information.

I don't see any concrete benefit to doing (D) as a service rather than a chain or smart contract. More importantly, it doesn't seem to be on the critical path of the technology tree, so far as validating rollups is concerned.

I also don't think we need to bake in much in the way of explicit dependency on state tree proofs. There are many kinds of state trees, which would mean adding explicit support at the service level for all the major ones. Also, state tree proofs could be compressed with ZK fairly trivially (repeated hashing is very ZK friendly), which would obviate the need for state tree proofs to land on Polkadot at all. Supporting major approaches is more important to handle in SDKs than within the service level.

My final thought is that it's best to 'keep it simple' at the service level and focus on what ELVES is good at: validating work packages. There are dozens of well-funded teams out there building infrastructure for ZKRUs and ORUs. Make it simple to just swap out ZK for ELVES or tack ELVES onto ORUs, and mobilize the broader rollup community to integrate and handle the bulk of the tech lift. It sounds like you are approaching this with the aim of building a full-fledged tech stack for messaging/bridging/fork-choice. Validity is the important 'wedge' here, not all the other stuff.
I absolutely agree that 'validity is the wedge here.' Still, it's only half the story when it comes to mobilizing the rollup community, in my opinion. The two-way peg is another major aspect to consider for adoption: how it will perform compared to an "enshrined" rollup, for example. At the end of the day, in many cases, you want to tap into some native assets.
We designed Polkadot to be a "pessimistic execution" or "cut n choose" roll-up. Afaik JAM does not change anything there.

We're happy if people pay Polkadot for (A) and (C), and maybe that's profitable short-term, but actually they're both pointless longer-term.

We have availability only after approval checkers run reconstruction. After reconstruction, our 2 seconds of execution should cost less in USD than this bandwidth. It follows that (A) saves you nothing over Polkadot. If (A) ever becomes cost effective, then our validator specs are too low.

As written, (C) is ambiguous: any parachain/service could validate a ZKRU proof itself, unless the proof gets very large à la STARKs, but more likely what you meant is: after Polkadot validates a parablock, then yes, someone could produce snark proofs of that block. We've already proven the parablocks correct, but this snark improves the threat model in bridges or whatever, for anyone who doesn't trust Polkadot. I'd guesstimate those snarks cost over 100 million USD per year, assuming 6 second blocktimes and execution times around some fraction of a second, so way fewer transactions than our 2 seconds. Also, folks who do not trust Polkadot-scale byzantine assumptions rarely trust trusted setups either, so there is a very fine line here. It's absolutely fine if someone wants to spend all that money, but they'll eventually give up and become a parachain, because why spend 100 million USD per year for redundant work? Anyone like this has a lot of money, so we should quickly try to talk them into spending more of their heaps of cash on us, or on applications, instead of on AWS or whatever. Anyways, both (A) and (C) represent people who do not yet understand what Polkadot is. I'll honestly be surprised if we ever get willing clients like that, but.. the brilliance of Hyperbridge-like plans is that the ORs don't have to be willing. We can simply steal them and their ecosystem. ;)

As for (B), there is no messaging for us without proving in ELVES, so afaik (B) means basically what this RFC and Hyperbridge are doing. It's fine, but we can deploy messaging before we write RFCs. :)

Anyways, we only know one-ish "roll up" that makes sense, namely the one we're already doing, plus mild variations. All those variations are primarily off-chain changes, so JAM changes nothing there.
Alright @rphmeier, thank you for the encouraging advice -- basically the only design I have for this is below. I thought it was insane until this week, when @gavofyork suggested we could have Ethereum as a JAM Service (!!) and part 5: The Purge suggested "We could choose RISC-V ... to be the new 'official Ethereum VM' and then force-convert all EVM contracts into the new-VM code that interprets the logic of the original code (by compiling or interpreting it)." So ok, maybe it's NOT insane to actually map EVM opcodes into PolkaVM opcodes in an ORU validation service. For an ORU block "Proof of Validity" in JAM:
(*) Not sure if the exhaustive "must be in 1:1 correspondence" approach is strictly required or a sampling approach suffices (using JAM's entropy state); a hedged sketch of the sampling idea follows below. The basic approach seems to be the kind where you can get 99.9x% of it in order in less than a couple man-months of sweaty work, and then the last .0x% takes 2 man-years because host functions to do KZG aren't there or something like that. Test cases exist in the O(10^8-10^9) range in the end though, so it's just knocking out the EVM opcodes that fail validation concerning the last .0x%. How would you approach it differently/better, or simplify?
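A minimal sketch of that sampling idea: derive audit indices from JAM's entropy so the submitter cannot predict which execution steps get re-checked. `DefaultHasher` stands in for a cryptographic hash, and all names here are assumptions:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick `samples` pseudo-random step indices out of `total_steps` using
/// on-chain entropy. A real service would use a cryptographic hash in
/// place of `DefaultHasher`; this only sketches the idea.
fn sample_steps(entropy: &[u8; 32], total_steps: u64, samples: usize) -> Vec<u64> {
    assert!(total_steps > 0);
    (0..samples as u64)
        .map(|i| {
            // Mix the entropy with the sample index so each draw differs.
            let mut h = DefaultHasher::new();
            entropy.hash(&mut h);
            i.hash(&mut h);
            h.finish() % total_steps
        })
        .collect()
}
```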
Accumulate should do nothing beyond collecting state roots, like in parachains and most every other JAM service, except elastic scaling (which could be done that way too). Refine aka the PVF checks everything: validity, a BridgeHub proof of commitments, and roll-up consensus. You might lower latency or handle forks if you split these slightly, but that's unnecessarily complex. There is never much reason to waste resources doing anything outside of refine. Availability is built into ELVES, should never be alterable by users in JAM, etc. There are no new host functions because BridgeHub already does the BLS12-381 verifications. BridgeHub would be more efficient eventually with #113 (comment) but that's orthogonal.
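A minimal sketch of that pattern, with hypothetical types standing in for the real JAM host interface: refine has already checked everything, so accumulate only records state roots:

```rust
use std::collections::BTreeMap;

/// Hypothetical result emitted by refine after all checks have passed
/// (validity, commitment proofs, roll-up consensus).
struct WorkResult {
    chain_id: u32,
    block_number: u64,
    post_state_root: [u8; 32],
}

/// Accumulate does nothing but collect the state roots into service storage.
fn accumulate(
    results: &[WorkResult],
    storage: &mut BTreeMap<(u32, u64), [u8; 32]>,
) {
    for r in results {
        storage.insert((r.chain_id, r.block_number), r.post_state_root);
    }
}
```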
Thank you for making these high-level design constraints clear. For "BridgeHub proof of commitments" I am only aware of the Snowfork components: basically finalized checkpoint proofs coming in from Ethereum, and Beefy commitments going out from Polkadot (which are basically accumulateRoots from work packages in MMRs), probably signed every X blocks and aggregated in BridgeHub -- can you detail the full scope of what you mean by "BridgeHub proof of commitments"? It seems you have Hyperbridge + ISMP designs in mind, and there seems to be a tradeoff debate to be had on

For building PoCs of useful JAM services, since we don't know anything about how JAM services can send/receive from BridgeHub (and it will take the bulk of 2025 to happen, I guess?), a JAM service of

For "roll up consensus", I'll catch up on OP Stack vs ArbOS, present vs future, and attempt to do something about it. Thank you again!
That formulation looks fine to me. I'd recommend having a look at the semantics of block execution in popular rollup stacks before committing to anything granular, since I think they do more than just transaction processing, i.e. they have a runtime! I wouldn't worry much about the overhead of emulating EVM with RISC-V. Bandwidth and SSD latencies during block authorship impact blockchain performance much more than compute. I expect that the 'right' version of this service should look extremely similar to the parachains service, except without a focus on work packages actually being chains. To be honest, that is literally the only thing I think needs to change in the execution model.
JAM's rollup host function can be extended from Polkadot rollups to non-Polkadot rollups. We outline the design of one of a few JAM services capable of securing Ethereum optimistic rollups, such as OP Stack and ArbOS. This will set up rollup users to benefit from the validity, DA, finality, and high throughput of JAM services, alongside JAM's anticipated messaging services. These JAM services will enable rollup operators to choose JAM/Polkadot over Ethereum or other rollup service providers, or in addition to them, to improve user experience and security. A precise design for one of the basic services' DA refine-accumulate function is outlined, using tools already available.
Update: I'll rework this significantly based on rapid expert feedback in November.