revive: Bump PolkaVM and add static code validation #5939
Conversation
bot bench substrate-pallet --pallet=pallet_revive
@athei https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7517121 was started for your command …=dev --target_dir=substrate --features=riscv --pallet=pallet_revive
A malicious program could trigger the compilation of the whole program
Compilation of a whole program should not be a problem, regardless of the number and size of basic blocks it contains. The fact that it is a problem tells me the compilation is not properly metered: the charge for compiling any BB is only implied by the maximum BB size in combination with some other limits. However, we can't easily establish a sensible maximum, and charging the maximum for each basic block would heavily penalize contracts with many small BBs.
Couldn't we benchmark the compilation of basic blocks and charge the overhead to ref_time? Then this wouldn't be a problem.
If this is really required, the PolkaVM linker should have a configurable maximum size for basic blocks, so that overly large BBs are split up artificially and this never becomes a problem for contracts compiled with a toolchain that is aware of the limit. Otherwise we create a bad experience: we fail compilation of Solidity or ink! contracts when we see too-large BBs in the final linked blob and tell people "sorry, you need to refactor your code so that it compiles to smaller basic blocks; glhf". This is something developers should never need to care about, and it reminds me of the Solidity StackTooDeep exception.
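To make the splitting idea concrete, here is a rough sketch using a generic instruction type rather than the real PolkaVM linker types (the `Inst` enum, `split_block` function, and index scheme below are purely illustrative assumptions):

```rust
/// Purely illustrative instruction type; the real linker works on PolkaVM's
/// own representation.
#[derive(Clone)]
enum Inst {
    /// Any ordinary, non-terminating instruction.
    Plain(u32),
    /// Unconditional jump to the basic block with the given index.
    Jump(usize),
}

/// Split one oversized basic block into blocks of at most `max_size`
/// instructions, chaining them with explicit fall-through jumps so the
/// program's behaviour is unchanged. Continuation blocks are assumed to get
/// the indices `first_new_index`, `first_new_index + 1`, ...
fn split_block(block: Vec<Inst>, max_size: usize, first_new_index: usize) -> Vec<Vec<Inst>> {
    assert!(max_size >= 2, "need room for at least one instruction plus a jump");
    if block.is_empty() {
        return Vec::new();
    }
    // Reserve one slot per chunk for the trailing jump we may append.
    let chunks: Vec<Vec<Inst>> = block.chunks(max_size - 1).map(|c| c.to_vec()).collect();
    let last = chunks.len() - 1;
    chunks
        .into_iter()
        .enumerate()
        .map(|(i, mut chunk)| {
            if i < last {
                // Jump to the artificially created continuation block.
                chunk.push(Inst::Jump(first_new_index + i));
            }
            chunk
        })
        .collect()
}
```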
With tooling support we can at least counter the user-experience problems; but wouldn't the proper solution be to benchmark and charge for any work done, including the lazy compilation overhead?
AFAIK the compilation is not metered at all. IMHO the proper solution would be for the PolkaVM interpreter to charge gas up front when encountering a new basic block. I consider the limit a stop-gap solution.
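As a rough illustration of that up-front charging idea (this is not the actual PolkaVM interpreter API; the `Meter` type and the cost constant below are made up for the example):

```rust
/// Illustrative gas meter; a real interpreter would integrate this with its
/// existing gas accounting.
struct Meter {
    gas_left: u64,
}

impl Meter {
    /// Hypothetical per-instruction compilation cost; a real value would come
    /// from benchmarking the lazy compilation path.
    const COMPILE_COST_PER_INSTRUCTION: u64 = 10;

    /// Charge for compiling a not-yet-seen basic block of `block_len`
    /// instructions before the compilation actually happens.
    fn charge_block_compilation(&mut self, block_len: u64) -> Result<(), &'static str> {
        let cost = block_len.saturating_mul(Self::COMPILE_COST_PER_INSTRUCTION);
        self.gas_left = self.gas_left.checked_sub(cost).ok_or("out of gas")?;
        Ok(())
    }
}
```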
Story of my life. This is always the problem with benchmarking. Only solution is to meter the compilation itself.
Yes. I think it should be easy to add this to the linker.
Thanks for the clarification. FWIW I'm fine with this change as a stop-gap given we set the limit to something that makes basic code examples work (like at least 1000 instructions per BB).
@@ -1,6 +1,5 @@
 [package]
 name = "pallet-revive-fixtures"
-publish = true
You may need to update the umbrella crate as well to get CI to pass.
Why? publish = true is a no-op, which is why I removed it.
Ok, nvm then. I thought the umbrella Python script was filtering crates based on the publish flag.
Yeah, I see. Well, the maximum limit should be whatever we can set without causing a DoS vulnerability, but it's hard to know what that would be. Since the …
substrate/frame/revive/src/limits.rs (Outdated)
for inst in program.instructions(ISA) {
    num_instructions += 1;
    basic_block_size += 1;
    max_basic_block_size = max_basic_block_size.max(basic_block_size);
Is that better/faster than dropping the max and instead checking the block size inside the if block, right before resetting it to 0?
I moved it into the block.
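For reference, a self-contained sketch of the revised shape, with the check inside the branch that closes a basic block (the `Inst` type, `ends_basic_block` flag, and `BASIC_BLOCK_LIMIT` constant are stand-ins, not the real types from limits.rs):

```rust
/// Stand-in instruction type; the real code iterates PolkaVM instructions.
struct Inst {
    /// True for instructions that terminate a basic block (jumps, branches,
    /// traps, ...).
    ends_basic_block: bool,
}

/// Stand-in for the configured limit.
const BASIC_BLOCK_LIMIT: u32 = 200;

fn check_basic_blocks(instructions: &[Inst]) -> Result<u32, &'static str> {
    let mut num_instructions = 0u32;
    let mut basic_block_size = 0u32;
    for inst in instructions {
        num_instructions += 1;
        basic_block_size += 1;
        if inst.ends_basic_block {
            // Checking here, right before the reset, removes the need to
            // accumulate a separate `max_basic_block_size`.
            if basic_block_size > BASIC_BLOCK_LIMIT {
                return Err("basic block exceeds the configured limit");
            }
            basic_block_size = 0;
        }
    }
    // A trailing block that does not end on a terminator needs the same check.
    if basic_block_size > BASIC_BLOCK_LIMIT {
        return Err("basic block exceeds the configured limit");
    }
    Ok(num_instructions)
}
```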
This PR adds static validation that prevents upload of code that:
- contains basic blocks larger than the configured limit (currently 200)
- uses the sbrk instruction

Doing that statically at upload time (instead of at runtime) allows us to change the basic block limit or add instructions later without worrying about breaking old code. This is well worth the linear scan of the whole blob on deployment in my opinion. Please note that those checks are not applied when existing code is just run (hot path).
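The instruction check can be pictured as a single pass over the decoded instructions at upload time that rejects anything disallowed, such as sbrk (the `Opcode` enum and `validate_upload` function below are illustrative, not the pallet's actual types):

```rust
/// Illustrative opcode type; the real validation inspects PolkaVM opcodes.
#[derive(PartialEq)]
enum Opcode {
    Sbrk,
    Other,
}

/// Reject blobs containing disallowed instructions before they are stored,
/// so the hot path (running already-deployed code) never has to re-check.
fn validate_upload(opcodes: &[Opcode]) -> Result<(), &'static str> {
    if opcodes.iter().any(|op| *op == Opcode::Sbrk) {
        return Err("the sbrk instruction is not supported");
    }
    Ok(())
}
```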
Also some drive-by fixes:
- remove the no-op publish = true from the pallet-revive-fixtures manifest
- RUST_LOG