To read about PR expectations, check out the Pull requests section. To learn how to set up the project for development and testing, and to get per-feature insight, check out Development.
⚠️ IMPORTANT NOTE ⚠️ All contributions are expected to be of the highest possible quality! That means the PR is thoroughly tested and documented, and free of blindly generated ChatGPT code and documentation! PRs that do not comply with the rules stated here shall not be considered!
It is advised to create an issue before opening a PR. Creating an issue is the best way to reach somebody with repository-specific experience who can provide more info on how a problem/idea can be addressed and whether a PR is needed at all.
The PR template contains a checklist. It is important to go through the checklist to meet the expected quality standards and to ensure the CI workflow succeeds once it is executed.
Once a PR is created, somebody from the team will review it. When a reviewer leaves a comment, the PR author should not mark the conversation as resolved, because the repository has a setting that prevents merging if there are unresolved conversations - let the reviewer resolve it. The author can reply with:
- a request for clarification from the reviewer
- a link to the commit which addresses the reviewer's observation (simply pasting the SHA digest is enough)
This is an example of a good author-reviewer correspondence: link.
This project's CI/CD platform (CircleCI) does not have the option to trigger the workflow on external PRs with a simple click. So once a PR has been reviewed and its workflow looks like it could pass, it can either be accepted & merged blindly (which shall trigger the workflow on the target branch), or the following workaround can be used to trigger it:
# https://stackoverflow.com/questions/5884784/how-to-pull-remote-branch-from-somebody-elses-repo
# Add the contributor's fork as a remote, fetch it, and check out their branch
$ git remote add <CONTRIBUTOR> <CONTRIBUTOR_GIT_FORK_URL>
$ git fetch <CONTRIBUTOR>
$ git checkout -b <CONTRIBUTOR>/<BRANCH> <CONTRIBUTOR>/<BRANCH>
# Point the push URL at the main repository, then push to trigger the workflow
$ git remote set-url --push <CONTRIBUTOR> git@github.com:0xSpaceShard/starknet-devnet-rs.git
$ git push <CONTRIBUTOR> HEAD
Some developer scripts used in this project are written in Python 3, with dependencies specified in scripts/requirements.txt. You may want to install the dependencies in a virtual environment.
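For example, a virtual environment could be set up like this (a minimal sketch; the .venv directory name is an arbitrary choice):
# Create and activate a virtual environment, then install the script dependencies
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r scripts/requirements.txt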
Documentation maintenance requires installing npm.
It is highly recommended to get familiar with Visual Studio Code Dev Containers and to install the rust-analyzer extension.
Run the linter with:
$ ./scripts/clippy_check.sh
Run the formatter with:
$ ./scripts/format.sh
If you encounter an error like
error: toolchain 'nightly-x86_64-unknown-linux-gnu' is not installed
Resolve it with:
$ rustup default nightly
To check for unused dependencies, run:
$ ./scripts/check_unused_deps.sh
If you think this reports a dependency as a false positive (i.e. the dependency isn't actually unused), check here.
To check for spelling errors in the code, run:
$ ./scripts/check_spelling.sh
If you think this reports a false positive, check here.
To speed up development, you can put the previous steps (and more) in a local script defined at .git/hooks/pre-commit to have it run before each commit (more info), as sketched below.
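Such a hook could look like this (a minimal sketch; the chosen set of checks is an assumption - pick whichever scripts suit your workflow):
#!/bin/sh
# .git/hooks/pre-commit (must be executable: chmod +x .git/hooks/pre-commit)
# Abort the commit if any of the checks fail
set -e
./scripts/format.sh
./scripts/clippy_check.sh
./scripts/check_spelling.sh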
Some tests require the anvil command, so you need to install Foundry. The anvil command might not be usable by tests if you run them using VS Code's Run Test button available just above the test case. Either run tests using a shell which has foundry/anvil in PATH, or modify the BackgroundAnvil Command to specify anvil by its path on your system.
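If you don't have Foundry yet, it can be installed with foundryup, Foundry's standard installer (the steps below assume a Unix-like shell):
# Install foundryup, then use it to install the Foundry toolchain (includes anvil)
$ curl -L https://foundry.paradigm.xyz | bash
$ foundryup
# Verify that anvil is available in PATH
$ anvil --version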
To ensure that integration tests pass, be sure to have run cargo build --release or cargo run --release prior to testing. This builds the production target used in integration tests, so spawning BackgroundDevnet won't time out.
Run all tests using all available CPUs with:
$ cargo test
The previous command might cause your test run to be killed along the way due to memory issues. In that case, limiting the number of jobs helps; the suitable limit depends on your machine (rule of thumb: N=6):
$ cargo test --jobs <N>
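Putting the above together, a typical local test run (assuming a machine where 6 jobs is a safe limit) might look like this:
# Build the production target first so integration tests can spawn BackgroundDevnet
$ cargo build --release
# Run the test suite with a limited number of parallel jobs
$ cargo test --jobs 6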
To test if your contribution presents an improvement in execution time, check out the script at scripts/benchmark/command_stat_test.py.
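Assuming the dependencies from scripts/requirements.txt are installed, the script can presumably be invoked directly with Python (the exact arguments, if any, are not documented here - check the script itself):
$ python3 scripts/benchmark/command_stat_test.py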
To run the criterion benchmarks and generate a performance report:
$ cargo bench
This command will compile the benchmarks and run them using all available CPUs on your machine. Criterion will perform multiple iterations of each benchmark to collect performance data and generate statistical analysis.
Check the report created at target/criterion/report/index.html.
Criterion is highly configurable and offers various options to customise the benchmarking process. You can find more information about Criterion and its features in the Criterion documentation.
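For example, Criterion's baseline feature can be used to compare your branch against main (--save-baseline and --baseline are standard Criterion CLI options passed through cargo bench; the baseline name is arbitrary):
# On the main branch: record a baseline named "main"
$ cargo bench -- --save-baseline main
# On your feature branch: compare against the recorded baseline
$ cargo bench -- --baseline main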
To measure and benchmark memory, it is best to use external tools such as Valgrind, Leaks, etc.
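For instance, on Linux a heap profile could be captured with Valgrind's massif tool (a sketch; the binary name starknet-devnet and its location under target/release are assumptions based on a typical cargo layout):
# Profile heap usage of the release binary; ms_print renders the recorded profile
$ valgrind --tool=massif target/release/starknet-devnet
$ ms_print massif.out.<PID>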
Tests in Devnet require an ERC20 contract with the Mintable feature. Keep in mind that before compiling cairo-contracts, you need to tick the Mintable checkbox in this wizard and copy this implementation to /src/presets/erc20.cairo.
If smart contract constructor logic has changed, Devnet's predeployment logic needs to be changed accordingly, e.g. simulate_constructor in crates/starknet-devnet-core/src/account.rs.
Updating the underlying Starknet is done by updating the blockifier dependency. It also requires updating the STARKNET_VERSION constant.
Updating the RPC requires following the specification files in the starknet-specs repository. The spec_reader testing utility requires these files to be copied into the Devnet repository. The RPC_SPEC_VERSION constant needs to be updated accordingly.
When adding new Rust dependencies, specify them in the root Cargo.toml and use { workspace = true } in crate-specific Cargo.toml files.
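For example (a minimal sketch; some-crate and its version are hypothetical placeholders):
# root Cargo.toml
[workspace.dependencies]
some-crate = "1.0"

# crates/<MEMBER_CRATE>/Cargo.toml
[dependencies]
some-crate = { workspace = true }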
The documentation website content has its own readme.
To release a new version, check out the release docs.