We can use help in a bunch of areas and any help is appreciated. Our GitHub issues serve as a place for any discussion, whether it's bug reports, questions, or project direction. As the project grows, this policy may change.
Our Discord server is open for help and more ad hoc discussion. All activity on the Discord is still moderated under the project's Code of Conduct, which will be strictly enforced.
Building this project requires a stable Rust toolchain, which can be installed using `rustup`.

Clone the repository and navigate to the `tools` directory:

```shell
git clone https://github.com/rome/tools
cd tools
```
Compile all packages and dependencies:

```shell
cargo build
```
Rome can be used via the `rome` bin in the `rome_cli` package:

```shell
cargo run --bin rome -- --help
```
Rome can be used as a language server by following the instructions below.
The Rome language server is the binary crate `rome`, which can be built using the command:

```shell
cargo build --bin rome
```
If benchmarking the language server, be sure to build with the `--release` flag.
The VS Code extension can be installed from the Marketplace and can be used with a development build of the language server by setting the `"rome.lspBin"` VS Code setting to the path of the binary:

```json
"rome.lspBin": "/path/to/rome/target/debug/rome"
```
Please note that Windows disallows modifying an executable while it's running, meaning you won't be able to recompile the Rome binary once the extension has been activated in your editor.
The server is spawned as a background daemon, and continues to run even after the editor is closed.
To stop the running daemon instance, use the `rome stop` command with the editor closed, as the extension will otherwise try to restart the daemon.
To build the VS Code extension from source, navigate to the `editors/vscode` directory and run:

```shell
npm install
npm run build
```
This will create a `rome_lsp.vsix` file, which you can install into VS Code by running:

```shell
npm run install-extension
```
The `"rome.lspBin"` VS Code setting will still need to be set as described above.
When the extension is running, it will connect to an existing daemon server, or it will bootstrap one.
When you apply changes to the binary, you need to do two things:
- recompile the binary;
- kill the daemon process, so that a new server session is spawned with your changes.
When the daemon is running, it's possible to inspect its logs in the `rome-logs` folder, placed in the temporary directory of the operating system.
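Since the temporary directory depends on the platform, locating the logs can be sketched like this (the `$TMPDIR`-with-`/tmp`-fallback convention is a Unix assumption; on Windows, look under `%TEMP%` instead):

```shell
# The daemon writes its logs to a "rome-logs" folder inside the OS
# temporary directory; on Unix-like systems that is $TMPDIR or /tmp.
LOG_DIR="${TMPDIR:-/tmp}/rome-logs"
echo "$LOG_DIR"
# List the logs if the daemon has already created the folder.
ls "$LOG_DIR" 2>/dev/null || echo "no logs yet"
```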
If files specific to your local development environment should be ignored, please add these files to a global git ignore file rather than to a git ignore file within Rome.
You can find more information on this process here.
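As a sketch of that setup (the `~/.gitignore_global` file name and the `.idea/` entry are only examples; any path and patterns work):

```shell
# Tell git to use a global ignore file for editor- and OS-specific
# files, instead of adding them to Rome's own ignore files.
git config --global core.excludesFile "$HOME/.gitignore_global"
echo ".idea/" >> "$HOME/.gitignore_global"
# Verify the setting took effect.
git config --global --get core.excludesFile
```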
The npm module `npm/rome` contains Rome's Node.js API, which supports different backends:

- `wasm-nodejs` (WebAssembly)
- `backend-jsonrpc` (connection to the daemon)
For testing and developing, you need to build these packages, following these steps:

- install `wasm-pack` globally;
- run the `build` command inside the package `backend-jsonrpc`;
- run the `build` and `build:wasm-node-dev` commands inside the package `js-api` (folder `npm/js-api`);
- run `pnpm i` inside the package `js-api` (folder `npm/js-api`); this will link the WebAssembly bindings and the JSON-RPC bindings.
The tests are run against the compiled files, which means that you need to run the `build` command after implementing features or fixing bugs.
The Rome website is built with Astro. To start a development server, you can run the following commands:

```shell
cd website
pnpm install
pnpm start
```
- `cargo lint` is a cargo alias that runs `clippy`, Rust's official linter, under the hood;
- `cargo format` is a cargo alias that runs `rustfmt`, Rust's official formatter, under the hood;
- `cargo test` will run the test suite; make sure to run this command from the root of the project, so it runs the tests of all the internal crates.
If you work on a parser and you create new nodes or modify existing ones, you will need to run a command to update some auto-generated files.
This command will update the syntax of the parsers.
The source is generated from the `ungram` files.
This command will create new tests for your parser. We currently have a neat infrastructure where tests for the parser are generated from inline comments found inside the source code. Please read the relevant chapter for more information.
It's strongly advised to run this command before committing new changes.
This command will detect linter rules declared in the `analyzers` and `assists` directories in `rome_analyze`, regenerate the index modules `analyzers.rs` and `assists.rs` to import these files, and update the registry builder function in `registry.rs` to include all these rules.
It will also regenerate the configuration of the rules.
This command will check and report parser conformance against different test suites. We currently target the Official ECMAScript Conformance Test Suite and the TypeScript Test Suite.

The test suites are included as git submodules and can be pulled using:

```shell
git submodule update --init --recursive
```
Internally, the Rome team adheres as closely as possible to the conventional commit specification. Following this convention encourages commit best practices and facilitates commit-powered features like changelog generation.
The following commit prefixes are supported:

- `feat:`, a new feature
- `fix:`, a bugfix
- `docs:`, a documentation update
- `test:`, a test update
- `chore:`, project housekeeping
- `perf:`, project performance
- `refactor:`, refactor of the code without change in functionality
Below are examples of well-formatted commits:

```
feat(compiler): implement parsing for new type of files
fix: fix nasty unhandled error
docs: fix link to website page
test(lint): add more cases to handle invalid rules
```
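A commit message can be checked against these prefixes before pushing; here is a minimal sketch (the regular expression only covers the prefixes listed above, and the optional-scope syntax such as `(compiler)` is an assumption based on the examples, not an official check from this project):

```shell
# Validate a commit message against the supported conventional-commit
# prefixes, with an optional scope in parentheses.
msg="feat(compiler): implement parsing for new type of files"
if echo "$msg" | grep -Eq '^(feat|fix|docs|test|chore|perf|refactor)(\([a-z-]+\))?: .+'; then
  echo "well-formatted"
else
  echo "not conventional"
fi
```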
When creating a new pull request, it's preferable to use a conventional commit-formatted title, as this title will be used as the default commit message on the squashed commit after merging.
Please use the template provided.
If your PR requires an update to the website (new features, breaking changes, etc.), you should create a new PR once the previous PR has been successfully merged.
Go to the issues section and check the pinned issues. You will find a pinned issue that starts with "Documentation and Focus". Inside, you will find the details of:
- the name of the branch where to point the PR that updates the documentation;
- the PR that we will merge when the release is ready;
If you can't create a new PR, please let the team know. The template should help to give all the information to the team.
Here are some other scripts that you might find useful.
If you are a core contributor, and you have access to create new branches from the main repository (not a fork), use these comments to run specific workflows:
- `!bench_parser` benchmarks the parser's runtime performance and writes a comment with the results;
- `!bench_formatter` benchmarks the formatter's runtime performance and writes a comment with the results;
- `!bench_analyzer` benchmarks the analyzer's runtime performance and writes a comment with the results.
To know the technical details of how our analyzer works, how to create a rule, and how to write tests, please check our internal documentation page.
To know the technical details of how our JavaScript parser works and how to write tests, please check our internal documentation page.
To know the technical details of how our formatter works and how to write tests, please check our internal documentation page.
We follow the specs suggested by the official documentation:

- Odd minor versions are dedicated to pre-releases, e.g. `*.5.*`.
- Even minor versions are dedicated to official releases, e.g. `*.6.*`.
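The parity rule can be checked mechanically; a minimal sketch (the example version string is made up):

```shell
# Classify a version by the parity of its minor number:
# odd minor -> pre-release, even minor -> official release.
version="0.5.1"   # example version string
minor=$(echo "$version" | cut -d. -f2)
if [ $((minor % 2)) -eq 1 ]; then
  echo "$version is a pre-release"
else
  echo "$version is an official release"
fi
```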
Internally, we use `insta` for snapshot tests. This means that you should follow their installation instructions to update or accept the new snapshot tests.
A lot of the commands above are more easily accessible using our Just recipes. You can install `just` using cargo:

```shell
cargo install just
```

Or using different methods, as explained in their documentation. It's advised to install `just` using a package manager, so you can run `just` as a binary.
```shell
❯ just
just --list -u
Available recipes:
    codegen
    documentation
    new-lintrule path name
    test-lintrule name
    check-ready
```
All the necessary `codegen` can be called using:

```shell
just codegen
```
After all changes are done, the code can be checked to see if it is ready to be pushed with:

```shell
just check-ready
```