# test-adapter: add named variable for transaction digests #20937

Merged

## Conversation
## Description

For each transaction that is executed by the test adapter, record a named variable, `digest_$task`, where `$task` is the transactional testing task number (the same number that appears in the first position of fake object IDs). This is to allow later tasks to refer to transaction digests as variables in their bodies (in particular in JSON-RPC and GraphQL requests).

This is in preparation for adding tests for querying transaction blocks by their digest.

The initial implementation assumes that all transactional test commands execute at most one transaction (which is true for now). The adapter will panic if this is ever not true, at which point we can do something more sophisticated, if we need/want to.

## Test plan

Run the existing transactional tests and make sure they all still pass (this checks that the assumption about the number of transactions holds):

```
$ cargo nextest run \
    -p sui-indexer-alt-e2e-tests \
    -p sui-graphql-e2e-tests \
    -p sui-adapter-transactional-tests
```

A follow-up change will introduce tests that make use of the new feature.
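The recording step described above can be sketched roughly as follows. All names here are hypothetical (the real adapter's types and digest format differ); the point is the `digest_$task` naming convention and the at-most-one-transaction assertion:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the test adapter's variable table.
struct TestAdapter {
    variables: HashMap<String, String>,
}

impl TestAdapter {
    /// Record a named variable `digest_$task` for the transaction executed
    /// by task number `task`.
    fn record_digests(&mut self, task: usize, digests: &[String]) {
        // The initial implementation assumes each transactional test command
        // executes at most one transaction, and panics otherwise.
        assert!(
            digests.len() <= 1,
            "expected at most one transaction per task, got {}",
            digests.len(),
        );
        if let Some(digest) = digests.first() {
            self.variables.insert(format!("digest_{task}"), digest.clone());
        }
    }
}

fn main() {
    let mut adapter = TestAdapter { variables: HashMap::new() };
    adapter.record_digests(3, &["ExampleDigest".to_string()]);
    // Later tasks can now refer to `digest_3` in their bodies.
    assert_eq!(
        adapter.variables.get("digest_3").map(String::as_str),
        Some("ExampleDigest"),
    );
    println!("ok");
}
```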
## Description

This was just hanging out in the `api` module -- creating a dedicated `data` module to house it, in anticipation of adding more data-related things there.

## Test plan

CI
## Description

Re-organising how errors are propagated from various parts of the system up into JSON-RPC errors.

The original idea was for each sub-system to include a domain-specific error type with a `From<...>` impl to convert it into JSON-RPC's `ErrorObject`. This doesn't work well, for three reasons:

- It requires a lot of boilerplate (the `From` impls).
- The code that decides which kind of JSON-RPC error an internal error maps to sits with the system that produces the error, but the same internal error may be interpreted as different JSON-RPC error codes in different scenarios (for example, not finding a value may be an invalid parameter issue or an internal error, depending on whether we expect the value to exist).
- There are some error types that we can't implement `From<...>` for. In particular, when working with `DataLoader`s we need to return a cloneable error type. The most general way to do this is to wrap the underlying error type in an `Arc`, but then we can't implement `From<...>` on it because of orphan impl rules.

The new strategy is to provide generic conversion operators that take the message from an error and wrap it in a JSON-RPC error code. The bodies of JSON-RPC methods are responsible for calling these functions to convert errors; this works around all the issues above.

## Test plan

CI
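The generic conversion operators described above might look something like the following sketch. `ErrorObject` here is a hypothetical stand-in for the JSON-RPC library's error type, and the function names are illustrative; only the shape matters -- the caller, not the error's producer, picks the JSON-RPC code:

```rust
use std::fmt::Display;

// Hypothetical stand-in for the JSON-RPC library's error object.
#[derive(Debug, PartialEq)]
struct ErrorObject {
    code: i32,
    message: String,
}

// Generic conversion operators: take the message from any displayable error
// and wrap it in a JSON-RPC error code. Codes follow the JSON-RPC 2.0 spec.
fn invalid_params<E: Display>(err: E) -> ErrorObject {
    ErrorObject { code: -32602, message: err.to_string() }
}

fn internal_error<E: Display>(err: E) -> ErrorObject {
    ErrorObject { code: -32603, message: err.to_string() }
}

fn main() {
    // The same underlying error can map to different codes depending on
    // whether the method expected the value to exist.
    let not_found = "value not found";
    assert_eq!(invalid_params(not_found).code, -32602);
    assert_eq!(internal_error(not_found).code, -32603);
    assert_eq!(invalid_params(not_found).message, "value not found");
    println!("ok");
}
```

Because these are free functions over `Display` rather than `From` impls, they also work on wrapped types like `Arc<E>` that orphan rules keep us from implementing `From` for.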
## Description

Require that queries have static lifetime parameters, to simplify our set-up.

## Test plan

CI
## Description

Move the `*Watermark` types under `models`, as that's where they were before the introduction of a framework.

## Test plan

CI
wlmyng approved these changes on Jan 22, 2025.
## Description

Introduce the DataLoader pattern and use it to implement the package resolver (which will be used to decorate transactions and values based on type information) and its associated system package task. As there are now three different ways to query for data, a wrapping `Context` type has been created to pass to each `RpcModule`, giving it access to a direct SQL connection, a data loader, or a package resolver. Later it will also be augmented with access to the KV store.

## Test plan

CI, plus confirm that system package task eviction works by running the indexer and RPC together from genesis -- shortly after genesis, a log message will appear showing that the RPC has detected the first epoch change and has evicted system packages from its cache accordingly.
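A minimal sketch of the wrapping `Context` idea, with placeholder unit structs standing in for the three real data sources (the actual field and type names in the codebase differ):

```rust
use std::sync::Arc;

// Placeholders for the three data sources a Context bundles together.
struct PgPool;
struct DataLoader;
struct PackageResolver;

// One value handed to each RpcModule, granting access to a direct SQL
// connection pool, a data loader, and a package resolver.
#[derive(Clone)]
struct Context {
    pool: Arc<PgPool>,
    loader: Arc<DataLoader>,
    package_resolver: Arc<PackageResolver>,
}

fn main() {
    let ctx = Context {
        pool: Arc::new(PgPool),
        loader: Arc::new(DataLoader),
        package_resolver: Arc::new(PackageResolver),
    };
    // Cloning the context is cheap: it only bumps the Arc reference counts,
    // so every module can hold its own copy.
    let ctx2 = ctx.clone();
    assert_eq!(Arc::strong_count(&ctx.pool), 2);
    drop(ctx2);
    assert_eq!(Arc::strong_count(&ctx.pool), 1);
    println!("ok");
}
```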
## Description

The package resolver is passed in an `Arc` in case the callee needs to issue concurrent requests (each of which needs its own package resolver), but it does not need to be passed by value down to the leaves. It can be passed by reference and cloned to get a value if that is necessary (most of the time, it is not).

## Test plan

CI
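The pattern described above, sketched with a placeholder resolver type: leaves borrow, and only call sites that genuinely need an owned handle clone the `Arc`:

```rust
use std::sync::Arc;

// Placeholder for the real package resolver type.
struct PackageResolver;

// Leaves only need a shared reference; deref coercion turns
// `&Arc<PackageResolver>` into `&PackageResolver` at the call site.
fn leaf(_resolver: &PackageResolver) {
    // ... read-only work ...
}

// Only where an owned handle is actually needed (e.g. to spawn concurrent
// requests) do we clone the Arc, which just bumps a reference count.
fn owned_handle(resolver: &Arc<PackageResolver>) -> Arc<PackageResolver> {
    Arc::clone(resolver)
}

fn main() {
    let resolver = Arc::new(PackageResolver);
    leaf(&resolver); // no clone needed on this path
    let owned = owned_handle(&resolver);
    assert_eq!(Arc::strong_count(&resolver), 2);
    drop(owned);
    assert_eq!(Arc::strong_count(&resolver), 1);
    println!("ok");
}
```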
## Description

When running `generate_schema` for the indexer, it should also run the indexing framework's migrations, but then exclude the framework's tables when printing the schema. This mirrors the behaviour when the indexer runs migrations, and it allows the indexer's migrations to depend on framework tables existing.

## Test plan

A later change will introduce a migration that expects the `watermarks` table to exist. That migration fails before this change (when run on a clean DB), but now succeeds.
## Description

Implement `sui_getTransactionBlock`, excluding:

- `showBalanceChanges`
- `showObjectChanges`

These will take a bit more work and will appear in follow-up PRs.

## Test plan

New E2E tests:

```
sui$ cargo nextest run -p sui-indexer-alt-e2e-tests
```