query batching #126
Hi! Thank you for submitting this issue, and especially for providing us with very clear instructions on how to reproduce the behavior! The query that is sent is effectively part of a batch:

```json
[{"operationName":"test","variables":{},"query":"query test {\n me {\n id\n __typename\n }\n}"}]
```

It is indeed not supported yet. We'll get back to you once we have a roadmap available, and hopefully an ETA for query batching support.
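To make the shape of the problem concrete: a batched request body, as produced by Apollo Client's `BatchHttpLink`, is a JSON *array* of ordinary GraphQL request objects rather than a single object. A minimal Python sketch (purely illustrative; the field values are taken from the payload above, the second entry is invented for illustration):

```python
import json

# A batched GraphQL request: a JSON array of ordinary request objects.
batch = [
    {
        "operationName": "test",
        "variables": {},
        "query": "query test {\n me {\n id\n __typename\n }\n}",
    },
    # Second entry is a hypothetical example to show a multi-item batch.
    {
        "operationName": "other",
        "variables": {"id": 1},
        "query": "query other($id: ID!) { user(id: $id) { name } }",
    },
]

body = json.dumps(batch)

# A server that supports batching typically probes the top-level JSON type:
# an array signals a batch, an object signals a single request.
parsed = json.loads(body)
is_batch = isinstance(parsed, list)
print(is_batch)                     # True
print(len(parsed))                  # 2
print(parsed[0]["operationName"])   # test
```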
BodyDeserializerError when using Apollo Client's BatchHttpLink
@o0Ignition0o Can you please help us with an ETA for query batching support on the roadmap?
Also interested in understanding the timeline for batching support.
It is not on the roadmap for now; we will come back with an update once it is there.
@Geal Any update on this? Most of our clients are using batched requests, so this migration to Rust would be tougher without it.
Hi @Geal @o0Ignition0o, any updates regarding batching support? If it's not already on the roadmap, could one of you guide us towards contributing this feature?
Any updates on this?
Addition of this feature would be much appreciated.
Created this draft PR to add support for batching at the router_service layer.
Curious as to when support for batching will be available. Any update on this would be really appreciated!
For linkability: this issue is tracking support for accepting an array of GraphQL requests in the HTTP body from the client to the Router. We have another issue for making batched requests from the Router to subgraphs.
Things needing attention:

- [x] Configuration to match the design
- [x] Decide if cloning `parts` is acceptable (in review). It isn't, so not cloned but re-built.
- [x] Decide if cloning context for each `SupergraphRequest` is acceptable (in review)
- [x] Should we do special handling if `@defer` or `subscription` is detected in a batch?
- [x] Metrics
- [x] Find someone happy to test from an appropriately configured Apollo Client and verify it works
- [x] Modify Apollo telemetry to create separate root traces for each batch entry

Fixes #126

<!-- start metadata -->
---

**Checklist**

Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.

- [x] Changes are compatible[^1]
- [x] Documentation[^2] completed
- [x] Performance impact assessed and acceptable
- Tests added and passing[^3]
  - [x] Unit Tests
  - [x] Integration Tests
  - [x] Manual Tests

**Exceptions**

Manual testing was performed with `curl` and [apollo client](https://www.apollographql.com/docs/react/api/link/apollo-link-batch-http/).

**Notes**

[^1]: It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.
[^2]: Configuration is an important part of many changes. Where applicable please try to document configuration examples.
[^3]: Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.

---------

Co-authored-by: Edward Huang <[email protected]>
Co-authored-by: Geoffroy Couprie <[email protected]>
Co-authored-by: Maria Elisabeth Schreiber <[email protected]>
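For reference, the "Configuration to match the design" item above corresponds to a router config switch. To the best of my reading of the router's query-batching documentation, enabling client-to-router batching looks roughly like this (check the current docs for the exact keys, as the feature was experimental at the time of this thread):

```yaml
# router.yaml — enable client-to-router batching
# (key names per the router's query-batching documentation; verify against
# the docs for your router version)
batching:
  enabled: true
  mode: batch_http_link
```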
…#126)

* Update instructions for releasing supergraph component

1. Use `--debug` in `cargo xtask dist` so `Cargo.lock` can be updated. Without that param, you'll get the following error of sadness:

```
➜ cargo xtask dist
    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
     Running `target/debug/xtask dist`
info: running `cargo build --release --locked --target x86_64-apple-darwin` in `/Users/weatherman/code/apollographql/federation-rs/federation-1`
info: running `cargo build --release --locked --target x86_64-apple-darwin` in `/Users/weatherman/code/apollographql/federation-rs/federation-2`
info: running `cargo build --release --locked --target x86_64-apple-darwin` in `/Users/weatherman/code/apollographql/federation-rs/federation-2/router-bridge`
error: the lock file /Users/weatherman/code/apollographql/federation-rs/federation-2/Cargo.lock needs to be updated but --locked was passed to prevent this
If you want to try to generate the lock file without accessing the network, remove the --locked flag and use --offline instead.
Error: Could not run command in `/Users/weatherman/code/apollographql/federation-rs/federation-2/router-bridge`

Caused by:
    0: Encountered an issue while executing `cargo build --release --locked --target x86_64-apple-darwin`
    1: Exited with status code 101
```

1. Add instructions for tagging the `latest-2` release so the supergraph component is downloaded by rover
This project is an extension of the existing work to support [client side batching in the router](#126). The current implementation is experimental and is publicly [documented](https://www.apollographql.com/docs/router/executing-operations/query-batching/). The additional work to enable batching requests to subgraphs is captured in [this issue](#2002).

Currently the concept of a batch is preserved until the end of the RouterRequest processing. At this point we convert each batch request item into a separate SupergraphRequest. These are then planned and executed concurrently within the router and re-assembled into a batch when they complete. It's important to note that, with this implementation, the concept of a batch, from the perspective of an executing router, now disappears and each request is planned and executed separately.

This extension will modify the router so that the concept of a batch is preserved, at least outwardly, so that multiple subgraph requests are "batched" (in exactly the same format as a client batch request) for onward transmission to subgraphs. The goal of this work is to provide an optimisation by reducing the number of round-trips to a subgraph from the router. Additionally, the work will address an [unresolved issue](#4019) from the existing experimental implementation and promote the existing implementation from experimental to fully supported.

Fixes #2002

<!-- start metadata -->
---

**Review Guidance**

This is a fairly big PR, so I've written these notes to help make the review more approachable.

1. The most important files to review are (in order):

- [.changesets/feat_garypen_2002_subgraph_batching.md](https://github.com/apollographql/router/pull/4661/files#diff-6376c91cfdd47332a662c760ac849bb5449a1b6df6891b30b72b43f041bd836f)
- [docs/source/executing-operations/query-batching.mdx](https://github.com/apollographql/router/pull/4661/files#diff-617468db3057857f71c387eaa0d1a6161e3c1b8bf9fcb2de6fc6eafedc147277)
- [apollo-router/src/services/router/service.rs](https://github.com/apollographql/router/pull/4661/files#diff-544579a213fda1bff6313834d30fe1746a8a28ffd7c0d6dfa1081fa36a487355)
- [apollo-router/src/services/supergraph/service.rs](https://github.com/apollographql/router/pull/4661/files#diff-5d72a88a68962a5926fb5bb115ea3efc186904612f74e697d72e3f009669c733)
- [apollo-router/src/query_planner/plan.rs](https://github.com/apollographql/router/pull/4661/files#diff-21a82d277d12e8f21b6b71398d62e95303a117130cc4a27510b85ebfceeb8208)
- [apollo-router/src/services/subgraph_service.rs](https://github.com/apollographql/router/pull/4661/files#diff-6ef5a208ca8622f30eef88f75c18566e0304d59856b66293dcd6811555e6382e)
- [apollo-router/src/batching.rs](https://github.com/apollographql/router/pull/4661/files#diff-3e884074ecad8176341159a2382aa81c49d74b851894b8ade9fa4718c434dec6)

First read the documentation. Hopefully that will make clear how this feature works. I've picked these files as being most important (and ordered them for review) because:

- router service => This is where we spot incoming batches and create context `BatchQuery` items to manage them through the router. We also re-assemble them on the way back and identify any batches which may need to be cancelled.
- supergraph service => Here we pick up the information about how many fetches we believe each `BatchQuery` will need to make.
- plan => The new `query_hashes()` does this fetch identification for us. This is the most important function in this feature.
- subgraph service => Here is where we intercept the calls to subgraphs and park threads to wait for batch execution to be performed. We do a lot of work here, so this is where most of the intrusive changes are: assembling and dis-assembling batches and managing the co-ordination between a number of parked tasks.
- batching => This is the implementation of batch co-ordination. Each batch has a task which manages a variety of channels to facilitate communication between the incoming batches, waiting tasks and outgoing (to subgraph) batches. I'm suggesting reading this *after* reading through the service changes because it should mainly just be implementation details and you will be able to follow what is happening without knowing all this detail initially. Once you understand the changes to the services, you will need to read this code. Feel free to peek ahead though if that's how you like to review stuff.

2. There are still a couple of TODOs which will be resolved early next week. They are both related to how we handle context cloning, so a decision is still pending there.

Obviously all the files need to be reviewed, but the remaining files should be fairly mechanical/straightforward.

---

**Checklist**

Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.

- [x] Changes are compatible[^1]
- [x] Documentation[^2] completed
- [x] Performance impact assessed and acceptable
- Tests added and passing[^3]
  - [x] Unit Tests
  - [x] Integration Tests
  - ~[ ] Manual Tests~

**Exceptions**

*Note any exceptions here*

**Notes**

[^1]: It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.
[^2]: Configuration is an important part of many changes. Where applicable please try to document configuration examples.
[^3]: Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.

---------

Co-authored-by: Nicholas Cioli <[email protected]>
Co-authored-by: Edward Huang <[email protected]>
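The flow described above — accept a batch, fan each item out for concurrent planning and execution, then re-assemble the responses in order — can be sketched as follows. This is a minimal Python analogue of the concurrency pattern only, not the router's actual Rust implementation; `plan_and_execute` and `handle_router_request` are hypothetical names invented for illustration:

```python
import asyncio

async def plan_and_execute(request: dict) -> dict:
    # Hypothetical stand-in for planning and executing one batch item
    # independently (what the router does per SupergraphRequest).
    await asyncio.sleep(0)  # yield, simulating real async work
    return {"data": {"echo": request["operationName"]}}

async def handle_router_request(body):
    # The batch is preserved until the end of RouterRequest processing:
    # detect an array body, fan out, then re-assemble in order.
    is_batch = isinstance(body, list)
    requests = body if is_batch else [body]
    # Plan and execute each item concurrently.
    responses = await asyncio.gather(*(plan_and_execute(r) for r in requests))
    # Re-assemble: a batch in yields a batch out, preserving item order.
    return list(responses) if is_batch else responses[0]

batch = [{"operationName": "a"}, {"operationName": "b"}]
result = asyncio.run(handle_router_request(batch))
print(result)  # [{'data': {'echo': 'a'}}, {'data': {'echo': 'b'}}]
```

The key property the PR text calls out is visible here: ordering is preserved on re-assembly (`asyncio.gather` returns results in argument order), even though execution is concurrent.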
Describe the bug

In our project we use BatchHttpLink. When attempting to run against router, an error is thrown.

To Reproduce

Steps to reproduce the behavior:

1. Check out the broken branch
2. `yarn install`
3. `yarn start:router`
4. `yarn start`
5. Observe the error from router (see Output below)
6. Check out the main branch and refresh the browser, where you can observe the query succeeded

Expected behavior

I would assume a successful query response.
Output
```
Nov 10 21:05:43.343 DEBUG hyper::proto::h1::conn: incoming body is content-length (104 bytes)
Nov 10 21:05:43.343 DEBUG hyper::proto::h1::conn: incoming body completed
Nov 10 21:05:43.343 DEBUG warp::filters::body: request json body error: invalid type: map, expected a string at line 1 column 1
Nov 10 21:05:43.343 DEBUG warp::filter::service: rejected: Rejection([BodyDeserializeError { cause: Error("invalid type: map, expected a string", line: 1, column: 1) }, MethodNotAllowed])
```
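The `BodyDeserializeError` in the log is consistent with the router (at the time) deserializing the body as a single GraphQL request, while `BatchHttpLink` sends a JSON array. A rough Python analogue of that strict-deserialization failure (illustrative only; `deserialize_single_request` is a hypothetical name, and the error message merely mimics the style of the warp log above):

```python
import json

def deserialize_single_request(raw: str) -> dict:
    # Strictly expect a single request object, like a server
    # without batching support would.
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        # Loosely analogous to warp's BodyDeserializeError rejection above
        raise TypeError(
            f"invalid type: {type(parsed).__name__}, expected a single request object"
        )
    return parsed

# The batched body from the reproduction: a JSON array, not an object.
batched_body = '[{"operationName":"test","variables":{},"query":"query test { me { id } }"}]'

try:
    deserialize_single_request(batched_body)
    failed = False
    message = ""
except TypeError as err:
    failed = True
    message = str(err)

print(failed)   # True
print(message)  # invalid type: list, expected a single request object
```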
Additional context
Delta between two branches in sample repo