bubble up errors to the next nullable field (#528)
* aggregate subgraph errors, update their paths to match the merged response and remove the message to hide private infra information
* use the schema and the query to validate the response data, inserting a null where there is an invalid value; for non-null types, replace the parent object or list with a null, bubbling up in the response until the next nullable type

* handle the response validation inside response formatting

it makes more sense to validate while we are already exploring the
response tree.
It is also necessary to have the selection sets available when
validating; otherwise we would get errors on non-null fields even if
they are not present in the query

This introduces a limited kind of query validation, since we need to
check that some elements of the query match the schema. That validation
returns None on failure instead of a meaningful error message. We
should wait for apollo-rs semantic analysis to provide good error
messages instead (and delete all of that code)
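
to make the bubbling rule above concrete, here is a minimal hypothetical
sketch (not the router's actual types or code): an invalid value under a
non-null type is reported upwards, and the nearest nullable ancestor
absorbs it by turning itself into null.

#[derive(Clone, Debug, PartialEq)]
enum Val {
    Null,
    Int(i64),
    Object(Vec<(String, Val)>),
}

enum Ty {
    Int,                       // nullable scalar
    NonNull(Box<Ty>),          // wrapper that forbids Null
    Object(Vec<(String, Ty)>), // nullable object with typed fields
}

struct InvalidValue;

fn format(ty: &Ty, val: &Val) -> Result<Val, InvalidValue> {
    match (ty, val) {
        // a non-null type cannot produce Null: report the error to the caller
        (Ty::NonNull(inner), v) => match format(inner, v) {
            Ok(Val::Null) | Err(InvalidValue) => Err(InvalidValue),
            ok => ok,
        },
        // nullable types accept a null value as-is
        (_, Val::Null) => Ok(Val::Null),
        (Ty::Int, Val::Int(i)) => Ok(Val::Int(*i)),
        (Ty::Object(fields), Val::Object(obj)) => {
            let mut out = Vec::new();
            for (name, field_ty) in fields {
                let field_val = obj
                    .iter()
                    .find(|(k, _)| k == name)
                    .map(|(_, v)| v.clone())
                    .unwrap_or(Val::Null);
                match format(field_ty, &field_val) {
                    Ok(v) => out.push((name.clone(), v)),
                    // bubbling up: this nullable object replaces itself with Null
                    Err(InvalidValue) => return Ok(Val::Null),
                }
            }
            Ok(Val::Object(out))
        }
        // value does not match its declared type
        _ => Err(InvalidValue),
    }
}

fn main() {
    // `field: Int!` inside a nullable object: the invalid null bubbles up
    // and nulls out the whole object
    let ty = Ty::Object(vec![("field".to_string(), Ty::NonNull(Box::new(Ty::Int)))]);
    let val = Val::Object(vec![("field".to_string(), Val::Null)]);
    assert_eq!(format(&ty, &val).ok(), Some(Val::Null));
}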


* error propagation has a side effect: now all response formatting tests
will need a valid schema

* clean up the situation around query operation names,
where passing a query with an invalid name or no name could result in
the response object being returned without filtering
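
a hedged sketch of what this aims for (names and shapes here are
illustrative, not the router's API): the requested operation name must
resolve to an operation before any filtering happens, and an unknown
name is rejected instead of silently returning the unfiltered response.

#[derive(Debug)]
struct Operation {
    name: Option<String>,
}

fn select_operation<'a>(
    operations: &'a [Operation],
    requested: Option<&str>,
) -> Option<&'a Operation> {
    match requested {
        // an explicit name must match exactly
        Some(name) => operations.iter().find(|op| op.name.as_deref() == Some(name)),
        // with no requested name, only a single, unambiguous operation can be picked
        None => match operations {
            [only] => Some(only),
            _ => None,
        },
    }
}

fn main() {
    let ops = vec![Operation { name: Some("GetThing".to_string()) }];
    // an invalid name now yields "no operation" instead of the raw response
    assert!(select_operation(&ops, Some("WrongName")).is_none());
}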

* support fragments at the top level

this requires a special case when applying selections, since the
top level object will not provide a __typename field
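
a sketch of that special case (hypothetical helper, not the actual code):
at the root there is no __typename to read, so the type condition has to
be checked against the schema's root operation type instead of the data.

fn fragment_applies(
    type_condition: &str,
    root_operation_type: &str,
    typename_in_data: Option<&str>,
    at_top_level: bool,
) -> bool {
    if at_top_level {
        // the top-level object provides no __typename: rely on the schema
        type_condition == root_operation_type
    } else {
        // nested objects are checked against the __typename they return
        typename_in_data == Some(type_condition)
    }
}

fn main() {
    // a `... on Query { ... }` fragment at the top level still applies even
    // though the response object contains no __typename field
    assert!(fragment_applies("Query", "Query", None, true));
}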

* deduplicate selections

if a field is requested multiple times, it should only be obtained once
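
a rough sketch of the deduplication (the Selection type here is a
stand-in; the real change derives Eq and Hash on selections and keeps a
HashSet of already-seen ones while building the selection set):

use std::collections::HashSet;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Selection {
    name: String,
}

// keep only the first occurrence of each selection
fn dedup(selections: Vec<Selection>) -> Vec<Selection> {
    let mut known = HashSet::new();
    let mut out = Vec::new();
    for selection in selections {
        // insert returns false when the selection was already seen
        if known.insert(selection.clone()) {
            out.push(selection);
        }
    }
    out
}

fn main() {
    let twice = vec![
        Selection { name: "inStock".to_string() },
        Selection { name: "inStock".to_string() },
    ];
    assert_eq!(dedup(twice).len(), 1);
}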

* if we already know the type of a selection, we might not get __typename

in some cases, an object might not contain a typename. Example:

schema:
type Query {
  get: Thing
}

type Thing {
  name: String
}

query {
  get {
    ... on Thing {
      name
    }
  }
}

Here the operation will be handled directly by the subgraph, and it
might not return a __typename field, since we already know which type
will be returned (while for a union or interface we will get a
__typename).

Now, when parsing the query, we need to store the "current type", which is
known because Selection::from_ast is called while creating the parent
selection. If that type and the fragment's type condition match, we do
not need to check __typename in the response.
There is an additional subtlety here: fragments used in a fragment spread
might be defined in the query rather than the schema, and might appear
later in the query. Since we go through the query's AST lazily, it makes
sense to store the current type, then, when applying the fragment spread,
look up the actual fragment and check the expected type there.
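
a small hypothetical illustration of that check (the real logic lives in
Selection::from_ast and in response formatting):

// the parent selection passes down the type it already knows; if an inline
// fragment's type condition matches it, the response does not need to carry
// a __typename field for the fragment to apply
fn needs_runtime_typename_check(current_type: &str, type_condition: Option<&str>) -> bool {
    match type_condition {
        // `... { field }` has no condition and always applies
        None => false,
        // `... on Thing { field }` inside a field already known to be Thing
        Some(condition) => condition != current_type,
    }
}

fn main() {
    // the `get: Thing` example above: the condition matches the known type,
    // so no __typename lookup is needed
    assert!(!needs_runtime_typename_check("Thing", Some("Thing")));
    // for an interface or union field the condition may differ, and the
    // response's __typename has to be consulted
    assert!(needs_runtime_typename_check("Product", Some("DiscountedProduct")));
}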

* the type condition is optional on inline fragments

The following query is valid:

query {
  topProducts {
    ... {
      inStock
    }
  }
}

this commit also fixes fragment spreads to use FieldType::inner_type_name
when checking the type condition; otherwise fragments on list and non-null
fields would not apply
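
for illustration, a simplified restatement of the FieldType shape and the
unwrap (the real enum and method are in the field_type.rs diff below;
scalar variants are omitted here):

#[derive(Debug)]
enum FieldType {
    Named(String),
    List(Box<FieldType>),
    NonNull(Box<FieldType>),
}

impl FieldType {
    // return the named type inside list and non-null wrappers
    fn inner_type_name(&self) -> Option<&str> {
        match self {
            FieldType::Named(name) => Some(name.as_str()),
            FieldType::List(inner) | FieldType::NonNull(inner) => inner.inner_type_name(),
        }
    }
}

fn main() {
    // `topProducts: [Product!]!`: a `... on Product` spread must be matched
    // against "Product", not against the list or non-null wrappers
    let field = FieldType::NonNull(Box::new(FieldType::List(Box::new(FieldType::NonNull(
        Box::new(FieldType::Named("Product".to_string())),
    )))));
    assert_eq!(field.inner_type_name(), Some("Product"));
}
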
Geal authored Mar 3, 2022
1 parent 7bc6878 commit 5b9bf21
Showing 8 changed files with 2,329 additions and 266 deletions.
11 changes: 7 additions & 4 deletions apollo-router-core/src/query_cache.rs
@@ -9,12 +9,15 @@ pub struct QueryCache {
}

/// A resolver for cache misses
struct QueryCacheResolver;
struct QueryCacheResolver {
schema: Arc<Schema>,
}

#[async_trait::async_trait]
impl CacheResolver<String, Option<Arc<Query>>> for QueryCacheResolver {
async fn retrieve(&self, key: String) -> Result<Option<Arc<Query>>, CacheResolverError> {
let query_parsing_future = tokio::task::spawn_blocking(|| Query::parse(key));
let schema = self.schema.clone();
let query_parsing_future = tokio::task::spawn_blocking(move || Query::parse(key, &schema));
let parsed_query = match query_parsing_future.await {
Ok(res) => res.map(Arc::new),
// Silently ignore cancelled tasks (never happen for blocking tasks).
@@ -30,8 +33,8 @@ impl CacheResolver<String, Option<Arc<Query>>> for QueryCacheResolver {

impl QueryCache {
/// Instantiate a new cache for parsed GraphQL queries.
pub fn new(cache_limit: usize) -> Self {
let resolver = QueryCacheResolver;
pub fn new(cache_limit: usize, schema: Arc<Schema>) -> Self {
let resolver = QueryCacheResolver { schema };
let cm = CachingMap::new(Box::new(resolver), cache_limit);
Self { cm }
}
37 changes: 27 additions & 10 deletions apollo-router-core/src/query_planner/mod.rs
@@ -112,11 +112,12 @@ impl PlanNode {
Box::pin(async move {
tracing::trace!("Executing plan:\n{:#?}", self);
let mut value;
let mut errors = Vec::new();
let mut errors;

match self {
PlanNode::Sequence { nodes } => {
value = parent_value.clone();
errors = Vec::new();
let span = tracing::info_span!("sequence");
for node in nodes {
let (v, err) = node
@@ -136,6 +137,7 @@
}
PlanNode::Parallel { nodes } => {
value = Value::default();
errors = Vec::new();

let span = tracing::info_span!("parallel");
let mut stream: stream::FuturesUnordered<_> = nodes
@@ -176,18 +178,21 @@
.await;

value = v;
errors.extend(err.into_iter());
errors = err;
}
PlanNode::Fetch(fetch_node) => {
match fetch_node
.fetch_node(parent_value, current_dir, context, service_registry, schema)
.instrument(tracing::info_span!("fetch"))
.await
{
Ok(v) => value = v,
Ok((v, e)) => {
value = v;
errors = e;
}
Err(err) => {
failfast_error!("Fetch error: {}", err);
errors.push(err.to_graphql_error(Some(current_dir.to_owned())));
errors = vec![err.to_graphql_error(Some(current_dir.to_owned()))];
value = Value::default();
}
}
@@ -346,7 +351,7 @@ pub(crate) mod fetch {
context: &'a Context,
service_registry: &'a ServiceRegistry,
schema: &'a Schema,
) -> Result<Value, FetchError> {
) -> Result<(Value, Vec<Error>), FetchError> {
let FetchNode {
operation,
operation_kind,
@@ -370,7 +375,7 @@
.body(
Request::builder()
.query(operation)
.variables(Arc::new(variables))
.variables(Arc::new(variables.clone()))
.build(),
)
.unwrap()
@@ -401,18 +406,30 @@
});
}

self.response_at_path(current_dir, paths, response)
// fix error path and erase subgraph error messages (we cannot expose subgraph information
// to the client)
let errors = response
.errors
.into_iter()
.map(|error| Error {
locations: error.locations,
path: error.path.map(|path| current_dir.join(path)),
message: String::new(),
extensions: Object::default(),
})
.collect();

self.response_at_path(current_dir, paths, response.data)
.map(|value| (value, errors))
}

#[instrument(skip_all, level = "debug", name = "response_insert")]
fn response_at_path<'a>(
&'a self,
current_dir: &'a Path,
paths: Vec<Path>,
subgraph_response: Response,
data: Value,
) -> Result<Value, FetchError> {
let Response { data, .. } = subgraph_response;

if !self.requires.is_empty() {
// we have to nest conditions and do early returns here
// because we need to take ownership of the inner value
2 changes: 1 addition & 1 deletion apollo-router-core/src/services/router_service.rs
@@ -330,7 +330,7 @@ impl PluggableRouterServiceBuilder {
.ok()
.and_then(|x| x.parse().ok())
.unwrap_or(100);
let query_cache = Arc::new(QueryCache::new(query_cache_limit));
let query_cache = Arc::new(QueryCache::new(query_cache_limit, self.schema.clone()));

// NaiveIntrospection instantiation can potentially block for some time
let naive_introspection = {
32 changes: 31 additions & 1 deletion apollo-router-core/src/spec/field_type.rs
@@ -5,7 +5,7 @@ use apollo_parser::ast;
pub(crate) struct InvalidValue;

// Primitives are taken from scalars: https://spec.graphql.org/draft/#sec-Scalars
#[derive(Debug)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub(crate) enum FieldType {
Named(String),
List(Box<FieldType>),
@@ -84,6 +84,36 @@ impl FieldType {
_ => Err(InvalidValue),
}
}

/// return the name of the type on which selections happen
///
/// Example if we get the field `list: [User!]!`, it will return "User"
pub fn inner_type_name(&self) -> Option<&str> {
match self {
FieldType::Named(name) => Some(name.as_str()),
FieldType::List(inner) | FieldType::NonNull(inner) => inner.inner_type_name(),
FieldType::String
| FieldType::Int
| FieldType::Float
| FieldType::Id
| FieldType::Boolean => None,
}
}

pub fn is_builtin_scalar(&self) -> bool {
match self {
FieldType::Named(_) | FieldType::List(_) | FieldType::NonNull(_) => false,
FieldType::String
| FieldType::Int
| FieldType::Float
| FieldType::Id
| FieldType::Boolean => true,
}
}

pub fn is_non_null(&self) -> bool {
matches!(self, FieldType::NonNull(_))
}
}

impl From<ast::Type> for FieldType {
37 changes: 25 additions & 12 deletions apollo-router-core/src/spec/fragments.rs
@@ -1,14 +1,14 @@
use crate::*;
use apollo_parser::ast;
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};

#[derive(Debug)]
#[derive(Debug, Default)]
pub(crate) struct Fragments {
map: HashMap<String, Fragment>,
}

impl From<&ast::Document> for Fragments {
fn from(document: &ast::Document) -> Self {
impl Fragments {
pub(crate) fn from_ast(document: &ast::Document, schema: &Schema) -> Option<Self> {
let map = document
.definitions()
.filter_map(|definition| match definition {
@@ -21,12 +21,7 @@ impl From<&ast::Document> for Fragments {
.unwrap()
.text()
.to_string();
let selection_set = fragment_definition
.selection_set()
.expect("the node SelectionSet is not optional in the spec; qed")
.selections()
.map(Into::into)
.collect();

let type_condition = fragment_definition
.type_condition()
.expect("Fragments must specify the type they apply to; qed")
@@ -37,6 +32,24 @@ impl From<&ast::Document> for Fragments {
.text()
.to_string();

let mut known_selections = HashSet::new();
let mut selection_set = Vec::new();
for selection in fragment_definition
.selection_set()
.expect("the node SelectionSet is not optional in the spec; qed")
.selections()
{
let selection = Selection::from_ast(
selection,
&FieldType::Named(type_condition.clone()),
schema,
)?;
if !known_selections.contains(&selection) {
known_selections.insert(selection.clone());
selection_set.push(selection);
}
}

Some((
name,
Fragment {
@@ -48,7 +61,7 @@ impl From<&ast::Document> for Fragments {
_ => None,
})
.collect();
Fragments { map }
Some(Fragments { map })
}
}

@@ -58,7 +71,7 @@ impl Fragments {
}
}

#[derive(Debug, Clone)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub(crate) struct Fragment {
pub(crate) type_condition: String,
pub(crate) selection_set: Vec<Selection>,