Proposal: New user and client facing API for graphql-ppx #133
Comments
Thanks for writing this up @jfrolich! One thing that struck me when reading this is that I worry I'm obviously colored by my experience building my own client. My 5 cents! Again, thanks for writing this up.
This proposal is mainly trying to tackle the problems with having a `definition`.

The previous API clients used was the functor-based one.

I might be misunderstanding how you'd like a separate ppx to work. Let's talk about a possible implementation with a separate ppx: this approach basically arrives at the same end result, but there are several downsides to this:

A small benefit of extension by a separate ppx might remain, but you might have other ideas on how this might work?
I'm still digesting the full proposal, but here are some more thoughts:
Side note: I agree that other clients shouldn't add a PPX on top of this one. BuckleScript's PPX semantics are already terrible as they are, and we don't need something to make it even worse (PPX composition, etc.).
Another approach could be that
We still have these problems:
Actually I think this is certainly quite powerful, and there might be clients that need to do things that are not possible in a functor. So there is a case to be made to offer this functionality for libraries where the benefits outweigh the costs. But I also think that most libraries don't need this kind of power, and for them the cost doesn't stack up to the benefits.
Yes, I agree configuration is not great. But I'd like to add that configuration is necessary anyway because, for instance, for Apollo (the most popular GraphQL client) we need to pass configuration to the ppx.
Yes, I think that is superior to a tuple (also mentioned in the body text). (I did mention some other downsides to passing a definition around in general.) If we ship a
The functor is only for implementing clients. The user will not even know that the module extension happened because of a functor. They will just have a `use` function (and whatever else the client adds) on the query module.
Configuration is not the problem I'm describing. Configuration that results in a functor application based on the name of a module you write as a string is the problem I have. Having a small runtime that's shipped as part of a PPX is no big deal -- PPXes do it all the time, and users don't necessarily need to declare a dependency on it -- in this case Apollo would, so you'd get it as a transitive dependency.
Sounds like the user would have to write the extension module's name as a string?
@anmonteiro Yes, you have to provide the module name as a string. But that is the best I could come up with; any ideas on how we can do this differently?
@jfrolich I don't like the configuration bit either. I'm trying to come up with a way to determine the client automatically without creating coupling. A possibility could be a convention. Let's say graphql-ppx uses a dynamic extension point. Example:
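Purely as an illustration of the shape of the idea (the extension-point postfix and module names here are made up, not an agreed convention):

```reason
module MyQuery = [%graphql.apollo {| query { ... } |}];
```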
If used with this postfix, it automatically extends the modules by way of using the postfix:
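Again purely illustrative (the `Graphql_client_apollo` name is made up):

```reason
/* the ppx would expand the module to something like: */
module MyQuery = {
  module Inner = { /* ...generated query contents... */ };
  include Inner;
  include Graphql_client_apollo.Extend(Inner);
};
```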
Where names like these are resolved by convention. I'm sure this has a lot of drawbacks as well that I'm not thinking of right now (it being 4:30 am and all), but I usually like convention over configuration.
It's what I've been trying to suggest: by not lifting things to the module level, or at least by using first-class modules, you can just pass values to the library from within the application (and therefore go through the regular type-checking pipeline). It sounds like you're gonna have to ship some kind of "runtime" anyway, otherwise clients won't see your module type. I'd love to echo what some others have said too:
```reason
ApolloClient.use((module MyQuery))
```

You can also go a step further, and have GraphQL PPX generate something like:

```reason
module MyQuery = [%graphql ...];

// generates:
let my_query = (module MyQuery: GraphqlPPX.QueryType);

// usage:
ApolloClient.use(my_query);
```
Very interesting approach. That is actually quite neat! (With the side note that we probably also need to keep
You have written something about `ApolloClient.use((module MyQuery))` having been discussed and disliked before. I agree with @anmonteiro though; it's the easiest to write in my opinion (easier than the definition way and especially easier than manually applying functors). Generally I'm very much for keeping the module structure around. It feels more natural for me, coming from a web-based background.
I do like this approach. But I would probably like

Another option is to just have one configuration (
Here's another proposal that I just proposed in Discord to @jsiebern, that I like even more. Assuming for a second that:

```reason
module type Query_inner = {type t;};

module type Query = {
  include Query_inner;
  let this: (module Query_inner);
};
```

You can have this sort of recursive module that includes itself as a first-class module for clients to use. The API that this allows becomes really natural:

```reason
module MyQuery = [%graphql ...];

ApolloClient.use(MyQuery.this)
```
Love @anmonteiro's proposal. I was actually thinking over the feedback from @zth about the fact that some clients need to adjust the behavior of this ppx slightly. What if we had a configuration option for clients that need more customization, instead of an extra ppx that needs to be maintained by the client and that in some way depends on this ppx? What if we would have a

So for instance we have

This has a few advantages:

Let me know what you think!
I like the idea in general and see your points on the advantages. What strikes me though is the coupling of these packages. I believe your first instinct was the best: provide points of extension / variation that are safe from breaking when the PPX updates. I work with a lot of version-coupled projects at work, and this just gets nasty really fast. Providing some switches (like
I agree with @jsiebern. Another option, which I've seen other PPXs do, is to release a library of GraphQL PPX tools. GraphQL PPX can itself depend on it and provide the primitive building blocks for interacting with GraphQL in OCaml / Reason, and this library could be used by other clients that wish to augment GraphQL PPX's functionality.
Ok yes that is indeed a valid concern.
@zth Would this make sense for
Ok, naming-wise, what do we think of
See #138 for the implementation.
Why isn't
Yes, that's probably better, though in the end it's an implementation detail. It will be especially better when we generate a module type, so we can actually hide the original module with this approach.
Btw, it's a detail, but I'd encourage you to think about the naming convention. If users name modules after the thing that's about to be fetched, for instance:
Then probably a nicer name for
The only thing is that `query` is already taken by the string containing the query itself. And we also have to think about other operations (mutations, subscriptions) and fragments.
I'm fine with
I'd prefer

I agree with @jfrolich that
Operation doesn't capture fragments 😅
I think "this" is better when it refers to the containing module from code inside that module, and "self" is better linguistically when referring to a module from outside that module.
It's also the terminology used by GraphQL, and OGS:
Yes, that is also what we use in the
I ran into an issue while implementing this: in order for clients to support this, we need to have this at the implementation site (generated by the ppx):
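Roughly like this, assuming some shared `GraphQLQuery` module type shipped with a runtime library (the names and signature here are illustrative, not the actual generated code):

```reason
/* assumed module type, provided by a shared runtime library */
module type GraphQLQuery = {
  type t;
  let query: string;
  let parse: Js.Json.t => t;
  let serialize: t => Js.Json.t;
};

module MyQuery = [%graphql ...];

/* generated alongside the query: packing needs an explicit annotation */
let self: (module GraphQLQuery) = (module MyQuery);
```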
When you are building a client that accepts the packed module you need to have the exact same annotation for unpacking:
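Again as a sketch, reusing the assumed `GraphQLQuery` module type from above:

```reason
/* client side: unpacking requires the exact same package type */
let use = (packedQuery: (module GraphQLQuery)) => {
  module Q = (val packedQuery);
  /* e.g. send Q.query over the wire and Q.parse the result */
  Js.log(Q.query);
};
```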
This is a bit problematic, because if we add something to the definition code that is generated by the ppx, the annotations no longer match.

Interestingly, if we go for the functor approach, the only annotation needed is on the consumer side. So we don't even have to annotate what is exposed. This means it will always be backwards compatible unless we break the type signature. It is stringly typed, but so are some other parts of the API (like decoders). Thoughts?
Also interestingly, if we don't pre-pack into a value but just use this API:
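That is, presumably the direct call shown earlier in the thread, for example:

```reason
ApolloClient.use((module MyQuery))
```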
We do not have this problem because in that use case the type is inferred and doesn't need to be annotated.
It looks like packing syntax will improve. Actually this is already possible in Reason:
But this needs to become the default in

The following will be the potential new syntax:
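Possibly something like the following (illustrative only; newer Reason/ReScript syntax writes a packed module as `module(MyQuery)`, but whether that is what was meant here is an assumption):

```reason
ApolloClient.use(module(MyQuery))
```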
Because graphql-ppx is bringing about a lot of breaking changes, I'd like to bring up the representation of GraphQL operations and fragments, so that we arrive at the best representation for library authors, free from historic / legacy considerations.

Currently, this is what a typical query looks like:
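For example (the query itself is illustrative; the schema and field names are made up):

```reason
module MyQuery = [%graphql
  {|
    query UserQuery($id: ID!) {
      user(id: $id) {
        id
        name
      }
    }
  |}
];
```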
This is compiled into:
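Roughly, the generated module exposes an interface along these lines (a simplified sketch based on the names listed below, not the exact generated signature; the `~id` argument just mirrors the illustrative query above):

```reason
module type GeneratedQuery = {
  module Raw: {
    type t;             /* the response exactly as it comes over the wire */
    type t_variables;
  };
  type t;               /* the parsed response */
  type t_variables;

  let query: string;
  let parse: Raw.t => t;
  let serialize: t => Raw.t;
  let makeVariables: (~id: string, unit) => t_variables;
  let serializeVariables: t_variables => Raw.t_variables;
  let definition: (string, (Raw.t => t), (t => Raw.t));
};
```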
As you can see it generates a number of functions (`parse`, `serialize`, `makeVariables`, `serializeVariables`), values (`query`, `definition`), and types (`Raw.t`, `t` + nested types, and `Raw.t_variables` and `t_variables` + possible input objects).

There are also some utility functions that we might generate, such as for constructing input objects, or converting enums to strings (coming up).
Representation
There are a number of ways we can represent queries (discussed below).
The types cannot just be generated, because we need to have explicit access to them, as records sometimes need annotation. This means every record type needs to be accessible to the user.
Value representation
This would be the ideal representation, because values are easily passed to functions and are very composable.
We can wrap the values and functions in a containing value. The most ideal candidate would be a record, where the respective values are accessible by their names.
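A sketch of that idea (the type and field names are illustrative):

```reason
/* would live in a shared runtime module shipped with graphql-ppx,
   since records are nominally typed */
type operation('raw, 'data, 'variables) = {
  query: string,
  parse: 'raw => 'data,
  serialize: 'data => 'raw,
  serializeVariables: 'variables => Js.Json.t,
};
```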
Note that we need to ship a global module with graphql-ppx to be able to do this, as we cannot generate this record for each query because records are nominally typed. But that is not a problem, as we need to do this for other reasons already. The alternative is a tuple (like we have now).

The problem with this representation is that the types are now dropped into the current module and can clash with existing types, so we have to give them unique names for each query. Also, because the `Raw` types and the parsed types are so similar, it is better for type inference to wrap them in separate modules. This means that even if we inject the types into the current module, we still need to create a module to wrap the `Raw` types.

Jumping through all these hoops to avoid name clashes is easily avoided with the combined representation.
Module representation
In this case the contents are wrapped into a module. This is what we do currently, sans the `definition`.

Combined representation
This is what we do currently: we have the module that wraps all values and functions, so we avoid name clashes, and then we have the `definition` tuple that includes the "guts" of the module, `(query, parse, serialize)`, and these are the values that most clients need to operate.

Current state (value in module)
The `definition` tuple is nice because we can pass this to any client and the client will get the information it needs. It can infer the most important types, `t` and `Raw.t`, from the function signatures of `parse` and `serialize`.

This works like this:
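Something along these lines, as a sketch (the `executeRequest` transport function is a placeholder, and the tuple order follows the `(query, parse, serialize)` shape described above):

```reason
/* A sketch of a client-side `use` that consumes the definition tuple;
   `executeRequest` stands in for the client's transport layer. */
let use = (~executeRequest, ~variables, (query, parse, _serialize)) =>
  executeRequest(~query, ~variables)
  |> Js.Promise.then_(raw => Js.Promise.resolve(parse(raw)));

/* usage: use(~executeRequest, ~variables, MyQuery.definition);
   the raw and parsed types are inferred from `parse` and `serialize` */
```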
Problems
When we are implementing a `use` function (the most common use case) to execute the query, we pass in the definition, the variables, and perhaps some other parameters.

Because `definition` doesn't contain any variable-related values, these functions cannot infer whether the variables that are passed are actually the variables of this query (it might be a completely different data structure).

To tackle this we have to put another value in the `definition` tuple, for instance `makeVariables`. Now it has access to `t_variables` and `Raw.t_variables`.

The problem here is that we need to keep adding values to this tuple (a breaking change for libraries) in order to make it possible for libraries to have access to certain types or values.
Another problem is around dead-code elimination: when we bundle these values together in a data structure, it potentially becomes harder to do dead-code elimination or tree-shaking.
Alternatives
An alternative is not to have the definition tuple, but just the module, and to find ways for libraries to consume the module and its contents. There are two ways to do this.
First-Class Modules
First-class modules allow you to pass a module around just like a value. In the case of a `use` function it would look like this:
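For example, along the lines of what was floated in the comments above:

```reason
ApolloClient.use((module MyQuery))
```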
This has been discussed at length in both the Apollo and Urql clients, and was actually implemented in reason-apollo-hooks as an experiment. The general consensus after this discussion seems to be that:

- This is leaking advanced language concepts to the end user; we'd like the API to be simple to understand.
- The syntax is slightly awkward, with the extra braces and the need to add `module`.
Functors
Functors were the way most libraries used graphql-ppx previously (before the `definition`).

The main reason `definition` was added was that applying a functor to each query module added a lot of boilerplate. However, the eventual user-facing API was quite nice.
was added was that applying a functor to each query module was adding a lot of boilerplate. However eventual user-facing API was quite nice:Also functors are the most flexible solution (together with FCMs). They can access all types and all values within the module. And most importantly they don't break if we add new functionalities into GraphQL ppx because modules are structurally typed.
Optimal solution
First-class modules look like the best solution from a technical standpoint; however, they are not great in terms of simplicity of the API, and they don't seem to have consensus from GraphQL library authors.
Value in module has the problem that its use is constrained by what we pass into the value. If we change the structure of the value, all GraphQL clients break, which can result in bad user experiences when people are using GraphQL in Reason. Arguably, passing `MyQuery.definition` is also not the most beautiful API, but opinions differ.
Functors provide a beautiful API for the user, they are also the most flexible solution, and they are less prone to cause breaking changes. But they have the downside of the added boilerplate of applying the functor for every operation/fragment.
Can we let the PPX do the heavy lifting?
The code necessary for applying the functor is the following:
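Roughly something like this (the `Client.Extend` functor name is a placeholder for whatever functor a client library would expose):

```reason
module MyQuery = [%graphql ...];
module MyQueryExtended = Client.Extend(MyQuery);
```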
Note that we still need `MyQuery` around because, as we noted before, we need to have the types around to use in our implementation. The number of types is variable (dependent on the structure of the query), not fixed, so there is no fixed module type. This means the functor cannot extend the module.

To extend the query properly (so we have a single module that contains the original query and the extension) we have to add even more boilerplate:
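For instance (again with the placeholder `Client.Extend` functor):

```reason
module Query = {
  module MyQuery = [%graphql ...];
  include MyQuery;
  include Client.Extend(MyQuery);
};
```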
The result is that we have a single `Query` module that is incredibly extensible. For instance clients can add `Query.use`, `Query.execute`, etc.

Prior art is reason-relay. This client doesn't use graphql-ppx and relies on a relay compiler plugin to generate module types and values. It also has a custom-built GraphQL ppx that is deeply integrated with the client. Users seem to love the simplicity and ease of use of the `Query.use` (and other) APIs.

It seems very repetitive and not worthwhile to let the user do this for every query, especially since in a single project, in 99% of cases the user will use a single GraphQL client. Also, just like FCMs, this is exposing advanced language features to the user.
So how about letting the PPX do the heavy lifting? We can hide the functor application from the user and extend every operation or fragment automatically as part of the PPX.
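In other words, the user would only write the query, and the PPX itself would emit the wrapper shown above. A sketch of the expansion (again with a placeholder `Client.Extend` functor taken from configuration):

```reason
/* what the user writes */
module MyQuery = [%graphql ...];

/* roughly what the PPX would expand it to when an extension functor is configured */
module MyQuery = {
  module Inner = { /* ...what graphql-ppx generates today... */ };
  include Inner;
  include Client.Extend(Inner);
};
```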
The functor application is now an implementation detail. The most important result is the capability to extend the modules that graphql-ppx generates, but without adding custom code to the PPX for each GraphQL client (that would be impractical). It would expose the advanced language features to the client author (that's fine).
The only downside to this is that we need to have some global configuration. We can currently configure graphql-ppx in three ways, among them `bsconfig.json`.
We should be able to expose the functors in these three ways as well. The easiest way would be to use the `bsconfig.json` configuration.

This is an extra step of configuration, but it's pretty easy to do, and the user already has to change `bsconfig.json` to set up graphql-ppx and the GraphQL client that they use.

I think this strikes a balance between the most flexibility for GraphQL clients, the nicest API for users, and being the most future-proof going forward.
I am interested in your opinions on this, as we cannot proceed without consensus; it will affect all GraphQL clients in the Reason ecosystem.
PS: The cool thing is that we can now extend an operation to add `definition`. If some clients prefer to keep using `definition`, they can simply ship a simple functor that composes the values they need into a `definition`. This definition is now defined by the client and will never break. If the client needs more values in the definition, they can simply update the functor (and the ppx doesn't need updating). This will allow new versions of the client to have a new `definition` while old versions of the library keep working.
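For illustration, such a client-shipped functor could look something like this (assuming some `GraphQLQuery` module type describing what the ppx generates):

```reason
module AddDefinition = (Q: GraphQLQuery) => {
  /* the client decides what goes into its own definition tuple */
  let definition = (Q.query, Q.parse, Q.serialize);
};
```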