RFC: New way to declare resolvers #88

Open
neelance opened this issue May 23, 2017 · 48 comments

Comments

@neelance
Collaborator

I've just been to the GraphQL Europe conference and it got me thinking about how we specify resolvers. Using Go methods seemed nice at first, but is that really the best way? Please take a look at this alternative:
https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L291

Advantages:

  • no resolver wrappers, especially good for list results
  • API easier to learn
  • clean shortcuts like ResolverField
  • most type checking at startup is still possible
  • explicit name mapping between GraphQL and Go
  • automatic handling of GraphQL type assertions

Disadvantages:

  • no type checking on return value if type is a GraphQL interface (they map to Go's interface{})

What do you think?

/cc @F21 @nicksrandall @acornejo @bsr203 @EmperorEarth
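
For readers who don't click through, here is a rough sketch of the registration style in the linked example (this is a paraphrase, not code from that branch; the exact builder names and signatures may differ):

// Paraphrased sketch of the proposed API; not the API on master.
// s is the schema builder from the linked branch (construction omitted).
// Resolvers are registered explicitly by GraphQL type name and field name.
s.Resolver("Query", "droid", func(r *Resolver, args *struct{ ID graphql.ID }) *droid {
	return droidData[args.ID]
})

// Plain getters map a GraphQL field directly to a Go struct field,
// avoiding a per-field wrapper method.
s.ResolverField("Droid", "name", "Name")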

@bsr203

bsr203 commented May 23, 2017

Looks much simpler and easier for new users, and looks great. I use the Relay "Node" interface a lot and am not sure how much of a headache it would be not to have validation. But I'm OK with the tradeoff, considering the overall simplification. Thanks for your continuing effort on improving the library. Cheers.

@tonyghita
Member

This looks like a good way to lower the barrier to entry.

Would the existing way to define resolvers still be supported?

It could be that I'm used to the current method of writing resolvers, but I have a few concerns about doing away with the existing method and going only with this approach:

  • would force the creation of API-specific models if the underlying data does not match the GraphQL schema.
  • would make it harder to unit test, unless the functions are named.
  • function names would have to be more verbose (maybe include the name of the object being resolved?) because we lose the ability to attach functions to types
  • it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
  • we lose some type checking
  • ResolverField is convenient for simple use cases, but in advanced use cases (e.g. lazy-loading of fields) maybe not so much

However, if this method of specifying resolvers lives alongside the existing one, the only drawback I can think of is that multiple ways to do the same thing can be confusing.

@neelance
Collaborator Author

Would the existing way to define resolvers still be supported?

No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.

  • would force the creation of API-specific models if the underlying data does not match the GraphQL schema.

What do you mean by that?

  • would make it harder to unit test, unless the functions are named.
  • function names would have to be more verbose (maybe include the name of the object being resolved?) because we lose the ability to attach functions to types

I think you can still have them attached to a type by using https://golang.org/ref/spec#Method_expressions.
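
To make the method-expression trick concrete (this is plain Go, nothing specific to this library): a method can be referenced as an ordinary function whose first parameter is the receiver, so existing resolver methods could be passed straight into a registration call.

package main

import "fmt"

type Resolver struct{ name string }

// Hero is an ordinary method; the method expression (*Resolver).Hero is the
// same logic exposed as a plain func(*Resolver) string.
func (r *Resolver) Hero() string { return r.name }

func main() {
	heroFn := (*Resolver).Hero // func(*Resolver) string
	r := &Resolver{name: "R2-D2"}
	fmt.Println(heroFn(r)) // prints "R2-D2"
}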

  • it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)

Please also explain some more.

@F21

F21 commented May 23, 2017

I like it!

However, I feel that the downside is that if there is a missing resolver, it might not be caught.

For example, if I just have a schema, and have no resolvers, will the missing resolver only be caught during runtime?

s := graphql.ParseSchema(schemaIDL)
// no resolvers here.

@neelance
Collaborator Author

Missing resolvers will throw an error at application startup, just like they do right now.

@tonyghita
Member

tonyghita commented May 23, 2017

No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.

Okay, that's perfect! I didn't realize methods could be used like this. Things would remain largely the same as they are now in this case.

would force the creation of API-specific models if the underlying data does not match the GraphQL schema

From the example given, it looks like the functions return the actual type rather than a resolver.

Here's an example:

// This type seems to be specific to the API, used instead of resolver wrapper
type starship struct {
  ID     graphql.ID
  Name   string
  Length float64
}

// Imagine this is the type contained by starshipData
type legacyStarship struct {
  ID        int
  BrandName string
  Model     string
  Length    float32
}

s.Resolver("Query", "starship", func(r *Resolver, args *struct{ ID graphql.ID }) *starship {
  legacy := starshipData[args.ID]

  return &starship{
    ID:     graphql.ID(strconv.Itoa(legacy.ID)), // previously each of these would have a function that contains this logic 
    Name:   strings.Join([]string{legacy.BrandName, legacy.Model}, " "),
    Length: float64(legacy.Length),
  }
})

Is it the case that you can no longer return a resolver type? If so, do we lose the ability to lazy-load fields?

it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)

This is a moot point since I can just use method expressions, like you mentioned.

@neelance
Collaborator Author

Is it the case that you can no longer return a resolver type?

Sure you can. Why shouldn't it be possible?

@tonyghita
Member

tonyghita commented May 23, 2017

Ah, I was confused... This seems great! I don't have any more concerns. Out of curiosity, are there any performance implications to using this method?

@neelance
Collaborator Author

I have not done any performance tests yet. However, ResolverField might be faster because it avoids a function call.

@euforic

euforic commented May 25, 2017

It would be nice if you could define multiple field resolvers together, to make it easier to keep track of the fields defined for your resolver:

// graphql.FieldMap  = map[string]string

s.ResolveFields("Droid", graphql.FieldMap{
  "id": "ID",
  "name": "Name",
  "appearsIn": "AppearsIn",
  "primaryFunction": "PrimaryFunction",
})

or use struct tags

type droid struct {
	ID              graphql.ID   `gql:"id"`
	Name            string       `gql:"name"`
	Friends         []graphql.ID
	AppearsIn       []string     `gql:"appearsIn"`
	PrimaryFunction string       `gql:"primaryFunction"`
}

If the struct tag route is up for discussion, it would be nice to be able to reuse json tags too, if possible.

Thanks for all the great work!

@nathanborror

How do you plan to thread context?

@neelance
Collaborator Author

neelance commented Jun 2, 2017

@euforic I agree that such helpers may be nice. They should be easy to add on top of the simpler base API.

@nathanborror Sorry, but I don't understand your question.

@yohcop

yohcop commented Jun 13, 2017

I much, much prefer the struct-based approach.

From your list of advantages:

  • API easier to learn

I don't think it's easier. The API currently is essentially 2 functions (from the examples):

graphql.ParseSchema(starwars.Schema, &starwars.Resolver{})
http.Handle("/query", &relay.Handler{Schema: schema})

And graphql.ID for resolvers. That's about it.

In fact the API is very clean, and doesn't "overreach" into the app code. One way to see this is that the only dependency on graphql for the big chunk of the example code is on graphql.ID. And the server implementation only calls graphql.ParseSchema and relay.Handler. That's it.

  • clean shortcuts like ResolverField

No shortcuts is cleaner. But I agree that for simple fields, some json-like annotations, as suggested in this issue and another one, could make things a tiny bit easier. That's also cleaner than ResolverField, I think.

You suggested that method expressions can be used. Yes, but instead of just adding a method to a struct (which I need anyway), now I also have to add one line of code, calling ResolverField with string arguments.

To quote @neelance on a related discussion ;)

I think that the following happened a lot during the design of the Go language:
Is it possible? Yes.
Would it give the option to write less code? Yes.
Should we add it? Probably not.

(Which made me laugh, because I think it's probably true :) )

And I think ResolverField is one of those too. You save two lines of code with the shortcut, but it's not cleaner than the function, which gives you more flexibility.

  • most type checking at startup is still possible

"most" is not as good, so this is a step backwards (and a big one in my opinion)

  • explicit name mapping between GraphQL and Go
  • automatic handling of GraphQL type assertions

I either don't really understand what you mean, or don't see the difference/improvement over how things are now. Passing strings as function arguments doesn't map any better than function names (OK, they match the case). In fact, to me those strings are at least as magical as method names. If we were using something like ResolveField(schema.Human, schema.Friends, &Human.Friends), maybe, but again, this is just repeating what the Human.Friends struct/function already says.

The other thing is that code organization gets a bit messier. The initSchema function is huge, and must have a direct dependency on every piece of the implementation: all the resolvers, all the fields. Of course I could pass around s, but it's still not as clean.

Ultimately, there is a much tighter link between the app code and the graphql-go library. Testing is not as easy (before, my tests would pretty much not know about graphql-go, just import my structs and my resolvers, and use that directly). Testing was mentioned before by @tonyghita, but another thing to add is that now testing has to include this initSchema function, and the implementation of my resolvers. Before it was only the resolvers.

But losing some type safety is probably the biggest issue I have. Actually, I think it's already a pretty big problem that some of that checking is done at startup time, and not at build time. It is a tradeoff to make the API simpler (pass the schema as a string, and not an object that you have to build). Ideally it should be possible to build the schema only from the top-level resolver struct (the API would look like graphql.BuildSchema(&starwars.Resolver{}), as long as it doesn't require some sort of schemaBuilder.Object like here).

Obviously all of these arguments could become moot with a single helper function that does reflection and makes all the s.Resolver() and s.ResolverField() calls for me (which is roughly what is happening currently).

@neelance
Collaborator Author

@yohcop Thanks a lot for your feedback. Some thoughts:

About the type safety: It is really only about GraphQL interfaces. I also wasn't fully happy with how they worked before; it was more complicated than it needed to be.

About ResolverField: This is not only a shortcut, but also performance-relevant. Calling a function is much more expensive than reading a field. ResolverField will not just be a wrapper around Resolver.

About your "single helper function that does reflection": That won't work easily. graphql-go currently traverses the schema and the Go types in parallel to figure out mappings between GraphQL types and Go types. This is quite implicit. With the new API this will be much clearer to understand.

I have started working on the new approach and the graphql-go implementation has already become a bit less complicated, which I think is good. It will also make it easier to add some features requested in other issues. Please take a look at how the new API currently looks; I have iterated on it a bit: https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L289

I hear your concerns, but right now this still looks like the right way to go.

@yohcop

yohcop commented Jun 14, 2017

Fair enough :) Actually I would take performance improvements over almost anything. I would be wary of promising improvements before benchmarks though (unless you have some already?).

Ultimately it's also your library, and you know it better than anyone.

Looking at the last iteration, I think I can live with it. It is a little complex in a way; here are a few questions:

  • what is the root struct? What's the purpose of it? I see it mirrors the more concrete types human, droid, etc. Is it just to fit the API?
  • I see the Schema() function returns b.Build(&root{}). If root is not really necessary, could this be replaced with b.Build() (or passed a dummy root object), and instead &root{} could be built at query time, in the http handler? It could then be used to pass information about the query through resolvers for Queries and Mutations (e.g. logged-in user info). Currently one has to use Context.WithValue/Context.Value in a non-typesafe way, but a user-defined struct would be much nicer. Another mechanism would be welcome too, if root doesn't work here.

I'll keep following, keep us updated 👍

@ijsnow

ijsnow commented Jun 27, 2017

Moving from #15 (comment)

When do you think this will be done?

Also, can you use tags/releases so we can use a dependency management tool to manage the versioning as you develop more and change the API?

@theduke

theduke commented Jun 29, 2017

@neelance what's the decision on this? Is the new way final and just blocked on you finishing the work?

@arnos

arnos commented Jun 30, 2017

I like the approach; it definitely would make sense to go in that direction. It would take a good chunk of my boilerplate code away.

Also thanks @neelance for https://golang.org/ref/spec#Method_expressions

@neelance
Collaborator Author

neelance commented Jul 4, 2017

I found time to work on graphql-go today and made good progress on the new resolvers. My plan is to push a version tomorrow that is not fully finished, but ready for testing and feedback.

@arnos

arnos commented Jul 4, 2017 via email

@neelance
Collaborator Author

neelance commented Jul 4, 2017

All right, the new version on branch new-resolvers passes all tests. Feel free to give it a try. It still needs better error handling in a lot of places and code cleanup. I'm also not really happy about the top-level value (the argument passed to Build). One option would be to remove it entirely. Ideas on this and other feedback are welcome.

@neelance
Collaborator Author

neelance commented Jul 4, 2017

@arnos I'm still reluctant to put a version label on it, since the API does not yet feel stabilized to me. I'm still exploring the GraphQL world.

@ijsnow

ijsnow commented Jul 4, 2017

That tells me that this isn't really production ready :/

@neelance
Collaborator Author

neelance commented Jul 4, 2017

@ijsnow Depends on your definition of "production ready": If you mean "use it and expect bug fixes without occasionally having to invest work to adopt some API changes", then no, it is not "production ready" in that sense. If you mean "it does not have bugs all over the place", then yes, it is "production ready" in that sense.

@metalmatze

I just tested the new version.
It seems a bit weird in some places compared to the old style: b.Resolvers("Query", (*root)(nil), map[string]interface{}{

But not having a func for every field which is just a getter is a lot better! 👍

@neelance
Collaborator Author

neelance commented Jul 4, 2017

@metalmatze Thanks for giving it a try. Any ideas on how I could improve the new API? It also feels a bit weird to me, but maybe we're just not used to it.

@arnos

arnos commented Jul 7, 2017

@neelance I tried it out; some observations:

  • It works well, and with the method expressions you linked to earlier, minimal changes are required:
	SB.Resolvers(
		"Queries", (*Resolver)(nil), map[string]interface{}{
			"flavor": (*Resolver).Flavor,
			"flavors": (*Resolver).Flavors,
		},
	)
  • I like being able to use fields directly (I have to keep a custom wrapper function to convert time.Time to graphql.Time):
	SB.Resolvers(
		"Flavor", (*FlavorResolver)(nil), map[string]interface{}{
			"id":        "Id",
			"name":      "Name",
			"createdAt": (*FlavorResolver).CreatedAt,
			"updatedAt": (*FlavorResolver).UpdatedAt,
		},
	)
  • Using the clean architecture, my models reside in a Domain package separate from the GraphQL package (I end up duplicating more code ... but that's a function of my design, not the library). As a result, the only code that goes away is the simple field resolvers.

In terms of changes:

  • a dead simple change is to extract the map and the resolver type into variables rather than passing them inline to the Resolvers method; this will make the API cleaner (IMHO). Compare my query resolver example above with the following:
	queryMap := map[string]interface{}{
		"flavor":  (*root).Flavor,
		"flavors": (*root).Flavors,
	}
	rootResolver := (*root)(nil)

	SB.Resolvers("Queries", rootResolver, queryMap)
	SB.Resolvers("Flavor", flavorResolver, flavorMap)

Overall I like the change, though it brings some duplication for me with the clean architecture.

@andycondon

If you made a version tag for the old API before merging this to master, it'd be really helpful for those of us using Glide and the like. We'd have better control over when we port our code to this new way of implementing resolvers.

@ijsnow

ijsnow commented Jul 20, 2017

Also, what about trying to get complete feature support, like subscriptions, before changing the API?

@saward

saward commented Jul 21, 2017

@neelance Do you think that the package is likely to go with this new way?

@acornejo

acornejo commented Jul 30, 2017

I browsed through the code a bit today. I see a lot of nice changes that happened behind the scenes where things seem more elegant.

But from the perspective of someone who would use the exterior API and not a maintainer, I don't see much benefit.

In fact, I would argue that the old way of doing things had less magic and was closer to the spirit of Go, where we were relying on the type system explicitly (behind the scenes you still rely on this through reflection, but now the user only sees strings).

FWIW, the new API does NOT seem easier to learn; it doesn't seem any harder either, but I can't see any objective argument by which the new way would be easier.

If you wanted to, I think you could probably keep a lot of the behind-the-scenes changes while leaving the old type-based API alone.

@neelance what did you set out to do with this refactor? I see you mention ResolverField, but I see no such class/function in the API; do you mean the fact that for simple fields you can just use the string as a shortcut for a function that returns the type?

Is there a concrete example which you feel the new API simplifies? A code snippet would be great.

@arnos

arnos commented Aug 30, 2017

OK, with Go 1.9 type aliases I believe we have an even more interesting use case for the new resolvers structure. I have some time this week and at the beginning of September to refactor.

If I'm right, I'll eliminate the bulk of the boilerplate code.

@SpencerCDixon

@neelance any word on the progress of this?

@wyattjoh

wyattjoh commented Oct 3, 2017

I completely agree with some of the earlier comments:

But from the perspective of someone who would use the exterior API and not a maintainer, I don't see much benefit... In fact, I would argue that the old way of doing things had less magic and was closer to the spirit of go...

The lack of "magic" of the current resolver system I think makes things clear to understand.

It would be neat to take an approach similar to what gRPC does (take a proto file and generate interfaces): we could take a GraphQL schema file and generate interfaces, making it even more Go-like in that the provided resolver types must implement the given interfaces.
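
For illustration, generated code for that idea might look roughly like this (a hypothetical sketch, not the output of any existing generator):

package schema

import "context"

// Hypothetical generator output for:
//
//	type Query { hero(episode: String): Human }
//	type Human { id: ID!  name: String! }
//
// Application code implements these interfaces, so a missing or mistyped
// resolver becomes a compile-time error instead of a startup or runtime error.
type QueryResolver interface {
	Hero(ctx context.Context, episode *string) (HumanResolver, error)
}

type HumanResolver interface {
	ID() string
	Name() string
}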

To me, using something like interface{} everywhere (like in the proposed new version) is a pretty strong anti-pattern given the simplicity of code generation that is available now.

Just my 2 cents.

@sandeepone

Any update on this?

@neelance
Collaborator Author

neelance commented Dec 1, 2017

Hi everyone. I'm sorry that there currently is no progress on this library. This is because I am focused on bringing WebAssembly to the Go compiler, see golang/go#18892 (comment).

Regarding the new way to declare resolvers: I liked the approach mentioned above better than the previous one, but I still wasn't fully happy with it. I did some more sketches and what I ended up with was this:

func resolveHuman(req *HumanRequest, h *store.Human) *HumanResponse {
	var resp HumanResponse

	resp.ID = h.ID
	resp.Name = h.Name
	resp.AppearsIn = h.AppearsIn

	resp.Height = make([]float64, len(req.Height))
	for i, f := range req.Height {
		resp.Height[i] = convertLength(h.Height, f.Args.Unit)
	}

	if req.Mass && h.Mass != 0 {
		f := float64(h.Mass)
		resp.Mass = &f
	}

	if req.Friends != nil {
		for _, id := range h.Friends {
			resp.Friends = append(resp.Friends, resolveCharacter(req.Friends, id))
		}
	}

	if req.Starships != nil {
		for _, id := range h.Starships {
			resp.Starships = append(resp.Starships, resolveStarship(req.Starships, store.Starships[id]))
		}
	}

	return &resp
}
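
The request and response types are not shown above; a plausible shape, inferred purely from how resolveHuman uses them (every name and field type here is a guess, not code from any branch), could be:

// Hypothetical shapes inferred from resolveHuman above. CharacterRequest/Response
// and StarshipRequest/Response would be analogous types for those resolvers.

// HumanRequest mirrors the selection set: a field appears in the request only
// if the query selected it, and carries that field's arguments.
type HumanRequest struct {
	Height []struct {
		Args struct{ Unit string } // one entry per requested height(unit: ...) selection
	}
	Mass      bool              // true if the mass field was selected
	Friends   *CharacterRequest // nil if friends was not selected
	Starships *StarshipRequest  // nil if starships was not selected
}

// HumanResponse carries only the requested data back to the executor.
type HumanResponse struct {
	ID        string
	Name      string
	AppearsIn []string
	Height    []float64
	Mass      *float64
	Friends   []*CharacterResponse
	Starships []*StarshipResponse
}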

The idea is that GraphQL handlers should be similar to http.Handler. The "resolvers" API that I borrowed from graphql-js now feels like too much magic and indirect control flow and not really fitting for Go.

Right now this is all I can show to you, but I hope to continue working on this project at some point. Until then the project unfortunately is on hold, at least on my end. Can't clone myself. ;-)

@tonyghita
Member

That's a really interesting approach. I can't wrap my head around how the selection set would be efficiently executed... what am I missing?

@0xSalman
Contributor

0xSalman commented Jan 4, 2018

The idea is that GraphQL handlers should be similar to http.Handler

@neelance, is there a branch that has these changes? Would you be willing to share your progress in a branch? Thank you

@nfisher

nfisher commented Jan 7, 2018

@neelance I like the library a lot. TBH I'm not sure I've thought these through for feasibility/benefit yet, but here are some ideas:

  1. generate the IDL schema from code.
  2. use the model of the stdlib's sort interface (not sure what that would look like).
  3. create an IDL-to-code generator to minimise some of the overhead. As an example, Cucumber will generate empty stubs that need to be implemented.

Generate Schema

I'm thinking there could be a high-level NewSchema(q, m interface{}) which accepts a pointer each for the query and mutation types. From there, reflect can be used to inspect the parameters and generate the schema. If people want to see the IDL schema, a GenIDL() could produce it. (A rough sketch follows the list below.)
There are two potential downsides I see to this:

a. It might be difficult to provide some of the metadata, specifically descriptions for queries and mutations.
b. I think there's some benefit in that the IDL encourages you to "design" before you code. By generating it, you lose that.
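
Here is the rough sketch mentioned above of the reflection idea, assuming query fields correspond to exported methods on the query type (everything here, including the NewSchema/GenIDL behaviour, is hypothetical and heavily simplified):

package main

import (
	"fmt"
	"reflect"
	"strings"
)

type query struct{}

func (q *query) Hero() string           { return "R2-D2" }
func (q *query) Droid(id string) string { return "C-3PO" }

// genIDL emits a crude schema fragment by reflecting over the exported methods
// of the query type. All result types are flattened to String here; recovering
// real GraphQL types, arguments, and descriptions is the hard part.
func genIDL(q interface{}) string {
	t := reflect.TypeOf(q)
	var b strings.Builder
	b.WriteString("type Query {\n")
	for i := 0; i < t.NumMethod(); i++ {
		name := t.Method(i).Name
		b.WriteString(fmt.Sprintf("  %s: String\n", strings.ToLower(name[:1])+name[1:]))
	}
	b.WriteString("}\n")
	return b.String()
}

func main() {
	fmt.Print(genIDL(&query{}))
}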

One benefit that I see with the current function-per-resolver-field approach is that you can easily introduce a fine-grained authorisation layer. So if "role X" isn't allowed to see "field Y", you can enforce that with an injected function/struct.

Sort Style Interface

Something like this that provides context:

type Resolver interface {
  Resolve(sc SchemaContext)
}

Where I reference SchemaContext above, I mean something more like Apache Beam's ProcessContext rather than the typical Go Context.

https://beam.apache.org/get-started/wordcount-example/

@nfisher

nfisher commented Jan 7, 2018

I like the Handler interface! 👍

@edsrzf

edsrzf commented Mar 19, 2018

I've been working on a new GraphQL server package and a large part of the reason why is that I'm unhappy with the interface of this package. I'm now re-thinking this and wondering if I can merge some of my work into this package, as it seems like it's emerging as the GraphQL package of choice.

My thinking about resolvers mostly rests on these principles:

  • There should absolutely be a Go interface for resolvers. This is the Go way and allows for nice things like composability and type safety.
  • Code generation should not be required (but optional is fine)
  • The resolver should be the one to decide whether a field is resolved asynchronously. It doesn't make sense to kick off an entire goroutine just to read a struct field.
  • Things should be as type-safe as possible (though there will undoubtedly be some sacrifices made here)

Currently I'm working with this interface:

// Resolver represents a GraphQL object type. It allows the query executor to resolve fields.
type Resolver interface {
	// Resolve returns a value for a field, according to the given field name and args.
	// If the field is itself a value with its own fields, another Resolver must be returned.
	// If the field is a list, a ListIterator should be returned.
	// Resolve is assumed to be a blocking function by default, but it may return a chan interface{}
	// if the field is being resolved asynchronously. The field value must be sent on the channel and
	// is treated the same way as if it had been returned directly from Resolve.
	// If the field name is not recognized by the Resolver, then Resolve's behavior is unspecified
	// and it may panic.
	Resolve(ctx context.Context, field string, args map[string]interface{}) (interface{}, error)

	// TypeName returns the name of the GraphQL type that the Resolver is resolving fields for.
	TypeName() string
}

I'm still not totally happy with this. For one thing, this Resolver differs from http.Handler in that Resolver objects tend to be constructed dynamically, during the GraphQL query, rather than ahead of time. (http.Handler objects can be constructed during requests too, but typically they're not.)

The reason is that there's no place to pass things like data loaders or other state. It wouldn't be hard to fix this by adding another interface{} parameter or similar, but that doesn't feel very elegant or type safe.

This interface does enable some pretty nice things. For example, I've got a helper function that can create a Resolver from a struct that has graphql tags on its struct fields:

// NewStructResolver returns a Resolver that resolves fields from a struct's fields.
// s must be a struct or a pointer to a struct.
// The returned Resolver resolves only fields that have the graphql tag. For example:
//
//	type Object struct {
//		StringField string `graphql:"string_field"`
//	}
func NewStructResolver(typeName string, s interface{}) (Resolver, error)

So now servers have the option of doing reflection magic to implement their resolvers, but aren't forced to. And the resolvers can compose nicely, so you can use a struct-based Resolver for most of your object fields, but have custom logic for the remaining ones.
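
For example, that composition could look like this (hypothetical usage written against the Resolver interface and NewStructResolver helper quoted above; the application types are made up):

// User is a hypothetical application struct whose simple fields are resolved
// via struct tags by NewStructResolver.
type User struct {
	FirstName string `graphql:"first_name"`
	LastName  string `graphql:"last_name"`
}

// userResolver delegates most fields to a struct-based Resolver but keeps
// custom logic for one computed field.
type userResolver struct {
	base Resolver // e.g. the result of NewStructResolver("User", u)
	u    *User
}

func (r *userResolver) TypeName() string { return "User" }

func (r *userResolver) Resolve(ctx context.Context, field string, args map[string]interface{}) (interface{}, error) {
	if field == "full_name" {
		return r.u.FirstName + " " + r.u.LastName, nil // custom logic for this field
	}
	return r.base.Resolve(ctx, field, args) // everything else falls through to the struct resolver
}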

I'm really interested in engaging with the community to solve this problem and working on it. Let me know what you think.

@eric-am

eric-am commented Mar 20, 2018

FWIW, my workplace has used https://github.com/Applifier/graphql-codegen to stamp out all our resolvers for use with this library.

It works, but we're probably about to fork it to make some enhancements to the way it does types, and here's why: right now, that codegen emits interface{} wildcard type fields in the Go structures it generates whenever there's a Union declaration in your graphql schema. And that's kind of unfortunate.

It looks to me like this graphql-go server library can handle interfaces just fine there, so we should be able to make an interface (just with a marker method; in practice our code will still do type switching, because there's not much room for real semantic interfaces here) to match each Union in our GraphQL schema. This would make a lot of our nearby code higher quality; interface{} is the least useful thing for getting the compiler to give us any helpful checks at all.
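
For illustration, the marker-method pattern being described might look like this (hypothetical hand-written or generated code; the names are made up):

// Hypothetical Go code for a schema union:
//
//	union SearchResult = Human | Droid
//
// The marker method gives the compiler something to check, even though callers
// still type-switch over the concrete types in practice.
type SearchResult interface {
	isSearchResult()
}

type Human struct{ Name string }
type Droid struct{ PrimaryFunction string }

func (Human) isSearchResult() {}
func (Droid) isSearchResult() {}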

This is orthogonal to all the other concerns about e.g. whether or not every resolver deserves a goroutine, etc (and for a datapoint, yes, my group too is almost entirely full of codegen'd resolvers which are just dereferencing struct fields).

@EmperorEarth

EmperorEarth commented Apr 20, 2018

Hopefully it's not too late to contribute.

This discussion reminds me of Cheng Lou's talk on the Spectrum of Abstraction (start ~17:28). It comes down to how well-defined you think resolvers are for users of your library. The more well-defined they are, the further you move down the ladder of abstraction, say from functions to structs, and the more perf wins you get. In the talk, Cheng Lou gives his understanding of a few libraries, the decisions they made, and how those decisions affect their usage.

Personally, I don't think resolvers are well-defined yet, but this may be because I'm a mediocre programmer without the experience to reliably predict how I will use GraphQL resolvers. Right now, I'm using them in two ways:

  1. Ideally, I grab a Viewer from the resolver struct and invoke a method on the Viewer type. In this case, my resolver is a thin adapter, which is how I like to think of my GraphQL layer: a thin API adapter. Most of the decision making, such as authorization, happens inside my core business logic.
    func (r *FooResolver) Bar() (string, error) { return r.v.ViewFoo(r.ctx, r.db) }
  2. More commonly, since I'm optimizing my feature/hour rate at the expense of maintainability, I'm shoving my authorization, db calls, etc into my resolvers. Code omitted for brevity.

I think the first will reliably be a best practice, but I'm quite unsure. But I know I'll be prototyping using the second way for a while, so a certain level of expressiveness seems necessary. I'm not sure a few perf wins and keystrokes/codegen saved are worth adding to the API size.

  • there's social value in keeping everyone speaking the same "language" so to speak, when talking about your library
  • especially as a full-stack developer: each individual library is fine adding a bit of API, but in the aggregate it's just too much. It's one of the smaller reasons that drew me to Go in the first place; it wasn't C++.
  • with the idea that graphql is a thin layer, sure a faster library is better than a slower one all things considered, but I'm not sure getting 100% faster for a 70% larger effective API is worth it given how small this library's perf footprint is. (Numbers are unsourced.)

@ilackarms

I'm specifically looking for the ability to define resolvers at runtime (without using reflection & compiled-in types). I've started working on porting this library to support that use case. However, if there is a branch actively being developed, I'd be happy to contribute there to make this a reality.

Can anyone point me in the right direction for this?

@eric-am

eric-am commented May 23, 2018 via email

@ilackarms

@eric-am I've been using it as well. I've taken the schema and query parsing logic and basically made a custom "ExecutableSchema" that allows me to register/reregister new schemas (and their resolvers) at runtime.

@andy9775

I'm not sure what the final approach with this feature will be, but just thought I'd throw my opinion in.

I don't mind the current approach to defining resolvers, but like this issue, I do have a problem with basic accessor methods running as resolvers. I also have a concern with just passing in a struct to fetch the fields from. For example, if I want to implement a permissions model that determines whether a client can read a particular field, having a method here is key. So I propose that we have "opt-in" accessor methods.

For example:

The following would be an accessor:

func (u *UserResolver) FirstName() *string {
	return &u.User.FirstName
}

While the following operates as a resolver:

func (u *UserResolver) FirstName() (*string, error) {
	return &u.User.FirstName, nil
}

Note returning an error.

This allows me to do something like:

func (u *UserResolver) FirstName() *string {
	if u.currentUser.hasPermissions("read_first_name") {
		return &u.User.FirstName
	}
	return nil
}

Without having to worry about the number of resolvers running in parallel.

I'm also open to different method signatures, but I'd prefer not to do away with accessor methods in favour of just using struct fields. Not sure if this is entirely possible since I haven't dug too deep into the library.

@egeniegithub

egeniegithub commented Jul 18, 2019

It would be nice if you could define multiple field resolvers together to make it easier to keep track of the fields defined for your resolver


Hey, I just came upon this issue. I have mobile apps deployed in production.
We had to change the name of a field for an update, and now existing users are going to see crashes due to this change.

They are going to query account_id, but the field has been changed to account_sso_id:
"account_sso_id": &graphql.Field{ Type: graphql.String, },
