Feature Request: Tracing support #179
Would you like to work on this? I think it would be a nice feature to add.
Sure, will try to complete a first revision over the weekend.
Useful information may be found in the apollo-tracing repository. @tHBp could you please share your work? Maybe I could continue it.
I'd like to flag that Apollo's approach isn't the only solution here; tracing could also be implemented with OpenTelemetry. Adding an OpenTelemetry exporter would be a viable way of supporting many more tracing backends in a performant way. A viable approach may be wrapping all resolvers in the schema to achieve detailed tracing (by analysing the GraphQL document).
Timing every resolver should be possible, but performance impacts need to be kept in mind (Apollo is certainly able to achieve this in Apollo Engine / Graph Manager). I can provide some additional info on how you might go about wrapping the schema if either of you is interested in this.
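For illustration only (this is not the approach the thread ultimately settled on), wrapping every resolver in a graphql-js schema to record timings could look roughly like the sketch below; the recordTiming callback is hypothetical.

'use strict'

const { GraphQLObjectType, defaultFieldResolver } = require('graphql')

// Walks the schema type map and replaces each field resolver with a timed wrapper.
function wrapResolvers (schema, recordTiming) {
  for (const type of Object.values(schema.getTypeMap())) {
    // only object types carry executable resolvers; skip introspection types
    if (!(type instanceof GraphQLObjectType) || type.name.startsWith('__')) continue
    for (const field of Object.values(type.getFields())) {
      const original = field.resolve || defaultFieldResolver
      field.resolve = async (source, args, context, info) => {
        const start = process.hrtime.bigint()
        try {
          return await original(source, args, context, info)
        } finally {
          recordTiming(`${type.name}.${field.name}`, process.hrtime.bigint() - start)
        }
      }
    }
  }
  return schema
}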
I would actually prefer adding OpenTelemetry support.
I think we will just need some documentation that explains how to use OpenTelemetry with mercurius. Some packages are about to be released for GraphQL instrumentation: @opentelemetry/instrumentation-graphql. I've not tested this package yet.
It works! I had to clone the opentelemetry repo to access the @opentelemetry/instrumentation-graphql package:

'use strict'

const api = require('@opentelemetry/api')
const { NodeTracerProvider } = require('@opentelemetry/node')
const { SimpleSpanProcessor } = require('@opentelemetry/tracing')
const { ZipkinExporter } = require('@opentelemetry/exporter-zipkin')
const { GraphQLInstrumentation } = require('@opentelemetry/instrumentation-graphql')

module.exports = serviceName => {
  // Register the GraphQL instrumentation against a Node tracer provider
  const provider = new NodeTracerProvider()
  const graphQLInstrumentation = new GraphQLInstrumentation()
  graphQLInstrumentation.setTracerProvider(provider)
  graphQLInstrumentation.enable()

  // Export finished spans to Zipkin
  provider.addSpanProcessor(
    new SimpleSpanProcessor(
      new ZipkinExporter({
        serviceName
      })
    )
  )
  provider.register()

  return api.trace.getTracer(serviceName)
}
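For completeness, a hypothetical way to consume that module from a mercurius app (the file name and schema are made up; initialising the tracer before requiring fastify/mercurius lets the instrumentation patch graphql first):

'use strict'

// assumes the snippet above lives in ./tracer.js
require('./tracer')('my-service')

const Fastify = require('fastify')
const mercurius = require('mercurius')

const app = Fastify()

app.register(mercurius, {
  schema: 'type Query { add(x: Int, y: Int): Int }',
  resolvers: {
    Query: {
      add: async (_, { x, y }) => x + y
    }
  }
})

app.listen(3000)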
Could we ship that as a small module with a few tests in this org?
I'm not sure this should be a module; from what I understand about OpenTelemetry, the tracer definition is up to the end user. What we can do on our end is add some mercurius-specific spans, maybe on loaders, cache, jit? https://open-telemetry.github.io/opentelemetry-js/#library-authors
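Following that library-authors guidance, a mercurius-specific span would only use @opentelemetry/api and leave the provider/exporter to the end user. A hand-written sketch around a loader (the Dog/owner loader and the owners map are hypothetical) might look like:

'use strict'

const api = require('@opentelemetry/api')

const tracer = api.trace.getTracer('mercurius')

const owners = { Max: { name: 'Jennifer' } } // placeholder data

const loaders = {
  Dog: {
    owner: async (queries) => {
      // one span per batched loader invocation
      const span = tracer.startSpan('mercurius.loader.Dog.owner')
      try {
        return queries.map(({ obj }) => owners[obj.name])
      } finally {
        span.end()
      }
    }
  }
}

module.exports = loaders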
I think it would be awesome to wrap resolvers and loaders.
I think it needs more experimentation, but the resolvers are already covered automatically via graphql-instrumentation; a parameter allows configuring the depth of the tracing.
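If that parameter is the depth option of @opentelemetry/instrumentation-graphql (an assumption worth verifying against the package docs), limiting how deep resolver spans are created would look like:

const { GraphQLInstrumentation } = require('@opentelemetry/instrumentation-graphql')

// only create field/resolver spans up to two levels deep in the selection set
const graphQLInstrumentation = new GraphQLInstrumentation({ depth: 2 })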
I have a feeling it won't work for graphql-jit, but maybe I'm too pessimistic. |
Yes, it won't. About fastify and OpenTelemetry, the current landscape is a bit confusing because there are multiple initiatives, like https://github.com/autotelic/fastify-opentelemetry and https://github.com/mkinoshi/fastify-opentelemetry. See open-telemetry/opentelemetry-js-contrib#41. fastify-opentelemetry doesn't seem up to date with the latest version of fastify (it uses deprecated hooks), and I haven't tested the autotelic version yet. @mcollina Is there any plan to move this kind of integration under the fastify org?
https://github.com/autotelic/fastify-opentelemetry should do the trick.
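A rough sketch of that wiring, registering the plugin ahead of mercurius (default plugin options are assumed here, so check the plugin's README; an OpenTelemetry provider still has to be set up separately, e.g. with the tracer module above):

'use strict'

const Fastify = require('fastify')
const mercurius = require('mercurius')
const openTelemetryPlugin = require('@autotelic/fastify-opentelemetry')

const app = Fastify()

// per-request spans come from the plugin; GraphQL-level spans from the instrumentation
app.register(openTelemetryPlugin)

app.register(mercurius, {
  schema: 'type Query { hello: String }',
  resolvers: { Query: { hello: async () => 'world' } }
})

app.listen(3000)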
Consider adding tracing support to enable checking runtime performance of queries and downstream API/Database calls.
Apollo supports the same via a plugin.
It can be either directly part of the core, or be implemented as a plugin. But for the latter, I guess we'd have to roll out a plugin architecture.