v8: GraphQL Instrumentation increases latency on Apollo Server + Express app #12092
Comments
Hey, I think that makes sense. I'll update our integration so that we disable resolve spans by default, allowing users who still want them to opt in!
Users can now pass `ignoreResolveSpans` and `ignoreTrivialResolveSpans` to the integration, which are passed through to the instrumentation. They default to `true`. This should reduce noise and perf overhead. Closes #12092
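For users who do want resolver spans, opting back in would look roughly like this (a sketch, assuming the two options are simply forwarded to `@opentelemetry/instrumentation-graphql`):

```js
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1,
  integrations: [
    // Both options default to true after this change; setting them to false
    // opts back in to (trivial) resolver spans, at the cost of many more spans.
    Sentry.graphqlIntegration({
      ignoreResolveSpans: false,
      ignoreTrivialResolveSpans: false,
    }),
  ],
});
```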
Ahh. So that is why my production server started crashing when I added Sentry yesterday. On big requests the gateway dies.
Sorry about that! We'll cut another release soon. In the meantime you can filter the
No need for an apology! :) Thank you!!!
I currently filter out the Sentry integration and set it up via otel, since I am testing out the dataloader integration anyway:
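A minimal sketch of that kind of setup (assumptions: the built-in integration is named `Graphql`, and the installed `@opentelemetry/instrumentation-graphql` version includes the `ignoreResolveSpans` option from open-telemetry/opentelemetry-js-contrib#1858):

```js
import * as Sentry from '@sentry/node';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { GraphQLInstrumentation } from '@opentelemetry/instrumentation-graphql';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1,
  // Drop Sentry's built-in GraphQL integration so that only the manually
  // configured OTel instrumentation below creates GraphQL spans.
  integrations: (defaults) => defaults.filter((integration) => integration.name !== 'Graphql'),
});

// Register the OTel GraphQL instrumentation directly, with resolver spans
// disabled to avoid the per-field span explosion on large responses.
registerInstrumentations({
  instrumentations: [
    new GraphQLInstrumentation({
      ignoreResolveSpans: true,
      ignoreTrivialResolveSpans: true,
    }),
  ],
});
```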
Is there an existing issue for this?
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node
SDK Version
8.2.1
Framework Version
No response
Link to Sentry event
No response
SDK Setup
import * as Sentry from '@sentry/node';

Sentry.init({
  enabled: process.env.NODE_ENV === 'production',
  dsn: sentryDsn,
  release: sentryRelease,
  environment: sentryEnv,
  // 100% of transactions are sampled for tracing
  tracesSampleRate: 1,
  autoSessionTracking: true,
});
Steps to Reproduce
With Sentry tracing enabled in v8, make a GraphQL request that usually takes 300-500ms (a list of items with some nesting depth is returned).
Please see the following issues in otel:
open-telemetry/opentelemetry-js-contrib#1739
open-telemetry/opentelemetry-js-contrib#1686
for context on why Apollo Server behaves oddly in this case (it assigns a custom resolver to every field), which drastically increases memory usage on large requests when OTel instrumentation is enabled. Passing `ignoreResolveSpans: true` (open-telemetry/opentelemetry-js-contrib#1858) may help resolve this issue, since the graphql integration is currently unusable in production (or perhaps Sentry users could be allowed to pass this option themselves).
In v7 I set up a custom instrumentation plugin for Apollo and noticed the same issue; I had to remove it because too many spans were created. I was hoping the OTel integration would have solved this differently, but it seems disabling resolve span instrumentation is the answer (especially since the Java instrumentation has this as the default).
Expected Result
Response time does not drastically change
Actual Result
Response time ranges from 7s to 11s