
Provide a means for capturing stack trace for messages that are not errors #338

Closed
goodspark opened this issue Apr 13, 2019 · 28 comments

@goodspark

goodspark commented Apr 13, 2019

... without relying on the logging integration.

SDK v0.7.10

Currently, the only way to do this is via the logging integration and issuing a log at the configured level or higher with exc_info=True. I don't think this is a good substitute because:

  • I don't want to classify these events as errors (the end goal being errors should drive alerts)
  • Sometimes there are interesting or unexpected events that I want to debug more and leveraging Sentry's stack traces with locals helps in this regard
  • I don't want to lower our logging integration level to warning or similar, as that may spam our project with a lot of warnings/events that are not interesting or useful

Ideally, the Sentry SDK could add an additional argument to the capture_message API to make this simple from a user's perspective:

sentry_sdk.capture_message("Unexpected event", level="warning", stack_trace=True)

I'm trying to get this working locally with the SDK, and it's leading to incomplete event data (see picture) and also leading to pretty hairy code.

sentry_sdk.capture_event({
    "message": "oops 2",
    "level": "warning",
    "threads": [{
        "stacktrace": sentry_sdk.utils.current_stacktrace(with_locals=True),
        "crashed": False,
        "current": True,
    }],
})

This was more or less copied from the code I see for adding stack traces in Client._prepare_event.
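For what it's worth, the "threads" payload can be assembled with only the standard library instead of reaching into sentry_sdk.utils. This is a sketch, assuming Sentry's documented event schema for thread stacktraces; it omits local variables, which the SDK's internal helper captures separately:

```python
import traceback

def build_thread_stacktrace():
    """Build a Sentry-style "threads" payload from the current call stack.

    Stdlib-only sketch; the frame keys follow Sentry's event schema.
    """
    frames = [
        {
            "filename": f.filename,
            "function": f.name,
            "lineno": f.lineno,
            "context_line": f.line,
        }
        # Drop the last entry so this helper itself is not in the trace.
        for f in traceback.extract_stack()[:-1]
    ]
    return {
        "threads": [
            {"stacktrace": {"frames": frames}, "crashed": False, "current": True}
        ]
    }

# Hypothetical usage, merged into the event passed to capture_event:
# event = {"message": "oops 2", "level": "warning", **build_thread_stacktrace()}
# sentry_sdk.capture_event(event)
```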

[Screenshot from 2019-04-13 14-39-24]

@untitaker
Member

We (or at least I) are trying to move people away from instrumenting our SDK explicitly and toward logging mechanisms that are native to the platform (in this case Python). In my view there's potential to make Sentry error reporting through logging a better experience than calling capture_* everywhere explicitly. For one, users will get a lot of value out of the box from Sentry (assuming they already use logging to some degree), and existing customers would probably appreciate not tying their codebase to Sentry too much. Of course Raven already had integrations, but I think we can go further than that. Perhaps this goal is not achievable in a way that still allows fine-tuning behavior where necessary, but it's worth a try IMO. I hope this somewhat explains why the current API is the way it is.

So you say the logging integration is not a good substitute, but I want to try making it one. Given your constraints, perhaps your actual problem is that the integration's logger level is not configurable per-logger? We still expose logging handlers that you can explicitly register: https://docs.sentry.io/platforms/python/logging/ So I wonder whether, in your case, it's appropriate to register those handlers explicitly (with individual handler levels) and disable the logging integration altogether.
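Concretely, the per-handler-level idea looks like this with plain stdlib logging. This is a sketch: in a real setup the capturing handler would be sentry_sdk.integrations.logging.EventHandler rather than the stand-in class below:

```python
import logging

class CapturingHandler(logging.Handler):
    """Stand-in for Sentry's EventHandler: records every event it receives."""
    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self.captured = []

    def emit(self, record):
        self.captured.append(record)

# Only the "interesting" logger gets a WARNING-level capture handler;
# the noisy logger gets none, so its warnings never reach Sentry.
handler = CapturingHandler(level=logging.WARNING)

interesting = logging.getLogger("myapp.interesting")
interesting.addHandler(handler)
interesting.setLevel(logging.DEBUG)
interesting.propagate = False

noisy = logging.getLogger("thirdparty.noisy")
noisy.setLevel(logging.DEBUG)
noisy.propagate = False

interesting.warning("captured")    # at/above the handler level -> captured
interesting.debug("not captured")  # below the handler level -> dropped
noisy.warning("ignored")           # no capture handler registered -> ignored
```

The handler-level check (record.levelno against handler.level) is what gives you per-logger control that the integration's single global level cannot.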

The part about threads vs. stacktraces is a different topic; I will respond to that separately, but I have to go right now.

@untitaker
Member

So about threads vs stacktrace:

We are trying to deprecate stacktrace because in its current form it's quite a limited attribute, and it overlaps with the data threads can provide. I think there's room for improvement with regard to how we show information in the UI. For example, if the thread ID is missing, I think we should just show nothing instead of "id n/a".

@goodspark
Author

We (or at least I) are trying to move people away from instrumenting our SDK explicitly and instead use logging mechanisms that are native to the platform (in this case Python).

This feels highly incongruous, because I still have to call Sentry to add tags to events.

with sentry_sdk.configure_scope() as s:
  s.set_tag("blah", 1)
log.error("something happened")

@untitaker
Member

Yes. However, you'll find that tags are a thing we never set in any integration, because indexing by them is a serverside cost we only want to bear if the user is explicitly asking for it. There are multiple requirements we have to serve here, and if those requirements are in conflict we have to pick one for lack of a compromise. As far as I understand tags are rather an exception to the rule.

Another idea is that you use before_send to filter out warning events that come from a logger you don't care about.

@goodspark
Author

goodspark commented Apr 15, 2019

Another use case that's also popped up recently: structlogs & errors/non-exception events with stack traces

I've been trying to integrate the new SDK with another Python project that uses structlog instead of stdlib logging. Due to grouping issues (similar to the discussions in #228), I wound up writing my own processor for Sentry. The code is very similar to hynek's SentryProcessor in https://gist.github.com/hynek/a1f3f92d57071ebc5b91, but I have a branch for error-level logs and have been trying to preserve the option to include stack traces.

The old SDK let me use extra={"stack": True} in the log call, which eventually made its way as stack=True to the captureMessage call. With the new SDK, exc_info=True means there's no distinction between exceptions and errors, much less warnings and lower levels where I might also want stack traces.

class SentryProcessor:
  def __call__(self, log, method, event):
    # In the future I might want to expand this to support arbitrary events with stack traces
    if event['level'] not in {'error', 'exception'}:
      return event

    stack = event.pop('stack', False)

    # This came from log.exception
    if 'exc_info' in event:
      self._sentry.captureException(...)
    # This came from log.error
    else:
      self._sentry.captureMessage(..., stack=stack)  # The stack kwarg is key here

    return event

def some_other_code():
  log.error('oh no', stack=True)

  log.exception('oh no')  # exc_info=1 gets added by stdlib, which then calls "error"
  log.error('oh no', exc_info=True)  # Thus this will look exactly like an exception log to structlog's processors

@untitaker
Member

So to summarize, I see three distinct issues here:

  • You want to send a warning-level event. The way to do this right now is to use the logging integration, but you want to avoid setting the logging integration's level to warn, understandably. There are two workarounds to this: Use explicit logging configuration by registering handlers explicitly, or use before_send to drop warning-level events you don't care about.
  • structlog support
  • you want to record the caller stack and attach an exception at the same time

Do you think this is correct?

@goodspark
Author

goodspark commented Apr 15, 2019

For the third point, it's more like:

  • I want to record the caller stack in cases where an exception may not be present or in cases where I don't want to record the exception.

Basically, decoupling stack traces from exceptions.

@untitaker
Member

This is kind of a stupid workaround, but would the following work for you?

sys.exc_clear()
log.error(..., exc_info=True)

Not suggesting that this is great, just trying to understand.

@goodspark
Author

It would work for stdlib logging cases, but not for structlog, where I can't distinguish between exception or error calls.

@untitaker
Member

@goodspark I want to close this issue because the structlog support is already discussed/tracked elsewhere and I see the rest more as feedback on the API rather than actual features. I gave you a few workarounds for your problems with the logging integration, but that aspect of the discussion never really went anywhere.

It would work for stdlib logging cases, but not for structlog, where I can't distinguish between exception or error calls.

I don't quite understand how this is relevant. If you want to funnel this through structlog then this would indeed take some effort, but you don't have to either. As far as I understand you are saying that structlog does not have a format for attaching a caller stack and an exception stack, but I don't see how the SDK can help with that.

@goodspark
Author

  1. If this is feedback on the API, what are the next steps to support my use cases?
  2. The workarounds, as you've mentioned yourself, aren't great. This really should be easier from a user's point of view, regardless of whether it's an SDK or API issue.
  3. Struct logging definitely isn't required to integrate with Sentry. But then I'd prefer some cleaner APIs to decouple stack traces from exceptions, which leads us back to square one, right? And if I choose to continue using structlog, I still have the problem that I'm unable to attach stack traces to non-exception events without a bit of trickery.

@untitaker
Member

untitaker commented Apr 22, 2019

As it stands, you're the first person I know of who both has a use case like this and is sufficiently dissatisfied with the UX around it to want additional API surface. So you have to understand that this is not something we want to change right now, especially considering that this thread contains multiple workarounds that are not fragile and don't rely on unstable API; they are just verbose.

The simplest suggestion with before_send, that I think I didn't elaborate enough on, would look like this (untested):

import logging

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration

def before_send(event, hint):
    # Keep everything above warning level, plus low-level events that
    # explicitly opted in; drop the rest by returning None.
    if event.get('level') not in ('debug', 'info', 'warning'):
        return event
    if event.get('extra', {}).get('sentry_force_send_event'):
        return event
    return None

sentry_sdk.init(
    DSN,
    integrations=[LoggingIntegration(event_level=logging.WARNING)],
    before_send=before_send,
)

# sys.exc_clear()  # Python 2 only
logging.warning("Hi! I am an event", exc_info=True,
                extra={"sentry_force_send_event": True})

Admittedly I don't really understand whether you can't or don't want to use the logging integration to do this. It seems to me like the above would work perfectly fine regardless of whether you use structlog besides it.
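Since before_send only operates on plain event dicts, the filtering logic itself can be sanity-checked without any SDK at all. A small sketch mirroring the hook above (the sentry_force_send_event flag name is just a convention invented in this thread):

```python
def before_send(event, hint):
    """Drop low-level events unless they explicitly opted in via extra."""
    if event.get("level") not in ("debug", "info", "warning"):
        return event  # errors and above always go through
    if event.get("extra", {}).get("sentry_force_send_event"):
        return event  # low-level event that opted in
    return None  # returning None tells the SDK to drop the event

# Exercising the three branches with plain dicts:
kept_error = before_send({"level": "error"}, None)
dropped_warning = before_send({"level": "warning", "extra": {}}, None)
forced_warning = before_send(
    {"level": "warning", "extra": {"sentry_force_send_event": True}}, None
)
```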

@goodspark
Author

Yeah the workarounds will suffice. Thanks for your time

@untitaker
Member

@goodspark I hope this didn't come across as harsh. I do value your feedback but I think we need to push back a bit on features as well to avoid feature creep.

@mecampbellsoup

@untitaker how come I can't just do sentry_sdk.capture_message('Foo', extra=arbitrary_extra_data_i_want)? This seems extremely suboptimal.

@untitaker
Member

@mecampbellsoup This issue is old, things have changed since then. extras, tags, user, etc are supported keyword arguments.
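For reference, that usage now looks like the following. A sketch with the import guarded so it degrades gracefully where the SDK isn't installed; the extras values are hypothetical:

```python
try:
    import sentry_sdk
except ImportError:  # SDK not installed; skip the actual capture
    sentry_sdk = None

# Scope data can be passed straight to capture_message as keyword
# arguments instead of mutating the scope first.
scope_kwargs = {
    "level": "warning",
    "tags": {"blah": 1},
    "extras": {"request_id": "abc123"},  # hypothetical extra data
}

if sentry_sdk is not None:
    # Without sentry_sdk.init() this is a no-op, so it is safe to run.
    sentry_sdk.capture_message("Foo", **scope_kwargs)
```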

@mecampbellsoup

Ah @untitaker thank you so much! I don't think this is well documented at all FWIW.

@untitaker
Member

Yeah, I just noticed: #1086

@mecampbellsoup

Another thing I would mention @untitaker is that with_locals=True doesn't seem to apply to capture_message which seems not very intuitive either. If I'm explicitly calling capture_* I would want the SDK to send any/all local state like it does with (unhandled?) exceptions.

@untitaker
Member

capture_message does not send a stacktrace by default (locals belong to a frame, and there's no frame without a stacktrace). Are you saying you turned the attach_stacktrace option on and it still does not send one?

@mecampbellsoup

@untitaker not quite - I am saying that when calling capture_message('Foo') my expectation was that the SDK would include local variables in scope at that point in time; but that was not the case.

[screenshots]

@untitaker
Member

Right, you don't have a stacktrace attached here, that's why you also get no locals. We only attach local variables in the context of a stacktrace.

@mecampbellsoup

mecampbellsoup commented Apr 14, 2021

@untitaker hmmm is this not supported API?

>>> sentry_sdk.capture_message('Test', attach_stacktrace=True)
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/sentry_sdk/api.py", line 87, in capture_message
    return Hub.current.capture_message(message, level, scope=scope, **scope_args)
  File "/usr/local/lib/python3.9/site-packages/sentry_sdk/hub.py", line 344, in capture_message
    return self.capture_event(
  File "/usr/local/lib/python3.9/site-packages/sentry_sdk/hub.py", line 319, in capture_event
    scope = _update_scope(top_scope, scope, scope_args)
  File "/usr/local/lib/python3.9/site-packages/sentry_sdk/hub.py", line 69, in _update_scope
    final_scope.update_from_kwargs(**scope_kwargs)
TypeError: update_from_kwargs() got an unexpected keyword argument 'attach_stacktrace'

@untitaker
Member

@mecampbellsoup attach_stacktrace is an option you would pass into sentry_sdk.init(), same with with_locals.

@blueyed
Contributor

blueyed commented Jun 3, 2021

It would be useful if you could override this on single calls, though.
My use case is to have it enabled by default, because it provides more info (with logging in particular), but to avoid it for a custom sentry_sdk.capture_message("Initialized Sentry") call. That happens where/when logging is not configured yet, and even with logging there would be no way to turn it off for a particular log message anyway.

@mirono

mirono commented Nov 6, 2023

From all this I wasn't able to understand how to attach a stack trace to capture_message(...).
Can someone clarify this?
Note that I can't use the logging.xxxx functions: I have a list of log lines provided by another system, and I need to pass the errors with their stack traces to Sentry.
I have the message and the stack trace as strings. What would be the code to send them to Sentry?
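If the stack trace strings are in standard Python traceback format, one rough option is to parse them into Sentry's frame schema and send the result through capture_event. This is a sketch, not supported SDK API; the parser below only handles the standard `File "...", line N, in func` lines:

```python
import re

# Matches traceback lines like:  File "app.py", line 12, in main
_FRAME_RE = re.compile(
    r'File "(?P<filename>[^"]+)", line (?P<lineno>\d+), in (?P<function>\S+)'
)

def frames_from_traceback_text(text):
    """Parse a Python-style traceback string into Sentry-schema frames."""
    return [
        {
            "filename": m.group("filename"),
            "lineno": int(m.group("lineno")),
            "function": m.group("function"),
        }
        for m in _FRAME_RE.finditer(text)
    ]

def event_from_log_line(message, stack_text):
    """Build an event dict that could be passed to sentry_sdk.capture_event()."""
    return {
        "message": message,
        "level": "error",
        "threads": [{
            "stacktrace": {"frames": frames_from_traceback_text(stack_text)},
            "crashed": False,
            "current": False,
        }],
    }

# Hypothetical usage:
# sentry_sdk.capture_event(event_from_log_line(msg_str, stack_str))
```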

@vanschelven
Contributor

From all this I wasn't able to understand how to attach a stack trace to capture_message(...)

It's not possible in the current SDK, and it's not planned either

@vanschelven
Contributor

I created an ad-hoc workaround to attach a stacktrace to capture_message and documented it on my blog.

As in the blog: if this turns out to be more generally useful, I can promote it to an actual library.
