Signal safe demangling #221
It would be very useful if we could use cpp_demangle in signal handlers, where allocation is not allowed, for handling things like segfaults and aborts in C++ code. This is usually achieved by printing demangled text directly to stderr.

Given that the crate requires alloc today, I assume this is not currently supported. Is adding support a possibility?

EDIT: Also see discussion in rust-lang/rust#72981 (comment)
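For illustration, a minimal sketch of the kind of no-alloc entry point this would imply. `demangle_into` is hypothetical (no such function exists in cpp_demangle today); it only shows the desired shape, where all working memory comes from caller-provided buffers. The handler assumes the `libc` crate for the async-signal-safe `write(2)` call:

```rust
use core::ptr::addr_of_mut;

// Hypothetical: parse `mangled` using only `scratch` as working memory,
// writing the demangled text into `out`. Not part of cpp_demangle's API.
fn demangle_into<'o>(
    mangled: &[u8],
    scratch: &mut [u8], // working memory for the parser
    out: &'o mut [u8],  // receives the demangled text
) -> Result<&'o [u8], ()> {
    let _ = (mangled, scratch, out);
    Err(()) // not implementable against today's alloc-based internals
}

// Inside a SIGSEGV/SIGABRT handler, where malloc is off-limits:
unsafe fn print_frame(mangled: &[u8]) {
    static mut SCRATCH: [u8; 64 * 1024] = [0; 64 * 1024];
    static mut OUT: [u8; 4096] = [0; 4096];
    let scratch = &mut *addr_of_mut!(SCRATCH);
    let out = &mut *addr_of_mut!(OUT);
    if let Ok(text) = demangle_into(mangled, scratch, out) {
        // write(2) is async-signal-safe, unlike println!/eprintln!.
        libc::write(2, text.as_ptr().cast(), text.len());
    }
}
```

Comments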
It would be tough without switching to something like a pre-reserved arena that we could allocate from during the stack walk.
That is often what happens, but for unwinding (i.e. getting the list of PC addresses, which are easy to preallocate for). Do we also need an arena for the demangler itself?
I took a quick look at this. The demangler does use `Box`, `Vec`, and `String`, so it does need the ability to allocate. `Box` and `Vec` have configurable allocators, so if an arena could be made to work through `Allocator`, it could probably be threaded through everything in cpp_demangle; as far as I can tell, though, `String` does not yet take an `Allocator`.
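A minimal sketch of that arena idea, assuming nightly Rust and the unstable `allocator_api` feature (the `Bump` type and `demo` function below are illustrative, not part of cpp_demangle):

```rust
#![feature(allocator_api)] // nightly-only as of this writing

use std::alloc::{AllocError, Allocator, Layout};
use std::cell::Cell;
use std::ptr::NonNull;

/// A fixed, pre-reserved buffer handed out bump-style: no malloc, ever.
pub struct Bump {
    base: *mut u8,
    len: usize,
    next: Cell<usize>, // offset of the first free byte
}

impl Bump {
    /// Caller guarantees `base..base + len` is valid, writable memory.
    pub unsafe fn new(base: *mut u8, len: usize) -> Bump {
        Bump { base, len, next: Cell::new(0) }
    }
}

unsafe impl Allocator for Bump {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        let addr = self.base as usize + self.next.get();
        let start = addr.next_multiple_of(layout.align()) - self.base as usize;
        let end = start.checked_add(layout.size()).ok_or(AllocError)?;
        if end > self.len {
            return Err(AllocError); // arena exhausted: fail, don't fall back to malloc
        }
        self.next.set(end);
        let ptr = unsafe { NonNull::new_unchecked(self.base.add(start)) };
        Ok(NonNull::slice_from_raw_parts(ptr, layout.size()))
    }

    unsafe fn deallocate(&self, _ptr: NonNull<u8>, _layout: Layout) {
        // Bump arenas release everything at once; individual frees are no-ops.
    }
}

fn demo(arena: &Bump) {
    // Vec and Box accept an allocator parameter on nightly:
    let mut text: Vec<u8, &Bump> = Vec::new_in(arena);
    text.extend_from_slice(b"demangled output");
    // ...but there is no String::new_in yet, so cpp_demangle's Strings
    // would have to become Vec<u8> (or a custom type) internally.
}
```

Failing with `AllocError` when the arena is exhausted, rather than falling back to the global allocator, is what would keep this usable on a signal path.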
That would require reserving space for all demangled symbols at the start of the process. I'm imagining something more like an API that accepts a […]
Demangling C++ symbols at a minimum requires a variable-size substitution table that depends on the symbol parsed, so there's no way to demangle them without obtaining memory from either a preallocated block or e.g. malloc.
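To make the substitution-table point concrete (the example symbol is my own illustration; `Symbol::new` is cpp_demangle's actual entry point): in the Itanium mangling scheme, `S0_` is a back-reference into the table of components already parsed, so the table's size depends on the symbol itself.

```rust
use cpp_demangle::Symbol;

fn main() {
    // While parsing the first parameter `PKc`, the parser records
    // `Kc` (char const) as S_ and `PKc` (char const*) as S0_;
    // the second parameter `S0_` then replays that pointer type.
    let sym = Symbol::new("_Z3fooPKcS0_").expect("not a valid mangled name");
    // Prints something like: foo(char const*, char const*)
    println!("{}", sym);
}
```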
I've seen people write allocators that are signal safe because they just use […]. In general, it's better to save symbols somewhere and demangle them elsewhere, outside of the signal handler.
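A sketch of that save-now-demangle-later pattern (the statics and `record_symbol` are illustrative names, not from any crate): the handler does nothing but copy bytes into preallocated storage, and a normal code path demangles afterwards, where allocation is allowed again.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const CAP: usize = 512;
static mut SAVED: [u8; CAP] = [0; CAP];
static SAVED_LEN: AtomicUsize = AtomicUsize::new(0);

/// Async-signal-safe: only a byte copy into preallocated storage.
unsafe fn record_symbol(mangled: &[u8]) {
    let n = mangled.len().min(CAP);
    core::ptr::copy_nonoverlapping(
        mangled.as_ptr(),
        core::ptr::addr_of_mut!(SAVED).cast::<u8>(),
        n,
    );
    SAVED_LEN.store(n, Ordering::Release);
}

/// Runs later, off the signal path, where allocation is fine.
fn report() {
    let n = SAVED_LEN.load(Ordering::Acquire);
    if n == 0 {
        return;
    }
    let bytes = unsafe { &(*core::ptr::addr_of!(SAVED))[..n] };
    if let Ok(sym) = cpp_demangle::Symbol::new(bytes) {
        eprintln!("crashed in: {}", sym);
    }
}
```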