How to analyze a Linux "core" file (i.e. crash dump) #199
I may be mistaken, but it appears from the above snippet that you are only telling gdb where the core file is, but not where the executable is that produced the coredump. What you need to do is this:
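In other words, gdb needs both the executable and the core file on its command line. The original commands were lost from this page, but the general form is a sketch like this (paths are illustrative):

```
# Core file only: gdb cannot resolve symbols or source locations.
gdb -c core

# Executable AND core file: gdb can produce a meaningful backtrace.
gdb /path/to/executable /path/to/core
```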
In my example, '$test crash' produces this (which seems to be what you are looking for):
Yes, that was the missing piece of the puzzle. It is working fine now. I've updated my opening comments with the missing information. Thanks again!
Minor nitpick: you stated
which strictly speaking isn't completely correct: although this does enable core dumps, more specifically (ulimit -c size) specifies the maximum allowed file size of a core dump. Any size may be specified (in bash, in 1024-byte blocks), although it is also effectively capped by the maximum size allowed for any file (ulimit -f size). Setting it to 'unlimited' allows a core dump of any size, while setting it to '0' caps the size at zero bytes, effectively disabling core dump creation entirely.
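As a concrete illustration of the above (a sketch; whether raising the limit succeeds depends on your system's hard limit):

```shell
# Show the current core-file size limit for this shell.
ulimit -c

# Allow core dumps of any size (may fail if the hard limit is lower).
ulimit -c unlimited

# Cap core dumps at zero bytes, effectively disabling them.
ulimit -c 0
ulimit -c    # now prints: 0
```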
Second nitpick(s):
Traditionally (and regardless of the user producing the coredump), a core dump is written to the current working directory of the process (which is not necessarily the directory the process was started in). On more modern Linux systems, the name and location of the core file can be configured via /proc/sys/kernel/core_pattern.
Traditionally, the words "hidden file" mean something specific in Unix-speak, namely any file whose name starts with a dot (.). A core file owned by root is therefore not hidden; it is simply owned by root.
I see no reason to keep this issue open.
How to get a core dump
ulimit is a shell builtin, and thus only affects the current shell and processes started by that shell. To set limits permanently or for all processes, edit the file /etc/security/limits.conf and reboot. The examples in the limits.conf manpage are fairly good. You just need to add something like:

Where are "core" files written?
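Returning to the limits.conf step mentioned just above: the original snippet was lost from this page, but an entry of the kind described might look like the following sketch (the '*' domain and the value are illustrative):

```
# /etc/security/limits.conf
# <domain>  <type>  <item>  <value>
*           soft    core    unlimited
```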
If you run Hercules as root, they appear to be placed in the current directory as a hidden file (because the owner is root):
If you run Hercules as a regular user I'm guessing they'll be placed in the current directory as a regular file.
Obtaining a backtrace from a "core" file
You can open a core file with gdb like this:
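The command itself is missing from this capture, but the usual form is gdb followed by the executable and then the core file; for Hercules, a sketch would be:

```
$ gdb .libs/lt-hercules core
(gdb)
```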
which should then display exactly where the crash occurred and which thread it was.
For the Hercules case, since it uses libtool, the path to the executable is typically something like .libs/lt-hercules.

To see what each thread was doing when the crash occurred (including the one that crashed), use the command:
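The gdb command matching this description (elided from this capture) is:

```
(gdb) thread apply all bt full
```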
which should then display a full backtrace for each thread, identifying the source file and line number of each of the thread's function calls.
If you want to save the output of your gdb session to a log file, issue the following command as your very first gdb command (e.g. before your backtrace command):
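The logging commands were lost from this capture; in gdb the usual incantation is:

```
(gdb) set logging file gdb.txt
(gdb) set logging on
```

(In recent gdb releases the second command is spelled "set logging enabled on".)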
For more information regarding gdb's logging capabilities, please see:
Obtaining a backtrace by running Hercules directly under gdb itself
If the crash is reproducible, you can start Hercules directly from the gdb debugger as follows:
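The exact commands did not survive in this capture, but the general pattern is to give gdb the executable and then issue run with the usual arguments; a sketch (the configuration-file argument is illustrative):

```
$ gdb .libs/lt-hercules
(gdb) run -f hercules.cnf
```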
To avoid signal noise (e.g. if gdb breaks on a SIGUSR2 event):
It appears you can do either of the following: enter c to continue whenever the SIGUSR2 break occurs, or enter handle SIGUSR2 noprint nostop when gdb is first started.

Ref: "Avoiding gdb signal noise."
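As a session sketch, the second option looks like this:

```
$ gdb .libs/lt-hercules
(gdb) handle SIGUSR2 noprint nostop
(gdb) run
```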