
Segmentation fault (armed timer destroyed by GC?) #978

Closed
japplegame opened this issue Feb 8, 2015 · 7 comments
@japplegame (Contributor)

This code causes a segmentation fault on my system (CentOS 7 x86_64, dmd 2.066.1, vibe.d 0.7.22, libevent).

app.d
import vibe.d;

class Foo {
    Timer timer;
    this() {
        // arm a timer; this Foo holds the only reference to it
        timer = setTimer(10.seconds, {});
    }
}

shared static this() {
    runTask({
        sleep(100.msecs);
        auto foo = new Foo(); // becomes garbage immediately
        exitEventLoop();      // Foo is finalized by the GC at shutdown
    });
}
dub.selections.json
{
    "fileVersion": 1,
    "versions": {
        "vibe-d": "0.7.22",
        "libevent": "2.0.1+2.0.16",
        "openssl": "1.1.3+1.0.1g",
        "libev": "4.0.0+4.04",
        "libasync": "0.6.5"
    }
}
gdb backtrace:
Program terminated with signal 11, Segmentation fault.
#0  0x00000000007db52d in invariant._d_invariant() ()
(gdb) backtrace
#0  0x00000000007db52d in invariant._d_invariant() ()
#1  0x000000000068d26f in vibe.core.drivers.libevent2.Libevent2Driver.releaseTimer() (
    this=<error reading variable: Could not find the frame base for "vibe.core.drivers.libevent2.Libevent2Driver.releaseTimer()".>, 
    timer_id=<error reading variable: Could not find the frame base for "vibe.core.drivers.libevent2.Libevent2Driver.releaseTimer()".>)
    at ../../../.dub/packages/vibe-d-0.7.22/source/vibe/core/drivers/libevent2.d:392
#2  0x00000000006856e2 in vibe.core.core.Timer.__dtor() (this=<error reading variable: Could not find the frame base for "vibe.core.core.Timer.__dtor()".>)
    at ../../../.dub/packages/vibe-d-0.7.22/source/vibe/core/core.d:725
#3  0x0000000000664151 in app.Foo.__fieldDtor() (this=<error reading variable: Could not find the frame base for "app.Foo.__fieldDtor()".>) at source/app.d:3
#4  0x00000000007dc498 in rt_finalize2 ()
#5  0x000000000083da40 in gc.gc.Gcx.fullcollect() ()
#6  0x000000000083b913 in gc.gc.GC.fullCollectNoStack() ()
#7  0x0000000000826e89 in gc_term ()
#8  0x00000000007db17b in rt_term ()
#9  0x00000000007db47a in rt.dmain2._d_run_main() ()
#10 0x00000000007db406 in rt.dmain2._d_run_main() ()
#11 0x00000000007db387 in _d_run_main ()
#12 0x000000000068076d in main ()
@Geod24 (Contributor) commented Feb 8, 2015

@japplegame (Contributor, Author)

It's definitely related.
It seems that vibe.utils.hashmap.HashMap.remove() performs GC memory operations.

@s-ludwig (Member)

The mechanism is the same as in the thread, but I was wrong about HashMap. The actual cause is the assertion in Libevent2Driver.releaseTimer (it allocates an AssertError). The assertion fires because the Foo object gets destroyed after vibe.d's static destructors have already run, and thus the driver has already been deleted.

The question is now how to go about this. There are not many possibilities to cleanly fix this on the library side, so my initial reaction would be to simply make the user responsible for cleaning up properly. Of course, a way has to be found to avoid the InvalidMemoryOperationError or segfault and to output a proper error message instead.

Alternatively, a quick fix would be possible by changing the assertion to debug assert(!m_ownerThread || m_ownerThread == Thread.getThis()); (i.e. simply ignoring the fact that the driver has already been destroyed). However, this also shows that the Timer implementation is currently unsafe, because it accesses a dangling reference to the driver; that would also need to be fixed (probably simply by using destroy() instead of delete).
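The destroy()-versus-delete distinction mentioned above can be sketched roughly as follows (hypothetical names, not the actual vibe.d code):

// Hypothetical sketch: why destroy() is safer than delete here.
class Driver {
    ~this() { /* release OS resources */ }
}

void shutdown(ref Driver drv) {
    // delete drv;   // finalizes AND frees: any other reference
    //               // (e.g. one held by a live Timer) now dangles
    destroy(drv);    // finalizes only; the memory stays valid until
                     // the GC collects it, so stale references in
                     // later-run finalizers don't crash
    drv = null;
}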

@etcimon (Contributor) commented Feb 11, 2015

> There are not many possibilities to cleanly fix this on the library side.

I haven't tried it, but wouldn't a global vibe.core.core property plus an if (!isUnwinding) check in the destructor be a good way to detect that the driver may have been destroyed?

@s-ludwig (Member)

The check now used to determine whether the library has shut down is if (s_core). Clearing it is the last step of the library's destruction sequence, but it happens before the runtime performs the final collection.
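The guard described here can be sketched like this (a simplified, hypothetical shape of the vibe.d internals, not the actual source):

// Sketch: a module-level flag cleared during library teardown,
// consulted by finalizers that may run during the final collection.
__gshared Object s_core;  // null once vibe.d has shut down

static ~this() { s_core = null; }  // last step of library destruction

struct Timer {
    size_t m_id;
    ~this() {
        // Foo (and this Timer) may be finalized by the GC's final
        // collection, after library teardown; bail out instead of
        // touching a destroyed driver.
        if (s_core is null) return;
        // ... release m_id with the driver ...
    }
}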

@etcimon (Contributor) commented Feb 11, 2015

If a Timer is to be thread-safe, it would involve checking for the driver's existence in some global Driver array, while leaving that array unmanaged by the GC.
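The global, GC-unmanaged registry suggested here could look roughly like this (invented names, a sketch rather than an implementation):

// Fixed-size, GC-free registry of live drivers. Because the array is
// __gshared and never GC-allocated, a finalizer may consult it even
// during the runtime's final collection. Real code would additionally
// guard it with a lock for cross-thread access.
__gshared void*[8] s_drivers;

bool driverAlive(const void* drv) {
    foreach (d; s_drivers)
        if (d is drv) return true;
    return false;
}

// A Timer destructor would then do:
//   if (!driverAlive(cast(void*) m_driver)) return;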

@etcimon (Contributor) commented Feb 11, 2015

Btw, I think it will be possible to safely move timer management to a custom-allocator RBTree here: https://github.com/etcimon/memutils/blob/master/source/memutils/rbtree.d

Also, I'm designing a replacement for new T(): a New!T() that uses a stack of (scoped) pool allocators to free the memory rather than the GC. Meaning, you can have:

Json obj;
{
    auto pool0 = scopedPool();
    {
        auto pool1 = scopedPool();
        ScopedPools.freeze(1); // or pool1.freeze()
        obj = deserializeJson(str); // uses pool0 because pool1 is frozen
    } // scope end calls ScopedPools.pop(); in the ScopedPool dtor, which frees the entire mem
    writeln(obj["element"]); // works
}
// writeln(obj["element"]); segfault, or undefined behavior
obj = deserializeJson(str); // uses the GC, because ScopedPools has an empty stack

The pools themselves are the only overhead; the New!T() operation simply allocates from the pool at the top of the stack (skipping frozen ones).

It would be easy to use a pool at the base of a fiber and add a ubyte[] read(size_t); operation to the streams. Giving a pool a maximum size would make allocations fall back to the previous pool once it fills up, or an option could throw an exception instead, to prevent memory exhaustion attacks.

Development

No branches or pull requests

4 participants