
Moved value gets dropped if reassigned #42903

Closed
carols10cents opened this issue Jun 25, 2017 · 23 comments · Fixed by #42931
Labels:
  • I-crash: The compiler crashes (SIGSEGV, SIGABRT, etc.). Use I-ICE instead when the compiler panics.
  • I-unsound: A soundness hole (worst kind of bug), see https://en.wikipedia.org/wiki/Soundness
  • regression-from-stable-to-nightly: Performance or correctness regression from stable to nightly.
  • T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.


carols10cents commented Jun 25, 2017

Updated summary:

Under some conditions, drops are generated if a previously moved value is reassigned. This was bisected to #39409.

For example, in the following minimised code:

pub struct DropHasLifetime<'a>(&'a ());

impl<'a> ::std::ops::Drop for DropHasLifetime<'a> {
    fn drop(&mut self) {}
}

fn move_and_return<T>(val: T) -> T {
    val
}

struct Wrapper<'a>(DropHasLifetime<'a>);

fn main() {
    static STATIC: () = ();

    let mut wrapper = Wrapper(DropHasLifetime(&STATIC));

    wrapper.0 = move_and_return(wrapper.0);
}

`wrapper.0` is dropped twice; see #42903 (comment) for the complete MIR.
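To make the expected semantics concrete, here is a drop-counting sketch of the same shape (the `DROPS` counter and the `run` helper are additions for illustration, not part of the original repro): on a correct compiler the field is dropped exactly once, at scope exit, never a second time at the reassignment.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustration only: counts every drop of DropHasLifetime.
static DROPS: AtomicUsize = AtomicUsize::new(0);
static STATIC: () = ();

pub struct DropHasLifetime<'a>(&'a ());

impl<'a> Drop for DropHasLifetime<'a> {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn move_and_return<T>(val: T) -> T {
    val
}

struct Wrapper<'a>(DropHasLifetime<'a>);

// Returns how many drops the reassignment pattern caused.
fn run() -> usize {
    let before = DROPS.load(Ordering::SeqCst);
    {
        let mut wrapper = Wrapper(DropHasLifetime(&STATIC));
        // Moves wrapper.0 out, then reassigns it: the moved-out value
        // must not be dropped again by the assignment.
        wrapper.0 = move_and_return(wrapper.0);
    }
    DROPS.load(Ordering::SeqCst) - before
}

fn main() {
    // One value flows through move_and_return and back into the field,
    // so exactly one drop happens, at the end of the inner scope.
    assert_eq!(run(), 1);
}
```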

-- Edited by @TimNN


Original Description:

There are now 3 reports of strange crashes/segfaults/hangs/error messages that happen using nightly 2017-06-20 (or possibly 2017-06-19) but don't happen with rustc 1.19.0-beta.2 (a175ee5 2017-06-15), reported on this reddit thread.

I wanted to start an issue for discussing and investigating this because I think it's clear there's something wrong, but I apologize that we haven't yet gotten an isolated reproduction. I also don't know if these cases are the same bug or different bugs yet, please let me know if I should split this out into more issues.

Or perhaps these already have issues! I didn't see any reported issues that looked similar enough to these and were in the right time frame :-/

Incident 1 - unit tests crashing with "error: An unknown error occurred"

Via reddit user fulmicoton:

My unit tests have been crashing since rustc 1.19.0-nightly (0414594 2017-06-19) on MacOS.

Previous versions do not crash. Later versions crash with "error: An unknown error occurred".

My project is quite large and contains unsafe code, so this could be a bug that got surfaced by a change in the compiler.

Will ask if they can share more details.

Incident 2 - crates.io's tests (at least one in particular)

Described in pull request rust-lang/crates.io#795, but I'm seeing weirdness on master too. All tests are passing using stable Rust on crates.io master.

On Wednesday, @jtgeibel was seeing one of crates.io's tests (krate::index_queries) intermittently segfaulting and crashing with (signal: 4, SIGILL: illegal instruction).

On master, I'm seeing the tests fail in a really bizarre way: it looks like serialized JSON is being corrupted. The test makes a request to the crates.io server, which returns JSON, and then attempts to deserialize the JSON and make assertions on it. The JSON returned by the server is valid JSON, but one of its keys looks like it's set to the value of the beginning of the JSON itself?!?! Here's the output that won't deserialize, pretty-printed, with a bunch of fine-looking fields removed to make the problem more obvious, and with comments added (even though JSON doesn't support comments):

{
    "crates": [
        {
            // removed lots of stuff in here that looks fine
            "name": "BAR_INDEX_QUERIES",
            "repository": null,
            "versions": null,
            "{\"crates\":": "2017-06-25T16:06:06Z" // <- LOOK AT THIS KEY
        },
        {
            // removed lots of stuff in here that looks fine
            "name": "foo_index_queries",
            "owner_tea": 0,
            "repository": null,
            "updated_at": "2017-06-25T16:06:06Z", // <- THE WEIRD KEY ABOVE SHOULD LOOK LIKE THIS, this is a good result in the same request response though!!!!
            "versions": null
        }
    ],

To reproduce:

  • Check out and set up crates.io (involves installing Postgres, sorry; if this is too complex, I am happy to run experiments/patches on my setup)
  • On master, run cargo test krate::index_queries. You've reproduced the problem if you see:
running 1 test
test krate::index_queries ... FAILED

failures:

---- krate::index_queries stdout ----
	thread 'krate::index_queries' panicked at 'failed to decode: MissingFieldError("updated_at")
{"crates":[{"badges":[],"categories":null,"description":null,"documentation":null,"downloads":0,"exact_match":false,"homepage":null,"id":"BAR_INDEX_QUERIES","keywords":null,"license":null,"links":{"owner_team":"/api/v1/crates/BAR_INDEX_QUERIES/owner_team","owner_user":"/api/v1/crates/BAR_INDEX_QUERIES/owner_user","owners":"/api/v1/crates/BAR_INDEX_QUERIES/owners","reverse_dependencies":"/api/v1/crates/BAR_INDEX_QUERIES/reverse_dependencies","version_downloads":"/api/v1/crates/BAR_INDEX_QUERIES/downloads","versions":"/api/v1/crates/BAR_INDEX_QUERIES/versions"},"max_version":"0.99.0","name":"BAR_INDEX_QUERIES","repository":null,"versions":null,"{\"crates\":":"2017-06-25T16:16:04Z"},{"badges":[],"categories":null,"created_at":"2017-06-25T16:16:04Z","description":"description","documentation":null,"exact_match":false,"homepage":null,"id":"foo_index_queries","keywords":null,"license":null,"links":{"owner_team":"/api/v1/crates/foo_index_queries/owner_team","owner_user":"/api/v1/crates/foo_index_queries/owner_user","owners":"/api/v1/crates/foo_index_queries/owners","reverse_dependencies":"/api/v1/crates/foo_index_queries/reverse_dependencies","version_downloads":"/api/v1/crates/foo_index_queries/downloads","versions":"/api/v1/crates/foo_index_queries/versions"},"max_version":"0.99.0","name":"foo_index_queries","owner_tea":0,"repository":null,"updated_at":"2017-06-25T16:16:04Z","versions":null}],"meta":{"total":2}}', src/tests/all.rs:169
note: Run with `RUST_BACKTRACE=1` for a backtrace.


failures:
    krate::index_queries

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 101 filtered out
running 1 test
NOTICE:  text-search query doesn't contain lexemes: ""
NOTICE:  text-search query doesn't contain lexemes: ""
NOTICE:  text-search query doesn't contain lexemes: ""
test krate::index_queries ... FAILED

failures:

---- krate::index_queries stdout ----
	thread 'krate::index_queries' panicked at 'assertion failed: `(left == right)` (left: `0`, right: `1`)', src/tests/krate.rs:171
note: Run with `RUST_BACKTRACE=1` for a backtrace.


failures:
    krate::index_queries

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 98 filtered out

and a crash that looks like:

running 1 test
error: process didn't exit successfully: `/Users/carolnichols/rust/crates.io/target/debug/deps/all-f4f07d01d0cee92f krate::index_queries` (signal: 11, SIGSEGV: invalid memory reference)

Incident 3 - trust-dns

Via @bluejekyll, see their description on this issue: bluejekyll/trust-dns#152


/cc @Mark-Simulacrum @jtgeibel @bluejekyll

TimNN commented Jun 25, 2017

The trust-dns regression was introduced by #39409 according to @Mark-Simulacrum's awesome bisect-rust.

@Mark-Simulacrum

cc @pnkfelix -- regression due to #39409

@bluejekyll

Woh. I need to read about that tool, that's awesome!

BTW, I tried to clarify in the bug report on the trust-dns repo: it's not that OpenSSL and ring themselves are affected; it's that I reproduced the issue with either the OpenSSL or the ring dependency in use.

@carols10cents

Thanks @bluejekyll ! I've changed my description to just reference your bug and not duplicate what your description used to be :)

@nagisa nagisa added I-wrong regression-from-stable-to-nightly Performance or correctness regression from stable to nightly. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. I-crash Issue: The compiler crashes (SIGSEGV, SIGABRT, etc). Use I-ICE instead when the compiler panics. labels Jun 25, 2017
TimNN commented Jun 25, 2017

Running the trust-dns test under GDB I get:

  • When running all tests in the binary: a segmentation fault in jemalloc (Edit: reproduced with the system allocator as well):

    #0  0x00005555559a157c in arena_dalloc_bin_locked_impl (arena=arena@entry=0x7ffff6712280, chunk=0x7ffff5e00000, ptr=0x7ffff5e0d008, junked=true, bitselm=<optimized out>)
        at /checkout/src/liballoc_jemalloc/../jemalloc/src/arena.c:327
    #1  0x00005555559a2ec5 in je_arena_dalloc_bin_junked_locked (arena=arena@entry=0x7ffff6712280, chunk=<optimized out>, ptr=<optimized out>, bitselm=<optimized out>)
        at /checkout/src/liballoc_jemalloc/../jemalloc/src/arena.c:2746
    #2  0x00005555559b444b in je_tcache_bin_flush_small (tsd=tsd@entry=0x7ffff5dff6a0, tcache=tcache@entry=0x7ffff64af000, tbin=tbin@entry=0x7ffff64af028, binind=binind@entry=0,
        rem=rem@entry=0) at /checkout/src/liballoc_jemalloc/../jemalloc/src/tcache.c:132
    #3  0x00005555559b4e53 in tcache_destroy (tsd=0x7ffff5dff6a0, tcache=0x7ffff64af000) at /checkout/src/liballoc_jemalloc/../jemalloc/src/tcache.c:364
    #4  0x00005555559b5052 in je_tcache_cleanup (tsd=0x7ffff5dff6a0) at /checkout/src/liballoc_jemalloc/../jemalloc/src/tcache.c:403
    #5  0x00005555559b5665 in je_tsd_cleanup (arg=0x7ffff5dff6a0) at /checkout/src/liballoc_jemalloc/../jemalloc/src/tsd.c:82
    #6  0x00007ffff7106439 in __nptl_deallocate_tsd.part.4 () from /lib/x86_64-linux-gnu/libpthread.so.0
    #7  0x00007ffff7107878 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
    #8  0x00007ffff6c2acaf in clone () from /lib/x86_64-linux-gnu/libc.so.6
    
  • When filtering by the given test name:

    Cannot find user-level thread for LWP 20839: generic error
    (gdb) [Thread 0x7ffff63ff700 (LWP 20839) exited]
    [Inferior 1 (process 20835) exited with code 0145]
    

Edit: Here are some valgrind logs, they look... worrisome: https://gist.github.com/TimNN/039cefc6cfe601faa1d129398583fd39

@bluejekyll

If you want to take OpenSSL out of the picture, I believe the same issue can be triggered with openssl disabled:

cargo test --no-default-features --features=ring

fulmicoton commented Jun 25, 2017

Incident 1 mentioned in this bug description is in tantivy. I tried, without success, to trim the unit test down to something simpler and smaller that still fails. Introducing a single print statement is enough for the bug to disappear. The code does contain a lot of unsafe calls, so this is probably a bad example to investigate.

In case this is still useful, termdict::tests::test_stream_range_boundaries in tantivy's master branch fails consistently.
The backtrace is as follows:

arena_run_tree_remove(rbtree=0x0000000101513340, node=0x0000000102001008) + 186 at arena.c:76, name = 'termdict::tests::test_stream_range_boundaries', stop reason = EXC_BAD_ACCESS (code=1, address=0x8000000)
  * frame #0: 0x00000001007a5a7a tantivy-d77d51b41e725268`arena_run_tree_remove(rbtree=0x0000000101513340, node=0x0000000102001008) + 186 at arena.c:76
    frame #1: 0x00000001007a6215 tantivy-d77d51b41e725268`arena_run_dalloc + 335 at arena.c:1857
    frame #2: 0x00000001007a60c6 tantivy-d77d51b41e725268`arena_run_dalloc(arena=0x0000000101511d80, run=<unavailable>, dirty=true, cleaned=<unavailable>, decommitted=<unavailable>) + 566 at arena.c:1945
    frame #3: 0x00000001007a1574 tantivy-d77d51b41e725268`arena_dalloc_bin_locked_impl [inlined] arena_dalloc_bin_run + 41 at arena.c:2679
    frame #4: 0x00000001007a154b tantivy-d77d51b41e725268`arena_dalloc_bin_locked_impl(arena=<unavailable>, chunk=<unavailable>, ptr=<unavailable>, bitselm=<unavailable>, junked=<unavailable>) + 491 at arena.c:2731
    frame #5: 0x00000001007b4c91 tantivy-d77d51b41e725268`je_tcache_bin_flush_small(tsd=0x000000010168a108, tcache=<unavailable>, tbin=0x00000001016c5048, binind=1, rem=0) + 369 at tcache.c:132
    frame #6: 0x00000001007b5a92 tantivy-d77d51b41e725268`tcache_destroy(tsd=0x000000010168a108, tcache=0x00000001016c5000) + 98 at tcache.c:364
    frame #7: 0x00000001007b5a1a tantivy-d77d51b41e725268`je_tcache_cleanup(tsd=0x0000000101511d80) + 26 at tcache.c:403
    frame #8: 0x00000001007b5fd0 tantivy-d77d51b41e725268`je_tsd_cleanup(arg=0x000000010168a108) + 48 at tsd.c:82
    frame #9: 0x00000001007b6b7a tantivy-d77d51b41e725268`je_tsd_cleanup_wrapper(arg=0x000000010168a100) + 26 at tsd.h:609
    frame #10: 0x00007fffb6e1c4c5 libsystem_pthread.dylib`_pthread_tsd_cleanup + 470
    frame #11: 0x00007fffb6e1c249 libsystem_pthread.dylib`_pthread_exit + 152
    frame #12: 0x00007fffb6e1aab6 libsystem_pthread.dylib`_pthread_body + 191
    frame #13: 0x00007fffb6e1a9f7 libsystem_pthread.dylib`_pthread_start + 286
    frame #14: 0x00007fffb6e1a221 libsystem_pthread.dylib`thread_start + 13

@sfackler

One debugging step that might help is to switch over to the system allocator, which has better runtime corruption checks, and then maybe run in valgrind.

@fulmicoton

Following @sfackler's advice, I used the system allocator. The failure happens when deallocating a Vec<u8> buffer.

 malloc: *** error for object 0x100202a30: pointer being freed was not allocated
 thread #1: tid = 0x24c63, 0x00007fffb6d31dda libsystem_kernel.dylib`__pthread_kill + 10, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
  * frame #0: 0x00007fffb6d31dda libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fffb6e1d787 libsystem_pthread.dylib`pthread_kill + 90
    frame #2: 0x00007fffb6c97420 libsystem_c.dylib`abort + 129
    frame #3: 0x00007fffb6d87097 libsystem_malloc.dylib`free + 530
    frame #4: 0x000000010001011f test-tantivy`alloc::raw_vec::{{impl}}::dealloc_buffer<u8,alloc::heap::HeapAlloc>(self=0x00007fff5fbff208) + 223 at raw_vec.rs:637
    frame #5: 0x000000010001cced test-tantivy`alloc::raw_vec::{{impl}}::drop<u8,alloc::heap::HeapAlloc>(self=0x00007fff5fbff208) + 29 at raw_vec.rs:645
    frame #6: 0x0000000100014b65 test-tantivy`core::ptr::drop_in_place<alloc::raw_vec::RawVec<u8, alloc::heap::HeapAlloc>>((null)=0x00007fff5fbff208) + 21 at ptr.rs:60
    frame #7: 0x0000000100014870 test-tantivy`core::ptr::drop_in_place<alloc::vec::Vec<u8>>((null)=0x00007fff5fbff208) + 64 at ptr.rs:60
    frame #8: 0x0000000100013cb9 test-tantivy`core::ptr::drop_in_place<fst::raw::Stream<fst::inner_automaton::AlwaysMatch>>((null)=0x00007fff5fbff200) + 25 at ptr.rs:60
    frame #9: 0x00000001000140a5 test-tantivy`core::ptr::drop_in_place<fst::inner_map::Stream<fst::inner_automaton::AlwaysMatch>>((null)=0x00007fff5fbff200) + 21 at ptr.rs:60
    frame #10: 0x0000000100013789 test-tantivy`core::ptr::drop_in_place<tantivy::termdict::fstdict::streamer::TermStreamerImpl<u32>>((null)=0x00007fff5fbff1f8) + 25 at ptr.rs:60
    frame #11: 0x0000000100020761 test-tantivy`test_tantivy::main + 5857 at main.rs:75
    frame #12: 0x000000010005d88d test-tantivy`panic_unwind::__rust_maybe_catch_panic + 29 at lib.rs:98
    frame #13: 0x000000010005cd85 test-tantivy`std::rt::lang_start [inlined] std::panicking::try<(),closure> + 51 at panicking.rs:433
    frame #14: 0x000000010005cd52 test-tantivy`std::rt::lang_start [inlined] std::panic::catch_unwind<closure,()> at panic.rs:361
    frame #15: 0x000000010005cd52 test-tantivy`std::rt::lang_start + 434 at rt.rs:59
    frame #16: 0x0000000100020fea test-tantivy`main + 42
    frame #17: 0x00007fffb6c03255 libdyld.dylib`start + 1

@withoutboats

petgraph has been failing recently; not sure if it's the same regression.

TimNN commented Jun 26, 2017

I switched trust-dns to the system allocator and indeed got much "better" / more precise valgrind results: https://gist.github.com/TimNN/5e77b22b795b9a28fb44d9349c197ad8

Looking at the logs, I believe the bad codegen happens in this code:

            let mut nsec_info: Option<(&Name, Vec<RecordType>)> = None;
            for key in self.records.keys() {
                match nsec_info {
                    None => nsec_info = Some((&key.name, vec![key.record_type])),
                    Some((name, ref mut vec)) if name == &key.name => vec.push(key.record_type),
                    Some((name, vec)) => {
                        // names aren't equal, create the NSEC record
                        let mut record = Record::with(name.clone(), RecordType::NSEC, ttl);
                        let rdata = NSEC::new(key.name.clone(), vec);
                        record.set_rdata(RData::NSEC(rdata));
                        records.push(record);

                        // new record...
                        nsec_info = Some((&key.name, vec![key.record_type]))
                    }
                }
            }

As far as I can tell, the problem is that `vec` in the third arm is dropped at the `nsec_info =` line despite having been moved a bit further up.
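To pin down what correct behavior looks like for that shape, here is a self-contained sketch of the same move-out-and-reassign match pattern, with a drop-counted payload standing in for `Vec<RecordType>` (the `Tracked` type, the keys, and the counting harness are invented for illustration): every payload element must be dropped exactly once.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPS: AtomicUsize = AtomicUsize::new(0);

// Stand-in for the Vec payload's elements, so we can count drops.
struct Tracked(u32);

impl Drop for Tracked {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Mirrors the nsec_info loop: accumulate per key, and when the key
// changes, move the accumulated Vec out and reassign the Option.
fn run(keys: &[u32]) -> usize {
    let before = DROPS.load(Ordering::SeqCst);
    {
        let mut current: Option<(u32, Vec<Tracked>)> = None;
        let mut finished: Vec<Vec<Tracked>> = Vec::new();
        for &k in keys {
            match current {
                None => current = Some((k, vec![Tracked(k)])),
                Some((key, ref mut v)) if key == k => v.push(Tracked(k)),
                Some((_, v)) => {
                    // `v` is moved here; the reassignment below must NOT
                    // drop it again (that is the miscompilation).
                    finished.push(v);
                    current = Some((k, vec![Tracked(k)]));
                }
            }
        }
    }
    DROPS.load(Ordering::SeqCst) - before
}

fn main() {
    // One element is created per key, so five drops in total.
    assert_eq!(run(&[1, 1, 2, 2, 3]), 5);
}
```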

TimNN commented Jun 26, 2017

Checking the MIR, it indeed seems to be the case that the `nsec_info =` line causes a drop:

    bb126: {
        _175 = const false;              // scope 22 at src/authority/authority.rs:1077:25: 1077:34
        drop(((_70 as Some).0: (&trust_dns::rr::Name, std::vec::Vec<trust_dns::rr::RecordType>))) -> [return: bb124, unwind: bb125]; // scope 22 at src/authority/authority.rs:1077:25: 1077:34
    }

Full output of the PreTrans.After MIR.

@fulmicoton

Here is the shortest piece of code I could write that reproduces the issue. It is independent of tantivy but still calls the fst crate, so it is still rather complex.

https://gist.github.com/fulmicoton/facc8d8218e9e78d05195fa8718e1c76

TimNN commented Jun 26, 2017

@fulmicoton: Thanks for that small sample!

It also generates broken MIR with an erroneous drop for `somemethod`:

// MIR for `<impl at src/main.rs:10:1: 15:2>::somemethod`
// source = Fn(NodeId(19))
// pass_name = PreTrans
// disambiguator = after

fn <impl at src/main.rs:10:1: 15:2>::somemethod(_1: A) -> A {
    let mut _0: A;                       // return pointer
    scope 1 {
        let mut _2: A;                   // "self" in scope 1 at src/main.rs:11:19: 11:27
    }
    let mut _3: fst::set::StreamBuilder;
    let mut _4: fst::set::StreamBuilder;
    let mut _5: A;

    bb0: {
        StorageLive(_2);                 // scope 0 at src/main.rs:11:19: 11:27
        _2 = _1;                         // scope 0 at src/main.rs:11:19: 11:27
        StorageLive(_3);                 // scope 1 at src/main.rs:12:23: 12:45
        StorageLive(_4);                 // scope 1 at src/main.rs:12:23: 12:34
        _4 = (_2.0: fst::set::StreamBuilder<'a>); // scope 1 at src/main.rs:12:23: 12:34
        _3 = const <fst::set::StreamBuilder<'s, A>>::ge(_4, const "doc0") -> [return: bb1, unwind: bb3]; // scope 1 at src/main.rs:12:23: 12:45
    }

    bb1: {
        StorageDead(_4);                 // scope 1 at src/main.rs:12:45: 12:45
        drop((_2.0: fst::set::StreamBuilder)) -> [return: bb5, unwind: bb4]; // scope 1 at src/main.rs:12:9: 12:20
    }

    bb2: {
        resume;                          // scope 0 at src/main.rs:11:5: 14:6
    }

    bb3: {
        drop((_2.0: fst::set::StreamBuilder)) -> bb2; // scope 0 at src/main.rs:14:6: 14:6
    }

    bb4: {
        (_2.0: fst::set::StreamBuilder) = _3; // scope 1 at src/main.rs:12:9: 12:20
        goto -> bb3;                     // scope 1 at src/main.rs:12:9: 12:20
    }

    bb5: {
        (_2.0: fst::set::StreamBuilder) = _3; // scope 1 at src/main.rs:12:9: 12:20
        StorageDead(_3);                 // scope 1 at src/main.rs:12:45: 12:45
        StorageLive(_5);                 // scope 1 at src/main.rs:13:9: 13:13
        _5 = _2;                         // scope 1 at src/main.rs:13:9: 13:13
        _0 = _5;                         // scope 1 at src/main.rs:13:9: 13:13
        StorageDead(_5);                 // scope 1 at src/main.rs:13:13: 13:13
        StorageDead(_2);                 // scope 0 at src/main.rs:14:6: 14:6
        return;                          // scope 1 at src/main.rs:14:6: 14:6
    }
}

Note how `_2` (`self`) is moved to `_4` and into `ge` in bb0, but is nevertheless dropped in bb1.

TimNN commented Jun 26, 2017

The tantivy repro bisects to #39409 as well.

cc @nikomatsakis, @arielb1 since you reviewed that PR.

@TimNN TimNN added the I-unsound Issue: A soundness hole (worst kind of bug), see: https://en.wikipedia.org/wiki/Soundness label Jun 26, 2017
fulmicoton commented Jun 26, 2017

Here is an even smaller sample that faults. No dependencies whatsoever, no unsafe code.
http://play.integer32.com/?gist=09cecfbc7bab96beb95b2c3ff63e4a6a&version=stable

TimNN commented Jun 26, 2017

Further minified:

pub struct DropHasLifetime<'a>(&'a ());

impl<'a> ::std::ops::Drop for DropHasLifetime<'a> {
    fn drop(&mut self) {}
}

fn move_and_return<T>(val: T) -> T {
    val
}

struct Wrapper<'a>(DropHasLifetime<'a>);

fn main() {
    static STATIC: () = ();

    let mut wrapper = Wrapper(DropHasLifetime(&STATIC));

    wrapper.0 = move_and_return(wrapper.0);
}

This no longer crashes, but you'll see the erroneous drop if you look at the MIR (slightly simplified; full version):

fn main() -> () {
    let mut _0: ();                      // return pointer
    scope 1 {
        let mut _1: Wrapper;             // "wrapper" in scope 1 at nodep.rs:16:9: 16:20
    }
    let mut _2: DropHasLifetime;
    let mut _3: &();
    let mut _4: &();
    let mut _5: DropHasLifetime;
    let mut _6: DropHasLifetime;

    bb0: {
        _4 = &(main::STATIC: ());        // scope 0 at nodep.rs:16:47: 16:54
        _3 = _4;                         // scope 0 at nodep.rs:16:47: 16:54
        _2 = DropHasLifetime<'_>::{{constructor}}(_3,); // scope 0 at nodep.rs:16:31: 16:55
        _1 = Wrapper<'_>::{{constructor}}(_2,); // scope 0 at nodep.rs:16:23: 16:56
        _6 = (_1.0: DropHasLifetime<'_>); // scope 1 at nodep.rs:18:33: 18:42
        _5 = const move_and_return(_6) -> bb1; // scope 1 at nodep.rs:18:17: 18:43
    }

    bb1: {
        drop((_1.0: DropHasLifetime)) -> bb2; // scope 1 at nodep.rs:18:5: 18:14
    }

    bb2: {
        (_1.0: DropHasLifetime) = _5;    // scope 1 at nodep.rs:18:5: 18:14
        _0 = ();                         // scope 0 at nodep.rs:13:11: 19:2
        drop((_1.0: DropHasLifetime)) -> bb3; // scope 0 at nodep.rs:19:2: 19:2
    }

    bb3: {
        return;                          // scope 0 at nodep.rs:19:2: 19:2
    }
}

Note how `_1.0` is moved into `_6` and then consumed by `move_and_return`, yet it is still dropped in bb1. One thing I find curious, which I guess could be the problem, is that `_1.0` is extracted as `DropHasLifetime<'_>` but dropped as `DropHasLifetime` (note the missing lifetime specifier).
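The drop the compiler should have elided here is governed by a drop flag. As a rough illustration of that mechanism (this `Slot` type is invented for the sketch, not compiler code), the flag records whether a place still owns a value, and an assignment only drops the old contents when it does; the bug is equivalent to dropping unconditionally:

```rust
use std::mem::ManuallyDrop;

// Invented illustration of drop-flag semantics; not compiler code.
struct Slot<T> {
    value: ManuallyDrop<T>,
    initialized: bool, // the "drop flag"
}

impl<T> Slot<T> {
    fn new(v: T) -> Self {
        Slot { value: ManuallyDrop::new(v), initialized: true }
    }

    // Moving out clears the flag, like `move_and_return(wrapper.0)`.
    fn take(&mut self) -> T {
        assert!(self.initialized, "slot already moved out");
        self.initialized = false;
        unsafe { ManuallyDrop::take(&mut self.value) }
    }

    // Assignment drops the old value only if the slot still owns one.
    fn assign(&mut self, v: T) {
        if self.initialized {
            unsafe { ManuallyDrop::drop(&mut self.value) }
        }
        self.value = ManuallyDrop::new(v);
        self.initialized = true;
    }
}

impl<T> Drop for Slot<T> {
    fn drop(&mut self) {
        if self.initialized {
            unsafe { ManuallyDrop::drop(&mut self.value) }
        }
    }
}

fn main() {
    let mut slot = Slot::new(String::from("old"));
    let old = slot.take();               // flag is now false
    slot.assign(format!("{}-new", old)); // must not free "old" again
    assert_eq!(old, "old");
}
```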

@TimNN TimNN changed the title Segfaults/Illegal instructions/memory corruption/something wrong in nightly around 06-19/06-20, not reduced yet sorry :( Moved value gets dropped if rassigned Jun 26, 2017
@TimNN TimNN changed the title Moved value gets dropped if rassigned Moved value gets dropped if reassigned Jun 26, 2017
est31 commented Jun 26, 2017

Fwiw, I've minified @fulmicoton's example a bit; it still crashes: https://is.gd/918r5N

MaloJaffre commented Jun 26, 2017

Further minified crashing example, but only in debug mode: https://is.gd/f3xUjK

TimNN commented Jun 26, 2017

It appears that ElaborateDrops got confused by #39409. Consider the following diffs from a good rustc to a bad rustc: https://gist.github.com/TimNN/717d3b64f7a461ec459a1944b88989c0

Before ElaborateDrops they are almost identical; afterwards, however, there is a bad drop.


Edit: The gist now includes the diff for EraseRegions: Some lifetimes are apparently no longer correctly erased.

TimNN commented Jun 26, 2017

I tried to make some sense of the debug logs, here is an excerpt:

DEBUG:rustc_borrowck::borrowck::mir::gather_moves: move paths for nodep.rs:13:1: 19:2:
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp0 = MovePath { lvalue: _0 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp1 = MovePath { first_child: mp9, lvalue: _1 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp2 = MovePath { lvalue: _2 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp3 = MovePath { lvalue: _3 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp4 = MovePath { lvalue: _4 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp5 = MovePath { lvalue: _5 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp6 = MovePath { lvalue: _6 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp7 = MovePath { lvalue: _7 }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp8 = MovePath { parent: mp1, lvalue: (_1.0: DropHasLifetime<'_>) }
DEBUG:rustc_borrowck::borrowck::mir::gather_moves:     mp9 = MovePath { parent: mp1, next_sibling: mp8 lvalue: (_1.0: DropHasLifetime) }

Judging by the doc comment on MovePath, the mp8 and mp9 paths seem to be "bad": both refer to the same lvalue.
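As an analogy (illustrative only, not the compiler's actual data structures): if move paths are keyed by a type "name" that still contains lifetime information, the same lvalue ends up with two distinct entries, so the "moved out" state recorded on one path is invisible when the drop is emitted through the other:

```rust
use std::collections::HashMap;

// Returns how many move-path entries the two spellings of the same
// lvalue produce when keyed by their (unerased) textual type.
fn path_count() -> usize {
    let mut move_paths: HashMap<&str, bool> = HashMap::new();
    // Same field, seen once with an unerased lifetime and once without:
    move_paths.insert("(_1.0: DropHasLifetime<'_>)", true); // marked moved
    move_paths.insert("(_1.0: DropHasLifetime)", false);    // still "live"
    move_paths.len()
}

fn main() {
    // Two entries where there should be one, so drop elaboration can
    // consult the wrong "moved" flag and emit a second drop.
    assert_eq!(path_count(), 2);
}
```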

arielb1 commented Jun 26, 2017

I'll take this.

@nikomatsakis

@arielb1 this seems pretty important to track down; if you don't have time, do raise the bat signal.

arielb1 added a commit to arielb1/rust that referenced this issue Jun 28, 2017
The move gathering code is sensitive to type-equality - that is rather
un-robust and I plan to fix it eventually, but that's a more invasive
change. And we want to fix the visitor anyway.

Fixes rust-lang#42903.
bors added a commit that referenced this issue Jun 28, 2017
re-add the call to `super_statement` in EraseRegions

The move gathering code is sensitive to type-equality - that is rather
un-robust and I plan to fix it eventually, but that's a more invasive
change. And we want to fix the visitor anyway.

Fixes #42903.

r? @eddyb
arielb1 added a commit to arielb1/rust that referenced this issue Jul 27, 2017
Leaving types unerased would lead to 2 types with a different "name"
getting different move-paths, which would cause major brokenness (see
e.g. rust-lang#42903).

This does not fix any *known* issue, but is required if we want to use
abs_domain with non-erased regions (because the same type can easily
have different names). cc @RalfJung.
frewsxcv added a commit to frewsxcv/rust that referenced this issue Jul 29, 2017
erase types in the move-path abstract domain

Leaving types unerased would lead to 2 types with a different "name"
getting different move-paths, which would cause major brokenness (see
e.g. rust-lang#42903).

This does not fix any *known* issue, but is required if we want to use
abs_domain with non-erased regions (because the same type can easily
have different names). cc @RalfJung.

r? @eddyb
Mark-Simulacrum added a commit to Mark-Simulacrum/rust that referenced this issue Jul 30, 2017
erase types in the move-path abstract domain

Leaving types unerased would lead to 2 types with a different "name"
getting different move-paths, which would cause major brokenness (see
e.g. rust-lang#42903).

This does not fix any *known* issue, but is required if we want to use
abs_domain with non-erased regions (because the same type can easily
have different names). cc @RalfJung.

r? @eddyb