remove CTFE loop detector #54384
Comments
I am actually now also confused about why we are using "stable hash". According to what I learned in rust-lang/rustc-dev-guide#203, the purpose of a "stable hash" is to not change between rustc invocations, which is important for incremental compilation, where the hashes are stored to disk and loaded again by the next compiler invocation. Why does this help / why do we want this with the loop detector?
@brunocodutra at #52626 (comment)
Ah, I think I get it! The term "snapshot" threw me off; I was sure that would mean "do a copy". It does not. So "snapshot" is really, as you say, a "replace `AllocId` by a reference to the allocation it names" operation.

However, I am not sure if that's what is actually happening. Imagine for a moment we have an allocation with a pointer to itself, i.e., the relocations contain the `AllocId` of that very allocation. In case of such a cycle, you want to end up with a cyclic structure of shared references: the allocation reference in the relocations should point back to the allocation itself. Creating such a structure is not even possible without interior mutability, so I am pretty sure this is not what currently happens.

However, I have some deeper architectural suggestions. The snapshot is only built when two states are compared, so every comparison clones the machine state. Moreover, if the program accumulates dead locations in memory, while your comparison step is blissfully ignorant of those, the actual copy done to create the snapshot still has to copy them.

I see two ways to solve the first problem, and one of them also solves the second:
The second solution has two advantages as I see it: first, it actually achieves the goal of complexity linear in the size of reachable memory (the first solution still copies all of memory, even the unreachable part); second, it is likely easier to implement, since you only have to consider one machine at a time and how to make a copy of it, instead of having to consider two machines and how to compare their states.
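For a concrete picture of the self-referential allocation mentioned above (an illustrative example, not code from the compiler or from the original thread): a static can hold a reference to itself, so the allocation's relocations point back at its own `AllocId`, and any snapshot that resolves IDs to allocations would have to produce a cyclic structure.

```rust
// A static whose backing allocation contains a pointer into itself: the
// relocation's target is the allocation's own AllocId.
struct Node {
    next: &'static Node,
}

static SELF: Node = Node { next: &SELF };

fn main() {
    // The reference really does point back at the same allocation, so a
    // faithful "resolve AllocIds to allocations" view of SELF is cyclic.
    assert!(std::ptr::eq(SELF.next, &SELF));
}
```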
Uh okay, my second solution is not quite as simple -- it still also needs care during the actual comparison. If you have two memories which both contain such a cyclic allocation, then, if you are not careful, comparing them will loop endlessly. This will need some state to record pairs of allocations we have already reached and compared. At that point, actually, I do not see the value of replacing `AllocId`s by references any more.
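A rough sketch of what that bookkeeping could look like, with made-up types that only loosely resemble the interpreter's `Memory` and `Allocation`: every pair of allocation IDs is compared at most once, so the recursion terminates even on cyclic allocation graphs.

```rust
use std::collections::{HashMap, HashSet};

// Simplified stand-ins for the interpreter's types.
type AllocId = u64;

struct Allocation {
    bytes: Vec<u8>,
    relocations: Vec<(usize, AllocId)>, // (offset, target allocation)
}

struct Memory {
    allocs: HashMap<AllocId, Allocation>,
}

/// Compare two allocation graphs starting at `id_a`/`id_b`. The `seen` set
/// records pairs that are already being compared, which is what keeps the
/// recursion from looping forever on cyclic allocations.
fn alloc_eq(
    a: &Memory,
    b: &Memory,
    id_a: AllocId,
    id_b: AllocId,
    seen: &mut HashSet<(AllocId, AllocId)>,
) -> bool {
    // Already visited this pair: treat it as equal here; a real mismatch
    // will be detected on the path that first reached the pair.
    if !seen.insert((id_a, id_b)) {
        return true;
    }
    let (alloc_a, alloc_b) = match (a.allocs.get(&id_a), b.allocs.get(&id_b)) {
        (Some(x), Some(y)) => (x, y),
        _ => return false,
    };
    if alloc_a.bytes != alloc_b.bytes
        || alloc_a.relocations.len() != alloc_b.relocations.len()
    {
        return false;
    }
    // Recurse into whatever the relocations point to.
    alloc_a
        .relocations
        .iter()
        .zip(&alloc_b.relocations)
        .all(|(&(off_a, tgt_a), &(off_b, tgt_b))| {
            off_a == off_b && alloc_eq(a, b, tgt_a, tgt_b, seen)
        })
}
```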
That's absolutely true, I did not consider loops in the current implementation; the only thing references solve is the problem of cloning the same allocation more than once.
That was my first iteration of the solution, but it produced such scary handwritten equality code that @oli-obk suggested we attempt a different approach, and that eventually led to the current design. I do want to point something out though: this whole cloning + equality comparison is expensive, but actually getting to execute it should be rare, because the loop detector only samples the evaluation context once every 256 steps, and the whole thing is skipped if the hashes don't match. And that leads to me answering the remaining open question:
Stable hash is used because it pretty much does exactly what we need as far as hashing goes -- it resolves `AllocId`s to the contents of the allocations they point to, rather than hashing the numeric IDs themselves.
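To make that concrete, here is a heavily simplified sketch of the sampling and hash-gating described above (a made-up `MachineState` type, and the standard library's hasher standing in for the compiler's stable hash -- not the actual detector code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// The detector only looks at the machine state once every N steps.
const SNAPSHOT_PERIOD: usize = 256;

// Stand-in for the interpreter's evaluation context (stack, memory, ...).
#[derive(Clone, PartialEq, Eq, Hash)]
struct MachineState {
    stack: Vec<u8>,
}

#[derive(Default)]
struct LoopDetector {
    hashes: HashSet<u64>,
    snapshots: Vec<MachineState>,
}

impl LoopDetector {
    /// Returns true if this exact state has been observed before,
    /// i.e. the evaluation can never terminate.
    fn observe(&mut self, state: &MachineState) -> bool {
        let mut hasher = DefaultHasher::new();
        state.hash(&mut hasher);

        // Cheap filter: a previously unseen hash cannot be a repeated
        // state, so the expensive clone + deep comparison is skipped.
        if self.hashes.insert(hasher.finish()) {
            return false;
        }
        // Possible repeat (or a hash collision): now pay for the full
        // comparison, and remember this state for later.
        if self.snapshots.iter().any(|s| s == state) {
            return true;
        }
        self.snapshots.push(state.clone());
        false
    }
}

fn main() {
    let mut detector = LoopDetector::default();
    // Pretend the interpreter keeps producing the same state forever.
    let state = MachineState { stack: vec![0] };
    for step in 1.. {
        if step % SNAPSHOT_PERIOD == 0 && detector.observe(&state) {
            println!("infinite loop detected after {} steps", step);
            break;
        }
    }
}
```

Only the rare hash collision pays for the clone and the deep comparison; the common case per sampled step is a single hash.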
Oh! That would explain it. Would be worth a comment. :) Though, couldn't the same be achieved by just manually implementing `Hash` for the types involved?
Yeah it's not trivial. But so far, I have not seen a design that works with loops in memory and entirely avoids that complication.
We actually hash the stack, that is, the stack frames and everything reachable from them, which involves quite a few interpreter types.
Conversely, reusing stable hash means the implementation for all of these types is already available, which means we don't have to write and maintain that hashing code ourselves.
Should I implement the algorithm I outlined so we can actually see how it looks?
Ah dang, I forgot again that we are hashing the original structure but comparing the snapshot. But then...
So what is the plan here? We now have a more traditional step limit for CTFE. @ecstatic-morse said they'd submit a PR to remove the loop detector once that lands, so I guess doing so is all that remains to be tracked.
Yep. It'll be a few days, however.
…op-detector, r=RalfJung

Remove const eval loop detector

Now that there is a configurable instruction limit for CTFE (see rust-lang#67260), we can replace the loop detector with something much simpler. See rust-lang#66946 for more discussion about this.

Although the instruction limit is nightly-only, the only practical way to reach the default limit uses nightly-only features as well (although CTFE will still execute code using such features inside an array initializer on stable).

This will at the very least require a crater run, since it will result in an error wherever the "long running const eval" warning appeared before. We may need to increase the default for `const_eval_limit` to work around this.

Resolves rust-lang#54384
cc rust-lang#49980
r? @oli-obk
cc @RalfJung
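For context, a minimal sketch of what such a step limit amounts to (made-up names, not the actual rustc_mir code): a per-evaluation budget, configured via `const_eval_limit`, that is charged once per interpreter step and turns into a hard error when exhausted.

```rust
/// Sketch of a "traditional" step limit: no hashing, no snapshots, just a
/// budget of interpreter steps that is spent as evaluation proceeds.
struct StepLimit {
    remaining: usize,
}

impl StepLimit {
    fn new(limit: usize) -> Self {
        StepLimit { remaining: limit }
    }

    /// Charged once per MIR step; exhausting the budget is a hard error.
    fn step(&mut self) -> Result<(), &'static str> {
        match self.remaining.checked_sub(1) {
            Some(rest) => {
                self.remaining = rest;
                Ok(())
            }
            None => Err("const evaluation exceeded its step limit"),
        }
    }
}

fn main() {
    let mut limit = StepLimit::new(1_000_000);
    let mut steps = 0u64;
    while limit.step().is_ok() {
        steps += 1; // pretend to execute one MIR statement
    }
    println!("stopped after {} steps", steps);
}
```

Compared to the detector, this never has to clone or hash the machine state; the per-step cost is a single counter decrement.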
The plan changed to removing the loop detector.
Original issue
The code at https://github.com/rust-lang/rust/blob/master/src/librustc_mir/interpret/snapshot.rs leaves me puzzled and confused after wasting about two hours on it. Open questions are:

- Why is `AllocIdSnapshot` an `Option`? What does it mean when there is a `None` there, and when does that happen?
- Does `EvalSnapshot::new` make a deep copy, and then it has a method `EvalSnapshot::snapshot` that does a snapshot (of the `EvalSnapshot`, confusing naming)? That method is called in `PartialEq::eq`; doesn't that mean we do tons of allocations and copies every time we compare?

I tried fixing the second point (making a snapshot when `EvalSnapshot` is created, not when it is compared), but failed to do so, as the thing produced by the `Snapshot` trait isn't actually a snapshot (which I would expect to be a deep, independent copy) but something full of references to somewhere (couldn't figure out where though, just saw the lifetime).

Documentation (inline in the code) should be improved to answer these questions, and if the part about allocating during comparison is correct, that should be fixed.
Cc @oli-obk @brunocodutra
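To spell out the pattern the second question is about (hypothetical, heavily simplified types, not the actual code in snapshot.rs): the deep copy is built inside the equality check itself, so every comparison allocates.

```rust
// Hypothetical stand-in for the type discussed above.
struct EvalSnapshot {
    memory: Vec<Vec<u8>>, // stands in for the machine's allocations
}

impl EvalSnapshot {
    // The "snapshot of a snapshot" step: builds a fresh deep copy on demand.
    fn snapshot(&self) -> Vec<Vec<u8>> {
        self.memory.clone()
    }
}

// The shape being questioned: every equality check first materialises two
// full copies and only then compares them, instead of comparing
// precomputed snapshots (or the states directly).
impl PartialEq for EvalSnapshot {
    fn eq(&self, other: &Self) -> bool {
        self.snapshot() == other.snapshot()
    }
}

impl Eq for EvalSnapshot {}

fn main() {
    let a = EvalSnapshot { memory: vec![vec![1, 2, 3]] };
    let b = EvalSnapshot { memory: vec![vec![1, 2, 3]] };
    assert!(a == b); // allocates copies of both sides just to compare them
}
```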