# Poor codegen for derived `==` on simple 2-field struct #117800

## Comments
Miri reports no UB on the following code (Playground):

```rust
use core::mem::MaybeUninit;

#[derive(Copy, Clone, PartialEq, Eq)]
#[repr(C)]
pub struct Entity {
    g: u32,
    i: u32,
}

#[no_mangle]
pub fn derived_eq(x: &Entity, y: &Entity) -> bool {
    x == y
}

#[derive(Copy, Clone)]
#[repr(C)]
pub struct EntityUninit {
    g: u32,
    i: MaybeUninit<u32>,
}

fn main() {
    let left_ref = &Entity { g: 1, i: 1 };
    let right_uninit = EntityUninit {
        g: 0,
        i: MaybeUninit::uninit(),
    };
    let right_ref = unsafe { &*(&right_uninit as *const EntityUninit as *const Entity) };
    dbg!(derived_eq(left_ref, right_ref));
}
```

So LLVM is completely correct in its conservatism here.

@rustbot label A-codegen T-opsem
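For reference, the derived `PartialEq` for this struct amounts to a short-circuiting comparison along these lines (paraphrased, not the exact macro expansion), which is why the uninitialized `i` field above is never read when the `g` fields already differ:

```rust
// Roughly what the derived `Entity::eq` does (paraphrased): `&&`
// short-circuits, so the `i` fields are only read when the `g` fields
// compare equal -- which is why Miri never sees a read of the
// uninitialized field in the example above.
fn derived_eq_expanded(x: &Entity, y: &Entity) -> bool {
    x.g == y.g && x.i == y.i
}
```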
Agreed that LLVM is correct to be conservative for the currently-derived code; that's why I opened it as a "what can Rust do to let it distinguish these cases" issue. But it's also unnecessarily-conservative in the

Here's a zulip lang thread I was just typing up to talk about what Rust's rules for references should be to allow us to do something better, which I opened this issue to help seed: https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Poison-ness.20behind.20references/near/401455323

(The zulip thread ended up with a very similar example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=159748907525ea9c1f7d0c568ae75eb1 🙂)
Note that the two assembly snippets seem 100% equivalent. The longer version is still doing the loads unconditionally (which makes sense, since the arguments are `dereferenceable`). Thus, even if that particular IR transformation would be invalid, it's still suboptimal for LLVM not to find some way to optimize this. (Perhaps by having the optimization insert `freeze`.)
(This is my first PR here, so I've probably missed some things. Please let me know what else I should do to help you as a reviewer!)

# Objective

Due to rust-lang/rust#117800, the `derive`'d `PartialEq::eq` on `Entity` isn't as good as it could be. Since that's used in hashtable lookup, let's improve it.

## Solution

The derived `PartialEq::eq` short-circuits if the generation doesn't match. However, having a branch there is sub-optimal, especially on 64-bit systems like x64 that could just load the whole `Entity` in one load anyway.

Due to complications around `poison` in LLVM and the exact details of what unsafe code is allowed to do with references in Rust (rust-lang/unsafe-code-guidelines#346), LLVM isn't allowed to completely remove the short-circuiting. `&Entity` is marked `dereferenceable(8)`, so LLVM knows it's allowed to *load* all 8 bytes -- and does so -- but it has to assume that the `index` might be undef/poison if the `generation` doesn't match, and thus while it finds a way to do it without needing a branch, it has to do something slightly more complicated than optimal to combine the results. (LLVM is allowed to change non-short-circuiting code to use branches, but not the other way around.)

Here's a link showing the codegen today: <https://rust.godbolt.org/z/9WzjxrY7c>

```rust
#[no_mangle]
pub fn demo_eq_ref(a: &Entity, b: &Entity) -> bool {
    a == b
}
```

ends up generating the following assembly:

```asm
demo_eq_ref:
        movq    xmm0, qword ptr [rdi]
        movq    xmm1, qword ptr [rsi]
        pcmpeqd xmm1, xmm0
        pshufd  xmm0, xmm1, 80
        movmskpd eax, xmm0
        cmp     eax, 3
        sete    al
        ret
```

(It's usually not this bad in real uses after inlining and LTO, but it makes a strong demo.)

This PR manually implements `PartialEq::eq` *without* short-circuiting, and because that tells LLVM that neither the generation nor the index can be poison, it doesn't need to be so careful and can generate the "just compare the two 64-bit values" code you'd have probably already expected:

```asm
demo_eq_ref:
        mov     rax, qword ptr [rsi]
        cmp     qword ptr [rdi], rax
        sete    al
        ret
```

Since this doesn't change the representation of `Entity`, if it's instead passed by *value*, then each `Entity` is two `u32` registers, and the old and the new code do exactly the same thing. (Other approaches, like changing `Entity` to be `[u32; 2]` or `u64`, affect this case.)

This should hopefully merge easily with changes like #9907 that also want to change `Entity`.

## Benchmarks

I'm not super-confident that I got my machine fully consistent for benchmarking, but whether I run the old or the new one first I get reasonably consistent results. Here's a fairly typical example of the benchmarks I added in this PR:

![image](https://github.com/bevyengine/bevy/assets/18526288/24226308-4616-4082-b0ff-88fc06285ef1)

Building the sets seems to be basically the same. It's usually reported as noise, but sometimes I see a few percent slower or faster. But lookup hits in particular -- since a hit checks that the key is equal -- consistently show around a 10% improvement.

`cargo run --example many_cubes --features bevy/trace_tracy --release -- --benchmark` showed as slightly faster with this change, though if I had to bet I'd probably say it's more noise than meaningful (but at least it's not worse either):

![image](https://github.com/bevyengine/bevy/assets/18526288/58bb8c96-9c45-487f-a5ab-544bbfe9fba0)

This is my first PR here -- and my first time running Tracy -- so please let me know what else I should run, or run things on your own more reliable machines to double-check.

---

## Changelog

(probably not worth including)

Changed: micro-optimized `Entity::eq` to help LLVM slightly.

## Migration Guide

(I really hope nobody was using this on uninitialized entities where sufficiently tortured `unsafe` could technically notice that this has changed.)
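For concreteness, the non-short-circuiting comparison described in the Solution section could look roughly like this. This is a sketch, not the verbatim Bevy code, and the `generation`/`index` field names are assumed from the PR text above:

```rust
// Sketch only: non-short-circuiting `&` reads both pairs of fields
// unconditionally, so LLVM knows neither value can be poison and can fuse
// the two u32 comparisons into a single 64-bit compare.
pub struct Entity {
    generation: u32,
    index: u32,
}

impl PartialEq for Entity {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        (self.generation == other.generation) & (self.index == other.index)
    }
}
```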
Interestingly, the codegen for this seems to be quite dependent on what exact target-cpu it's compiled for: https://rust.godbolt.org/z/ebPnGToW3. It seems that enabling AVX doesn't change LLVM's approach of using vector loads and just uses more compact AVX instructions, but using
Given a basic struct like this,
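(The snippet below is a representative reconstruction based on the `Entity` example earlier in the thread; the exact names in the original issue may differ.)

```rust
// A basic two-field struct with a derived equality comparison.
#[derive(Copy, Clone, PartialEq, Eq)]
#[repr(C)]
pub struct Entity {
    g: u32,
    i: u32,
}

#[no_mangle]
pub fn derived_eq(x: &Entity, y: &Entity) -> bool {
    x == y
}
```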
The generated `==` is suboptimal: https://rust.godbolt.org/z/1b1xsnzx6
For comparison, not using short-circuiting gives a much-simpler codegen.
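A non-short-circuiting version along these lines (a reconstruction, not necessarily the issue's exact code) illustrates the idea, using the struct above:

```rust
// `&` on bools evaluates both sides, so the `i` fields are always read;
// with no conditional load, LLVM can merge the two u32 comparisons into
// a single 64-bit compare.
#[no_mangle]
pub fn manual_eq(x: &Entity, y: &Entity) -> bool {
    (x.g == y.g) & (x.i == y.i)
}
```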
This appears to be related to LLVM not knowing whether the second field is poison, as Alive2 confirms that LLVM isn't allowed to convert the former into the latter (at least for the optimized forms): https://alive2.llvm.org/ce/z/bAsJGN
Is there maybe some metadata we could put on the parameter attributes to tell LLVM that reading them isn't poison? It appears that just reading them first, like in the sketch below (same godbolt link above), still isn't enough for it to remove the short-circuiting: even though that emits the `!noundef` loads first, it seems like LLVM's `SROAPass` moves them behind the branch from the `&&`.
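A shape of "reading the fields first" consistent with that description might be the following (a hypothetical reconstruction; the exact code lives in the godbolt link and isn't reproduced here):

```rust
// Hypothetical reconstruction: pull all four fields out before comparing,
// hoping the unconditional loads remove the need to short-circuit.
// Per the comment above, SROA appears to sink the second pair of loads
// behind the `&&` branch anyway.
#[no_mangle]
pub fn eq_read_first(x: &Entity, y: &Entity) -> bool {
    let (xg, xi) = (x.g, x.i);
    let (yg, yi) = (y.g, y.i);
    xg == yg && xi == yi
}
```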
FWIW, clang (trunk) has the same codegen difference: https://cpp.godbolt.org/z/bbaz196GP
It might not have a choice, though, since C++ references are mostly just pointers.