proc_macro: stop using LEB128 for RPC. #59820
Conversation
@bors try
(rust_highfive has picked a reviewer for you, use r? to override)
⌛ Trying commit 6688b03 with merge 20daaaa5587a335a502951a2e9cd465973bed0a9...
☀️ Try build successful - checks-travis
@rust-timer build 20daaaa5587a335a502951a2e9cd465973bed0a9
Success: Queued 20daaaa5587a335a502951a2e9cd465973bed0a9 with parent 3750348, comparison URL.
Finished benchmarking try commit 20daaaa5587a335a502951a2e9cd465973bed0a9 |
The comparison URL suggests it's a very slight win. But Cachegrind says it's a big win for
That's an almost 8% reduction. I admit I don't understand why this change causes an improvement...
Local profiling results:
This is a classic "time vs memory" tradeoff: by encoding the original integer's bytes with no "compression", the CPU spends roughly one instruction instead of a loop with several instructions (plus a branch) for each 7-bit chunk of the integer. Because we're only serializing a fixed number of integers per request/response, the memory overhead is tiny and constant, so LEB128 was never really called for. Feel free to approve this PR if you think it helps, or get @alexcrichton to review it, I guess.
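To make the tradeoff concrete, here is a minimal sketch in plain Rust (not the actual proc_macro bridge code; `write_leb128` and `write_fixed` are hypothetical names) contrasting the two encodings for a `u32`:

```rust
/// LEB128: emit 7 bits per byte, using the high bit as a "more bytes follow" flag.
/// Every chunk costs a shift, a mask, a compare, and a branch.
fn write_leb128(buf: &mut Vec<u8>, mut value: u32) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            buf.push(byte);
            break;
        }
        buf.push(byte | 0x80);
    }
}

/// Fixed-width: copy the integer's bytes directly; straight-line code,
/// no data-dependent branching.
fn write_fixed(buf: &mut Vec<u8>, value: u32) {
    buf.extend_from_slice(&value.to_le_bytes());
}

fn main() {
    let (mut leb, mut fixed) = (Vec::new(), Vec::new());
    write_leb128(&mut leb, 300);
    write_fixed(&mut fixed, 300);
    // LEB128 is smaller for small values (2 bytes vs 4 here), but with only a
    // handful of integers per request/response that saving is noise.
    println!("leb128: {:?}", leb);
    println!("fixed:  {:?}", fixed);
}
```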
I don't mind going either way on this, but I'd be fine waiting for a compelling use case to switch
The profiling results above are compelling enough for me. @bors r+
📌 Commit 6688b03 has been approved by
…cote proc_macro: stop using LEB128 for RPC. I'm not sure how much of an improvement this creates; it's pretty tricky to measure.
Rollup of 8 pull requests Successful merges: - #59781 (Remove check_match from const_eval) - #59820 (proc_macro: stop using LEB128 for RPC.) - #59846 (clarify what the item is in "not a module" error) - #59847 (Error when using `catch` after `try`) - #59859 (Suggest removing `?` to resolve type errors.) - #59862 (Tweak unstable diagnostic output) - #59866 (Recover from missing semicolon based on the found token) - #59892 (Impl RawFd conversion traits for WASI TcpListener, TcpStream and UdpSocket) Failed merges: r? @ghost
@eddyb sorry to bother you so long after this PR is closed, but still: does this change break the ABI for procedural macros compiled before it?
@fedochet There is no such thing as ABI stability here; Cargo should be recompiling your proc macros for you.
@eddyb but what if I am using precompiled procedural macros and their exported functions outside of the compiler, loading them by hand (basically to emulate procedural macro expansion)?
Then you have to emulate what Cargo does and rebuild everything every time the output of |
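The comment above is cut off, but assuming it refers to the compiler's reported version, a tool that loads precompiled proc-macro artifacts by hand could approximate what Cargo does with a staleness check along these lines (the `rustc -vV` fingerprint and the file name are assumptions for illustration, not something stated in this thread):

```rust
use std::process::Command;

/// Fingerprint the `rustc` on PATH via its verbose version output.
fn rustc_fingerprint() -> std::io::Result<String> {
    let output = Command::new("rustc").arg("-vV").output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    let current = rustc_fingerprint()?;
    // Hypothetical metadata saved next to the compiled proc-macro artifact.
    let stored = std::fs::read_to_string("proc_macro.fingerprint").unwrap_or_default();
    if current != stored {
        // Built by a different compiler: the proc_macro ABI may not match,
        // so rebuild before loading the artifact by hand.
        println!("compiler changed; rebuild the proc macro before loading it");
    }
    Ok(())
}
```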
Is there any chance that the proc_macro ABI/API will be stabilized? I've seen a note in the rustc sources that in the future procedural macros may be implemented as executables of some sort. In practice they already are, but without a reliable ABI it's not safe to use them in external tools (my use case, for example, is providing autocompletion in an IDE based on procedural macro expansion).
@fedochet There are no plans for a stable ABI in Rust. In general, you must recompile everything built by |
@eddyb please take a look at #60593 - if this problem is real (I hope it's not just on my machine), it is very strange: you suggested that I should use the same compiler to compile everything, and from what I'm experiencing that is the only setup that doesn't work (which is, again, very strange).