Bitcode rewrite #19
Conversation
Benchmarks for those who are interested. The previous version of bitcode isn't shown here, but it has speed similar to bincode, size similar to the new bitcode, and compressed size 20% worse than bincode.
This is remarkable. Is there any particular part of the branch you could point me to so I could see how it's done? Side question: do you see any benefits in hinting on top of runtime-determined packing? Hope it's okay to ask these questions in your PR :) Thanks for making this library.
https://github.com/SoftbearStudios/bitcode/blob/5bdc22ba943d0ba8de092a763327b8167656611f/src/pack.rs
After adding hints to bitcode, I didn't use them as much as I thought I would because it was tedious. The types of "packing" I'm using in this new version are designed to quickly determine if they're applicable and then pack the data. I'm probably not going to add manual hints back because most people don't benefit from them (me included). Also, this new version prioritizes working with general-purpose compression, which some of the old hints got in the way of (e.g.
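Below is a purely illustrative sketch of what "runtime-determined" packing can look like in general: the packer inspects the values, picks a bit width at runtime, and packs without any user-supplied hint. This is not bitcode's actual pack.rs (linked above), just a toy analogue of the idea.

```rust
/// Bits needed to represent the largest value in `values` (at least 1).
fn bits_required(values: &[u32]) -> u32 {
    values
        .iter()
        .map(|&v| 32 - v.leading_zeros())
        .max()
        .unwrap_or(1)
        .max(1)
}

/// Packs `values` at a fixed bit width chosen at runtime and returns the width.
/// A real packer would first check cheaply whether this beats other encodings.
fn pack_u32s(values: &[u32], out: &mut Vec<u8>) -> u32 {
    let bits = bits_required(values);
    let (mut acc, mut filled) = (0u64, 0u32);
    for &v in values {
        acc |= (v as u64) << filled;
        filled += bits;
        while filled >= 8 {
            out.push((acc & 0xFF) as u8);
            acc >>= 8;
            filled -= 8;
        }
    }
    if filled > 0 {
        out.push((acc & 0xFF) as u8); // flush the final partial byte
    }
    bits
}

fn main() {
    let mut out = Vec::new();
    // Small values pack at 3 bits each instead of 32.
    let width = pack_u32s(&[1, 2, 3, 4, 5], &mut out);
    assert_eq!(width, 3);
    assert_eq!(out.len(), 2); // 5 values * 3 bits = 15 bits -> 2 bytes
}
```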
Yeah! I made this PR public before it was finished to see what people think about it.
Very impressive!!!
I just released
I'm interested in hearing opinions about making encode/decode share a thread-local Buffer:

```rust
thread_local! {
    static BUFFER: std::cell::RefCell<Buffer> = Default::default();
}

pub fn encode<T: Encode + ?Sized>(t: &T) -> Vec<u8> {
    BUFFER.with(|b| b.borrow_mut().encode(t).to_vec())
}

pub fn decode<'a, T: Decode<'a> + ?Sized>(bytes: &'a [u8]) -> Result<T, Error> {
    BUFFER.with(|b| b.borrow_mut().decode(bytes))
}
```

Pros:

Cons:
You could always opt out by creating a new buffer for each encode/decode call.
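A minimal usage sketch of the trade-off being discussed, using only the `Buffer` methods that appear in the snippet above (`Default`, `encode`, `decode`), so the exact signatures are assumptions rather than the released API: a shared buffer amortizes allocations the way the proposed thread-local wrappers would, while a fresh buffer per call opts out.

```rust
use bitcode::{Buffer, Decode, Encode};

#[derive(Encode, Decode, PartialEq, Debug)]
struct Position {
    x: f32,
    y: f32,
}

fn main() {
    let msg = Position { x: 1.0, y: 2.0 };

    // Reuse one buffer across calls; this is what the proposed thread-local
    // wrappers would do implicitly for every caller on the same thread.
    let mut shared = Buffer::default();
    let bytes = shared.encode(&msg).to_vec();
    let decoded: Position = shared.decode(&bytes).unwrap();
    assert_eq!(decoded, msg);

    // "Opting out": a fresh buffer per call gives up the reused allocations
    // but avoids sharing state through a thread-local.
    let mut fresh = Buffer::default();
    let decoded: Position = fresh.decode(&bytes).unwrap();
    assert_eq!(decoded, msg);
}
```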
I just released
@caibear https://docs.rs/bitcode/0.5.0/bitcode/struct.Buffer.html#method.deserialize
Yes, this was removed. Currently you have to use Encode/Decode if you want to reuse allocations. Note: saving allocations is an optimization that's usually 10% faster on large messages and 50% faster on small messages.
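To illustrate the distinction, here's a hedged sketch assuming a 0.6 setup with both the serde integration and the native derives available (`bitcode::serialize` is mentioned later in this thread; the `Buffer` methods come from the snippet above, so treat the exact signatures as assumptions): serde types go through allocating free functions, while natively derived types can reuse a `Buffer`.

```rust
use serde::{Deserialize, Serialize};

// Serde-only type: encoded via bitcode's serde integration, which allocates
// on every call.
#[derive(Serialize, Deserialize)]
struct SerdeMsg {
    id: u32,
    payload: Vec<u8>,
}

// Natively derived type: can reuse a bitcode::Buffer's allocations.
#[derive(bitcode::Encode, bitcode::Decode)]
struct NativeMsg {
    id: u32,
    payload: Vec<u8>,
}

fn roundtrip(buffer: &mut bitcode::Buffer) {
    // Reused allocations (method names as shown in the snippet above).
    let bytes = buffer.encode(&NativeMsg { id: 1, payload: vec![0; 64] }).to_vec();
    let _native: NativeMsg = buffer.decode(&bytes).unwrap();

    // Serde path: fresh allocations on every call.
    let bytes = bitcode::serialize(&SerdeMsg { id: 1, payload: vec![0; 64] }).unwrap();
    let _serde: SerdeMsg = bitcode::deserialize(&bytes).unwrap();
}

fn main() {
    roundtrip(&mut bitcode::Buffer::default());
}
```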
@caibear Ahh, I see. I'm using hecs (an ECS) and other libs and would need to derive Encode/Decode on them as well, so I'll stick with 0.5.0 in the meantime.
Have you benchmarked 0.6 with allocations against 0.5 without allocations for your use case? 0.6 with allocations might be faster if your messages are large enough.
@caibear Wow, 0.6 is really that much better, huh? I have not benchmarked just yet. What do you mean by "large enough" in terms of size? My game sends position updates 30 times a second and the average size is ~3-5 kilobytes.
0.6 is generally faster/smaller than 0.5 across all benchmarks. The question here is whether the gain in speed outweighs the additional allocations. I just benchmarked deserializing 5 KB of messages and 0.6 is 30% faster. I don't know your exact structs, but this should be a good baseline. On a side note: I also benchmarked 0.6 derive and it's 7x faster than 0.6 serde.
I rewrote the entire library. docs
It's been tested/fuzzed but still needs some work before release.
New features:

- `&str` (see the sketch after this list)

Alpha release:

Beta release:

Full release:

- `bitcode::Buffer` `Send + Sync`
- `#[bitcode(with_serde)]` (can only do `bitcode::serialize` right now)
- `#![forbid(unsafe_code)]` feature flag (serde only and slightly slower)
- `std::net::{*Addr*}` (#30)
- `&[u8]`
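Since borrowed decoding of `&str` shows up as a new feature and the snippet earlier in the thread uses a lifetime-parameterized `Decode<'a>`, here's a hedged sketch of what zero-copy decoding could look like; the derives and free functions mirror the ones quoted in this thread, so treat the exact API as an assumption.

```rust
use bitcode::{Decode, Encode};

// A message that borrows its string from the encoded bytes instead of
// allocating a fresh String on decode.
#[derive(Encode, Decode, PartialEq, Debug)]
struct Chat<'a> {
    sender: u32,
    text: &'a str,
}

fn main() {
    let bytes = bitcode::encode(&Chat { sender: 1, text: "hello" });
    // `text` points into `bytes`; no string allocation happens here.
    let msg: Chat<'_> = bitcode::decode(&bytes).unwrap();
    assert_eq!(msg, Chat { sender: 1, text: "hello" });
}
```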