AsyncWrite support #15
Comments
Thanks for the issue, yeah I also thought about that. It would be nice to have, but I'm not sure how well having both can be managed in terms of code duplication.
One solution that I could propose is:
As a result, the async version would be used as is.
Pros:
Cons:
One significant problem right now is that the ecosystem is split regarding AsyncRead/AsyncWrite: every runtime (tokio, ...) has its own set of traits.
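To illustrate that split (assuming tokio, futures, and tokio-util with its compat feature as dependencies): each ecosystem defines its own trait with a different shape, and bridging between them goes through adapters.

```rust
// The "same" concept, expressed through two incompatible traits:
//
// futures::io::AsyncRead::poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut [u8])
//     -> Poll<io::Result<usize>>
// tokio::io::AsyncRead::poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>)
//     -> Poll<io::Result<()>>
//
// Bridging usually goes through a compat adapter:
use tokio_util::compat::TokioAsyncReadCompatExt;

fn as_futures_reader(stream: tokio::net::TcpStream) -> impl futures::io::AsyncRead {
    // Wrap a tokio AsyncRead so it can be used where futures::io::AsyncRead is expected.
    stream.compat()
}
```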
A good idea may be to add lz4_flex support to https://github.com/Nemo157/async-compression.
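For reference, this is roughly how async-compression exposes its existing codecs over the futures::io traits (gzip shown here, assuming the crate's gzip and futures-io features); lz4_flex support there would presumably take the same shape:

```rust
use async_compression::futures::bufread::GzipEncoder;
use futures::io::{AsyncReadExt, Cursor};

// The encoder wraps any AsyncBufRead and itself implements AsyncRead,
// so compressed data can be pulled from it asynchronously.
async fn compress_all(input: Vec<u8>) -> std::io::Result<Vec<u8>> {
    let mut encoder = GzipEncoder::new(Cursor::new(input));
    let mut out = Vec::new();
    encoder.read_to_end(&mut out).await?;
    Ok(out)
}
```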
The guts of my ugly workaround on the AsyncRead side is:

```rust
const CHUNK_LENGTH: usize = /* 1 arbitrary * */ 1024 /* KiB */ * 1024 /* MiB */;
let compressed_chunk_max_length = lz4_flex::block::get_maximum_output_size(CHUNK_LENGTH);
let mut reader = lz4_flex::frame::FrameDecoder::new(ChunkReader::new());
loop {
    // Buffer compressed data from the connection until the decoder has at least
    // one chunk's worth of input (or the stream ends).
    while reader.get_ref().len() < compressed_chunk_max_length {
        let Some(received) = stream.recv_data().await.context("receive data")? else {
            break;
        };
        reader.get_mut().append(received);
    }
    let read_length = {
        // FIXME: ideally wouldn't initialize
        // `buffer` is the caller's destination (ReadBuf-style: remaining / initialize_unfilled / advance).
        let buffer = if buffer.get_ref().remaining() > CHUNK_LENGTH {
            buffer.get_mut().initialize_unfilled_to(CHUNK_LENGTH)
        } else {
            buffer.get_mut().initialize_unfilled()
        };
        reader.read(buffer).context("read chunk")?
    };
    buffer.get_mut().advance(read_length);
    // break at end of stream
    if read_length == 0 {
        break;
    }
}
```

This attempts to read chunks off the connection asynchronously and decompress them piecemeal without ever blocking.
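The ChunkReader here is not part of lz4_flex; roughly, it is a FIFO of received chunks that reports how many compressed bytes are buffered and implements std::io::Read so the FrameDecoder can pull from it. Something along these lines (the chunk type and other details are assumptions):

```rust
use std::collections::VecDeque;
use std::io::Read;

#[derive(Default)]
struct ChunkReader {
    chunks: VecDeque<Vec<u8>>,
    buffered: usize,
}

impl ChunkReader {
    fn new() -> Self {
        Self::default()
    }

    /// Total number of compressed bytes currently buffered.
    fn len(&self) -> usize {
        self.buffered
    }

    /// Queue a chunk received from the connection.
    fn append(&mut self, chunk: Vec<u8>) {
        self.buffered += chunk.len();
        self.chunks.push_back(chunk);
    }
}

impl Read for ChunkReader {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        let Some(front) = self.chunks.front_mut() else {
            // Nothing buffered: report EOF rather than blocking.
            return Ok(0);
        };
        let n = front.len().min(buf.len());
        buf[..n].copy_from_slice(&front[..n]);
        front.drain(..n);
        if front.is_empty() {
            self.chunks.pop_front();
        }
        self.buffered -= n;
        Ok(n)
    }
}
```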
Likewise on the AsyncWrite side:

```rust
let mut writer = lz4_flex::frame::FrameEncoder::new(vec![].writer());
for chunk in buffer.chunks(
    /* 1 arbitrary * */ 1024 /* KiB */ * 1024, /* MiB */
) {
    writer.write_all(chunk).context("encoder write all")?;
    // Steal whatever compressed data the encoder has produced so far and send it.
    let buffer = mem::take(writer.get_mut().get_mut());
    stream.send_data(buffer.into()).await.context("send data")?;
}
// finish() writes the frame end mark; send whatever is left.
let buffer = writer.finish().context("encoder finish")?.into_inner();
stream
    .send_data(buffer.into())
    .await
    .context("send finished data")?;
```

This attempts to periodically steal the compressed data from the encoder and asynchronously write it to the connection.

I don't really like how these turned out, and I don't think they're a clean solution, but I wanted to share what has been working for me in case it helps anyone else.
There should be some API that does not depend on std::io::Read/Write. Then everyone can implement their own async adapters on top. I looked into the code a bit, but it seems that a lot of the logic currently lives inside the FrameEncoder/FrameDecoder themselves. A good example is:
https://docs.rs/flate2/1.0.34/src/flate2/deflate/bufread.rs.html#116-118
async-compression does not use Read/Write internally. Same for the sync compressor you link to: it also uses a sans-io core underneath. Once you have something like that, the async wrappers become straightforward to build outside the crate.
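A hypothetical sketch of what such a sans-io core could look like for the frame format (all names here are invented for illustration): it only transforms in-memory buffers and never touches Read/Write, so blocking and async wrappers can both be layered on top outside the crate.

```rust
/// How much of the caller's buffers one call made use of.
pub struct Progress {
    pub bytes_consumed: usize, // how much of `input` was read
    pub bytes_written: usize,  // how much of `output` was filled
}

pub trait FrameCompressCore {
    type Error;

    /// Compress as much of `input` as fits into `output`.
    fn compress(&mut self, input: &[u8], output: &mut [u8]) -> Result<Progress, Self::Error>;

    /// Write any trailing frame data (end mark, content checksum) into `output`.
    /// Returns how many bytes were written; call until it reports 0.
    fn finish(&mut self, output: &mut [u8]) -> Result<usize, Self::Error>;
}
```

flate2's Compress/Decompress types work this way, which is what lets both its own bufread wrappers and async-compression build Read/Write and AsyncRead/AsyncWrite layers on top of the same core.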
Would be good to add an impl of FrameEncoder/FrameDecoder where W: AsyncWrite (futures::io::AsyncWrite) and R: AsyncRead.
This would give users the ability to use compression in async data streams.
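For illustration, a minimal sketch of one way such an adapter could be built today, outside of lz4_flex (everything below is hypothetical, not existing API): a futures::io::AsyncWrite wrapper that compresses through the existing blocking FrameEncoder into an in-memory Vec and drains that Vec into an inner async writer.

```rust
use std::io::Write;
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::io::AsyncWrite;
use futures::ready;
use lz4_flex::frame::FrameEncoder;

/// Hypothetical adapter: blocking compression into a Vec, async draining into `inner`.
struct AsyncFrameEncoder<W> {
    encoder: Option<FrameEncoder<Vec<u8>>>, // None once the frame is finished
    pending: Vec<u8>,                       // compressed bytes not yet forwarded
    inner: W,
}

impl<W: AsyncWrite + Unpin> AsyncFrameEncoder<W> {
    fn new(inner: W) -> Self {
        Self {
            encoder: Some(FrameEncoder::new(Vec::new())),
            pending: Vec::new(),
            inner,
        }
    }

    /// Forward buffered compressed bytes to the inner writer.
    fn poll_drain(&mut self, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        while !self.pending.is_empty() {
            let n = ready!(Pin::new(&mut self.inner).poll_write(cx, &self.pending))?;
            if n == 0 {
                return Poll::Ready(Err(std::io::ErrorKind::WriteZero.into()));
            }
            self.pending.drain(..n);
        }
        Poll::Ready(Ok(()))
    }
}

impl<W: AsyncWrite + Unpin> AsyncWrite for AsyncFrameEncoder<W> {
    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<std::io::Result<usize>> {
        let this = self.get_mut();
        // Apply backpressure: don't accept more input while compressed data is pending.
        ready!(this.poll_drain(cx))?;
        let encoder = this.encoder.as_mut().expect("write after close");
        encoder.write_all(buf)?;
        // Steal whatever compressed output the encoder has pushed into its Vec so far.
        this.pending.append(encoder.get_mut());
        Poll::Ready(Ok(buf.len()))
    }

    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        let this = self.get_mut();
        if let Some(encoder) = this.encoder.as_mut() {
            // Ask the encoder to flush its buffered input into the Vec.
            encoder.flush()?;
            this.pending.append(encoder.get_mut());
        }
        ready!(this.poll_drain(cx))?;
        Pin::new(&mut this.inner).poll_flush(cx)
    }

    fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        let this = self.get_mut();
        if let Some(encoder) = this.encoder.take() {
            // finish() writes the frame end mark and returns the inner Vec.
            let tail = encoder
                .finish()
                .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
            this.pending.extend_from_slice(&tail);
        }
        ready!(this.poll_drain(cx))?;
        Pin::new(&mut this.inner).poll_close(cx)
    }
}
```

A read-side counterpart would wrap FrameDecoder the same way, buffering incoming compressed bytes much like the workaround earlier in the thread.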