
async IO traits: thoughts on ecosystem splits #12

Open
nrc opened this issue Jun 29, 2022 · 17 comments
Labels
A-stdlib Area: a standard library for async Rust

Comments

@nrc
Owner

nrc commented Jun 29, 2022

Not sure an issue is the right place for this, but I wanted to record some thoughts on the subject of ecosystem splits in the context of the async IO traits.

The primary driver of this work is to fix an ecosystem split between runtimes. I think that is an important goal with many good effects beyond portability between runtimes. Any solution to the async IO traits should strive to avoid creating new ecosystem splits. However, one of the fundamental tensions in the traits design is that there are different constituencies with different needs and priorities: some users want an easy/ergonomic way to read which will be fast and efficient, but not necessarily cutting edge. Some users have stronger preferences for performance or memory usage over ergonomics (this is multiple groups, I think, with each group having a different requirement around performance). I think there is a really important question over how much of the ecosystem can be shared between these groups and how much we just have to accept some level of splitting.

Looking at Read, it feels like anything other than async fn read being the primary API for users and implementers is sub-optimal for the ergonomics-first group. However, there is no way to adapt such an API into the ready/non-blocking read API which seems necessary for the group which prioritises memory usage (the reverse adaptation is possible). The only way to satisfy both groups is to have two sets of traits (i.e., there is a Read trait with a simple async fn read API, then there is a Ready and ReadinessRead: Ready set of traits for memory-optimal usage, with a blanket impl of Read for T: ReadinessRead). In this scenario, any library which uses Read bounds is usable by everyone, but any library which uses Ready or ReadinessRead bounds is only usable with resources which implement those traits. I think 'downcasting' for converting dyn Read to dyn ReadinessRead might be possible?

I wonder how this would work in practice? It feels like it could be OK if most libraries used Read and ReadinessRead was only used where absolutely necessary, and if most leaf resources implemented ReadinessRead rather than Read directly. However, if the whole Tokio ecosystem moves to ReadinessRead (since that is more natural given their priorities), then I think we just end up with a variation of the current ecosystem split but split by traits rather than dependencies.

Anyway, @rust-lang/wg-async, @rust-lang/libs-api, I'd be interested if any of you have thoughts on this.

@nrc nrc added the A-stdlib Area: a standard library for async Rust label Jun 29, 2022
@nrc
Owner Author

nrc commented Jun 29, 2022

To be a little more concrete, the read traits would look something like

pub trait Read {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
    async fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> Result<()> { ... }
    async fn read_exact(&mut self, buf: &mut [u8]) -> Result<()> { ... }
    async fn read_buf_exact(&mut self, buf: &mut ReadBuf<'_>) -> Result<()> { ... }
    async fn read_buf_vectored(&mut self, bufs: &mut ReadBufVec<'_>) -> Result<usize> { ... }
    async fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize> { ... }
    async fn read_to_string(&mut self, buf: &mut String) -> Result<usize> { ... }

    fn is_read_vectored(&self) -> bool { ... }

    fn by_ref(&mut self) -> &mut Self
    where
        Self: Sized,
    { ... }

    fn as_ready(&self) -> Option<&dyn ReadinessRead> {
        None
    }
}

pub trait Ready {
    async fn ready(&mut self, interest: Interest) -> Result<Readiness>;
}

// Strawman name
pub trait ReadinessRead: Ready {
    fn non_blocking_read_buf(&mut self, buf: &mut ReadBuf<'_>) -> Result<NonBlocking<()>>;
    fn non_blocking_read_buf_vectored(&mut self, bufs: &mut ReadBufVec<'_>) -> Result<NonBlocking<usize>> { ... }

    fn is_read_vectored(&self) -> bool { ... }

    fn by_ref(&mut self) -> &mut Self
    where
        Self: Sized,
    { ... }
}

impl<T: ReadinessRead> Read for T {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize> { ... }

    fn as_ready(&self) -> Option<&dyn ReadinessRead> {
        Some(self)
    }
}
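To illustrate the layering, here is a minimal synchronous sketch (names mirror the async traits above, but the traits and the Memory resource are hypothetical stand-ins) of how the blanket impl lets a resource that only implements the readiness trait be used by any library with Read bounds:

```rust
use std::io::Result;

// Readiness-oriented trait (sync stand-in for the async ReadinessRead above).
trait ReadinessRead {
    fn non_blocking_read(&mut self, buf: &mut [u8]) -> Result<usize>;
}

// Ergonomics-first trait, blanket-implemented for every readiness reader.
trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
}

impl<T: ReadinessRead> Read for T {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        // A real impl would await readiness first; here we just forward.
        self.non_blocking_read(buf)
    }
}

// A leaf resource that only implements the low-level trait...
struct Memory(Vec<u8>);

impl ReadinessRead for Memory {
    fn non_blocking_read(&mut self, buf: &mut [u8]) -> Result<usize> {
        let n = self.0.len().min(buf.len());
        buf[..n].copy_from_slice(&self.0[..n]);
        self.0.drain(..n);
        Ok(n)
    }
}

// ...remains usable by a library that only asks for Read bounds.
fn read_all(r: &mut impl Read) -> Vec<u8> {
    let mut out = Vec::new();
    let mut buf = [0u8; 4];
    while let Ok(n) = r.read(&mut buf) {
        if n == 0 {
            break;
        }
        out.extend_from_slice(&buf[..n]);
    }
    out
}
```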

@NobodyXu

@nrc Honestly, while the trait Read is indeed portable, it might not have the best performance, especially for completion-based async IO engines such as io-uring.

Currently, the most efficient way to work with io-uring is to use owned buffers.
While this is partly due to limitations of the async system (lack of async drop), even with that fixed,
io-uring will still provide better performance with buffers registered with and owned by it.

Registered buffers provide better performance because:

  • io-uring doesn't have to check the validity of the buffer
  • io-uring has a provided-buffer mode for reading data, where it automatically finds an unused buffer.
    This means that we can reuse buffers without polling in Ready or any non-blocking IO in ReadinessRead
  • io-uring now provides a multi-shot recv mode, where a recv IO request is issued only once and it will keep receiving data into the provided buffers.

Thus, I think it is absolutely necessary to rethink the Read traits if we want to get the best performance out of io-uring.
Otherwise, each async runtime will just provide its own traits yet again and we will be locked into a specific vendor to achieve maximum throughput.
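For comparison, here is a hedged sketch of the owned-buffer shape that completion-based runtimes such as tokio-uring expose today: the kernel owns the buffer while the operation is in flight, so the future takes it by value and returns it. The names OwnedRead, read_owned, and Mock are hypothetical, and block_on is a toy executor that only works for ready-immediately futures:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Toy executor: busy-polls with a no-op waker (demo only; requires Rust 1.85+
// for Waker::noop and 1.75+ for async fn in traits).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Owned-buffer read: the resource takes the buffer by value for the
// duration of the operation and hands it back with the result.
trait OwnedRead {
    async fn read_owned(&mut self, buf: Vec<u8>) -> (std::io::Result<usize>, Vec<u8>);
}

// In-memory mock standing in for an io_uring-backed resource.
struct Mock(Vec<u8>);

impl OwnedRead for Mock {
    async fn read_owned(&mut self, mut buf: Vec<u8>) -> (std::io::Result<usize>, Vec<u8>) {
        let n = self.0.len().min(buf.len());
        buf[..n].copy_from_slice(&self.0[..n]);
        self.0.drain(..n);
        (Ok(n), buf)
    }
}
```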

@LukeMathWalker

I've been subscribed to this repository for a while - I think (lack of) runtime interop is a significant issue in today's async ecosystem.
I would have expected to see active discussion in this repository: an opportunity to get an appreciation of the different positions and the rationale behind the different available design options.

Instead, it looks like discussions are mostly happening somewhere else, with you @nrc reporting back/synthesising/interpreting the viewpoints of the different "groups". See, for example, this paragraph:

some users want an easy/ergonomic way to read which will be fast and efficient, but not necessarily cutting edge. Some users have stronger preferences for performance or memory usage over ergonomics (this is multiple groups, I think, with each group having a different requirement around performance).

Am I missing other obvious public venues where these conversations are taking place where we can see descriptions of requirements/concerns coming first-hand by those groups of users?

@SabrinaJewson

Thus, I think it is absolutely necessary to rethink the Read traits if we want to get the best performance out of io-uring.

I disagree; I think BufRead is a fine API for supporting io_uring with owned buffers (assuming the async drop problem is fixed) — it would support multi-shot recv too, as long as each I/O resource can hold a list of buffers (but that wouldn’t be difficult). It additionally has the benefit of being very similar to synchronous code.
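A minimal synchronous sketch of that idea, mirroring std::io::BufRead's fill_buf/consume shape: the resource owns the buffer, so an io_uring runtime could hand out slices of buffers it has registered with the kernel. The Registered type is a hypothetical stand-in:

```rust
use std::io;

// Sync stand-in for the async BufRead being discussed; method names
// mirror std::io::BufRead.
trait BufRead {
    fn fill_buf(&mut self) -> io::Result<&[u8]>;
    fn consume(&mut self, amt: usize);
}

// Stand-in for a resource whose buffer is registered with the kernel.
struct Registered {
    buf: Vec<u8>, // would be a kernel-registered buffer in a real impl
    pos: usize,   // how much of it the caller has consumed
}

impl BufRead for Registered {
    fn fill_buf(&mut self) -> io::Result<&[u8]> {
        // A real impl would submit a READ_FIXED op here when empty;
        // this mock just exposes the unconsumed bytes.
        Ok(&self.buf[self.pos..])
    }

    fn consume(&mut self, amt: usize) {
        self.pos += amt;
    }
}
```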

@nrc
Owner Author

nrc commented Jun 29, 2022

Honestly, while the trait Read is indeed portable, it might not have the best performance, especially for completion-based async IO engines such as io-uring.

Sorry, somewhat unspoken here, but described in the main proposal (https://github.com/nrc/portable-interoperable/blob/master/io-traits/README.md), is that to get the most out of completion based systems you would use the BufRead (or proposed OwnedRead) traits, rather than the Read trait. So that is all somewhat orthogonal to the design around Ready (although it is also a place where I'm concerned about a potential soft split in the ecosystem)

Am I missing other obvious public venues where these conversations are taking place where we can see descriptions of requirements/concerns coming first-hand by those groups of users?

There are some discussions happening on Zulip, mostly with the async WG, but not many. I've been having some 1:1 chats with various stakeholders, but otherwise discussion is mostly here and on Zulip. Where I'm 'reporting back' it's mostly from reading issues or code, or from discussions with stakeholders, plus I guess lots of my own thinking, research on existing systems, and iteration on design. Honestly, there has not been as much active discussion as I'd like.

@NobodyXu

NobodyXu commented Jun 29, 2022

BufRead might be a bit limited for io-uring's provided-buffer mode.

In io-uring, you can specify the id of the provided-buffer group to use when issuing a read request, and all the buffers in a group have the same size.

Thus, it is possible to configure it to use a different buffer size dynamically.

Perhaps we should also have methods for probing the size of the internal buffer and requesting changes to it?

@SabrinaJewson

Internal buffer size modification seems like it would be appropriate more as inherent methods on types like TcpStream than on the BufRead interface to me, since it’s quite specific to io_uring and those particular types. Generic code generally just has to worry about “a reader” rather than the reader’s internal buffer.
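A tiny sketch of what that could look like: a hypothetical concrete TcpStream with inherent buffer-size methods, leaving the generic BufRead interface untouched. All names here are illustrative, not the real tokio/std types:

```rust
// Hypothetical concrete type; buffer-size control lives here rather than
// on the BufRead trait.
struct TcpStream {
    read_buf_size: usize, // would map to an io_uring buffer-group size
}

impl TcpStream {
    fn read_buffer_size(&self) -> usize {
        self.read_buf_size
    }

    fn set_read_buffer_size(&mut self, n: usize) {
        // A real io_uring-backed impl would register a buffer group with
        // entries of size `n` here.
        self.read_buf_size = n;
    }
}
```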

@NobodyXu

Internal buffer size modification seems like it would be appropriate more as inherent methods on types like TcpStream than on the BufRead interface to me, since it’s quite specific to io_uring and those particular types. Generic code generally just has to worry about “a reader” rather than the reader’s internal buffer.

Fair enough.

@NobodyXu

I don't think having async IO traits like Read/Write/BufRead is enough to unify the ecosystem.

Many crates (reqwest, HTTP servers, openssh-mux-client) need to create network sockets (TCP/UDP/Unix socket), and without a way to create them in a portable manner, they will fall back to using a specific runtime.

There are also crates like tokio-pipe, which wraps pipes for tokio users; I think we need some way to create an AsyncFd in a portable manner.

@NobodyXu

Hmmm, I just noticed that there is no way to pass an owned buffer to Write.

Perhaps we also need one for Write to fully utilize io-uring?

@nrc
Owner Author

nrc commented Jun 29, 2022

These are good questions, but they are somewhat off-topic for this issue. I have proposed adding a BufWrite trait and perhaps OwnedWrite too, though I've mostly been focussing on reading. I very much appreciate that there is lots more than just the io traits to be done to make the ecosystem more interoperable. I think the IO traits are necessary but not sufficient.

@conradludgate

    async fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> Result<()> { ... }

I was under the impression this signature had changed to buf: ReadBufMutWrapper<'_, '_> for the 'soundness questions' (eg mem::replace()). Did I miss this getting vetoed or was this just a copy-pasta?

@nrc
Owner Author

nrc commented Jun 29, 2022

I was under the impression this signature had changed to buf: ReadBufMutWrapper<'_, '_> for the 'soundness questions' (eg mem::replace()). Did I miss this getting vetoed or was this just a copy-pasta?

Yes, that is correct; I'm basing this off the current sync design and will update as that evolves (it is, I think, orthogonal to the design questions around async).

@yoshuawuyts

I guess I'm missing too much context to formulate a proper response to this. For example:

  • "Some users have stronger preferences for performance or memory usage over ergonomics (this is multiple groups [...]" — Which groups are these? Why do they have these preferences? Do preferences within this group vary? If so, how do they vary? What are examples of this?
  • Some users want an easy/ergonomic way to read which will be fast and efficient, but not necessarily cutting edge. — What do you mean by "cutting edge"? Who are these users? Why do they want this? What are examples of people doing this?
  • Looking at Read, it feels like anything other than async fn read being the primary API for users and implementers is sub-optimal for the ergonomics-first group. — Why do you feel this? Which alternatives have you considered?

I definitely have thoughts on ergonomics, performance, compatibility in async Rust. But in order to engage with this I need to better understand where you're coming from and which assumptions you bring. Because if we don't share a common understanding of the problem space, it's hard to come to shared solutions - especially in an unstructured medium like this. Does that make sense?

@SabrinaJewson

SabrinaJewson commented Jun 29, 2022

Many crates (reqwest, HTTP servers, openssh-mux-client) need to create network sockets (TCP/UDP/Unix socket), and without a way to create them in a portable manner, they will fall back to using a specific runtime.

This is definitely a problem. I think a good solution would be to make Reqwest generic over an S: Socket type, where Socket would be a trait similar to socket2::Socket’s API, but asynchronous. Maybe something like context parameters could be used to avoid having to write out generic parameters everywhere. It would be better than separate TcpStream/UdpSocket/etc traits because it’s a more powerful API, it makes things easier for implementors since it’s lower level, and it has a smaller API surface. It’s also better than async_io::Async or tokio::io::unix::AsyncFd, because the latter abstraction would be significantly harder to make into a trait (maybe needing HKTs?), as well as being harder to implement, because the generic parameter means the underlying code can’t be trusted, and so it has to guard against that. There could be a lock_api-equivalent that wraps any S: Socket with high-level wrapper types, e.g. TcpStream<tokio::Socket>.
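As a rough illustration of that shape, a client function generic over S: Socket might look like the following. All names (Socket, fetch, Echo) are hypothetical, and block_on is a toy executor that only drives ready-immediately futures:

```rust
use std::future::Future;
use std::io;
use std::net::SocketAddr;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Toy executor for demo purposes (Rust 1.85+ for Waker::noop).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Low-level, socket2-like async trait that client libraries could be
// generic over; each runtime would supply its own implementation.
trait Socket: Sized {
    async fn connect(addr: SocketAddr) -> io::Result<Self>;
    async fn send(&mut self, buf: &[u8]) -> io::Result<usize>;
    async fn recv(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}

// A client that is portable across runtimes via the S parameter.
async fn fetch<S: Socket>(addr: SocketAddr, req: &[u8]) -> io::Result<Vec<u8>> {
    let mut sock = S::connect(addr).await?;
    sock.send(req).await?;
    let mut buf = vec![0u8; 1024];
    let n = sock.recv(&mut buf).await?;
    buf.truncate(n);
    Ok(buf)
}

// In-memory echo implementation standing in for a runtime's socket.
struct Echo {
    pending: Vec<u8>,
}

impl Socket for Echo {
    async fn connect(_addr: SocketAddr) -> io::Result<Self> {
        Ok(Echo { pending: Vec::new() })
    }
    async fn send(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.pending.extend_from_slice(buf);
        Ok(buf.len())
    }
    async fn recv(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.pending.len().min(buf.len());
        buf[..n].copy_from_slice(&self.pending[..n]);
        self.pending.drain(..n);
        Ok(n)
    }
}
```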

Hmmm, I just noticed that there is no way to pass an owned buffer to Write.

Perhaps we also need one for Write to fully utilize io-uring?

Just having .flush() is enough to allow utilization of io_uring’s WRITE_FIXED mode (the equivalent of READ_FIXED). write would simply write to the in-memory registered-with-io_uring buffer, and .flush() would perform the actual work of submitting the I/O operation.
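A synchronous sketch of that split: write only copies into a (conceptually kernel-registered) buffer, while flush is where the WRITE_FIXED submission would happen. The BufferedWriter type and both field names are illustrative:

```rust
// `registered` stands in for an io_uring-registered buffer; `submitted`
// stands in for bytes the kernel has actually written out.
struct BufferedWriter {
    registered: Vec<u8>,
    submitted: Vec<u8>,
}

impl BufferedWriter {
    // `write` never performs IO: it only copies into the registered buffer.
    fn write(&mut self, data: &[u8]) -> usize {
        self.registered.extend_from_slice(data);
        data.len()
    }

    // `flush` is where a real impl would submit a WRITE_FIXED SQE for the
    // registered buffer and await its completion.
    fn flush(&mut self) {
        self.submitted.append(&mut self.registered);
    }
}
```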

A slightly more general API, enabling sharing a single write buffer between multiple I/O resources, is this:

impl Runtime {
    pub fn buffer(&self) -> Buffer;
}
impl TcpStream {
    pub async fn write_from_buffer(&self, buf: &Buffer, range: Range<usize>) -> io::Result<usize>;
    pub async fn read_to_buffer(&self, buf: &mut Buffer, range: Range<usize>) -> io::Result<usize>;
}

Where Buffer is pretty much a Box<[u8]> (i.e. an unresizable mutable byte container). But I don’t think most generic code needs this; the automatic buffering provided by Read and Write should be enough.

Edit: Actually, I do see the use case for a BufWrite which supports fn buffer(&mut self) -> &mut [u8]. Might be useful then.

@nrc
Owner Author

nrc commented Jun 29, 2022

"Some users have stronger preferences for performance or memory usage over ergonomics (this is multiple groups [...]" — Which groups are these? Why do they have these preferences? Do preferences within this group vary? If so, how do they vary? What are examples of this?

I think you may be over-indexing on the grouping here. I simply mean that some users have requirements that make memory usage a very high priority, and therefore it is essential for such users to be able to minimise memory allocation. For other users, their requirements mean that minimising latency is a very high priority, and therefore it is essential to support zero-copy IO. These are obviously very rough groupings and there will be many other differences between members of each group; I'm just grouping users by whether a certain aspect of performance is a top priority or whether ease of use is more important.

Some users want an easy/ergonomic way to read which will be fast and efficient, but not necessarily cutting edge. — What do you mean by "cutting edge"? Who are these users? Why do they want this? What are examples of people doing this?

By cutting edge I mean that they don't care about every last cycle or bit of memory; they only care about order-of-magnitude performance. An example of such a user is somebody replacing part of a web backend written in Ruby with Rust for performance. They care about performance, but not to the same degree as somebody implementing a load balancer for AWS or something.

Looking at Read, it feels like anything other than async fn read being the primary API for users and implementers is sub-optimal for the ergonomics-first group. — Why do you feel this? Which alternatives have you considered?

Why? From discussions with you and Josh, among others, where simplicity and symmetry with the sync APIs seem paramount. As for alternatives: the earlier proposal of Ready::ready plus Read::{read, non_blocking_read} and its variations, having just ready and non_blocking_read, or using polling are the main alternatives on this axis. The other alternatives in the proposal doc are somewhat relevant too.

@NobodyXu

This is definitely a problem. I think a good solution would be to make Reqwest generic over an S: Socket type, where Socket would be a trait similar to socket2::Socket’s API, but asynchronous. Maybe something like context parameters could be used to avoid having to write out generic parameters everywhere. It would be better than separate TcpStream/UdpSocket/etc traits because it’s a more powerful API, it makes things easier for implementors since it’s lower level, and it has a smaller API surface. It’s also better than async_io::Async or tokio::io::unix::AsyncFd, because the latter abstraction would be significantly harder to make into a trait (maybe needing HKTs?), as well as being harder to implement, because the generic parameter means the underlying code can’t be trusted, and so it has to guard against that. There could be a lock_api-equivalent that wraps any S: Socket with high-level wrapper types, e.g. TcpStream<tokio::Socket>.

Yeah, that is similar to what I thought, but I think it is also necessary to have a Runtime trait to group them together.

Check out #13, a very rough sketch of what I want.
