Windows UDP sends aren't pipelined #913
Comments
Yes, the Windows system should probably be rewritten. Unfortunately, this is probably not going to happen unless someone volunteers to take it on.
This adds support for performing non-blocking network operations, such as reading from and writing to a socket. The runtime API exposed is similar to Erlang's, allowing one to write code that uses non-blocking APIs without having to resort to callbacks. For example, in a typical callback-based language you might write the following to read from a socket:

```
socket.create do (socket) {
  socket.read do (data) {
  }
}
```

In Inko, you would instead (more or less) write the following:

```
import std::net::socket::TcpStream

let socket = try! TcpStream.new(ip: '192.0.2.0', port: 80)
let message = try! socket.read_string(size: 4)
```

The VM then takes care of using the appropriate non-blocking operations, and reschedules processes whenever necessary. This functionality is exposed through the following runtime modules:

* std::net::ip: used for parsing IPv4 and IPv6 addresses.
* std::net::socket: used for TCP and UDP sockets.
* std::net::unix: used for Unix domain sockets.

The VM uses the system's native polling mechanism to determine when a file descriptor is available for a read or write. On Linux we use epoll, while using kqueue for the various BSDs and Mac OS. For Windows we use wepoll (https://github.com/piscisaureus/wepoll). Wepoll exposes an API that is compatible with the epoll API, but uses Windows IO completion ports under the hood.

When a process attempts to perform a non-blocking operation, the process is registered (together with the file descriptor to poll) in a global poller and suspended. When the file descriptor becomes available for a read or write, the corresponding process is rescheduled. The polling mechanism is set up in such a way that a process cannot be rescheduled multiple times at once; see the sketch after this comment.

We do not use MIO (https://github.com/tokio-rs/mio); instead we use epoll, kqueue, and wepoll (via https://crates.io/crates/wepoll-binding) directly. At the time of writing, while MIO offers some form of support for Windows, it comes with various issues:

1. tokio-rs/mio#921
2. tokio-rs/mio#919
3. tokio-rs/mio#776
4. tokio-rs/mio#913

It's not clear when these issues will be addressed, as the maintainers of MIO appear to lack the experience and resources to resolve them themselves. MIO is part of the Google Summer of Code 2019, with the goal of improving Windows support. Unfortunately, this likely won't be done before the end of 2019, and we don't want to wait that long.

Another issue with MIO is its implementation. Internally, MIO uses various forms of synchronisation, which can make it expensive to use a single poller across multiple threads; it certainly is not a zero-cost library. It also offers more than we need, such as the ability to poll arbitrary objects.

We are not the first to run into these issues. For example, the Amethyst video game engine also ran into problems with MIO, as detailed in https://community.amethyst.rs/t/sorting-through-the-mio-mess/561.

With all of this in mind, I decided it was not worth waiting for MIO to be fixed, and to instead spend the time using epoll, kqueue, and wepoll directly. This gives us total control over the code, and allows us to implement what we need in the way we need it. Most important of all: it works on Linux, BSD, Mac, and Windows.
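To make the "cannot be rescheduled multiple times at once" guarantee concrete: one way to get single-wakeup behaviour from epoll directly is the `EPOLLONESHOT` flag, which disarms a registration after it fires until it is explicitly re-armed. Below is a minimal sketch in Rust using the `libc` crate; the function names and the `token` parameter are illustrative, not the Inko VM's actual code.

```rust
// Sketch only: assumes Linux and `libc = "0.2"` in Cargo.toml.
use std::io;
use std::os::unix::io::RawFd;

/// Register `fd` for a single read wakeup. EPOLLONESHOT disarms the
/// registration after one event, so whatever `token` identifies (e.g. a
/// suspended process) cannot be woken twice for the same readiness.
fn arm_read(epfd: RawFd, fd: RawFd, token: u64) -> io::Result<()> {
    let mut event = libc::epoll_event {
        events: (libc::EPOLLIN | libc::EPOLLONESHOT) as u32,
        u64: token,
    };
    let rc = unsafe { libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, fd, &mut event) };
    if rc == -1 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

/// Re-arm an existing registration once the previous event was handled.
fn rearm_read(epfd: RawFd, fd: RawFd, token: u64) -> io::Result<()> {
    let mut event = libc::epoll_event {
        events: (libc::EPOLLIN | libc::EPOLLONESHOT) as u32,
        u64: token,
    };
    let rc = unsafe { libc::epoll_ctl(epfd, libc::EPOLL_CTL_MOD, fd, &mut event) };
    if rc == -1 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
```

Because wepoll mirrors the epoll API, the same oneshot pattern carries over to Windows, which is presumably part of the appeal of using it directly.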
@Ralith can you confirm this issue is solved on current master? Your first link now points to a call to
Original link: https://github.com/tokio-rs/mio/blob/v0.6.x/src/sys/windows/udp.rs#L105 I'm going to guess that the answer is yes, as we no longer do any double buffering.
Then I'm going to close this as solved.
https://github.com/carllerche/mio/blob/master/src/sys/windows/udp.rs#L105 marks the socket as unwritable until the send completes at https://github.com/carllerche/mio/blob/master/src/sys/windows/udp.rs#L394. This makes very inefficient use of the kernel's UDP send buffer, severely reducing throughput. Instead, the socket should only be marked as unwritable once the buffer is full. It's not totally clear how this should be accomplished, but there are at least two possibilities:

- Use `SetQueuedCompletionNotificationModes`. Then only unregister write readiness when `WSASendMsg` returns `WSA_IO_PENDING` instead of `0`. However, some reports suggest that `WSA_IO_PENDING` can be returned prematurely.
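A rough sketch of that second possibility's send path, assuming the `windows-sys` crate's WinSock bindings and using plain `WSASend` for brevity (the issue mentions `WSASendMsg`; the return-code logic is the same). The names `try_send` and `SendOutcome` are hypothetical, and this is illustrative pseudocode for the proposed behaviour, not mio's actual implementation; error handling and completion-port plumbing are elided.

```rust
// Windows-only sketch: assumes `windows-sys` with the Win32_Networking_WinSock
// and Win32_System_IO features enabled.
use windows_sys::Win32::Networking::WinSock::{
    WSAGetLastError, WSASend, SOCKET, WSABUF, WSA_IO_PENDING,
};
use windows_sys::Win32::System::IO::OVERLAPPED;

/// Whether the socket should still be advertised as writable after a send.
enum SendOutcome {
    StillWritable, // completed synchronously; the send buffer has room
    BufferFull,    // queued by the kernel; clear write readiness for now
    Failed(i32),   // some other WinSock error code
}

/// Only mark the socket unwritable when the kernel actually queues the
/// operation (WSA_IO_PENDING), rather than after every send.
unsafe fn try_send(socket: SOCKET, data: &[u8], overlapped: *mut OVERLAPPED) -> SendOutcome {
    let buf = WSABUF {
        len: data.len() as u32,
        buf: data.as_ptr() as *mut u8, // WSASend does not write through this
    };
    let mut sent: u32 = 0;
    let rc = WSASend(socket, &buf, 1, &mut sent, 0, overlapped, None);
    if rc == 0 {
        // Completed immediately: keep the socket registered as writable so
        // further sends can proceed without waiting for a completion event.
        SendOutcome::StillWritable
    } else {
        match WSAGetLastError() {
            WSA_IO_PENDING => SendOutcome::BufferFull,
            err => SendOutcome::Failed(err),
        }
    }
}
```

The caveat from the issue still applies to this sketch: if `WSA_IO_PENDING` can be reported prematurely, this path would clear write readiness earlier than the buffer state warrants.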