Conversation
Thanks @aturon! Some thoughts:
Minor comments are inline. I will be posting a more comprehensive comment soon.
tokio-reform.md (outdated):
> On the documentation side, one mistake we made early on in the Tokio project was
> to so prominently discuss the `tokio-proto` crate in the documentation. While
> the crate was intended to make it very easy to get basic protocol
Should be "basic request/response oriented protocol implementations"
```rust
fn serve(addr: SocketAddr, handle: Handle) -> impl Future<Item = (), Error = io::Error> {
    TcpListener::bind(&addr, &handle)
        .into_future()
```
Why is `into_future` needed here? Binding should be immediate?
tokio-reform.md (outdated):
```rust
fn serve(addr: SocketAddr) -> impl Future<Item = (), Error = io::Error> {
    TcpListener::bind(&addr)
        .into_future()
```
Why is `into_future` needed here? Binding should be immediate?
tokio-reform.md (outdated):
> ### The `io` module
>
> Finally, there may *eventually* be an `io` modulewith the full contents of the
First, typo (modulewith).
Also, I would clarify that it would be "with a subset or the full contents". I think it could be entirely plausible that we don't re-export everything.
> could build a solid http2 implementation with it; this has not panned out so
> far, though it's possible that the crate could be improved to do so. On the
> other hand, it's not clear that it's useful to provide that level of
> expressiveness in a general-purpose framework.
I would say that there are three paths forward:

- Try to improve `tokio-proto` such that `h2` wants to use it (low probability of success, I think).
- Significantly simplify `tokio-proto` to make it easy to use for simpler cases.
  - Focus on ease of use over raw performance and features.
  - This would most likely mean getting rid of streaming bodies.
  - This would also most likely mean that hyper wouldn't use it at all.
- Completely deprecate it.
Do you think there are any major protocols that are simple enough that a production implementation might use a significantly simplified version of tokio-proto?
istm that it would be better to kill tokio-proto for now, and when we have some experience in h2 and Hyper, then try and factor out a useful library. Designing the library ahead of time seems doomed to failure in this context.
There are lots of good things in tokio-proto, such as the multiplexing code. I'd like to see that salvaged, if possible.
@nrc the experience w/ h2 and hyper has been acquired and has informed my list of possible paths forward.
I do agree with @tikue that there is a lot of useful stuff in tokio-proto. As a lib, I don't think it can be used when one wants to implement the most efficient client / server possible, but I do think that it could be useful to get something done fast.
As such, I think that focusing on that case (getting something done fast) could be more successful. This would be admitting that performance sacrifices are fine for ergonomic wins.
What I actually found really helpful, even though I ended up not using `tokio-proto`, was that it suggested a model for layering the abstractions that I did end up following. It even provided names which would sound familiar to anyone who had looked at `tokio-proto` before.
I think there is tremendous value in that alone.
I've just been building an RPC mechanism using tokio and I'm making use of tokio-proto for handling multiplexed messages. I'd be sad if that went away.
Thanks @aturon for writing this up. This was quite a good read and I'm quite happy with how this is turning out. Some thoughts follow.
Thanks for the marvelous write-up! It's always a joy to read your RFCs.
tokio-reform.md (outdated):
```rust
    .incoming()
    .for_each(move |(conn, _)| {
        let (reader, writer) = conn.split();
        CurrentThread.spawn(copy(reader, writer).then(move |result| {
```
I think the `current_thread` module differs between the leading examples and the detailed design. Is this what you meant? Is this example out of date?
```rust
            Ok(())
        }));
        Ok(())
    })
```
Looking at this example, it does show a pattern that is extremely common for servers: spawning a listener and then spawning tasks for each accepting socket. I wonder if we could make this sort of thing easier to do, even without async/await.
```rust
fn serve(addr: SocketAddr) -> impl Future {
    TcpListener::bind(addr).and_then(|listener| {
        listener.incoming().for_each(|(conn, _)| {
            let (reader, writer) = conn.split();
            // specifically, just return a Future here
            copy(reader, writer).map_err(|err| {
                println!("echo error: {}", err);
            })
        })
    })
}
```
> extremely common for servers

Not so much when you need a limit on the number of connections. This means either `BufferedUnordered`, or some kind of semaphore across all the spawned coroutines. Currently, it looks like the former solution works fine (with `tk-listen` extensions, though).
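For illustration, here is a rough sketch of the `buffer_unordered` approach to capping concurrent connections, written against the `tokio-core` 0.1 / `futures` 0.1 APIs of the time; the address and the 100-connection cap are invented for the example, and this is not code from the RFC:

```rust
use futures::{Future, Stream};
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;
use tokio_io::{io, AsyncRead};

fn main() -> std::io::Result<()> {
    let mut core = Core::new()?;
    let handle = core.handle();
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle)?;

    // Turn each accepted connection into an echo future; `buffer_unordered(100)`
    // keeps at most 100 of them in flight at once, acting as a crude connection
    // limit without an explicit semaphore.
    let server = listener
        .incoming()
        .map(|(conn, _peer)| {
            let (reader, writer) = conn.split();
            io::copy(reader, writer).map(|_| ()).or_else(|err| {
                println!("echo error: {}", err);
                Ok::<(), std::io::Error>(())
            })
        })
        .buffer_unordered(100)
        .for_each(|()| Ok(()));

    core.run(server)
}
```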
This sounds nice!
Though, it sounds like this may not be as much of an issue anymore, since by default, epoll will be on its own thread, and tasks that would be using the […]
It doesn't matter if you use the default event loop, but it does if you want to take an approach like Seastar where you have many threads that are fully isolated (reactor per thread, almost like a multi-process architecture).
My main pain point with tokio (besides the abstractions as described here) is that it's very hard to debug code using tokio.
@antoyo that should definitely be on the roadmap, thanks!
Example proposal: IRC bot/client.
Please don't add a global event loop. I come from the Python world, and I think if you asked every Twisted core developer, 90% of them would say having a global event loop was a mistake. It leads to the following problems: […]
There's plenty that's challenging about learning Tokio, but I've found creating a […] is not the hard part of learning Tokio, and I think it's a bad place to optimize. Global state makes testing harder, it makes fuzzing harder, it makes reading code harder.
@alex thanks for the feedback.
This argument applies to any default-over-configuration option. It's a balance to weigh. I think the ergonomics win, especially since most of the time you probably are fine w/ just using the default event loop.
I'm not sure I follow this, given that two bits of code that use the same default event loop should be fairly independent. That said, again, changing the default event loop at an executor level will be possible. I would also add, as a counterpoint to Python, that there are many other environments (Go, Erlang, Node / libuv, ...) in which a global event loop has been successful. Unfortunately, I don't know much about the async I/O story in Python, but could the global event loop be a symptom and not the root cause?
As far as Twisted goes, I think you can up that percentage to 100%. Comparing to asyncio is also instructive: I'm not sure how asyncio programmers feel about it, but asyncio has a sort of hybrid approach with a thread-local(?) event loop that can be switched out, which turns some things that would be impossible in Twisted into merely hard things, but I still think the result is far from ideal.
I think it'd be helpful to spell out the cases where we anticipate actually using multiple Tokio event loops (which the Tokio team has viewed as fairly niche). @alex, could you say more about why this desire comes up frequently in Python?
Cases that come up that are poorly served by Twisted's global reactor, and which tokio currently does well: […]
Thanks @alex! One point that I think is very important to note: this RFC makes a major shift in what an "event loop" even means. In particular, the event loop is no longer tied to task execution. I think this might be part of the disconnect. Lemme dig in here:
Can you spell this out a bit more? What's the motivation in more detail?
So, executor-level customization of the reactor should help with this. But also, note that if you're talking about task execution then none of this applies anyway -- you get totally separate executors.
Again, this piece is broken out of […]
This does seem to improve usability a lot, and generally separate concerns more neatly, so that's good, but the increased reliance on thread-locals, global mutable state, and implicitness/magic is a little concerning.

**The default event loop(s) created by tokio**

Perhaps this could be clarified in the RFC, but I presume these are created the first time a […]
These are things to be made easier rather than things which should happen automatically, right? It's not clear from the phrasing.

**Duplication of methods taking […]**
Thanks for the comments @alex! I figured I'd add to what @carllerche and @aturon mentioned already:
In addition to what @carllerche already mentioned, I'll add to the idea that I don't think that this "con of Python today" is strictly derived from having a global event loop. One alternative we considered when hashing out this design was to actually continue to have all functions require a `Handle`. Notably, we'd still have a global event loop! You could call something like […]. Even this, though, can have a downside! (and this one is more related to having a global at all) Let's say you've got a big application that didn't want to bother passing around handles, but all the libraries you use take handles. This means that in the bowels of your application you're calling […].

Interesting thoughts! I've personally wavered on this design quite a bit, but I think it's relatively certain that we're going to want some form of a global event loop. It's just so darn painful in a lot of applications to pass handles everywhere, and a global event loop would solve that ergonomic pain. This does indeed mean that using functions like […]
Another very good point! Our hope is that we'd have strong conventions around APIs you provide; for example, tokio-core provides "convenience" APIs in this proposal which don't take handles, and then fully expressive APIs which take all arguments (including handles). It's true, though, that not all third party libraries may follow this same pattern. It's worth pointing out that you don't always have control over the third party library use case. Even if it did get handles passed in everywhere, you may want some of the third party library to happen on one event loop and some of it to happen on the other, but it may not provide that level of configuration through its API. This in general is where we started to conclude that multiple event loops are likely to be a relatively niche use case, but if you've got some ideas we'd love to hear them!
This I think may be a python-ism rather than a Rust-ism. The global event loop here can't even have foreign code run on it (you can't spawn tasks on it), so in that sense it's totally plausible for an application to have tons of test threads all sharing the same event loop and they can all be executing concurrently/in parallel. Did you have some specific cases you were worried about, though?
One thing I like about this proposal is that it doesn't rule out any existing application architectures. In that sense it's always possible to have an event loop per thread (although @aturon has a good point that diving into the rationale here for this in the first place would be good), so I think it's important for me, at least, to acknowledge that this is mostly a question of ergonomics. Ergonomically I think that this definitely ties into your previous points about third party libraries and idioms (who's passing handles and who takes handles). This is where @carllerche's "change the default on an executor level" would also come in handy as each thread could be an executor and change its default event loop.
It's true that there's not complete 100% isolation between tests if there's a shared event loop, but because we're not running arbitrary code, the only vector for bugs (I think) is bugs in the event loop itself.
To add to what @aturon mentioned about […]
I'm against globals and, particularly, the "spawn" pattern. While this pattern seems to be a success in other languages, I consider Rust different, as we have powerful combinators and an ownership system. I'm not fond of this RFC for the same reason. The ideal pattern I propose tries to avoid spawning entirely. All futures should be organized in a tree structure, by chaining asynchronous function responses, and at last running everything combined with […]. Feel free to correct me if I'm wrong.
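For illustration, a minimal sketch of the "one tree of futures, no spawning" style being described here, using today's `tokio-core` APIs; the two timeouts are made-up placeholder work items, and the whole tree is driven by a single explicit `Core::run`:

```rust
use std::time::Duration;
use futures::Future;
use tokio_core::reactor::{Core, Timeout};

fn main() -> std::io::Result<()> {
    let mut core = Core::new()?;
    let handle = core.handle();

    // Two independent pieces of work, expressed as futures...
    let tick = Timeout::new(Duration::from_millis(100), &handle)?
        .map(|()| println!("tick"));
    let tock = Timeout::new(Duration::from_millis(200), &handle)?
        .map(|()| println!("tock"));

    // ...combined into a single tree with `join` instead of being spawned,
    // and driven to completion by one call to `Core::run`.
    core.run(tick.join(tock).map(|((), ())| ()))
}
```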
> The reactor module provides just a few types for working with reactors (aka
> event loops):
>
> - `Reactor`, an owned reactor. (Used to be `Core`)
If we're changing the name anyway, could we call this `EventLoop` (and the module `event_loop`)? Reactor is really jargon-y and doesn't describe what the object does; witness every time it is mentioned in docs (or even this RFC) having "(aka event loops)" or something with it.
One tiny downside to `EventLoop` is that the desirable variable name is a keyword: `let loop_ = EventLoop::new();`
`let event_loop = ...;` :-)
The good news is that it shouldn't be in the learning on-ramp. Even touching `Reactor` will be for more advanced users.
Also, Reactor is the parlance and has lots of precedent in other environments. As such, those who should be looking for that type probably are already familiar with the naming.
It's a shame the awesome name reactor::Core is becoming reactor::Reactor.
@ishitatsuyuki While that model (single state machine which owns all child futures) makes it easy to handle ownership and cancellation, it can be much less performant in many cases because the entire state machine must be polled every time an event is delivered from mio. It'd be really nice to have some performance guidance and heuristics around the thresholds at which it's best to spawn separate futures vs. maintaining a single state machine, in addition to some information about the performance tradeoffs of running multiple event loop threads.
One class of examples that I would love to see (as I struggle with this every time I work on a tokio-related project) is an expansion of the basic echo server into something with multiple streams/futures. For example, an echo server that will listen on a UDP socket as well as a TCP socket, and have a timer future thrown in as well. How might you combine all 3 things into the same handler or event loop?
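To make the request above concrete, here is a rough sketch of one way the three pieces might be combined on a single `tokio-core` event loop today: turn each into a future and `join` them. The addresses, buffer size, and 5-second interval are invented, and the UDP side is deliberately oversimplified (a real server would loop, e.g. with `future::loop_fn`):

```rust
use std::time::Duration;
use futures::{Future, Stream};
use tokio_core::net::{TcpListener, UdpSocket};
use tokio_core::reactor::{Core, Interval};
use tokio_io::{io, AsyncRead};

fn main() -> std::io::Result<()> {
    let mut core = Core::new()?;
    let handle = core.handle();

    // TCP echo: accept connections and spawn an echo task for each one.
    let tcp_addr = "127.0.0.1:8080".parse().unwrap();
    let tcp_handle = handle.clone();
    let tcp = TcpListener::bind(&tcp_addr, &handle)?
        .incoming()
        .for_each(move |(conn, _)| {
            let (reader, writer) = conn.split();
            tcp_handle.spawn(io::copy(reader, writer).map(|_| ()).map_err(|_| ()));
            Ok(())
        });

    // UDP: for brevity this echoes a single datagram back and then finishes.
    let udp_addr = "127.0.0.1:8081".parse().unwrap();
    let udp = UdpSocket::bind(&udp_addr, &handle)?
        .recv_dgram(vec![0u8; 1500])
        .and_then(|(sock, mut buf, n, peer)| {
            buf.truncate(n);
            sock.send_dgram(buf, peer)
        })
        .map(|_| ());

    // A periodic timer future running on the same loop.
    let timer = Interval::new(Duration::from_secs(5), &handle)?
        .for_each(|()| {
            println!("still alive");
            Ok(())
        });

    // All three are just futures, so they can be combined and driven by a
    // single call to `Core::run` on one thread.
    core.run(tcp.join3(udp, timer).map(|_| ()))
}
```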
Thanks @eminence! And please, everyone else reading this thread: if you have examples you'd like to see, toss 'em out!
First question: Why does this all have to happen at the same time?
I'm not saying I agree or disagree with these decisions. It just all seems very fast and honestly forced, and I'm not sure what the rush is. Each of those could be their own follow-up RFCs with long discussions. We have waited this long for Rust's async I/O story, why not take it one step at a time and ensure it is appropriately thought through?

Second question: Global event loops and global thread pools: Rayon has a concept of […]. Additionally, Futures+Executors seem to have at least superficial similarity with Rayon (which is Iterators+Executors); if nothing else, does it make sense to share the […]?

I have more questions, but I'll start with those.
I'm not too familiar with Tokio, but I have done a bunch of work with Twisted. Twisted extends Python's unit-testing library, adding extra sanity checks like "when the test function returns, the reactor should have no registered file-descriptors or queued timers" (i.e. does the system-under-test clean up after itself). That's a very useful post-condition to check, and very difficult if many tests are running in parallel on the same reactor.
@Screwtapello None of that would be needed in the proposed system. All handles (TcpStream, TcpListener, etc.) would be owned in the test, so when they go out of scope, they will be dropped (which means removed from the global reactor).
Changing the crate name means these changes do not require a 0.2 release. Releasing a […]
As explained in the RFC, it significantly reduces the amount of concepts needed to get started w/ Tokio as well as improves ergonomics for the most common cases. It also allows all of the decoupling of the reactor (I/O driver, executor, timers), because most people won't have to set this up. Only those who care will have to learn how all those various components come together to make a runtime.
It does not, which is why I wrote futures-pool, which is similar in spirit to rayon but geared towards futures. The reasoning: […]
What is the story for running […]
@yazaddaruvala Did you just propose implicit parameters? 😄
This is appealingly consistent with use of […]
Strongly in favor of this. IMO the only reasonable alternative would be to not provide […]
It's a bit of a PoV thing, but if you schedule work on a thread which just happens to be owned […]
You were faster than me 😉. It's still pretty bad if you pass a […]. A typical example could be a library that exposes a TaskRunner as a future (opaque with […]
Thanks @cramertj. Clearly I need to improve my Google-fu. However, it is good to know this idea isn't as left field as I originally thought.
**Thoughts on timers**

This comment represents an overview of my thoughts on timers. It will provide a bunch of context. There are generally two categories of timers: coarse and high-resolution timers. High-resolution timers tend to be set for sub-second delays and require nanosecond resolution. In the context of network programming, coarse timers are appropriate. In this case, 1ms tends to be the highest level of resolution possible, but usually even 100ms resolution is sufficient. On top of that, even triggering the timeout with error margins as high as 30% is acceptable. This is because, in network programming, timers are usually used for: […]
The network is unreliable; as such, it is usually OK for timers to be coarse.

**Assumptions**

When implementing a timer, the following assumptions are safe to make. These assumptions have guided the design of timers across many projects (Linux kernel, Netty, …): […]
**Algorithms**

There are two common categories of algorithms for implementing timers: a heap, and some variation of a hashed timer as described in Hashed and Hierarchical Timing Wheels: Data Structures for the Efficient Implementation of a Timer Facility.

**Heap timer**

This uses a heap data structure to store all timeouts, sorted by expiration time. Heap-based timers have the following properties: […]
Because of the assumptions stated above, heap-based timers are rarely appropriate for use in networking-related scenarios.

**Hashed timer**

While hashed timers are fairly simple conceptually, a full description is out of scope for this comment. The paper linked above as well as the overview by Adrian Colyer are good sources. There are a number of variations of the general idea, all with different trade-offs, but at a high level they are pretty similar. Hashed timers have the following properties: […]
The various implementation permutations provide differing behavior in terms of the coarseness of the timer, the maximum duration of a timeout, trade-offs between CPU & memory, etc. For example, a hashed wheel timer could be configured to have a resolution of 100ms and only support setting timeouts that are less than 5 minutes into the future. Another option could be a hierarchical timer that supports a resolution of 1ms, supports setting timeouts of arbitrary duration, but requires some bookkeeping CPU work to happen every 3.2 seconds. In fact, one characteristic of hashed wheel timers is that, when they are tuned for general-purpose cases (i.e. supporting a resolution of 1ms and arbitrarily large duration timeouts), they tend to require bookkeeping every so often that could block the thread (this will be important later). However, even with these cons, the various hashed timers are much better suited for network programming cases based on the assumptions listed above.
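To make the hashed-wheel idea above a bit more tangible, here is a toy sketch of a single-level wheel: fixed 100ms resolution, 64 slots, and plain boxed callbacks instead of task wakeups. It is purely illustrative and does not mirror any actual Tokio implementation:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// A toy single-level hashed wheel: 64 slots of 100ms each, so timeouts up to
/// ~6.4 seconds ahead land in distinct slots; anything further wraps around and
/// is simply re-checked on a later revolution.
struct Wheel {
    slots: Vec<VecDeque<(Instant, Box<dyn FnOnce()>)>>,
    start: Instant,
    resolution: Duration,
    cursor: usize, // next tick to process, i.e. the wheel's notion of "now"
}

impl Wheel {
    fn new() -> Wheel {
        Wheel {
            slots: (0..64).map(|_| VecDeque::new()).collect(),
            start: Instant::now(),
            resolution: Duration::from_millis(100),
            cursor: 0,
        }
    }

    /// Insertion is O(1): hash the deadline to a slot by dividing by the
    /// resolution and taking the result modulo the number of slots.
    fn insert(&mut self, deadline: Instant, callback: Box<dyn FnOnce()>) {
        let ticks =
            ((deadline - self.start).as_millis() / self.resolution.as_millis()) as usize;
        let slot = ticks % self.slots.len();
        self.slots[slot].push_back((deadline, callback));
    }

    /// Advance the wheel to `now`, firing every entry whose deadline has passed
    /// in the slots we sweep over. Work is proportional to the number of ticks
    /// elapsed plus the number of entries fired.
    fn advance(&mut self, now: Instant) {
        let ticks =
            ((now - self.start).as_millis() / self.resolution.as_millis()) as usize;
        while self.cursor <= ticks {
            let slot = self.cursor % self.slots.len();
            // Drain the slot: fire entries that are due, keep the rest (they
            // belong to a later revolution of the wheel).
            let entries: Vec<_> = self.slots[slot].drain(..).collect();
            for (deadline, cb) in entries {
                if deadline <= now {
                    cb();
                } else {
                    self.slots[slot].push_back((deadline, cb));
                }
            }
            self.cursor += 1;
        }
    }
}
```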
**I/O reactor**

This RFC proposes a default, global I/O reactor. Roughly speaking, its job is to spin in a loop, sleeping on […]. Now enters the question of timers. The RFC details that there is also a default, global timer. A default, global timer will also require a runtime thread in order to sleep & fire […]. The immediate thought would be that, since there already is a runtime thread required for the I/O reactor, this thread can be reused to drive the timer. This would not be ideal. If cross-thread communication is required to drive a default, global timer, a dedicated thread should be used. This is because: […]
While pauses delaying the triggering of timers are not critical for timers (see previous section), if they are run on the same thread as the I/O reactor, they could cause delays in dispatching I/O events, which is not acceptable. Running the timer on a dedicated thread solves these problems.

**Executors**

Now, let's take a moment to talk about strategies for executing futures. Generally, these are the same considerations taken when scheduling actors or green threads. As such, lessons from other environments apply here. Locality, locality, locality! When executing small tasks across a set of threads, you want to move data around threads as little as possible. This improves cache hits and locality, reduces synchronization, etc. This guiding principle highly influenced the design of […]. In fact, an efficient scheduling strategy is a pool of threads that keeps all futures local as much as possible. This is the work-stealing approach implemented by futures-pool. Now, given that locality is key, the way timers would fit in to […]. So, given that I mentioned that the timer will have significant pauses, one could ask: wouldn't it be bad to run on a future scheduler? No, because of the following reasons: […]
Lastly, you might ask: if it makes sense to have a timer per thread, shouldn't we also have an I/O reactor per thread? In short, the answer is: ideally we would (see Seastar), but doing so requires a userland TCP stack. OS-level async I/O primitives don't provide the necessary flexibility to run an I/O reactor per thread efficiently (the exact reasons are out of scope of this comment).

**Conclusion**

The point of this lengthy comment is to illustrate why, in most cases, it is better to keep the I/O reactor and the timer on separate threads. Specifically, the only time they would be on the same thread is when the entire system is single-threaded. Thus, it does not make sense to bake a timer into the I/O reactor or to pass an I/O reactor handle when setting a timeout. This links the timers to the I/O reactor, which, as this comment argues, is the opposite of what is ideal. I am opposed to any proposal that makes the Tokio […]
@carllerche I am not an expert in this by a long shot, but have you considered letting the kernel handle the timers, e.g. […]?
@carllerche What timer implementations will be available in Tokio? Will it still be possible to have a single thread running a core, executor and high-res timers used in the futures scheduled on it?
@tanriol I would like the timer implementation to be completely decoupled from the reactor. This would let you swap in whatever impl you want. And yes, the goal would be for it to still be possible to run everything (I/O reactor, timer, and executor) on a single thread. However, you cannot pair a high-res timer w/ tokio (and probably futures in general...). Assuming you mean a high-res timer in the sub-millisecond granularity, this just isn't possible due to OS APIs being roughly ms granularity and up.
Thanks @carllerche for the writeup about timers! I've been talking some with @alexcrichton and @carllerche, and want to propose we make some revisions to the RFC based on the feedback so far: […]

It's also clear that we need a much more crisp story about how to customize the defaults the library would provide. In particular:

**Hard constraints** […]

Note: these hard constraints mean, in particular, that the library is fully "zero cost" in the sense of "pay only for what you use": if you want to exert full control over reactor and timer management, you can do so, and no threads or reactors will be created behind your back. But for the common case, you can use the defaults and have a good experience.

**Soft constraints** […]

@alexcrichton is going to look into some concrete APIs for meeting the above constraints. It seems best for this RFC to include the basic customization story, to ensure that we have the bases fully covered. After that, I'll plan to update the RFC text itself.
I've spent way too long reading this entire thread top to bottom twice now. In the end, I'm generally for these changes, but before getting into that, […]

**Re: rendered RFC**

This is only true if the reactor is used for multiple threads, right? If […]

How would this look in libraries that need to spawn or return tasks? I think […]

Out of curiosity, what recent changes is this referring to? I looked through […]

Why is this the case?

For my own curiosity, when is this useful?

How does this differ from before? I saw it a few times in this comment thread, […]

Does this API need to exist? Can it not just be a hidden thread-local variable?

**Re: first comment**

When and why do lost wakeups happen today?

I'm a bit confused here, so I'm going to write what I think it means, ask […]. The global event loop should probably come with a timer because […]

**Re: why multiple event loops**

I agree with @alex's event loop per thread.

**Re: event loops, again**

What would be unfortunate here is that all libraries that use the global event […]. The compromise of overriding the default event loop seems to make this concern […]

**Re: different profile workloads**

This is also something to worry about when doing a lot of file IO - we want to […]

**Summary**

Comment about how most concerns are resolved
This comment agrees that a default that can be overridden solves a lot of issues […]

Timers cannot take a […]
Yes and no. The question is if every person doing anything async should have to always think about keeping their tasks short-running. It's not always obvious when a bit of async code consumes too much CPU. Even if you are running a reactor per thread, while you will keep the CPUs busy, you will starve other tasks that are waiting for time. Basically, running a reactor & executor on the same thread can be more efficient but is harder to get right.

This puts emphasis on […]

It is referring to work that has happened throughout most of 2017.

**Re: executor enter**

The API is intended to be used only by authors of executors, not end users.

It's a common issue that comes up in gitter / IRC. Users spawn tasks onto an executor but never start the executor. A local thread executor can't be implicitly started (unlike a thread pool), so this results in the spawned tasks never running, and it is confusing to debug for new users.
The duration only indicates the max duration to block. This is passed through to […]

No, epoll ops happen on the same thread. epoll is […]

Re: go context, that is a separate feature and is similar to finagle's async task local variables (or whatever they call it).
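For reference, a small sketch of how the existing `tokio-core` API already exposes that "maximum time to block" knob via `Core::turn`; the 10ms figure and the loop count are arbitrary:

```rust
use std::time::Duration;
use tokio_core::reactor::Core;

fn main() -> std::io::Result<()> {
    let mut core = Core::new()?;

    for _ in 0..100 {
        // Perform one iteration of the event loop, blocking in the underlying
        // OS poll call for at most 10ms, whether or not any I/O events arrive.
        core.turn(Some(Duration::from_millis(10)));

        // ...do other per-iteration work here, e.g. drive a timer...
    }

    Ok(())
}
```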
I've made several updates to the RFC based on the feedback here, but due to a GitHub glitch, I need to put this in a new PR. In any case, here are the key changes: […]
Thank you for the great work on Tokio. This is my favourite Rust crate. I agree with @alex. Please don't add a global event loop, or anything global.

(1) Almost every API I used had a hidden possibility to add an argument of `loop=...`. Each such API was a chance to shoot yourself in the foot. If you forgot to specify your own loop, the default loop will be used by default and nobody will tell you about it. Then the debugging hell begins. I remember one time it took me a few days to find the API that defaulted to the global loop. As a concrete example, take a look at this pythonic API (asyncio): `asyncio.ensure_future(coro_or_future, *, loop=None)`. From here. This is the equivalent of […]

(2) There are worse cases of libraries that don't let you put in your own loop. They just assume that you will want to use the default one. It could become very difficult to test your code against those kinds of libraries. One experience I had was testing asyncio Python code that waits. I created a mock event loop that simulated the passage of time (called asyncio time travel), because I couldn't afford having my tests wait a few minutes for a timeout. The author of a library I used decided to use the global event loop, and I couldn't replace it with my mock time-travel loop. In the end I had to create a test environment that actually waits many minutes in order to run the tests.

(3)
Rust is not the same. Rust is a systems programming language. I like Rust because it is very explicit. There is no hidden stuff. When I write code in Rust I don't want implicit global things to happen. If I want an event loop, I will create one. I really liked the current Rust Tokio way of creating a new loop and then putting your futures into it, and I prefer it every day over other magically global loop constructions. For me, the hard part about learning Tokio wasn't the event loop. The event loop code is usually just a few lines that you slap in at the end, after you wrote all your futures. The hard parts were finding out that mixing futures with references and lifetimes doesn't go very well, and understanding how to transform the types correctly, so that I won't get huge cryptic compiler messages about a mismatch in the error type between two futures.
This RFC proposes to simplify and focus the Tokio project, in an attempt to make
it easier to learn and more productive to use. Specifically:
- Add a global event loop in `tokio-core` that is managed automatically by
  default. This change eliminates the need for setting up and managing your own
  event loop in the vast majority of cases.

- Remove the distinction between `Handle` and `Remote` in `tokio-core` by making
  `Handle` both `Send` and `Sync` and deprecating `Remote`. Thus, even working
  with custom event loops becomes simpler.

- Decouple all task execution functionality from Tokio, instead providing it
  through a standard futures component. As with event loops, provide a default
  global thread pool that suffices for the majority of use-cases, removing the
  need for any manual setup.

- When running tasks thread-locally (for non-`Send` futures), provide more
  fool-proof APIs that help avoid lost wakeups.

- Provide the above changes in a new `tokio` crate, which is a slimmed down
  version of today's `tokio-core`, and may eventually re-export the contents
  of `tokio-io`. The `tokio-core` crate is deprecated, but will remain available
  for backward compatibility. In the long run, most users should only need to
  depend on `tokio` to use the Tokio stack.

- Focus documentation primarily on `tokio`, rather than on `tokio-proto`.
  Provide a much more extensive set of cookbook-style examples and general
  guidelines, as well as a more in-depth guide to working with futures.
Altogether, these changes, together with async/await, should go a long
distance toward making Tokio a newcomer-friendly library.
Rendered