Cats Effect 3.0 Proposal #634
Worth noting that if you look at:

```scala
type PureIOF[E, A] = EitherK[StateF[Dispatch, ?], ExitCase[E, ?], A]
type PureIO[E, A] = Free[PureIOF[E, ?], A]
```

Where … Nevertheless, I think it's a very, very useful one, and I think it might be worth adding to |
Hmm, this is interesting. So you want the ability to constrain your type signature such that it cannot be used with a type constructor which supports |
You could do it with no extra typeclass if we had higher ranks :P |
@SystemFw Technically you can do it without an extra typeclass even in current Scala by using the negated implicits trick. :-P |
Exactly! I recently had to work with a library for JDBC which required that you pin the current session (transaction) to the thread you're going to use for running queries, before you run these queries. And I really, really didn't want to mess with that myself. So I chose to stick to a synchronous implementation (which was fine, as there was no parallelism or other asynchrony anyway), and went with …

@djspiewak @SystemFw negated implicits would also require |
Why not just rely on local semantics for |
@djspiewak I believe I can't even shift to the same pool. Anyway, I guess we can discuss this abstraction (or the lack thereof) on a different issue in the future :) |
It has to be a single thread executor. Normally libraries with that restriction do a ThreadId check, so shifting to the same thread shouldn't cause any issues |
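For context, the setup described above is typically achieved with a dedicated one-thread executor; here is a generic stdlib sketch (my illustration, not tied to the JDBC library in question):

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

object SingleThreadEC {
  // keep a handle on the executor so it can be shut down later
  val executor = Executors.newSingleThreadExecutor()

  // Every task submitted here runs on the same JVM thread, which satisfies
  // libraries that pin a session or transaction to a particular thread.
  val instance: ExecutionContext =
    ExecutionContext.fromExecutorService(executor)
}
```

Shifting "to the same pool" is then trivially a no-op with respect to thread identity, which is why a ThreadId check in such a library remains satisfied.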
I have a concern about the proposed type class hierarchy. It just dawned on me that Cats-Effect IO 0.x (the one we had before cancellation) can no longer implement …

What's even more worrying is that, in fact, the new …

Again, this might be for good reasons, but we've been living just fine without …

If anything I'd like to see such restrictions relaxed, in order for the type classes to be able to abstract over more implementations instead of fewer. |
When you abstract over implementations, be it with OOP interfaces or type classes, the defined interfaces need to be a common denominator of existing implementations, instead of forcing existing implementations to evolve to implement the new interfaces, or worse, to wait for new implementations to happen after you define the interfaces. The latter is imo doomed to failure. My proposal is:
|
While I think this is true for building a lot of abstractions, I think that there's also a slightly different school of thought which says that you should build the idealized abstraction from first principles, and then if that doesn't align with the implementation, find out why. That's kind of the approach I'm trying to take here. I think there will be some evolution, but I would rather define the abstractions according to what should be, rather than according to what the implementations currently are, except for cases where there's a profound reason why the implementations are the way they are (in which case there was something wrong with the abstraction derivation). I definitely don't want to release CE3 until all three major datatypes have had a successful pass at implementing the new typeclasses.
We can't keep things separate like that without causing coherence issues. For example, if:

```scala
def foo[F[_]: Async: Concurrent, A](fa: F[A]) =
  fa.flatMap(_ => ???) // ambiguity
```

I can definitely see logic in making
Why exactly? Cancelation is defined by |
Oh, missed the first comment, sorry:
This was raised in the stream discussion as well. Specifically, Doobie's …

In other words, for types which currently implement |
This is very true. It's a capability that neither Monix nor Cats Effect has implemented to date, but I think it's a very important one. The safe semaphore acquisition example is just really hard to talk about without this pair of functions. That's a use-case that we literally just don't support right now. Additionally, the lack of these functions forces tricks like
CE3's hierarchy is definitely more restrictive than CE1's, but that's by design. CE1 just allows the expression of too many problematic things. Sure we're living just fine in a sense, but we miss out on some very nice practical things that are possible with CE1 (such as pure …

I think it's easy for Cats Effect to represent a "least common denominator" between the three major implementations, and we could do that, but I think we can shoot higher. Cats Effect was always intended to define what it means to be an effect, which means defining and constraining the calculus of effectful datatypes according to first principles, rather than according to what happens to already be present.

I definitely share your concerns about implementation, but I feel those are solvable problems, and we certainly have plenty of time in which to solve them. |
So I think we've talked about this before, but I'll mention it again — right now you can define:

```scala
type Cancelable = () => Unit
type IO[+A] = () => (Cancelable, Future[A])
```

Or, if we want to avoid …:

```scala
type Cancelable = () => Unit
type Callback[-A] = Either[Throwable, A] => Unit
type IO[+A] = Callback[A] => Cancelable
```

This is THE definition of asynchrony if you want to represent it as a type, and there's plenty of prior art for this, including:
You're saying that:
And I could agree, except that the definition for async data types as described above is so pervasive and so common sense that I fear the type class proposal we have is not based on first principles! I think that any type class hierarchy that we come up with has to be able to work with the types I specified above and if it can't, then we have a problem. |
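To make the prior-art claim above concrete, here is a minimal sketch (my own illustration, not code from the thread) of how sequencing composes under the callback-based encoding; note how the returned `Cancelable` has to track whichever stage is currently running:

```scala
object CallbackIO {
  type Cancelable = () => Unit
  type Callback[-A] = Either[Throwable, A] => Unit
  type IO[+A] = Callback[A] => Cancelable

  def pure[A](a: A): IO[A] =
    cb => { cb(Right(a)); () => () }

  // Run fa, then feed its result into f; propagate errors directly.
  def flatMap[A, B](fa: IO[A])(f: A => IO[B]): IO[B] = cb => {
    // Not thread-safe: a real implementation needs an atomic handoff here.
    var current: Cancelable = null
    val first = fa {
      case Left(e)  => cb(Left(e))
      case Right(a) => current = f(a)(cb)
    }
    if (current == null) current = first
    () => current()
  }
}
```

Even in this toy form, cancelation is visibly part of the calculus: canceling the composite means canceling whichever stage happens to be in flight.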
To answer some of the other points:
This is indeed true, however having ambiguity due to the subtyping encoding that we use is a very weak reason for making it impossible to implement these type classes for certain data types. That's because subtyping and the current ambiguity are just an implementation detail that could be solved in the future by the compiler. I don't know how Dotty is doing lately, but I remember for example this proposal which would fix the issue: lampepfl/dotty-feature-requests#4

And I for one can work without the special syntax in generic code, especially if it means that I can abstract over more data types. And maybe we should just bite the bullet and use the Scato encoding. Anything is better than designing a type class hierarchy that doesn't allow for
I share this view, however there's also the timeless "principle of least power". Why is the …

In the proposed CE3 hierarchy, what datatype is able to implement …

If you can't think of one such datatype, then the split between them doesn't really exist. |
And speaking of the "principle of least power", I actually want us to still support non-cancelable async datatypes:

```scala
type IO[+A] = () => Future[A]
```

Or alternatively:

```scala
type Callback[-A] = Either[Throwable, A] => Unit
type IO[+A] = Callback[A] => Unit
```

This is what we had in Cats-Effect 0.x, before cancellation. And it should ring a bell, because not all async computations can be canceled. We've got the:

```scala
object Async {
  def fromFuture[F[_], A](fa: F[Future[A]])(implicit F: Async[F], cs: ContextShift[F]): F[A] = ???
}
```

To me this definition is beautiful, because it tells you exactly what's needed to convert a …

To me this smells more like a design based on first principles, compared with having …

PS: sorry for my insistence on this problem, but I think it's a very important one going forward. Maybe we should start a separate issue about it. |
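To illustrate why that signature "tells you exactly what's needed", here is a self-contained sketch using stand-in typeclasses (hypothetical `MiniAsync`/`MiniContextShift` traits mirroring the CE1 shapes; this is not the actual cats-effect source):

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

// Minimal stand-ins for the typeclasses in the signature above.
trait MiniAsync[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  def async[A](k: (Either[Throwable, A] => Unit) => Unit): F[A]
}

trait MiniContextShift[F[_]] {
  def shift: F[Unit] // re-schedule the continuation onto the "home" pool
}

object FromFuture {
  // Async captures the callback registration; ContextShift takes the
  // continuation back off of whatever thread the Future completed on.
  def fromFuture[F[_], A](ffa: F[Future[A]])(implicit F: MiniAsync[F], cs: MiniContextShift[F]): F[A] =
    F.flatMap(ffa) { fut =>
      val registered = F.async[A] { cb =>
        fut.onComplete {
          case Success(a) => cb(Right(a))
          case Failure(e) => cb(Left(e))
        }(ExecutionContext.global)
      }
      F.flatMap(registered)(a => F.flatMap(cs.shift)(_ => F.pure(a)))
    }
}
```

The two constraints are exactly the two capabilities consumed: callback capture and rescheduling; nothing about cancelation is demanded or promised.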
I really dislike making |
@Daenyth Do note that …

@alexandru Your points are all interesting (although I broadly subscribe to Daniel's view of abstraction building).

Overall, I think that if we want maximum flexibility we need to forgo the idea of a hierarchy altogether, and mix and match (biting the bullet of |
@SystemFw the argument from my PoV is that the new type class hierarchy doesn't currently work, without changes, for any implementation we have in the Typelevel ecosystem; it works only for ZIO, until proven otherwise (until somebody implements it we'll never know). That's a big red flag right there. Maybe we need to restate the project's goals, but with type classes that are too restricted, we essentially end up with just one …

And given the trouble we've had implementing the instances for the monad transformers for the current hierarchy, which is easier to implement, I'm pretty sure that we'll have even more problems when implementing those instances now. Just a hunch at this point, not absolutely sure.

I really understand Daniel's argument ... building from first principles sounds nice, however given that we are restricting the implementation in such a way as to reject the most ubiquitous implementations out there, I'd like to see some really good arguments that go beyond "it's the fault of our subtyping-based encoding", because that won't cut it imo. |
Also to add to my chain of thoughts — we already have a minimal IO implementation that implements …

We use it for testing and it was implemented in less than 100 lines of code. The number of lines of code it takes to implement such a data type is a useful metric imo. Let's go through the exercise of implementing a minimal IO that implements the new |
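As a rough illustration of the "lines of code" metric, a hypothetical minimal sync-only IO (my sketch, not the actual test implementation referenced above) fits in about twenty lines:

```scala
// MiniIO: a toy, synchronous-only IO. It supports delayed side-effect
// capture, sequencing, and error handling via attempt; no asynchrony,
// no cancellation.
final class MiniIO[+A](thunk: () => A) {
  def unsafeRun(): A = thunk()

  def map[B](f: A => B): MiniIO[B] =
    new MiniIO(() => f(thunk()))

  def flatMap[B](f: A => MiniIO[B]): MiniIO[B] =
    new MiniIO(() => f(thunk()).unsafeRun())

  def attempt: MiniIO[Either[Throwable, A]] =
    new MiniIO(() => try Right(thunk()) catch { case e: Throwable => Left(e) })
}

object MiniIO {
  def pure[A](a: A): MiniIO[A] = new MiniIO(() => a)
  def delay[A](a: => A): MiniIO[A] = new MiniIO(() => a)
}
```

Each extra capability demanded by a hierarchy (asynchrony, concurrency, bracketing) grows a datatype like this, which is exactly the cost the comment proposes to measure.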
And conversely, I do really understand all of your points; they are all very valid. As a related point, I used the phrase concurrency scheme on purpose, because there are several philosophical designs. I think the one we have now is frankly not that great: there is a single-digit number of people who can write interruption-safe code, the |
@SystemFw so I love your proposal for concurrency, with But I don't want it to restrict what data types can implement these type classes. So I'm all for Scato's encoding if that solves our issue. Syntax is overrated anyway. |
I'm heating up 🤒🙂 sorry about that, but I think this is the right time to bring up such concerns and we've got some time to get it right and agree on stuff. |
fwiw, I didn't perceive any heat, and I think all the concerns you have raised make a lot of sense. My main concern right now is just how to organise the discussion so that we don't get lost in a million points :) |
Maybe we should split this discussion in separate issues, since there are multiple parts to this proposal that could be talked about in parallel. |
I think we definitely should split some of the feasibility questions into a separate thread (I'll hold my responses until it's created to avoid more noise). @alexandru can you make a new issue which reiterates some of your points? Copy/paste is fine, just want the thread to be clear. I definitely don't think there's any undue heat here. :-) You raise really really important questions, and it's vital that we consider all of them carefully as part of this process. |
"Least power" is definitely a valid point here. In theory, …

For my part, I don't see too much of a problem with putting |
Is there a short description of what this means?
This is very important; at least from my perspective, it would be a severe pain point for my team if …

Does this approach make any sense:
I think it has some benefits:
It's perhaps not the most principled stance but I think the ergonomics are pretty critical. It's extremely tempting to litter the code with |
@Daenyth tl;dr it's the
I think the same. If you do full-blown tagless final in business logic you're going to hate the scato encoding if syntax doesn't just work.
I got used to the idea that Timer shouldn't really be mocked and there are better interfaces for this, e.g. https://github.com/ChristopherDavenport/cats-time (with precise java.time types)

Also, let's split this please. The thread is long enough already... |
Very roughly… Instead of:

```scala
trait Applicative[F[_]] extends Apply[F] with Functor[F] {
  // ...
}
```

You have:

```scala
trait ApplicativeModule[F[_]] extends ApplyModule[F] with FunctorModule[F] {
  val Apply: Apply[F]
  val Functor: Functor[F]
  // ...
}

// and in another part of town...
implicit def functorForApplicative[F[_]: Applicative]: Functor[F] = ...
implicit def applyForApplicative[F[_]: Applicative]: Apply[F] = ...
```

You can get disambiguation out of this kind of encoding by fiddling with the prioritization order of these proxy implicits, which is one way you can solve the
How so? Creating one-off …

One way to think about this… In Scala, typeclasses should simply proxy and interpret function calls to the underlying type; they should never contain state, via closure or otherwise. |
Agreed. @Daenyth Can you open this as a separate issue? (specifically |
While I understand the concern about syntax, a more severe pain point would be if the type classes do a poor job of abstracting over effect data types, making constrained parametric code working with …

It would be pretty bad if we started thinking about a Cats-Effect 4.0 already 😐

And really, in my code I end up doing this a lot:

```scala
F.flatMap(F.attempt(fa)) {
  case Left(_)  => ???
  case Right(_) => ???
}
```

It's just a bunch of functions after all. And if this means that we do a better job of abstracting over data types, this is a pain that I can personally take.

I'm not set on us using the Scato encoding — especially since experience has taught us that we should never bet against inheritance in Scala 😄 But maybe we can find some sort of middle ground, in case we are speaking of artifacts in the hierarchy that happen only due to the encoding chosen.

Or another possibility is to just live with the conflicts and just forget about syntax in a …

Not everything has to be perfect and this conflict could be solved by a future version of the compiler anyway. I can certainly see some sort of annotation to make the compiler not trigger errors on conflicts, as is the case in lampepfl/dotty-feature-requests#4, which if it happens, would mean that we've chosen a certain hierarchy based on a certain compiler behavior that could easily be rectified in the future. |
Btw, this problem that we're trying to solve here is a tough one to solve (how to make these type classes granular, in spite of our encoding), I've thought about it in the past but am incapable of providing a solution, so I appreciate you all thinking about it 🥰 |
@djspiewak the second equivalency ( |
Yeah I realized the problem with those equivalencies the other day. I’m trying to think about the best way to resolve it, or if there’s a nicer pair of primitives which eliminate. I like the idea of allowing a lawful continual data type implementation, and this is a good step towards it, but there’s a piece missing. |
I have a few thoughts (and a few in-depth links that explain how Haskell does it) but I'm at a conference right now |
I'm relatively certain that the new type classes actually do a better job of abstracting than the old ones, since the power is more granular. They definitely impose some stronger constraints on certain things, such as forcing datatypes to carry an
Well, this is less about …

We certainly can do this if we need to, but I just wonder about whether or not the gains are worth it. Specifically, moving |
@SystemFw Yes, we should. Would you mind opening that up when you have a chance? Your explanations of those functions are always better than mine anyway. |
I will, I'm just trying to gather the mental energy for a longer writeup, if you know what I mean |
Note that this has evolved somewhat on the |
(moved from https://gist.github.com/djspiewak/fb7851cc4f804f851e70d15ba7c94bb1)
If you're someone for whom code speaks louder than words, here's a branch to ponder: https://github.com/typelevel/cats-effect/tree/ce3
The following is a proposal for Cats Effect 3, collaboratively devised over the last few months by several of the maintainers and contributors to the library. I will frequently use first-person in this document (because I'm lazy), which is posted under my name (ditto), but it really was a joint effort. As with any joint effort, there has definitely been some healthy debate about several of the concepts in this proposal. For reference, these are the things we're the least sure about:

- `Concurrent` now takes two type parameters, and thus is somewhat less ergonomic
- `Region` has been added, along with some complex associated machinery
- the `Safe.Case` type
- `Sync` being stack-safe
- `ExecutionContext` appears in the API
- `ExitCase` is a really terrible name (the names in general need some polishing)

None of this is final! At all! Really. Now is the time for questions and debate and concerns and bike-shedding. Put forward your own ideas. Find fundamental problems with what we've done! The sky's the limit!
Preamble
Before diving into the exact changes that we're proposing, I want to briefly talk about Cats Effect, what it is and where it fits in the ecosystem. Cats Effect is really two things at once:
Both of these elements of the project are extremely important, though the success of the former in the ecosystem and in end-user applications has rendered the latter less important than it was at the conception of the project. Note that, from an ecosystem strategic standpoint, this implies that Cats Effect has three primary stakeholders:

- … `main` function.
- … `main`, or at least something like `IOApp`.

All three of these stakeholder categories exert equal pull on Cats Effect's design and future, though not all are concerned with the same things equally. Runtime Systems are almost entirely concerned with Cats Effect's abstractions and laws, for example, while End Users are concerned with functionality (one aspect of the abstractions, but often more relevantly the practical concrete `IO`) and syntax. Middleware often cares about both equally, although Middleware will often pull in exactly the opposite direction from Runtime Systems (desiring more functionality within a cleaner model which the runtime must find a way to implement). Sometimes, in order to satisfy the needs of one stakeholder, we need to compromise the desires of the other two. This is an unfortunate consequence of being pulled between so many worlds.

There's another aspect of this, as well, which is that Cats Effect really defines what it means to be a functional effectful runtime in Scala. Its abstractions are the language of effects on the JVM, and thus it both enables and constrains the Scala FP design space as a whole. This role in the ecosystem is not taken lightly, nor with ego. It means that every design decision made in Cats Effect has profound consequences for every single line of practical functional code written in Scala for years to come, and thus should be undertaken with commensurate care. (Note that this is distinct from Cats, which does occupy a more keystone position in the ecosystem but has far fewer design decisions to make; after all, how many ways are there to define `Functor`?)

All of this is to try to lay down a clear and self-conscious vision of what Cats Effect is, where it fits in the ecosystem, and what its goals and constraints are. I've seen a lot of people discussing various ideas about what Cats Effect is and what it should do, based on their own biases and perspectives. This is what we, the maintainers and creators of Cats Effect, define it to be. These are the points which have shaped Cats Effect 3.
`evalOn` and `async`
In the original work designing Cats Effect, I really wanted to keep `Async#async` from auto-shifting. The reason being that it was, in my experience, not at all uncommon to want to retain the continuation of an asynchronous effect on its calling pool, and the performance hit and contention from the enforced context shift is not something to ignore. This design decision led to a large number of other design outcomes, including `shift`, the shifting (no pun intended) semantics of `IO.fromFuture`, and more.

In retrospect, I think this was a mistake. While the use-cases that I had in mind certainly exist, they are vastly in the minority compared to the more common use-case of wanting to take a continuation off of the calling pool as quickly as possible. More importantly, John De Goes makes an excellent point about local reasoning and `evalOn`. If `shift` is eliminated and `async` auto-shifts when in an `evalOn` continuation scope, then it is possible to lexically reason about thread pools. This is incredibly powerful, and it eliminates a very large class of common and hard-to-detect errors.

Even at the time, this was a slightly odd design. Monix has had auto-shifting `async` and lexical `evalOn` from the very start (predating Cats Effect entirely). I still believe that non-shifting `async` is useful, but I think it should be classified as what it is: unsafe. My proposal would be to eliminate `shift`, enforce auto-shifting semantics on `async`, promote `evalOn` as a first-class solution to the problem, and encourage implementing datatypes (such as `cats.effect.IO`) to optionally add their own `unsafeAsync` function, which captures a continuation without shifting. I believe Monix `Task` already has something like this, so the concept is relatively proven.

These semantics are already present in Monix and ZIO, and CE3 codifies them into law in the abstractions. To that end, it also will result in some very noticeable changes to the `cats.effect.IO` datatype, in order to comply with the new (safer) semantics.

To that end, the `Async` typeclass in Cats Effect 3 adds a pair of functions for accessing and manipulating the scoped executor:

You'll also note that `async` is actually `asyncF`. This is simply because `Async` implies `Sync`, meaning that you can always wrap your callback registrar in `delay` to lift it into `F`. This type signature is strictly more general than the existing one, since it provides a mechanism by which callbacks may be registered themselves through asynchronous actions without blocking a thread. The performance cost of the added `delay` is meaningful, but in our experience it is exceedingly rare to have repeated calls to `async` in the hot path, and in the event that you are doing things in the hot path, you probably shouldn't be working through a typeclass abstraction anyway.

The pair of functions,
`evalOn` and `executionContext`, provide the mechanism for accessing the `MonadReader` (for the `ExecutionContext`) constraint on `F`. `evalOn` takes an effect and ensures that all synchronous actions within that effect, save for anything further-nested in an inner `evalOn`, will be evaluated on the specified `ExecutionContext`. The dual of this is the `executionContext` function, which produces the currently-dominant `ExecutionContext` within `F`. This is useful for interoperating with APIs which require direct access to an `ExecutionContext`, such as `Future`.

Abstraction, `Concurrent`, and Parametricity

In the very beginning, Cats Effect was based on the principle that concrete effects (like `IO`) should be atomic and should represent a fully opaque, runnable action. This concept actually comes from Paul Chiusano, who used it as a foundational idea in his original design of fs2 (the rewrite of scalaz-stream). Semantics such as preemption (cancellation) and concurrency were intended to be handled at a higher level in the stack, by frameworks defining a richer (and thus, more error-prone) calculus.

To that end, early versions of Cats Effect really only needed to characterize a few things:

…

That last point, btw, is where `runAsync` came from.

Given the requirements were exclusively limited to the above, a hierarchy like the following (which was implemented in Cats Effect 0.1) makes a lot of sense:
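The hierarchy diagram originally shown here was lost in extraction; very roughly, its shape was as follows (a sketch from memory of the released 0.x library, with simplified signatures):

```scala
// Sketch of the Cats Effect 0.1-era hierarchy; in the real library Sync
// also extended MonadError[F, Throwable], and runAsync returned IO[Unit].
trait Sync[F[_]] {
  def delay[A](thunk: => A): F[A] // side-effect capture
}

trait Async[F[_]] extends Sync[F] {
  def async[A](k: (Either[Throwable, A] => Unit) => Unit): F[A] // async capture
}

trait Effect[F[_]] extends Async[F] {
  // safely consuming an effect's results in a side-effecting context
  def runAsync[A](fa: F[A])(cb: Either[Throwable, A] => Unit): Unit
}
```

Nothing in this chain says anything about cancellation or concurrency, which is exactly the atomicity principle described above.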
The above is sufficient if we assume that base effects are atomic and the core runtime does not define things like cancellation or concurrency, but unfortunately it turns out that this design principle is problematic when applied to the broader ecosystem. My favorite example of this comes from fs2:
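The fs2 snippet itself was lost in extraction; based on the discussion that follows (which refers to `s`, `fa2`, and `fa *> fa`), it presumably resembled something like this (a reconstruction, assuming fs2's `Stream.eval`/`compile` API, not the original code):

```scala
import cats.effect.Sync
import fs2.Stream

// Reconstruction: a two-step stream, compiled back down to the effect type.
def example[F[_]: Sync, A](fa: F[A]): F[A] = {
  val s: Stream[F, A] = Stream.eval(fa) ++ Stream.eval(fa)
  val fa2: F[A] = s.compile.lastOrError // morally the same as fa *> fa
  fa2
}
```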
Even without knowing fs2's API, it seems relatively clear from the types that `fa2` and `fa *> fa` should be more or less equivalent. Or at the very least, they should behave pretty much identically. Unfortunately, this wasn't quite the case with early versions of Cats Effect.

The reason for this is the atomicity design. As intended, fs2 defines a mechanism and semantics for concurrent preemption of running streams. This means that, within the confines of fs2, the stream `s` could be safely interrupted by some concurrent mechanism (i.e. another stream), all resources and such would be properly handled, and evaluation would be terminated as soon as possible, preventing leakage of CPU resources on wasted work. This worked fine within fs2 itself, but as soon as you compiled back to the underlying effect type, all those semantics would be lost.

More specifically, assume `Stream.eval(fa2)`. This should be exactly the same as `s`, but it wasn't, because `s` could be interrupted at the `++` boundary, while `Stream.eval(fa2)` couldn't be interrupted at all, and the `fa2` would run to completion. Concurrency and cancellation are linked together (due to the need to safely define what it means to handle errors in `race`), and omitting them from Cats Effect, which is the definition of the runtime, meant that all middleware needed to encode these semantics via captured side-effects in user space. This was safe, in a sense, but not composable.

To that end,
`Concurrent`, `Fiber`, and `Bracket` were all added to the Cats Effect hierarchy, characterizing preemption and resource safety as part of the core calculus itself. These were much newer APIs than `Sync`, `Async` and friends (which had been vetted for over 5 years in Scalaz), but they had some precedent in Haskell's async libraries, in Monix, and in ZIO. Unfortunately, at the time the natural place to add `Concurrent` was under `Async` (meaning that all `Concurrent` types are also `Async`, and thus also `Sync`). This was a somewhat reasonable design, but it introduces a really annoying problem, which is that it is impossible to have a `Concurrent[F]` without also having `Sync[F]` and `Async[F]`. This is annoying precisely because of the power that those two typeclasses offer.

`Sync` is defined by a single function, `delay`, which takes a thunk `=> A` as a parameter and wraps it up in an `F[A]`. In other words, it captures side-effects. `Async` is analogous, except for callbacks. This isn't news, and it's exactly what `Sync`/`Async` were designed to do, but unfortunately it means that, in theory, any function which takes these constraints can itself do anything. Consider the following type signature:

What can this function do? What implementation can we swap in for the `???`? There's almost no way to tell. `Sync` does give us some reasoning power: `foo` cannot involve asynchrony without blocking a thread, for example. Just to drive the point home, here's another function, `bar`, which takes an `Applicative`:

There's only one implementation for this function: `a.pure[F]`. You really can't do anything else. Well, I mean you can (e.g. `unit[F] *> a.pure[F]`), but it's all equivalent to `a.pure[F]`. By contrast, `foo` may be `a.pure[F]`, or it could be `Sync[F] delay { println("hey there!"); a }`, or an infinite number of other things!

This is unfortunate, as reasoning is reduced considerably whenever these constraints are present, but unfortunately `Concurrent` inheriting from `Async` means that any time we want to talk about concurrency, we also have to talk about... literally everything (i.e. the space of all possible side-effects). This is too much, and the only way to resolve it is to flip things around, putting `Concurrent` at the top of the hierarchy and moving `Sync` and `Async` down below. (diagrams at the end)

Moving `Concurrent` to the top of the hierarchy (well, under `Bracket`) allows us to discuss concurrency in a pure fashion, without bringing in the messiness of capturing side-effects. It allows us to write functions which have access to considerable power, including forking and cancelation, but do not have access to unlimited power. Even more excitingly, it allows us to write laws not in terms of captured `var`s, but in terms of true substitutable equivalences. This in turn implies that we can define concrete implementations of `Concurrent` which are weaker than `IO` itself! This makes it possible to build tooling around `Concurrent` which allows for rich and deterministic testing of things like race conditions, cancellation, and other concurrent logic. This is very, very, very exciting.

As a brief aside, it's worth noting that there is still value in writing type signatures like `foo`'s. Even though it's difficult to reason about the body of `foo`, due to the power of `Sync`, the generic signature gives us the flexibility to instantiate `F` with different types. Not just different concrete runtimes (like Monix or ZIO), but different composable types on top of those runtimes, such as monad transformers. This expressive power has proven immensely useful in both the middleware and end user ecosystems of Cats Effect 1.

Fixing `Fiber`

In current Cats Effect, `Fiber` has the following definition:

This is nice and simple, but it hides a subtle pitfall. What is the result of the following?
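Both the `Fiber` definition and the snippet in question were lost in extraction; for reference, CE1's `Fiber` (reproduced from cats-effect 1.x, with the `CancelToken[F]` alias expanded to `F[Unit]`) and a sketch of the program being asked about:

```scala
// The cats-effect 1.x Fiber shape: join simply yields the result value.
trait Fiber[F[_], A] {
  def cancel: F[Unit]
  def join: F[A]
}

// The problematic program, roughly (sketch):
//   for {
//     fiber <- fa.start
//     _     <- fiber.cancel
//     a     <- fiber.join // error? deadlock? a value, if fa won the race?
//   } yield a
```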
Should it throw an error? Should it just… hang? Should it depend on whether or not `fa` completed evaluation before the `cancel` (say, if `fa` is really, really fast to evaluate)? What makes the most sense here?

The types don't really give us anything to work with, and the semantics are equally obtuse and surprising. In Cats Effect 3, the goal is to do better, and we can do this by generalizing `Fiber` very slightly:

Same basic signature, but now `join` returns an `ExitCase`. If you're familiar with `ExitCase` in Cats Effect 1, you'll notice that this is a slightly different type: specifically, it now includes the result type, `A`. This is intentional, as `ExitCase` itself has been generalized to the following:

In a very real sense, `ExitCase` is the algebra of an effect: it may produce a value as a result, or it may error, or some outside fiber may cancel it. Thus, producing this type from `join` gives users the power to do whatever they want with the results.

However, surfacing cancellation as a value in this fashion yields some surprising consequences for certain use-cases. For example, one might want to `join` on a `Fiber`, then evaluate some logic in the event that the fiber is canceled, and then forward that cancellation on to the outer fiber (think: some sort of cleanup logic surrounding preemption). The signature of `join` allows us to do half of this, but critically it doesn't allow us to cancel the enclosing `Fiber` in which we `join`ed.

This is a bit weird, if you think about it. The first two of the `ExitCase` cases have corresponding constructors which can build effects that produce those results:

- `Completed` ==> `pure`
- `Errored` ==> `raiseError`

So, what about `Canceled`? We certainly can't create a cancellation "value" simply by creating a fiber, canceling it, and then `join`ing on it, because that would just result in `ExitCase.Canceled.pure[F]`! This is also the problem with the previously described concrete use-case.

To resolve this, we introduce a new constructor, `canceled`, on `Concurrent`, which allows us to create an `F[A]` (for any `A`) which is already canceled.

Note that, unlike `raiseError`, there is no way to "eliminate" cancellation within a fiber. If someone cancels you, you're done. Your finalizers will run (anything you've bracketed), but that's all. Other fibers which are `join`ing on you can observe the cancellation and respond accordingly, but you yourself are done. This is consistent with the intended semantics of concurrent preemption.

As an added nicety (and something which hints that we're on the right track), the `Concurrent` laws are much easier to define (and more comprehensive) when `canceled` is a value that can just be constructed directly.

Cancelability
By default, all three major concrete Cats Effect datatypes define their `flatMap` function (and all lawfully-equivalent combinators, such as `map`) to be cancellation points. This means that they serve as a sort of "safe point" in the computation chain where some external fiber may preempt evaluation, aborting the sequence. As a simple example using `IO`:

In the above, it is very likely that `computeA()` is run, but also quite likely that `computeB()` will not be run. This is because, conceptually speaking, the cancellation status of the fiber is "checked" by the `*>` operation. We can't cancel `computeA()` while it is running, since it is some sort of opaque side-effect that could be doing anything, but once it finishes we can check for the cancel signal before we move on to `computeB()`, and so that's exactly what we do. In other words, cancellation isn't really magic: it arises naturally from the fact that we're in control of our sequential composition.

This is the so-called "auto-cancelable model". The default is that `flatMap` is cancelable, and thus it is possible to cut off wasteful computation as early as possible in the event that something else has preempted it. This is in contrast to the "continual model", which is what was implemented by Monix prior to the introduction of cancellation into Cats Effect. In the continual model, `flatMap` is not a safe point, and chains of sequential actions cannot be interrupted by default. There are some significant advantages to this model, in that you have a guarantee that potentially-dependent sequential effects are going to run, regardless of what's happening in other fibers, but it can also result in wasting time on computations which will ultimately be ignored, and can even give rise to deadlocks.

These two models hint at a fundamental tension: the auto-cancelable model gives us better efficiency and avoids deadlocks in case of preemption, while the continual model provides better safety guarantees.
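The `IO` example alluded to above might look something like this (`computeA` and `computeB` are hypothetical side-effecting functions):

```scala
import cats.effect.IO
import cats.implicits._

def computeA(): Int = ???  // hypothetical opaque side effect
def computeB(): Int = ???

val program: IO[Int] = IO(computeA()) *> IO(computeB())
// If an external fiber cancels `program` while computeA() is running,
// evaluation proceeds to the *> boundary, observes the cancellation
// there, and computeB() is never executed.
```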
Ultimately, both models are necessary to some degree. There will always be critical regions of your program which must be evaluated in an "all or nothing" fashion, without permitting interruption between actions. However, in order to avoid leaking resources, particularly in looping chains (infinite fibers), cancelation must be respected. Cats Effect 1 addresses this issue by effectively assuming that all sequential chains are cancelable by default, while providing an "escape hatch" in the form of the `uncancelable` function.

`uncancelable` takes an action, `F[A]`, and produces that same action with the added guarantee that cancelation will be ignored within that running action. Thus, if any component of `F[A]` is run, then all of it will be run. This function is implemented in terms of `bracket`, which is reasonable since resource acquisition (the first parameter of the `bracket` function) should generally be assumed to be a critical region, and atomicity is the desired default. Unfortunately, this gives rise to another problem: it becomes impossible to safely acquire a semaphore (or any similar pattern) in Cats Effect. For example:

This seems quite reasonable at face value. We acquire the semaphore, use it, and then release it with the guarantee that the release will always run, even if our fiber is canceled. Unfortunately, there's a bit of a trap here: the `sem.acquire` action is itself uncancelable! This means that if the semaphore is known to be unavailable, it's impossible for an outside fiber to cancel the acquisition, freeing up the resources of this fiber.

This problem was discussed at some length earlier this year in the Cats Effect gitter room. In this discussion, Fabio Labella proposes that there is no "right answer" when choosing between the auto-cancelable and continual models. Neither is necessarily wrong, though auto-cancelation seems to be more convenient in practice. The more important thing is to provide a composable mechanism for converting between the two models in a scoped fashion. Cats Effect 1 provides `uncancelable`, which allows users to move from the auto-cancelable model to the continual one. Fabio pointed out that what is missing is a `cancelable` function which converts from the continual model back to the auto-cancelable one. This proposal was later implemented in ZIO, where it appears to be quite effective in practice!

Cats Effect 3 fully implements this proposal with the following pair of functions on `Concurrent`:

These functions eliminate each other, such that `cancelable(uncancelable(fa)) <-> fa` and `uncancelable(cancelable(fa)) <-> fa`, which is quite intuitive. Additionally, this pair of functions makes it possible for an implementing concrete datatype to define either the auto-cancelable or the continual model as its default, so long as it is able to lawfully implement these two "scoped mode shift" functions.

Using these functions, we can solve the semaphore acquisition problem from earlier:
And now we have the best of both worlds: the semaphore is acquired interruptably (avoiding deadlocks), but if it is acquired we are guaranteed to safely release it (avoiding resource leaks).
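The pair of functions in question, and the semaphore pattern they enable, might be sketched like this (hedged; a `Semaphore` with `acquire`/`release` is assumed, and the `bracket` signature is illustrative):

```scala
trait Concurrent[F[_], E] {
  // shift a scoped action into the continual model
  def uncancelable[A](fa: F[A]): F[A]

  // shift a scoped action back into the auto-cancelable model
  def cancelable[A](fa: F[A]): F[A]

  // illustrative bracket, assumed uncancelable in its acquire step
  def bracket[A, B](acquire: F[A])(use: A => F[B])(release: A => F[Unit]): F[B]

  // a plausible bracket-derived default for uncancelable:
  //   def uncancelable[A](fa: F[A]): F[A] = bracket(fa)(pure(_))(_ => unit)
}

// Safe acquisition: the acquire step is re-made cancelable, so a fiber
// blocked on the semaphore can still be interrupted, while the use and
// release steps remain protected:
//   F.bracket(F.cancelable(sem.acquire))(_ => useTheResource)(_ => sem.release)
```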
As an aside, we can still safely provide a default implementation of `uncancelable` in the following way:

This is, in fact, a law which relates `bracket` and `uncancelable` (and by extension, `cancelable`). Concrete implementations may have a more efficient version, however, and are always free to override this implementation with one of their own, so long as it complies with the stated equivalence.

A Note on Fairness
In the context of functional effects, fairness is generally seen as equivalent to ensuring the current fiber yields its native thread back to the underlying executor in a timely fashion. Having a single logical thread hogging an underlying native thread for a long period of time inhibits responsiveness and can (in extreme cases) lead to thread pool starvation, as the application is unable to respond to incoming requests due to long-lived computations eating up all of the available resources and never yielding back control.
In the current version of Cats Effect, it isn't possible (without relying on implementation details) to write generic code which properly encodes fairness. This is particularly problematic when the underlying datatype (for example, `cats.effect.IO`) doesn't automatically self-optimize for fairness, and instead relies on users explicitly "opting in", which they cannot do through the abstractions.

To resolve this, Cats Effect 3 adds the following function to `Concurrent`:

All this does is introduce a fairness boundary. It yields control back to the underlying executor. Or rather, it is suggested that it does so. There are no laws here, nor really could there be. Implementations of `Concurrent` are expected to treat this as a serious hint from the user that this is an appropriate yield point for the current fiber, and that fairness should be respected here. Implementations are certainly free to ignore it. As one plausible example, Monix automatically inserts fairness yield points (by default) once every 1024 actions. If one of those yields was automatically inserted just previous to the user-sequenced `yieldTo`, then perhaps the user-initiated yield could be ignored. Tuning would be required to figure out whether or not this is actually a good idea.

Wall-Clock Time
It's important to remember that Cats Effect characterizes what it means to be a runtime. In some cases that means characterizing what it means to start a fiber, or register a callback, or atomically manage a resource. In other cases, it defines what it means to access certain system-level functions which can vary depending on how your application is running, and to do so in a way which is safe and interacts well with the rest of the runtime.
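Jumping ahead to the punchline of this section, the wall-clock capability discussed below boils down to two functions; a hedged sketch (the typeclass name and exact signatures are illustrative):

```scala
import scala.concurrent.duration.FiniteDuration

// Hypothetical rendering of the wall-clock typeclass.
trait Temporal[F[_], E] /* extends Concurrent[F, E] */ {
  def sleep(time: FiniteDuration): F[Unit]  // asynchronous timer
  def now: F[FiniteDuration]                // offset since epoch
}
```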
Interaction with wall-clock time, both in terms of timers and determining the current clock time, is an important (if unfortunate) part of many applications, and Cats Effect 3 characterizes this functionality with a typeclass which further specializes `Concurrent`:

These functions do exactly what it seems like they do. The former interacts with some underlying scheduler to asynchronously suspend for the given duration, while the latter returns the offset since epoch according to the current wall clock. The only moderately odd thing here is the use of `FiniteDuration` in `now`, which is mostly because I hate mixing `scala.concurrent.duration` and `java.time` types in the same API, and the former has a much cleaner API in Scala.

Monadic Regions and Cancellation Semantics
Present-day Cats Effect defines an abstraction, `Bracket`, which is used to scope a monadic region. This monadic region is given certain properties with respect to errors and cancellation which make it possible to acquire scarce resources in such a way that the freeing of those resources is guaranteed. The various properties implied by this functionality also make it possible to implement higher-level primitives, such as `guarantee` (which allows one to suppress cancellation during a critical series of actions which must all be performed together).

This is very useful, but upon closer examination, something is missing. `Bracket` is defined by the following function signature (in Cats Effect 3):

The scope of the monadic region defined by `bracket` is statically constrained to be around the `use` effect. In other words, `handle` will be sequenced after `use`, no matter what. It isn't possible to acquire a resource, pair that acquisition with some finalizer which releases it, and then return that acquire/release pairing to some caller who can `flatMap` on it to build up a program dynamically.

In other words, `bracket` forms a static monadic region. This implies a clear dual: what about dynamic monadic regions? Fortunately, there is already quite a bit of prior art in this regard within the Cats Effect ecosystem alone! The `Resource` concrete datatype was implemented in Cats Effect 1.0 to handle this use-case:

All of the `flatMap` calls on `Resource` extend the scope of the monadic region, to an arbitrary degree. The region is only closed when the `use` function is invoked, which interprets the structure into `Bracket[F]` and returns the result. This construct, as it turns out, is really, really profoundly useful, and it can be found in several other guises throughout the ecosystem, including `fs2.Stream` and `zio.Managed` (though the latter is hard-coded to the `ZIO` type and is not generic, unlike `Resource` and `Stream`
, but it does still define a dynamic monadic region).

In fact, the original paper introducing monadic regions (Kiselyov and Shan, 2008) actually defines a dynamic region, not a static one! However, the paper is somewhat distinct from the situation in Cats Effect in that it also constrains effects to a closed algebra, predefined by the resource library itself. This allows the technique to encode statically provable resource safety (a very desirable property), but it also makes the technique undesirable in practice. Cats Effect allows users to construct arbitrary effects. In the absence of linear typing, this means that it is impossible to ever guarantee that resources are safely discarded (though we can guarantee that resources are always released), but the tradeoff is a much more practical effect system. Unfortunately, I am not aware of any prior literature describing dynamic monadic regions with open effect algebrae (though there is a little-known Haskell library which encodes the concept in a fashion similar to `Resource`).

This leaves Cats Effect in a somewhat undesirable position of having to break new ground on a previously-untried abstraction: `Region`. In general, Cats Effect attempts to be quite conservative in what it defines, given its highly central position in the ecosystem. In this case, there is prior art (e.g. `Resource`), but only in a concrete and specialized form. To that end, this particular construct remains the most divisive construct within the Cats Effect maintainership. Feel free to criticize accordingly!

As a motivating example, consider the following scenario:
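One concrete rendering of such a scenario (hedged; `raceResources` is a hypothetical function, and the point is precisely that its body cannot be written today):

```scala
import cats.effect.{IO, Resource}

// Hypothetical: open two resources concurrently, yield whichever is ready
// first, and release both safely. Resource forms no Concurrent in CE1, so
// there is no `race` with which to implement this generically.
def raceResources[A](left: Resource[IO, A], right: Resource[IO, A]): Resource[IO, A] = ???
```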
This is immensely difficult to encode with the current constructs. Higher-level frameworks like fs2, Monix, and ZIO all have answers to this, but Cats Effect does not. Critically, the reason it is difficult to encode this use-case is that `Resource` does not form a `Concurrent`, and thus lacks a `race` function. Unfortunately, the whole reason that `Resource` doesn't form a `Concurrent` comes down to the fact that `Concurrent` (in CE1) implies `Bracket`, and `Resource` cannot form a lawful `Bracket`, since it is a dynamic region, not a static one.

To resolve this, in CE3 we split the notion of resource management into two typeclasses:
```scala
trait Bracket[F[_], E] extends Safe[F, E]

trait Region[R[_[_], _], F[_], E] extends Safe[R[F, ?], E]
```
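Fleshing these out slightly, a hedged sketch of what the full shapes might look like (the `MonadError` parent, the `Case` member, and the method signatures are all illustrative, not the final API):

```scala
import cats.MonadError

// Closed union: the only subtypes are Bracket and Region.
sealed trait Safe[F[_], E] extends MonadError[F, E] {
  type Case[A]  // refined by subclasses; fixed to ExitCase[E, ?] in Concurrent
}

trait Bracket[F[_], E] extends Safe[F, E] {
  def bracketCase[A, B](acquire: F[A])(use: A => F[B])(
      release: (A, Case[B]) => F[Unit]): F[B]
}

trait Region[R[_[_], _], F[_], E] extends Safe[R[F, ?], E] {
  // analogous to bracketCase, but the Case is in terms of A: the final
  // result type of the region is unknowable when it is opened
  def openCase[A](acquire: F[A])(release: (A, Case[A]) => F[Unit]): R[F, A]

  // closes the scope of rfa and yields rfb's results
  def supersede[A, B](rfa: R[F, A], rfb: R[F, B]): R[F, B]

  // convenience: close a region, discarding its value
  def close[A](rfa: R[F, A]): R[F, Unit] = supersede(rfa, unit)
}
```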
This is all where `Safe` is defined as the following:

So `Safe` forms a type union between `Bracket` and `Region`, which allows `Concurrent` to be defined in such a way that resource management is guaranteed, but the exact form of that resource management is unfixed:

So what then is `Region`, exactly? It is defined by two functions: `openCase` and `supersede`
:

The `openCase` function is analogous to `bracketCase` in the `Bracket` typeclass, in that it takes an acquisition effect and a release function, where the release function accepts the value which was acquired and the exit case (more on this type in a moment). For reference, `bracketCase` has the following definition:

You'll notice that `bracketCase` defines the `Case` in terms of the result type, `B`. This is of course impossible with `openCase`, due to the fact that the region is dynamic, and the final result type is unknowable here – it depends on an arbitrary chain of `flatMap`
calls which haven't happened yet!

`supersede` is tricky. The problem with a dynamic monadic region is not how to open the region, but rather how to close it. If every `flatMap` just extends the scope, you need some other function to "get you out", as it were. The problem is that the exact form of this function varies depending on the datatype. A value of type `Resource[F, A]` contains exactly one `A`, and we can get it out by calling `use`. A value of type `Stream[F, A]` contains many `A`s, or none at all, and there's no way of meaningfully abstracting over these two scenarios.

However, as it turns out, we don't really need to.
`Monad` abstracts over things like this. For example, `IO[A]` is a monad that contains exactly one `A` (or an error), while `List[A]` contains zero or many `A`s. They're both `Monad`s, so how is this done exactly? By not attempting to abstract over how you get things out.

So rather than talking about how to get the `A` out of the `Region`, we instead talk about how to close the scope, which is what leads us to the following type signature:

Given a region (`rfa`) whose value we don't care about, and another completely independent region (`rfb`), we can glue them together and produce the results of the second region, immediately closing the resource scope from the first region. In a sense, this is kind of like the `*>` operator, from `Apply`
:

The `A` is ignored; we just return the `B`. Now it's tempting to say that we should just close the scope as soon as someone uses `*>` (or analogous), since we can guarantee that the `A` from within the region is not used in the result, and thus it is safe to free. However, in order to be lawful, `*>` must be consistent with `flatMap` in the following way:

…and since `flatMap` extends the scope, so must `*>`. Intuitively, this translates into allowing someone to make the following transformation without changing the meaning of the code:

And so we introduce a new function,
`supersede`, which is functionally equivalent to `*>` save for the fact that `supersede` closes the left monadic region, while `*>` will nest the right region within the left (extending the left), just as `flatMap` would. This function also allows us to define a convenience function on `Region`, `close`, which has the following relatively trivial definition:

Note that none of these functions allow you to abstract over getting values out of the region, but that's okay, for the same reason that none of the other functions in Cats Effect allow you to abstract over getting values out of the effect, nor should they. Just as with `bracketCase`, these primitives are sufficient to write resource-safe code using dynamically scoped regions (as opposed to statically scoped in `Bracket`). This resource safety in turn makes it possible to write preemption-safe code with `Concurrent`, cancelable effects with `Async`, and so on.

ExitCase
As you may have noticed in the above, `ExitCase` has been removed from the `bracketCase` and `openCase` signatures, replaced with just `Case`. This isn't because a new type has been introduced, but rather because the concept of cancellation has been removed from `Bracket` and `Region`! It's impossible to arrive in a situation where cancellation is relevant with just a `Bracket` or `Region`. Thus, it doesn't really make sense for either of them to codify the concept. Instead, we use a constrained type member to invert the contravariance and allow `Case` to be refined by subtypes:

(it's still an open question whether we need the `Traverse` constraint, though clearly `ExitCase` can satisfy it)

Clearly we can define all of the laws for
`Bracket` and `Region` without referring to cancellation, and those laws can be strengthened in `Concurrent` to deal with the `Canceled` case, but the real question is what this means for end-user syntax. Users want to write type signatures such as this one:

Is that constraint still meaningful without `ExitCase`? Yes, yes it is! The Scala compiler will define `F.Case` to be an existential type, meaning that it is entirely unknowable within the body of `foo`. We can still have values of that type; we just can't do anything with them. This in turn means that we're free to write generic code on top of raw `Bracket`, like above, but if we have an instance of `Case`, we can't get anything out of it (without using `Traverse`; see the note above).

In practice, this means that raw `Bracket` is perfectly useful if all you need is the `bracket` function. If you need the more powerful `bracketCase`, then you probably need to constrain the `Case` type in some way. For example, you can always constrain it to be precisely `ExitCase`.
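For instance (a hedged illustration; the structural-refinement spelling is just one plausible encoding of such a constraint):

```scala
// Pin the Case member to precisely ExitCase[Throwable, ?]
def foo[F[_]](
    implicit F: Bracket[F, Throwable] { type Case[A] = ExitCase[Throwable, A] }
): F[Unit] = ???
```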
This is unwieldy, but we don't expect it to be particularly common. `Concurrent` (and obviously everything that follows in the hierarchy) fixes `Case` to be `ExitCase[E, ?]`, which makes sense since `ExitCase` defines an algebra of results, errors, and cancellation, and `Concurrent` is where we introduce the concept of cancelability.

runAsync
While I still maintain the usefulness of `runAsync`, the addition of `Concurrent#start` and similar has made its original design goals mostly obsolete. Concrete datatypes are already free to use similar tricks to enable running on single-threaded VMs (such as JavaScript); we don't need to include it in the typeclasses. If we want to maintain the power of this function in a cleaner package, without tying ourselves to a concrete `IO`, we can define `Effect` with the following function:

This is still quite useful, and it's a lot cleaner than what we have today. In fact, today, such a signature needs to be implemented using `runAsync`, which is unfortunate. This function would imply that any `Effect` must have an injection into any `Async`, which is a reasonable restriction that maintains the current characterization semantics (where `Effect` forbids things like `Kleisli[F, R, ?]`) and all of the current power, without the weirdness or the concrete type.

Effect and ConcurrentEffect
As a brief digression, it's worth noting that these typeclasses (in Cats Effect 1) are exactly the same thing. `Effect[F]` means that `F` is isomorphic to `IO`, but `IO` itself defines `ConcurrentEffect`, meaning that there are no types which form an `Effect` but not a `ConcurrentEffect`, so there's absolutely no reason to split them.

LiftIO

I use this a lot, but it does privilege `cats.effect.IO` quite unduly. It's primarily useful in MTL contexts, where lifting can be provided by a more efficient and less constrained mechanism than evaluating the effect into the `Sync`/`Async` constructors. However, as John points out, it is something that can easily be provided outside the hierarchy. Each specific datatype can define its own version of `LiftIO`, as the function is lawless and has no interactions with the rest of the hierarchy.

Bifunctor Classes
I honestly really wanted to find a way to get bifunctor classes into Cats Effect 3. They're at the very least a pedagogically interesting design space, and not having them in Cats Effect constrains the ecosystem to always treat bifunctor types as second-class, which is not at all what I want. While I still see a lot of problems with bifunctor asynchronous effect types, I at least want the experiments to be done to see how they work in practice.
Unfortunately, bridging between bifunctor and monofunctor typeclasses appears to be an intractable problem, due to the simple fact that bifunctor types are more expressive than monofunctor types. It's relatively easy to materialize a `Sync` given a (hypothetical) `Sync2`, but it is not easy or even possible to automatically go in the other direction.

To make matters worse, all of the monofunctor interfaces extend from the rich monofunctor ecosystem already defined by cats. This allows us to inherit a lot of functionality and compatibility. The bifunctor interfaces simply do not have these benefits: while cats does define a `Bifunctor` type, it does not define `Biapplicative`, `Bimonad`, or any of the other natural extensions in that line. Perhaps this is something which could be remedied as part of a bifunctor Cats Effect design, but it doesn't exist today.

Ultimately, if we were to define a bifunctor hierarchy in Cats Effect, it would be very much parallel to the monofunctor one, and framework authors would be forced to choose between them or (even worse!) implement them both. We simply cannot make seamless materialization work in both directions due to the differences in expressiveness of the signatures. So while this opens up the ecosystem to experimentation and new designs around the bifunctorial ideas, it also imposes a very high burden upon framework authors, who will perpetually have to respond to feature requests for redundant APIs.
All of this would be entirely acceptable if bifunctor IO were more proven. At this point, it simply is not. What is often forgotten is that Cats Effect 1.0 was designed with the benefit of six years of time and experience using types like `scalaz.concurrent.Task`, next-generation implementations like Monix `Task`, and frameworks like scalaz-stream and fs2. The Cats Effect design is intended to be quite conservative – it didn't even include support for characterizing resource management in initial versions! – and I see no reason to abrogate that design at this time.

Bifunctor IO may prove to be more useful and applicable than monofunctor IO going forward. That would certainly be very interesting! And I would love to see the industry move the state of the art forward. My greatest reservation with not including a bifunctor hierarchy in Cats Effect 3 is that it may artificially constrain bifunctor IO as a design space, since the majority of the ecosystem is going to follow along with (read: be constrained by) whatever Cats Effect chooses to do. But in this case, since the bidirectional materialization problem appears to be fundamentally without a solution, I am strongly opposed to including such a hierarchy at this time. Cats Effect 3.0 should be a solely monofunctor API.
Typeful Alternatives
To be clear, what the above is pointing out is not any sort of badness of bifunctorial IO, but rather the fact that the practical costs of imposing a parallel and broadly-incompatible hierarchy outweigh the benefits. However, there are potential alternative approaches beyond parallel hierarchies which could resolve this, provided we can find a way to encode them on Scala 2. The most promising of these is a higher rank implicit.
Luka demonstrated the potential of such an approach in a recent gist, but just to give a brief illustration, consider the following possibility (imagining kind-projector-like syntax for higher-rank types):
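A hedged illustration of the shape in question (the `∀` syntax is imagined, not valid Scala 2, and `F` is a hypothetical bifunctor effect type):

```scala
// Imagined higher-rank syntax: the Bracket instance is available for
// every error type α, so the error can be re-bound at each call site.
def prog[F[_, _], E, A](fa: F[E, A])(
    implicit F: ∀[α => Bracket[F[α, ?], α]]): F[E, A] = ???
```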
This signature is quite accurate, and it is equivalent to a bind-less encoding of the following on `F`'s companion object:

Here, the `E` is bound at the definition site, while in the higher-rank encoding it is unbound and can be reassigned at the call site. This allows functions like the following:

You'll notice that we used `flatMap` on two types here: `F[A, B]` and `F[C, B]`. This is why we need the higher-rank type: we need to re-bind the `α` to `C`.

At any rate, hopefully the potential of this encoding is clear. If we can get implicit materialization to work via an encoding of higher-rank types, then we can define these kinds of things for, at the very least, `Bracket`, `Region`, and `Concurrent`, which enables full and unrestricted bifunctorial (and higher!) abstraction without the need for a parallel hierarchy or any kind of conversion.

This is a currently active area of investigation, and help is always appreciated!
Variance Annotations
It has been proposed and strongly advocated to constrain the polymorphic constructor types to be covariant in the new hierarchy. This has several advantages from a type inference standpoint. For example, a common use-case (using `delay` rather than `pure` just to illustrate):

This will fail to compile, since `Some` and `None` are subtypes of `Option`, and so users need to add explicit (and ugly) ascriptions to both of the `delay`
bodies.

It's tempting to say that simply assigning covariance to `F` (e.g. `F[+_]: Sync`) will resolve this problem, since all of the base implementations of the effect types (e.g. `ZIO`, `Task`, etc.) are themselves already covariant. However, variance annotations play havoc across the entire ecosystem and ultimately are significantly more trouble than they're worth. It is notable that Scalaz 7.0 had variance annotations on all of its polymorphic constructors, and they were all removed in 7.1. They were not removed due to subtyping prejudice; they were removed because they caused a lot more problems than they solved, and those problems were architectural, not related to compiler bugs.

The easiest thing to point to here is derived inductive instances. For example, it is valid to define a `Sync[StateT[F, S, ?]]` given a `Sync[F]`. This is in fact extremely useful in a lot of cases. Unfortunately, `StateT` is defined (in cats) to be invariant in its parameters. This instance would be forbidden if the constructors were forced to be covariant. The same can be said for `WriterT` and `Kleisli`, two other very useful effects.

The "virality" of variance annotations cannot be overstated. Imposing them in Cats Effect would impose them on the entire ecosystem, not just implementing frameworks but also everything upstream into cats. ZIO doesn't provide a good testbed with respect to this problem space, because it doesn't define first-party instances for anything upstream (since it depends on neither scalaz nor cats), and also because ZIO encourages users to entirely avoid pre-existing inductive types in favor of its own built-in effect-stacking mechanism. This means that a lot of the virality problems are sidestepped by design. Cats Effect does not have the same design, and would have to face these problems more directly.
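Returning to the inference example at the top of this section, the problematic shape is roughly the following (a hedged illustration):

```scala
import cats.effect.IO

def condition: IO[Boolean] = IO(true)

// Without covariance, the branches infer as IO[Some[Int]] and
// IO[None.type]; the ugly ascriptions are what make this compile.
val program: IO[Option[Int]] =
  condition.flatMap { b =>
    if (b) IO(Some(42): Option[Int])
    else IO(None: Option[Int])
  }
```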
As an aside, it is not in any way clear to me that all possible valid implementations of `Sync` (or even `Async`) must be covariant. At the very least, there's certainly nothing invalid about invariant effects, even in a language with subtyping (i.e. Scala). Thus, outright forbidding them in the types seems incorrect.

To that end, Cats Effect 3.0 will continue following the broader best practices in the ecosystem of defining concrete types to be appropriately variant, while polymorphic contexts will remain polyvariant (no annotation).
Non-Async Effects
Under various circumstances, it is quite useful to be able to characterize an effect which is statically known to not contain any asynchronous actions. These effects may be evaluated without any access to an `ExecutionContext`, and can be safely evaluated in synchronous JavaScript code. Cats Effect today defines a type, `SyncIO`, which is just a version of `IO` without any `async`. In Cats Effect 3, we abstractly characterize the set of strictly synchronous effects (those that are isomorphic to `SyncIO`) similarly to how we characterize the set of asynchronous effects (those that are isomorphic to `IO`).

This is one of those abstractions which I'm not 100% certain carries its weight, but my experience is mostly biased by my time on the JVM, and my understanding is that these sorts of constructs are enormously helpful when running under Scala.js.
Proposed Hierarchy
Now that all the conceptual groundwork has been laid, the following is provided as a visual reference for these ideas (and more). Note that not everything is fully reflected in this diagram, and also there are plenty of things here that are extensively motivated, but not described in this document. The most canonical reference will always be the source code. But still, visual aids are nice!
(names still subject to bikeshedding!)
This is obviously much more complex than the current hierarchy, but I believe the complexity carries its weight. Some high-level points:
- `Safe` is not a typeclass. Rather, it is a closed type union between `Bracket` and `Region`. Saying that some `F` is `Safe` is to say that it has resource management capabilities, without specifying how resources are managed.
- `Bracket` represents a static monadic region, while `Region` represents a dynamic region.
- Neither `Bracket` nor `Region` knows anything about cancellation. This is because it isn't possible to cancel anything without it being `Concurrent`! Their laws are defined in terms of errors (which they do know about, thanks to `MonadError`), while cancellation is codified (in types and laws) in `Concurrent`, similar to how `flatMap` is defined by `FlatMap` while `pure` is defined by `Applicative`, and additional laws are imposed on both when they come together in `Monad`.
- `Concurrent` extends neither `Bracket` nor `Region`, but rather uses the `Safe` union and self-types to say that all `Concurrent` instances must be either `Bracket` or `Region`.
- `Temporal` splits the wall-clock related functions (`sleep` and `now`) away from general concurrency, allowing more granular abstractions (particularly useful for testing).
- `Temporal` (and above) are all generic in their error type, `E`, similar to `MonadError`. This is because `Throwable` is forced by the JVM when capturing side-effects, but `Temporal` and above are all pure and thus need not special-case a specific error type!
- `Async` is now the second-most powerful typeclass in the whole hierarchy. If you have an `Async`, you can do almost anything. Note that `Async` is still inhabited by types which are not isomorphic to `IO`, though, such as `OptionT[IO, ?]`.
- `Sync` is split away from the rest of the hierarchy, and it's the only typeclass other than `Concurrent` which loses power in the new system. Specifically, if you have a `Sync` and only a `Sync`, you do not automatically get resource management (via `bracket`). Due to the subtype encoding of typeclasses, it is somewhat difficult to resolve this limitation, and I have never seen a use-case in the wild where someone needed `bracket` and `delay` and nothing else.
- `Effect` (and friends) represent an isomorphism with `IO` (and equivalent). In other words, any type which forms an `Effect` must be exactly as powerful as `IO`, while any type which forms a `Managed` must be exactly as powerful as `Resource[IO, ?]`. Analogously for `SyncEffect` (equivalent to `SyncIO`) and `SyncManaged`. These names are awful.
- `LiftIO` is gone, and the hierarchy does not special-case `IO` at any point.