New channels implementation for ORC #17305
Conversation
func peek*[T](c: Channel[T]): int {.inline.} = peek(c.d)
proc newChannel*[T](elements = 30): Channel[T] =
- document that `elements = 1` will cause the channel to be unbuffered; the magic number 30 sounds weird. How about making the default unbuffered instead (and using an impossible value, say -1, to denote that the channel is unbuffered)?
- isn't a typedesc param more common nowadays? `[T]` is mostly useful for implicit generic instantiation
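
For context, a minimal usage sketch of the constructor under review. Only `newChannel` and `peek` appear in the diff above; the `std/channels` import path is inferred from the PR's file layout (lib/std/channels.nim), so treat the details as assumptions:

```nim
# Hedged usage sketch: `newChannel` and `peek` are taken from the diff;
# the `std/channels` import path is inferred from lib/std/channels.nim.
import std/channels

let buffered = newChannel[int]()                # default capacity: 30 elements
let rendezvous = newChannel[int](elements = 1)  # per the review: effectively unbuffered

echo peek(buffered)    # 0 -- nothing has been sent yet
echo peek(rendezvous)  # 0
```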
Makes sense.
/cc @mratsim With the current API, you'd need to know the capacity to use ahead of time, which is really hard in practice: we have an impossible dilemma to solve.

With std/deques, you avoid this impossible dilemma and let the buffer grow as needed. We could improve this by adding a maximum cap on the deque's underlying buffer, but that's a minor addition to an existing module. Furthermore, even if somehow a …
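
As a rough illustration of that alternative, here is a grow-as-needed buffer with an optional maximum cap built on the existing std/deques API. The `BoundedDeque` type and its procs are made-up names for this sketch, not an existing module:

```nim
# Sketch only: a deque that grows as needed but can refuse to grow past a cap.
# `BoundedDeque`, `tryPush` and `tryPop` are hypothetical names for illustration.
import std/deques

type BoundedDeque[T] = object
  data: Deque[T]
  maxLen: int          # 0 means "no cap": grow as needed

proc initBoundedDeque[T](maxLen = 0): BoundedDeque[T] =
  BoundedDeque[T](data: initDeque[T](), maxLen: maxLen)

proc tryPush[T](q: var BoundedDeque[T], item: T): bool =
  ## Refuses the item instead of growing past `maxLen`.
  if q.maxLen > 0 and q.data.len >= q.maxLen:
    return false
  q.data.addLast(item)
  true

proc tryPop[T](q: var BoundedDeque[T], item: var T): bool =
  if q.data.len == 0:
    return false
  item = q.data.popFirst()
  true

var q = initBoundedDeque[int](maxLen = 1024)
doAssert q.tryPush(42)
var x: int
doAssert q.tryPop(x) and x == 42
```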
LGTM to make progress, with no expectation of API stability at this point.
I'm really curious about whether we should instead use std/deques, and a few other points, but those can be addressed in follow-up work.
In my experience this is a far better design than unbounded queues; these tend to consume way too much memory and you lose the flow-control aspect of bounded queues. Golang also uses bounded queues as far as I know and never revisited this design decision.
There are multiple reasons:
References on backpressure:
you're comparing
This gives the best of both worlds:
We don't have an implementation for your "best of both worlds" though, and it's not clear how it would perform.
Now, for specialized needs, the producer can compose the base channel with a buffer in front that provides the extra leeway needed: they can choose a small channel and implement a buffer with the hysteresis strategy you mention as an example. In general, all the dynamism and complex logic should be done outside of the base channel. I've mentioned it in the channels API RFC PR nim-lang/RFCs#347 (comment).
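
A rough sketch of that composition, assuming the channel exposes a non-blocking `trySend` that returns false when the channel is full; that proc and its signature are assumptions, since only `newChannel` and `peek` appear in the diff above:

```nim
# Sketch: a producer-local buffer composed in front of a small bounded channel.
# `trySend` (non-blocking, returns false when the channel is full) is assumed;
# it is not shown in the diff above. The buffering logic lives entirely
# outside the base channel, as suggested in the comment.
import std/deques
import std/channels

type BufferedSender[T] = object
  chan: Channel[T]     # small, fixed-capacity base channel
  overflow: Deque[T]   # producer-side leeway; grows only when the channel is full

proc initBufferedSender[T](capacity = 4): BufferedSender[T] =
  BufferedSender[T](chan: newChannel[T](elements = capacity),
                    overflow: initDeque[T]())

proc send[T](s: var BufferedSender[T], item: T) =
  ## Park the item locally, then drain as much as the channel will accept.
  s.overflow.addLast(item)
  while s.overflow.len > 0:
    if not s.chan.trySend(s.overflow.peekFirst()):  # assumed API
      break
    discard s.overflow.popFirst()
```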
I think there is no good way to make these work with the old GCs. We will eventually make …
That's true.
It doesn't over-provision; you do as follows: … So yes, you do get the best of both worlds: no over-provisioning and pay-for-what-you-actually-use, instead of a max capacity. Performance-wise, it's at least as good as …
Merging for now, we can refine it later.
* Update lib/std/channels.nim
* Rename tchannel_pthread.nim to tchannels_pthread.nim
* Rename tchannel_simple.nim to tchannels_simple.nim

Co-authored-by: Mamy Ratsimbazafy <[email protected]>
Thanks @mratsim!
The new implementation is mostly based on https://github.com/mratsim/weave/blob/5696d94e6358711e840f8c0b7c684fcc5cbd4472/unused/channels/channels_legacy.nim.
So I keep the license and the link.
Related article:
nim-lang/website#274