limit read_many concurrency based on in-flight IO memory #491
Add the option to limit the number of concurrent IO requests based on memory usage. This is a useful knob in conjunction with IO request coalescing, because it makes sense to keep a high number of concurrent small IO requests, but much less so if they are extremely large. For example, if all the IO requests are 4MiB large, it doesn't make much sense to schedule more than a few at a time; scheduling too many at once could starve other concurrent IO tasks for no throughput benefit. Conversely, it makes sense to schedule 4KiB requests with a higher concurrency level.
This new memory limit is added alongside the existing concurrency limit. The final number of concurrent IO requests dispatched by a `read_many` stream is determined by whichever limit is reached first.
For instance, given the following pseudocode (illustrative only: the parameter names and call shape are assumptions, with a concurrency limit of 128 requests and an in-flight memory limit of 1MiB):
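```rust
// Pseudocode: the read_many signature shown here is an assumption for
// illustration, not the exact API.
let max_concurrent_requests = 128;   // existing limit on concurrent IO requests
let max_in_flight_memory = 1 << 20;  // new limit: at most 1MiB of IO in flight
let stream = file.read_many(iovecs, max_concurrent_requests, Some(max_in_flight_memory));
```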
If `iovecs` contains 4KiB requests, the stream will schedule 128 IO requests concurrently (capped by the concurrency limit), since 128 * 4KiB <= 1MiB. Conversely, if `iovecs` contains 256KiB requests, then the stream will schedule 4 IO requests concurrently (capped by the memory limit), because 4 * 256KiB <= 1MiB.
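In other words, the effective concurrency is the smaller of the two limits. A minimal sketch of that computation, checked against the two cases above (the helper below is illustrative, not part of the actual API):

```rust
/// Illustrative helper, not part of the actual API: the number of requests a
/// read_many stream keeps in flight is bounded by both the request-count
/// limit and the in-flight memory limit, whichever is reached first.
fn effective_concurrency(
    request_size: usize,       // size of each IO request, in bytes
    max_requests: usize,       // existing limit on concurrent requests
    max_in_flight_mem: usize,  // new limit on in-flight IO memory, in bytes
) -> usize {
    max_requests.min(max_in_flight_mem / request_size)
}

fn main() {
    // 4KiB requests: the request-count limit is reached first (128 * 4KiB = 512KiB <= 1MiB).
    assert_eq!(effective_concurrency(4 << 10, 128, 1 << 20), 128);
    // 256KiB requests: the memory limit is reached first (4 * 256KiB = 1MiB).
    assert_eq!(effective_concurrency(256 << 10, 128, 1 << 20), 4);
}
```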