Replies: 2 comments
-
I'm not sure I got your point, so please correct me if I didn't.
Python is also doing the same three steps, open-read-close; it is just done through syscalls and hidden by the interpreter.
IMO, the value of io_uring is in 1) the asynchronous model of execution; 2) the reduction of system calls, which are way more expensive due to all the speculation mitigations. In the case you point out, you are not really benefiting from (1), since the work is essentially synchronous. Finally, it is kind of an unfair comparison, since the overhead of setting up io_uring (assuming you didn't disregard it) and pushing a request down its stack is going to dominate over doing the read/write syscalls synchronously against small, readily available data.
The complexity is abstractable away by a library/framework like Python. It would be a bit apples-to-oranges to compare the two, no?
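To make the hidden syscalls concrete, here is a minimal sketch (mine, not from the thread) using Python's low-level `os` wrappers; `/dev/urandom` is substituted for `/dev/random` only to avoid blocking on older kernels:

```python
import os

# Python's high-level open()/read()/close() ultimately issue the same
# three syscalls; the low-level os wrappers make them explicit.
fd = os.open('/dev/urandom', os.O_RDONLY)  # openat(2)
try:
    data = os.read(fd, 1024)               # read(2)
finally:
    os.close(fd)                           # close(2)

print(len(data))
```

The `with open(...)` form does exactly this underneath, plus buffering and error handling supplied by the interpreter.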
-
Thanks @krisman for your comment. The complexity with `register_files` is this: after wrapping the whole `register_files` flow in Python, you pretty much lose any speed advantage you would have gotten, just from the management process itself vs passing a normal fd. Let me try to explain better... Also, I am not trying to put Python down. Take:

```python
with open('/dev/random', 'rb') as file:
    data = file.read(1024)
```

Using the above example, Python has a few disadvantages. What if `io_uring` could do the whole open, read, close as a single submission, something like:

```python
# open, read, close
io_uring_prep_single_read(sqe, '/dev/random', buffer, buffer_length)
```

Now with ...
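As a rough illustration of the wrapper-overhead point (my sketch, not the author's benchmark): timing the buffered high-level `open()` path against the raw `os` syscall wrappers for the same 1 KiB read shows what the Python-layer machinery itself costs, independent of the kernel interface underneath. `/dev/urandom` stands in for `/dev/random`; numbers vary by machine and are illustrative only.

```python
import os
import timeit

PATH = '/dev/urandom'  # stand-in for /dev/random to avoid blocking

def high_level():
    # buffered file object: extra allocation and bookkeeping per call
    with open(PATH, 'rb') as f:
        return f.read(1024)

def low_level():
    # bare openat/read/close syscalls via the os wrappers
    fd = os.open(PATH, os.O_RDONLY)
    try:
        return os.read(fd, 1024)
    finally:
        os.close(fd)

n = 10_000
t_high = timeit.timeit(high_level, number=n)
t_low = timeit.timeit(low_level, number=n)
print(f'open()/read():   {t_high:.3f}s for {n} reads')
print(f'os.open/os.read: {t_low:.3f}s for {n} reads')
```

The same effect applies to any `register_files`-style bookkeeping done in Python: the interpreter-side management can eat the savings it was meant to buy.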
-
Sometimes you want to do just a one-time read. Currently with `io_uring` you have to do this in multiple steps: 1. open, 2. read, 3. close. I did a simple benchmark. It's not that `python` is faster than `io_uring`; it's just losing a lot of time in the event-loop steps. I was thinking maybe this can be solved with `register_files` + linking `open_direct + read_fixed + close_direct`, but those lose out as well in complexity! What do you think?
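The "losing time in event-loop steps" point can be sketched generically (this is not io_uring; a thread-pool executor stands in for any async dispatch path) by timing the same one-shot read done directly vs routed through an asyncio event loop:

```python
import asyncio
import os
import time

def read_once(path='/dev/urandom', n=1024):
    # one-shot read: open, read, close
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.read(fd, n)
    finally:
        os.close(fd)

N = 1_000

# direct, synchronous calls
t0 = time.perf_counter()
for _ in range(N):
    read_once()
t_sync = time.perf_counter() - t0

# same work, but each read is dispatched through the event loop
async def main():
    loop = asyncio.get_running_loop()
    t0 = time.perf_counter()
    for _ in range(N):
        await loop.run_in_executor(None, read_once)
    return time.perf_counter() - t0

t_async = asyncio.run(main())
print(f'direct:         {t_sync:.3f}s')
print(f'via event loop: {t_async:.3f}s')
```

For tiny, readily available data, the per-request dispatch machinery is a fixed cost the synchronous path simply does not pay, which matches the benchmark observation above.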