[Python-ideas] An alternate approach to async IO

Guido van Rossum guido at python.org
Wed Nov 28 21:49:51 CET 2012


On Wed, Nov 28, 2012 at 12:32 PM, Trent Nelson <trent at snakebite.org> wrote:
>     Right, so, I'm arguing that with my approach, because the background
>     IO thread machinery is as optimal as it can be, more IO events would
>     be available per event loop iteration, and the latency between an
>     event occurring and the event loop picking it up would be reduced.
>     The theory is that this will result in higher throughput and lower
>     latency in practice.
>
>     Also, from a previous e-mail, this:
>
>         with aio.open('1GB-file-on-a-fast-SSD.raw', 'rb') as f:
>             data = f.read()
>
>     Or even just:
>
>         with aio.open('/dev/zero', 'rb') as f:
>             data = f.read(1024 * 1024 * 1024)
>
>     Would basically complete as fast as it is physically possible to
>     read the bytes off the device.  If you've got 16+ cores, then
>     you'll have 16 cores able to service IO interrupts in parallel.
>     So, the overall time to suck in a chunk of data will be vastly
>     reduced.
>
>     There's no other way to get this sort of performance without taking
>     my approach.

So there's something I fundamentally don't understand. Why do those
calls, made synchronously in today's CPython, not already run as fast
as you can get the bytes off the device? I assume it's just a transfer
from kernel memory to user memory. So what is the advantage of using
aio over

  with open(<file>, 'rb') as f:
      data = f.read()

? Is it just that you can run other Python code *while* the I/O is
happening as fast as possible in a separate thread? But that would
also be the case when using (real) threads in today's CPython. What am
I missing?
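
(For concreteness, a rough sketch of the thread-based version I mean,
using nothing but the stdlib; the filename is just a placeholder:)

  import threading

  def read_in_background(path, out):
      # Plain blocking read.  CPython releases the GIL while the
      # underlying read() call is waiting on the kernel, so other
      # Python code keeps running in the meantime.
      with open(path, 'rb') as f:
          out['data'] = f.read()

  out = {}
  t = threading.Thread(target=read_in_background,
                       args=('1GB-file-on-a-fast-SSD.raw', out))
  t.start()
  # ... run other Python code here while the read proceeds ...
  t.join()
  data = out['data']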

-- 
--Guido van Rossum (python.org/~guido)


