[Python-ideas] The async API of the future

Guido van Rossum guido at python.org
Sun Nov 4 16:26:23 CET 2012

On Sat, Nov 3, 2012 at 4:49 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Sturla Molden wrote:
>> But it uses a thread-pool that polls the registered wait objects, so the
>> overhead (with respect to latency) will still be O(n).
> I'm not sure exactly what you mean by "polling" here. I'm
> pretty sure that *none* of the mechanisms we're talking about
> here (select, poll, kqueue, IOCP, WaitForMultipleWhatever, etc)
> indulge in busy-waiting while looping over the relevant handles.
> They all ultimately make use of hardware interrupts to wake up
> a thread when something interesting happens.
> The scaling issue, as I understand it, is that select() and
> WaitForMultipleObjects() require you to pass in the entire list
> of fds or handles on every call, so that there is an O(n) setup
> cost every time you wait.
> A more scaling-friendly API would let you pre-register the set
> of interesting objects, so that the actual waiting call is
> O(1). I believe this is the reason things like epoll, kqueue
> and IOCP are considered more scalable.

I've been thinking about this too. I can see the scalability issues with
select(), but frankly, poll(), epoll(), and even kqueue() all look similar
in O() behavior to me from an API perspective. I guess the differences are
in the kernel -- but is it a constant factor or an unfortunate O(N) or
worse? To what extent would this be overwhelmed by overhead in the Python
code we're writing around it? How bad is it to add extra
register()/unregister() (or modify()) calls per read operation?
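[For readers following along, here is a minimal sketch of the two API shapes
under discussion. It uses the `selectors` module, which postdates this thread
(it landed in Python 3.4, partly as a result of it), and a socketpair purely
as something readable to wait on:]

```python
import select
import selectors
import socket

# A connected socket pair gives us one readable fd to wait on.
a, b = socket.socketpair()
b.send(b"ping")

# select()-style: the *entire* fd list is handed to the kernel on
# every call, so each wait incurs an O(n) setup cost.
readable, _, _ = select.select([a], [], [], 1.0)

# epoll/kqueue-style (via selectors.DefaultSelector): interest is
# registered once up front; subsequent wait calls do not re-submit
# the whole set.
sel = selectors.DefaultSelector()
sel.register(a, selectors.EVENT_READ)
events = sel.select(timeout=1.0)   # -> [(SelectorKey, event_mask), ...]
data = a.recv(4)

sel.unregister(a)
sel.close()
a.close()
b.close()
```

[Guido's question is about the cost of the `register()`/`unregister()` pair
when a connection only performs one or two reads: the pre-registration model
only wins if the registration is amortized over many waits.]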

--Guido van Rossum (python.org/~guido)
