[Python-ideas] Async API
itamar at futurefoundries.com
Fri Oct 26 16:03:56 CEST 2012
On Thu, Oct 25, 2012 at 10:43 AM, Guido van Rossum <guido at python.org> wrote:
> On Thu, Oct 25, 2012 at 4:46 AM, Laurens Van Houtven <_ at lvh.cc> wrote:
> > Sorry, working really long hours these days; just wanted to chime in that
> > yes, you can call transport.write with large strings, and the reactor will
> > do the right thing under the hood: loseConnection is the polite way of
> > dropping a connection, which should wait for all pending writes to finish
> > etc.
> This seems a decent enough pattern. It also makes it possible to use
> one of these things as a substitute for a writable file object, so you
> can e.g. use it as sys.stdout or the stream for a logging.StreamHandler.
> Still, I wonder what happens if the socket/pipe/whatever that is
> written to is very slow and the program produces too much data. Does
> memory just balloon up, or is there some kind of throttling of the
> writer? Or a buffer overflow exception? For a totally general solution
> I would at least like to have the *option* of doing synchronous writes.
> (I'm asking these questions because I'd like to copy this useful
> pattern -- but I want to get the end cases right.)
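
To ground the discussion, here's a minimal sketch (mine, not from the thread)
of the write pattern Laurens describes above; the host, port, and payload size
are made up for illustration. transport.write() never blocks, it just hands
the bytes to the reactor, and loseConnection() drops the connection only once
the buffered data has been flushed:

from twisted.internet import protocol, reactor

class OneShotSender(protocol.Protocol):
    def connectionMade(self):
        # write() never blocks: the reactor buffers the payload and
        # dribbles it out as the socket becomes writable.
        self.transport.write(b"x" * (10 * 1024 * 1024))
        # The polite close: the connection is dropped only after all
        # pending writes have been flushed.
        self.transport.loseConnection()

    def connectionLost(self, reason):
        reactor.stop()

factory = protocol.ClientFactory()
factory.protocol = OneShotSender
reactor.connectTCP("localhost", 8000, factory)
reactor.run()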
There's a callback that gets called saying "your buffer is too full". This
is the producer/consumer API people have referred to. It's not the best API
in the world, and Glyph is working on an improvement, but that's the basic
idea. The general move is towards a push API - push as much data as you can
until you're told to stop.
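Roughly, the current arrangement looks like the sketch below. This is just an
illustration of the existing push-producer side, not Glyph's improved API;
names like ChunkProducer, the chunk size, and the port are mine:

from zope.interface import implementer
from twisted.internet import protocol, reactor
from twisted.internet.interfaces import IPushProducer

@implementer(IPushProducer)
class ChunkProducer(object):
    """Pushes fixed-size chunks until the transport tells it to pause."""

    def __init__(self, transport, nchunks=1000):
        self.transport = transport
        self.paused = False
        self.remaining = nchunks

    def resumeProducing(self):
        # The transport's buffer has drained (or we're just starting):
        # push until we're told to stop or we run out of data.
        self.paused = False
        while not self.paused and self.remaining:
            self.transport.write(b"x" * 65536)
            self.remaining -= 1
        if not self.remaining:
            self.transport.unregisterProducer()
            self.transport.loseConnection()

    def pauseProducing(self):
        # The "your buffer is too full" callback.
        self.paused = True

    def stopProducing(self):
        self.paused = True
        self.remaining = 0

class Streamer(protocol.Protocol):
    def connectionMade(self):
        producer = ChunkProducer(self.transport)
        # streaming=True registers a push producer: the transport calls
        # pauseProducing()/resumeProducing() as its buffer fills and drains.
        self.transport.registerProducer(producer, True)
        producer.resumeProducing()

factory = protocol.Factory()
factory.protocol = Streamer
reactor.listenTCP(8000, factory)
reactor.run()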
Tornado has a "tell me when this write is removed from the buffer and
actually written to the socket" callback. This is more of a pull approach;
you write some data, and get notified when you should write some more.
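Something like this sketch, against IOStream's callback-based write() as it
stands today (host, port, and chunk count are made up; later Tornado releases
may change the callback interface):

import socket
from tornado import ioloop, iostream

CHUNK = b"x" * 65536

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    stream = iostream.IOStream(sock)
    loop = ioloop.IOLoop.instance()

    def write_more(remaining=100):
        if not remaining:
            stream.close()
            loop.stop()
            return
        # The callback fires only once this chunk has actually been
        # written out, which is the cue to produce the next one.
        stream.write(CHUNK, callback=lambda: write_more(remaining - 1))

    stream.connect(("localhost", 8000), callback=write_more)
    loop.start()

main()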