[Python-3000] Draft PEP for New IO system
mike.verdone at gmail.com
Thu Mar 1 17:07:18 CET 2007
The current algorithm for non-blocking writes is as follows:

1. When write() is called, check whether the buffer already holds more
than buffer_size bytes.
2. If it does, attempt to pre-flush the buffer. If we can't pre-flush,
raise BlockingIOError.
3. Copy the new write data into the buffer (whether or not the
pre-flush drained it completely), and if the buffer is again bigger
than buffer_size, try to flush it.
4. If that flush fails, check the remaining buffer against
max_buffer_size. If it is no bigger than max_buffer_size, return;
otherwise truncate the buffer to max_buffer_size and raise
BlockingIOError, informing the caller how many bytes were
written/accepted into the buffer.
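To make the steps above concrete, here is a minimal sketch of that
algorithm. The class and helper names (BufferedWriter, FakeRaw) and the
custom BlockingIOError carrying characters_written are illustrative
assumptions, not the actual patch code:

```python
class BlockingIOError(Exception):
    """Hypothetical stand-in: records how many bytes were accepted."""
    def __init__(self, characters_written):
        super().__init__(characters_written)
        self.characters_written = characters_written


class FakeRaw:
    """Stand-in raw object whose non-blocking write() accepts at most
    `limit` bytes per call (limit=0 simulates "would block")."""
    def __init__(self, limit):
        self.limit = limit
        self.written = b""

    def write(self, data):
        n = min(self.limit, len(data))
        self.written += data[:n]
        return n


class BufferedWriter:
    def __init__(self, raw, buffer_size, max_buffer_size):
        self.raw = raw
        self.buffer = b""
        self.buffer_size = buffer_size
        self.max_buffer_size = max_buffer_size

    def _try_flush(self):
        # Push as much of the buffer as the raw object will take.
        n = self.raw.write(self.buffer)
        self.buffer = self.buffer[n:]

    def write(self, data):
        if len(self.buffer) > self.buffer_size:
            self._try_flush()                 # step 2: pre-flush
            if len(self.buffer) > self.buffer_size:
                raise BlockingIOError(0)      # pre-flush failed
        self.buffer += data                   # step 3: accept new data
        if len(self.buffer) > self.buffer_size:
            self._try_flush()
        if len(self.buffer) > self.max_buffer_size:
            # Step 4: truncate to max_buffer_size and report how much
            # of `data` was actually accepted.
            overflow = len(self.buffer) - self.max_buffer_size
            self.buffer = self.buffer[:self.max_buffer_size]
            raise BlockingIOError(len(data) - overflow)
        return len(data)
```

With a raw object that never accepts bytes, small writes are absorbed
by the buffer until it passes buffer_size, at which point the next
write() raises with characters_written telling the caller how much got
in.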
This code is in the patch I sent to Guido for review. It hasn't been
checked in yet.
I'm thinking it might be better to have only one buffer size: we want
to avoid partial writes as much as possible, but the two-size
algorithm seems overly complicated. I may take a look at this again
this weekend.
On 3/1/07, Adam Olsen <rhamph at gmail.com> wrote:
> On 2/28/07, Daniel Stutzbach <daniel.stutzbach at gmail.com> wrote:
> > What should Buffered I/O .write() do for a non-blocking object?
> > It seems like the .write() should write as much as it can to the Raw
> > I/O object and buffer the rest, but then how do we tell the Buffered
> > I/O object to "write more data from the buffer but still don't block"?
> > Along the same lines, for a non-blocking Buffered I/O object, how do we
> > specify "Okay, I know I've been writing only one byte at a time so you
> > probably haven't bothered writing it to the raw object. Write as much
> > data as you can now, but don't block"?
> > Option #1: On a non-blocking object, .flush() writes as much as it
> > can, but won't block. It would need a return value then, to indicate
> > whether the flush completed or not.
> > Option #2: Calling .write() with no arguments causes the Buffered I/O
> > object to flush as much write data to the raw object as it can, but won't block.
> > (For a blocking object, it would block until all data is written to
> > the raw object).
> > I prefer option #2 because a .flush() that doesn't flush is more surprising.
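To see the two proposals side by side, here is a toy sketch of what
each API might look like. The class name, the FakeRaw stand-in, and
the exact signatures are my own illustration, not PEP text:

```python
class FakeRaw:
    """Stand-in raw object whose non-blocking write() accepts at most
    `limit` bytes per call."""
    def __init__(self, limit):
        self.limit = limit
        self.written = b""

    def write(self, data):
        n = min(self.limit, len(data))
        self.written += data[:n]
        return n


class NonBlockingBuffered:
    """Toy buffered writer contrasting the two proposed APIs."""
    def __init__(self, raw):
        self.raw = raw
        self.buffer = b""

    # Option #1: flush() writes what it can without blocking and
    # returns whether the flush actually completed.
    def flush(self):
        n = self.raw.write(self.buffer)
        self.buffer = self.buffer[n:]
        return not self.buffer

    # Option #2: write() with no data argument means "drain the
    # buffer as far as possible without blocking"; with data it
    # buffers as usual.
    def write(self, data=None):
        if data is not None:
            self.buffer += data
        n = self.raw.write(self.buffer)
        self.buffer = self.buffer[n:]
        return n
```

Either way the caller gets feedback on progress: option #1 via a
boolean from flush(), option #2 via the byte count from write().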
> > The goal of supporting non-blocking file-like objects is to be able to
> > use select() with buffered I/O objects (and other things like a
> > compressed socket stream).
> Why do non-blocking operations need to use the same methods when
> they're clearly not the same semantics? Although long,
> .nonblockflush() would be explicit and allow .flush() to still block.
> I'm especially wary of infinite buffers. They allow a malicious peer
> to consume all your memory, DoSing the process or even the whole box
> if Linux's OOM killer doesn't kick in fast enough.
> Adam Olsen, aka Rhamphoryncus