Hooks into the IO system to intercept raw file reads/writes
There's a lot of flexibility in the new layered IO system, but one thing it doesn't allow is any means of adding "hooks" to the data flow, or manipulating an already-created io object.

For example, when a subprocess.Popen object uses a pipe for the child's stdout, the data is captured instead of being written to the console. Sometimes it would be nice to capture it, but still write it to the console. That would be easy to do if we could wrap the underlying RawIOBase object and intercept read() calls [1]. A subclass of RawIOBase can do this trivially, but there's no way of replacing the class on an existing stream.

The obvious approach would be to reassign the "raw" attribute of the BufferedIOBase object, but that's read-only. Would it be possible to make it read/write? Or to provide another way of replacing the raw IO object underlying an io object?

I'm sure there are buffer integrity issues to work out, but are there any more fundamental problems with this approach?

Paul

[1] Actually, it's *not* that easy, because subprocess.Popen objects are insanely hard to subclass - there are no hooks into the pipe creation process, and no way to intercept the object before the subprocess gets run (that happens in the __init__ method). But that's a separate issue, and also the subject of a different thread here.
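The wrapping Paul has in mind can be sketched as follows. Everything here is hypothetical illustration rather than an existing API: TeeRawIO is an invented name, and a BytesIO stands in for the pipe's raw end. The wrapper delegates readinto() to the wrapped raw object and hands each chunk to a callback, so the data is captured while it can still be echoed.

```python
import io

class TeeRawIO(io.RawIOBase):
    """Wrap an existing raw stream; every chunk read is also passed
    to a callback (e.g. one that echoes to the console)."""

    def __init__(self, raw, callback):
        self._raw = raw
        self._callback = callback

    def readable(self):
        return True

    def readinto(self, b):
        # Delegate the actual OS-level read to the wrapped raw object,
        # then report whatever came back to the watcher callback.
        n = self._raw.readinto(b)
        if n:
            self._callback(bytes(b[:n]))
        return n

captured = []
source = io.BytesIO(b"hello from the child\n")  # stand-in for a pipe's raw end
stream = io.BufferedReader(TeeRawIO(source, captured.append))
data = stream.read()
```

With access to Popen's pipe creation (the hard part Paul notes in his footnote), a stream built like this could echo to the console while communicate() captures as usual.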
I'm all for flexible I/O processing, but I worry that the idea brought up here feels a little half-baked. First of all, it seems to mention two separate cases of subclassing (both io.RawIOBase and subprocess.Popen). These days, subclassing(*) is often an anti-pattern: unless done with considerable foresight, every detail of the base class implementation essentially becomes part of the interface that the subclass relies upon, and the base class then becomes too constrained in its evolution. In my experience, a well-done API is usually much easier to evolve than even a very-well-done base class.

The other thing is that I can't actually imagine the details of your proposal. Is the idea that you subclass RawIOBase to implement "tee" behavior? Why can't you do that at the receiving end? Is the proposal to assign to the raw attribute perhaps a work-around for an interface design issue in the Popen class? (I'm sure that class is far from perfect -- but it's also super constrained by the need to support Windows process creation.)

_____
(*) I'm talking about subclassing as an API mechanism. A set of interrelated classes can work well if they are all part of the same package, so their implementations can evolve together as needed. But when proposing APIs which serve as important abstractions, it's much better if new abstractions are built by combining and wrapping objects rather than by subclassing.

On Mon, Feb 2, 2015 at 6:53 AM, Paul Moore
There's a lot of flexibility in the new layered IO system, but one thing it doesn't allow is any means of adding "hooks" to the data flow, or manipulation of an already-created io object.
For example, when a subprocess.Popen object uses a pipe for the child's stdout, the data is captured instead of writing it to the console. Sometimes it would be nice to capture it, but still write to the console. That would be easy to do if we could wrap the underlying RawIOBase object and intercept read() calls[1]. A subclass of RawIOBase can do this trivially, but there's no way of replacing the class on an existing stream.
The obvious approach would be to reassign the "raw" attribute of the BufferedIOBase object, but that's readonly. Would it be possible to make it read/write? Or provide another way of replacing the raw IO object underlying an io object?
I'm sure there are buffer integrity issues to work out, but are there any more fundamental problems with this approach?
Paul
[1] Actually, it's *not* that easy, because subprocess.Popen objects are insanely hard to subclass - there are no hooks into the pipe creation process, and no way to intercept the object before the subprocess gets run (that happens in the __init__ method). But that's a separate issue, and also the subject of a different thread here.

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
-- --Guido van Rossum (python.org/~guido)
On 2 February 2015 at 16:31, Guido van Rossum
I'm all for flexible I/O processing, but I worry that the idea brought up here feels a little half-baked. First of all, it seems to mention two separate cases of subclassing (both io.RawIOBase and subprocess.Popen). These days, subclassing(*) is often an anti-pattern: unless done with considerable foresight, every detail of the base class implementation essentially becomes part of the interface that the subclass relies upon, and now the base class becomes too constrained in its evolution. In my experience, a well-done API is usually much easier to evolve than even a very-well-done base class.
The other thing is that I can't actually imagine the details of your proposal. Is the idea that you subclass RawIOBase to implement "tee" behavior? Why can't you do that at the receiving end? Is the proposal to assign to the raw attribute perhaps a work-around for an interface design issue in the Popen class? (I'm sure that class is far from perfect -- but it's also super constrained by the need to support Windows process creation.)
The idea is certainly a little half-baked :-( And you're absolutely right that it's strongly linked to a fight to work around limitations of subprocess.Popen. The suggestion originally came out of a couple of things I've been working on, one of which was trying to make a Popen call that captured the stdout/stderr streams while still displaying them (as you say, a "tee" type of mechanism).

It's certainly possible to do the "tee" at the receiving end, but (because of the aforementioned Popen limitations) doing so requires ignoring the convenience of communicate() and writing your own capture code. That's not *too* hard using threads, but Popen avoids threads on Unix, using a select loop instead, and I'm not clear why, or whether my solution will break in the situations the Popen code is covering via the select loop. Also, getting corner cases in the capture code right (around encodings in particular) is something I'd prefer to leave to subprocess :-) The original issue was for a PR for a project that works on a lot of platforms I don't have access to, so I may well have been worrying too much about "not breaking stuff" :-)

This proposal basically came from a feeling that if only I could "see" the data as it flows through the buffers of an existing io stream, I wouldn't have all these problems. Originally I was going to suggest a "buffer filled" type of callback. With such a hook, I was thinking I could do

    p = Popen(..., stdout=PIPE, stderr=PIPE)
    # Not sure if these need to be at the raw IO level or the buffered IO
    # level. Should be called every time an OS read happens.
    p.stdout.buffer.add_buffer_watcher(lambda buf: os.write(sys.stdout.fileno(), buf))
    p.stderr.buffer.add_buffer_watcher(lambda buf: os.write(sys.stderr.fileno(), buf))

I guess that's a cleaner proposal, although I pretty much assumed that the overhead of such a hook being checked for on every buffer read would be unacceptable.
So I came up with a clumsier approach based on trying to make it so you only paid the cost if you used the feature. Overall, that was probably a mistake :-( I hope it's clearer now. Paul
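For comparison, the receiving-end "tee" that Paul says is "not *too* hard using threads" might look like this minimal sketch. It gives up communicate(), handles only the bytes level (skipping the encoding corner cases he mentions), and runs one capture thread per pipe:

```python
import subprocess
import sys
import threading

def tee_pipe(pipe, echo, chunks):
    # Capture each line from the child's pipe while also echoing it.
    for raw_line in iter(pipe.readline, b""):
        chunks.append(raw_line)
        echo.write(raw_line.decode(errors="replace"))

proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out_chunks, err_chunks = [], []
t1 = threading.Thread(target=tee_pipe, args=(proc.stdout, sys.stdout, out_chunks))
t2 = threading.Thread(target=tee_pipe, args=(proc.stderr, sys.stderr, err_chunks))
t1.start(); t2.start()
t1.join(); t2.join()
proc.wait()
```

Reading each pipe on its own thread avoids the deadlock you get when one pipe's OS buffer fills while you are blocked reading the other, which is the situation Popen's select loop exists to handle.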
Perhaps you would be better off using the subprocess machinery in asyncio? It uses the subprocess module to manage the subprocess itself (both for Windows and for Unix-ish systems) but uses the async I/O machinery from the asyncio module (which is itself based on select or its better brethren). On Mon, Feb 2, 2015 at 9:10 AM, Paul Moore
On 2 February 2015 at 16:31, Guido van Rossum
wrote: I'm all for flexible I/O processing, but I worry that the idea brought up here feels a little half-baked. First of all, it seems to mention two separate cases of subclassing (both io.RawIOBase and subprocess.Popen). These days, subclassing(*) is often an anti-pattern: unless done with considerable foresight, every detail of the base class implementation essentially becomes part of the interface that the subclass relies upon, and now the base class becomes too constrained in its evolution. In my experience, a well-done API is usually much easier to evolve than even a very-well-done base class.
The other thing is that I can't actually imagine the details of your proposal. Is the idea that you subclass RawIOBase to implement "tee" behavior? Why can't you do that at the receiving end? Is the proposal to assign to the raw attribute perhaps a work-around for an interface design issue in the Popen class? (I'm sure that class is far from perfect -- but it's also super constrained by the need to support Windows process creation.)
The idea is certainly a little half-baked :-( And you're absolutely right that it's strongly linked to a fight to work around limitations of subprocess.Popen. The suggestion originally came out of a couple of things I've been working on, one of which was trying to make a Popen call that captured the stdout/stderr streams while still displaying them (as you say, a "tee" type of mechanism).
It's certainly possible to do the "tee" at the receiving end, but (because of the aforementioned Popen limitations) doing so requires ignoring the convenience of communicate() and writing your own capture code. That's not *too* hard using threads, but Popen avoids threads on Unix, using a select loop instead, and I'm not clear why, and whether my solution will break in the situations the Popen code is covering via the select loop. Also, getting corner cases in the capture code right (around encodings in particular) is something I'd prefer to leave to subprocess :-) The original issue was for a PR for a project that works on a lot of platforms I don't have access to, so I may well have been worrying too much about "not breaking stuff" :-)
This proposal basically came from a feeling that if only I could "see" the data as it flows through the buffers of an existing io stream, I wouldn't have all these problems. Originally I was going to suggest a "buffer filled" type of callback. With such a hook, though, I was thinking I could do
    p = Popen(..., stdout=PIPE, stderr=PIPE)
    # Not sure if these need to be at the raw IO level or the buffered IO
    # level. Should be called every time an OS read happens.
    p.stdout.buffer.add_buffer_watcher(lambda buf: os.write(sys.stdout.fileno(), buf))
    p.stderr.buffer.add_buffer_watcher(lambda buf: os.write(sys.stderr.fileno(), buf))
I guess that's a cleaner proposal, although I pretty much assumed that the overhead of such a hook being checked for on every buffer read would be unacceptable. So I came up with a clumsier approach based on trying to make it so you only paid the cost if you used the feature. Overall, that was probably a mistake :-(
I hope it's clearer now.
Paul
-- --Guido van Rossum (python.org/~guido)
On 2 February 2015 at 17:47, Guido van Rossum
Perhaps you would be better off using the subprocess machinery in asyncio? It uses the subprocess module to manage the subprocess itself (both for Windows and for Unix-ish systems) but uses the async I/O machinery from the asyncio module (which is itself based on select or its better brethren).
Quite possibly. I found the asyncio docs a bit of a struggle, TBH. Is there a tutorial? The basic idea is something along the lines of https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using... but I don't see how I'd modify that code (specifically the "yield from proc.stdout.readline()" bit) to get output from either stdout or stderr, whichever was ready first (and handle the 2 cases differently). I don't know whether event-based behaviour makes more sense to me than the coroutine-based approach used by the asyncio documentation. OTOH, the alternative protocol-based approach didn't seem to have an event for data received on stderr, just the one for stdout. So maybe it's not just the approach that's confusing me.

Paul
Paul Moore wrote:
I found the asyncio docs a bit of a struggle, TBH. Is there a tutorial? The basic idea is something along the lines of https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using... but I don't see how I'd modify that code (specifically the "yield from proc.stdout.readline()" bit) to get output from either stdout or stderr, whichever was ready first (and handle the 2 cases differently).
One way is to use two subsidiary coroutines, one for each pipe. The following seems to work:

#-----------------------------------------------------
import asyncio.subprocess
import sys

@asyncio.coroutine
def handle_pipe(label, pipe):
    while 1:
        data = yield from pipe.readline()
        if not data:
            return
        line = data.decode('ascii').rstrip()
        print("%s: %s" % (label, line))

@asyncio.coroutine
def run_subprocess():
    cmd = 'echo foo ; echo blarg >&2'
    proc = yield from asyncio.create_subprocess_exec(
        '/bin/sh', '-c', cmd,
        stdout = asyncio.subprocess.PIPE,
        stderr = asyncio.subprocess.PIPE)
    h1 = asyncio.async(handle_pipe("STDOUT", proc.stdout))
    h2 = asyncio.async(handle_pipe("STDERR", proc.stderr))
    yield from asyncio.wait([h1, h2])

if sys.platform == "win32":
    loop = asyncio.ProactorEventLoop()
    asyncio.set_event_loop(loop)
else:
    loop = asyncio.get_event_loop()

loop.run_until_complete(run_subprocess())
#-----------------------------------------------------

% python3.4 asyncio_subprocess.py
STDOUT: foo
STDERR: blarg

-- 
Greg
On 4 February 2015 at 08:32, Greg Ewing
Paul Moore wrote:
I found the asyncio docs a bit of a struggle, TBH. Is there a tutorial? The basic idea is something along the lines of
https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using... but I don't see how I'd modify that code (specifically the "yield from proc.stdout.readline()" bit) to get output from either stdout or stderr, whichever was ready first (and handle the 2 cases differently).
One way is to use two subsidiary coroutines, one for each pipe.
The following seems to work: [...]
Thanks! I'll play with that and see if I can get my head round it. Paul
On 4 February 2015 at 09:08, Paul Moore
The following seems to work: [...]
Thanks! I'll play with that and see if I can get my head round it.
It works for me, but I get an error on termination:

RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0> still has pending operation at deallocation, the process may crash

I'm guessing that there's a missing wait on some object, but I'm not sure which (or how to identify what's wrong from the error). I'll keep digging, but on the assumption that it works on Unix, there may be a subtle portability issue here.

Paul
Paul Moore wrote:
It works for me, but I get an error on termination:
RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0> still has pending operation at deallocation, the process may crash
Hmmm, you could try adding this at the bottom of run_subprocess:

    yield from proc.wait()

I didn't think that would be necessary, because the pipes won't get closed until the subprocess is finished, but maybe Windows is fussier.

-- 
Greg
On 4 February 2015 at 11:32, Greg Ewing
Hmmm, you could try adding this at the bottom of run_subprocess:
yield from proc.wait()
Yeah, I had already tried that, but no improvement. ... found it. You need loop.close() at the end. Maybe the loop object should close itself in the __del__ method, like file objects? Paul
Paul Moore wrote:
... found it. You need loop.close() at the end. Maybe the loop object should close itself in the __del__ method, like file objects?
Yeah, this looks like a bug -- I didn't notice anything in the docs about it being mandatory to close() a loop when you're finished with it, and such a requirement seems rather unpythonic. -- Greg
Well, there is a lot of I/O machinery that keeps itself alive unless you explicitly close it. So __del__ is unlikely to be called until it's too late (at the dreaded module teardown stage). We may have to document this better, but explicitly closing a loop is definitely strongly recommended. On Windows I think it's mandatory if you use the IOCP loop.
On Wed, Feb 4, 2015 at 12:21 PM, Greg Ewing
Paul Moore wrote:
... found it. You need loop.close() at the end. Maybe the loop object should close itself in the __del__ method, like file objects?
Yeah, this looks like a bug -- I didn't notice anything in the docs about it being mandatory to close() a loop when you're finished with it, and such a requirement seems rather unpythonic.
-- Greg
-- --Guido van Rossum (python.org/~guido)
Guido van Rossum wrote:
Well, there is a lot of I/O machinery that keeps itself alive unless you explicitly close it. So __del__ is unlikely to be called until it's too late (at the dreaded module teardown stage).
I thought we weren't doing module teardown any more? -- Greg
On 5 February 2015 at 15:05, Greg Ewing
Guido van Rossum wrote:
Well, there is a lot of I/O machinery that keeps itself alive unless you explicitly close it. So __del__ is unlikely to be called until it's too late (at the dreaded module teardown stage).
I thought we weren't doing module teardown any more?
It's still there as a last resort. The thing that changed is that __del__ methods don't necessarily prevent cycle collection any more, so more stuff should be cleaned up nicely by the cyclic GC before we hit whatever is left with the module cleanup hammer. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
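The change Nick refers to is PEP 442 (Python 3.4): a __del__ method no longer prevents a reference cycle from being collected. A small demonstration:

```python
import gc

log = []

class Node:
    def __del__(self):
        log.append("finalized")

# Build a reference cycle between two objects that both define __del__.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

# Before PEP 442 such objects ended up in gc.garbage; since 3.4 the
# cyclic GC runs the finalizers and then frees the objects.
gc.collect()
```

So cleanup code in __del__ does run more often than it used to, but as Guido notes, objects kept alive by the I/O machinery itself still survive until teardown.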
Hi,
2015-02-04 21:21 GMT+01:00 Greg Ewing
Paul Moore wrote:
... found it. You need loop.close() at the end. Maybe the loop object should close itself in the __del__ method, like file objects?
In the latest version of asyncio, there are now destructors on event loops and transports on Python 3.4+. The destructor closes the event loop/transport, but also emits a ResourceWarning, because it's not safe to rely on destructors. The destructor may be called too late; for example, closing a transport may need a running event loop. The subprocess transport is a good example of a complex transport.
Yeah, this looks like a bug -- I didn't notice anything in the docs about it being mandatory to close() a loop when you're finished with it, and such a requirement seems rather unpythonic.
In the asyncio docs, I recently added: "ResourceWarning warnings are emitted when transports and event loops are not closed explicitly." https://docs.python.org/dev/library/asyncio-dev.html#develop-with-asyncio

I also added a section dedicated to closing event loops and transports: https://docs.python.org/dev/library/asyncio-dev.html#close-transports-and-ev...

Maybe I should repeat the information somewhere else in the documentation?

Victor
On 10 February 2015 at 13:42, Victor Stinner
Maybe I should repeat the information somewhere else in the documentation?
Not many of the code samples seem to include loop.close(). Maybe they should? For example, if I have a loop.run_forever() call, should it be enclosed in try: ... finally: loop.close() to ensure the loop is closed? Typically, I rely on doing what the examples do for this sort of detail, even if the detailed documentation is present somewhere. Paul
2015-02-10 14:56 GMT+01:00 Paul Moore
On 10 February 2015 at 13:42, Victor Stinner
wrote: Maybe I should repeat the information somewhere else in the documentation?
Not many of the code samples seem to include loop.close().
Which code samples? I tried to ensure that all code snippets in asyncio doc and all examples in Tulip repository call loop.close().
Maybe they should?
Yes.
For example, if I have a loop.run_forever() call, should it be enclosed in try: ... finally: loop.close() to ensure the loop is closed?
"loop.run_forever(); loop.close()" should be fine. You may use try/finally if you don't want to register signal handlers for SIGINT/SIGTERM (which is not supported on Windows yet...). Victor
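A self-contained sketch of that pattern (run_until_complete with a trivial coroutine stands in for run_forever so the snippet terminates on its own, and try/finally guarantees the loop is closed even if the body raises):

```python
import asyncio

loop = asyncio.new_event_loop()
try:
    # Run some work on the loop; in a real server this would be
    # run_forever() with signal handlers arranged to stop the loop.
    loop.run_until_complete(asyncio.sleep(0))
finally:
    # Explicitly release the loop's resources, as recommended above.
    loop.close()
```

Without the explicit close(), the destructor-based cleanup may run too late (or emit a ResourceWarning in later asyncio versions).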
On 10 February 2015 at 14:04, Victor Stinner
Not many of the code samples seem to include loop.close().
Which code samples? I tried to ensure that all code snippets in asyncio doc and all examples in Tulip repository call loop.close().
For example, in https://docs.python.org/dev/library/asyncio-dev.html#detect-exceptions-never... the first example doesn't close the loop. Neither of the fixes given close the loop either.
Maybe they should?
Yes.
For example, if I have a loop.run_forever() call, should it be enclosed in try: ... finally: loop.close() to ensure the loop is closed?
"loop.run_forever(); loop.close()" should be fine. You may use try/finally if you don't want to register signal handlers for SIGINT/SIGTERM (which is not supported on Windows yet...).
Hmm, so if I write a server and hit Ctrl-C to exit it, what happens? That's how non-asyncio things like http.server typically let the user quit. Paul
2015-02-10 15:49 GMT+01:00 Paul Moore
Which code samples? I tried to ensure that all code snippets in asyncio doc and all examples in Tulip repository call loop.close().
For example, in https://docs.python.org/dev/library/asyncio-dev.html#detect-exceptions-never... the first example doesn't close the loop. Neither of the fixes given close the loop either.
Oh ok, I forgot two examples. It's now fixed: https://hg.python.org/cpython/rev/05bd5ec8365e Tell me if you see more examples where I forgot to explicitly close the event loop.
Hmm, so if I write a server and hit Ctrl-C to exit it, what happens? That's how non-asyncio things like http.server typically let the user quit.
When possible, it's better to register a signal handler to ask to stop the event loop. (Oh, I forgot this old thread about hooking I/O! You should start a new thread if you would like to talk about asyncio!) Victor
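A minimal sketch of that suggestion (add_signal_handler is Unix-only, hence the guards so the snippet degrades gracefully elsewhere; the call_soon(loop.stop) is only there so this demo terminates immediately):

```python
import asyncio
import signal

loop = asyncio.new_event_loop()
for sig in (signal.SIGINT, signal.SIGTERM):
    try:
        # Ask the loop to stop cleanly instead of letting
        # KeyboardInterrupt unwind through run_forever().
        loop.add_signal_handler(sig, loop.stop)
    except (NotImplementedError, ValueError):
        pass  # not supported on Windows / outside the main thread

loop.call_soon(loop.stop)  # demo only: stop on the first iteration
loop.run_forever()
loop.close()
```

After run_forever() returns (because loop.stop() was called), the code falls through to loop.close() without needing try/finally around a KeyboardInterrupt.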
Regarding the error, Victor has made a lot of fixes to the IOCP code for asyncio. Maybe it's fixed in the repo? On Feb 4, 2015 1:33 AM, "Paul Moore"
On 4 February 2015 at 09:08, Paul Moore
wrote: The following seems to work: [...]
Thanks! I'll play with that and see if I can get my head round it.
It works for me, but I get an error on termination:
RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0> still has pending operation at deallocation, the process may crash
I'm guessing that there's a missing wait on some object, but I'm not sure which (or how to identify what's wrong from the error). I'll keep digging, but on the assumption that it works on Unix, there may be a subtle portability issue here.
Paul
Hi, 2015-02-04 10:32 GMT+01:00 Paul Moore
It works for me, but I get an error on termination:
RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0> still has pending operation at deallocation, the process may crash
I didn't reproduce your issue. I tried with the latest development version of asyncio, but also older versions. Maybe we used a different version of the code. Anyway, you should enable all debug checks; they will show you various issues in your code: https://docs.python.org/dev/library/asyncio-dev.html#debug-mode-of-asyncio
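The debug checks can be enabled per loop (or globally via the PYTHONASYNCIODEBUG=1 environment variable); turning on logging and ResourceWarning output makes the extra diagnostics visible:

```python
import asyncio
import logging
import warnings

# Debug-mode diagnostics go through the asyncio logger and warnings,
# so route them somewhere visible.
logging.basicConfig(level=logging.DEBUG)
warnings.simplefilter("always", ResourceWarning)

loop = asyncio.new_event_loop()
loop.set_debug(True)  # same effect as PYTHONASYNCIODEBUG=1, per loop
debug_enabled = loop.get_debug()
try:
    loop.run_until_complete(asyncio.sleep(0))
finally:
    loop.close()
```

In debug mode the loop reports slow callbacks, never-retrieved exceptions, and unclosed resources that silent production runs would hide.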
I'm guessing that there's a missing wait on some object, but I'm not sure which (or how to identify what's wrong from the error). I'll keep digging, but on the assumption that it works on Unix, there may be a subtle portability issue here.
In fact, it was a bug in asyncio: Python issue #23242. Good news: it was already fixed, 3 weeks ago. http://bugs.python.org/issue23242

The high-level subprocess API (create_subprocess_exec/shell) doesn't give access to the transport, so asyncio itself is responsible for handling it. I forgot to explicitly close the transport when the process exited. It's now fixed: https://hg.python.org/cpython/rev/df493e9c6821

Recently I fixed a lot of similar issues, and as Guido wrote, I also fixed major IOCP issues (many subtle race conditions). Good news: all fixes will be part of Python 3.4.3!

Victor
On Feb 2, 2015, at 6:53, Paul Moore
There's a lot of flexibility in the new layered IO system, but one thing it doesn't allow is any means of adding "hooks" to the data flow, or manipulation of an already-created io object.
Why do you need to add hooks to an already-created object? Why not just create the subclassed (and effectively hooked) object in place of the original one? Obviously that requires building the stack of raw/buffered/text manually instead, which requires a few extra lines in subprocess or wherever else you want to do it, but that doesn't seem like a huge burden for something this uncommon. The advantage is that you don't need to worry about exposing the buffer, making raw read-write, or anything else complicated. The disadvantage is that you can't do this if you've already started reading or writing -- but that doesn't seem to apply to this use case, or to most other potential use cases.

As for Guido's concerns about subclassing as an API mechanism: you can easily translate this into a request to replace the os.read and os.write calls used by a raw io object; then, whether you do that externally or in a subclass, you get the same result. The problem, either way, is that RawIOBase doesn't actually call os.read. Each implementation of the ABC does something different. For FileIO, I'm pretty sure it reads directly at the C level. A socket file calls recv on the socket. And so on. So, how does that affect your proposal?
For example, when a subprocess.Popen object uses a pipe for the child's stdout, the data is captured instead of writing it to the console. Sometimes it would be nice to capture it, but still write to the console. That would be easy to do if we could wrap the underlying RawIOBase object and intercept read() calls[1]. A subclass of RawIOBase can do this trivially, but there's no way of replacing the class on an existing stream.
The obvious approach would be to reassign the "raw" attribute of the BufferedIOBase object, but that's readonly. Would it be possible to make it read/write?
Making it read/write is a couple of lines of C (plus some Python code for implementations that use pyio instead of _io). The problem is the buffer. If the raw you're replacing happens to reference the same file descriptor (or, I guess, another fd for the same file with the same file position) it would all work, but that seems to be stretching "consenting adults" freedom a bit.

Also, you still need some way to construct your HookedRawIO subclassed object. Is HookedRawIO a wrapper that provides the RawIOBase interface but delegates to another RawIOBase? Or does it share or take over or dup the fd? Or ...
Or provide another way of replacing the raw IO object underlying an io object?
Somewhere in the bug database, someone (I think Nick Coghlan?) suggested a rewrap method on TextIOWrapper, which gives you a new TextIOWrapper around the same buffer (allowing you to override the other params). If the same idea were extended to the buffer classes, and if it allowed you to also replace the raw or buffer object (so the only thing you're "rewrapping" is the internal state), you could construct a HookedRawIO, then call rewrap on the buffer replacing its raw, then call rewrap on the text replacing its buffer, then set the stdout attribute of the popen. But I'm not sure that's any cleaner.
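One possible shape for such a rewrap, sketched with invented names (rewrap_with_raw does not exist in the io module, and BytesIO objects stand in for the raw ends): build a fresh buffered/text stack around the replacement raw object, copying the text-layer settings across.

```python
import io

def rewrap_with_raw(text_stream, new_raw):
    # Instead of mutating the read-only .buffer/.raw attributes, build a
    # new stack and carry over the old text-layer configuration.
    buffered = io.BufferedReader(new_raw)
    return io.TextIOWrapper(buffered,
                            encoding=text_stream.encoding,
                            errors=text_stream.errors,
                            line_buffering=text_stream.line_buffering)

old = io.TextIOWrapper(io.BytesIO(b""), encoding="utf-8")
new = rewrap_with_raw(old, io.BytesIO(b"hello\n"))
line = new.readline()
```

This sidesteps the buffer-integrity question entirely -- the old buffer is simply abandoned -- at the cost of losing any bytes it had already consumed, which is why it only helps before reading has started.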
I'm sure there are buffer integrity issues to work out, but are there any more fundamental problems with this approach?
Paul
[1] Actually, it's *not* that easy, because subprocess.Popen objects are insanely hard to subclass - there are no hooks into the pipe creation process, and no way to intercept the object before the subprocess gets run (that happens in the __init__ method). But that's a separate issue, and also the subject of a different thread here.
On 2 February 2015 at 18:07, Andrew Barnert
On Feb 2, 2015, at 6:53, Paul Moore
wrote: There's a lot of flexibility in the new layered IO system, but one thing it doesn't allow is any means of adding "hooks" to the data flow, or manipulation of an already-created io object.
Why do you need to add hooks to an already-created object? Why not just create the subclassed (and effectively hooked) object in place of the original one? Obviously that requires building the stack of raw/buffered/text manually instead, which requires a few extra lines in subprocess or wherever else you want to do it, but that doesn't seem like a huge burden for something this uncommon.
Largely because the IO object creation is hidden in the bowels of subprocess, to be honest. There's no doubt that the underlying problem is that subprocess is hostile to any form of subclassing or modification. And given that I have this problem in existing code, anything proposed here (which will be for 3.5 or maybe 3.6 at best) isn't going to be a solution for my immediate problem.

My intention here was that, having had this sort of issue a few times (all variations on "I wish I could see when the IO object requests data from the OS, and do something at that point with the data the OS supplied"), I thought there might be a good case for a general facility to do this. The feature was also available in the old AT&T "sfio" library, which was a more flexible replacement for C's stdio, so it's not a new idea.

Paul
participants (6)
- Andrew Barnert
- Greg Ewing
- Guido van Rossum
- Nick Coghlan
- Paul Moore
- Victor Stinner