PEP 3156/Tulip: Extensible EventLoop interface

The event loop interface in PEP 3156 has an extensibility problem. It seems likely that it will have a method like listen_udp() by the time it's done, but suppose that doesn't make it into the first official release. Third-party event loop implementations may want to provide UDP support as an extension, but the most consistent way to provide that extension is by adding new methods on the event loop object, where various extensions risk conflicting with each other or with new methods that become standardized later.

The PEP specifies the add_reader family of methods in part so that core protocol implementations can be shared across all EventLoop implementations that support these methods. However, if those transports are implemented in a common base class (like Twisted's PosixReactorBase), there is no way for third-party transports to take advantage of a similar structure. I'd like to make it possible for transports to be developed independent of a particular EventLoop implementation in a way that is consistent with the way the core transports work. (This is a bigger concern for tulip than it is for twisted because twisted can update PosixReactorBase more frequently than the stdlib can change.)

I propose turning the interface around so that transport creation uses static functions that take the event loop as an argument, with a double-dispatch mechanism to allow the event loop to provide the actual implementation when it can:

def create_connection(protocol_factory, host, port, event_loop=None):
    if event_loop is None:
        event_loop = get_event_loop()
    # Note the use of a fully-qualified name in the registry
    impl = event_loop.get_implementation('tulip.create_connection')
    return impl(protocol_factory, host, port)

New third-party transports could provide fallbacks for event loops that don't have their own implementations:

if impl is None:
    # These supports_*() functions are placeholders for a to-be-determined
    # introspection interface.
    if supports_fd_interface(event_loop):
        return posix_udp_implementation(*args)
    elif supports_iocp_interface(event_loop):
        return iocp_udp_implementation(*args)
    else:
        raise Exception("This transport is not supported on this event loop")

Or they could plug into the event loop's implementation registry:

LibUVEventLoop.register_implementation('mymodule.listen_udp', libuv_udp_implementation)

This does introduce a little magic (is there any precedent for this kind of multiple-dispatch in the standard library?), but I like the way it keeps the event loop interface from getting too big and monolithic. Third-party transports can avoid naming conflicts without looking fundamentally different from standard ones, and there's a clean path from doing something that's platform-specific (e.g. with add_reader and friends) to supporting multiple event loops to full standardization.

-Ben
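[As a concrete illustration of the double-dispatch pattern proposed above, here is a minimal, self-contained sketch; ExampleEventLoop, register_implementation and the 'mymodule.listen_udp' key are illustrative names only, not Tulip or PEP 3156 API.]

class ExampleEventLoop:
    # per-implementation registry: dotted names -> transport factories
    _implementations = {}

    @classmethod
    def register_implementation(cls, name, factory):
        cls._implementations[name] = factory

    def get_implementation(self, name):
        return self._implementations.get(name)


def listen_udp(protocol_factory, host, port, event_loop):
    # module-level factory: ask the event loop for a native implementation first
    impl = event_loop.get_implementation('mymodule.listen_udp')
    if impl is not None:
        return impl(protocol_factory, host, port)
    # a real version would fall back to add_reader (and friends) here
    raise NotImplementedError('mymodule.listen_udp is not supported on this event loop')


# the event loop implementation (or a third party) plugs in its own version
ExampleEventLoop.register_implementation(
    'mymodule.listen_udp',
    lambda factory, host, port: '<native UDP transport for %s:%d>' % (host, port))

print(listen_udp(None, '0.0.0.0', 9999, ExampleEventLoop()))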

On Sun, Feb 3, 2013 at 6:10 AM, Ben Darnell <ben@bendarnell.com> wrote:
The event loop interface in PEP 3156 has an extensibility problem. It seems likely that it will have a method like listen_udp() by the time it's done, but suppose that doesn't make it into the first official release. Third-party event loop implementations may want to provide UDP support as an extension, but the most consistent way to provide that extension is by adding new methods on the event loop object, where various extensions risk conflicting with each other or with new methods that become standardized later.
The general idea of using factory functions for transport creation, even for the "core transports" (like sockets, SSL and pipes) sounds good to me. I've expressed some concerns previously about the breadth of the event loop class API, and this would go a long way towards alleviating them.

As far as precedents for the proposed dispatch/registration mechanism go, probably the closest we currently have is the codecs registry. A "transport registry" (using Python dotted-name identifiers as the naming scheme) sounds promising to me.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
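[For readers unfamiliar with the precedent Nick mentions: the codec registry also dispatches purely by name, through registered search functions. A toy illustration; the 'myencoding' name is made up for the example.]

import codecs

# lookup() walks the registered search functions until one recognises the name
print(codecs.lookup('utf-8').name)

def search(name):
    if name == 'myencoding':
        # reuse an existing CodecInfo just to show the registration mechanics
        return codecs.lookup('latin-1')
    return None

codecs.register(search)
print(codecs.lookup('myencoding').name)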

On Sat, Feb 2, 2013 at 8:36 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Sun, Feb 3, 2013 at 6:10 AM, Ben Darnell <ben@bendarnell.com> wrote:
The event loop interface in PEP 3156 has an extensibility problem. It seems likely that it will have a method like listen_udp() by the time it's done, but suppose that doesn't make it into the first official release. Third-party event loop implementations may want to provide UDP support as an extension, but the most consistent way to provide that extension is by adding new methods on the event loop object, where various extensions risk conflicting with each other or with new methods that become standardized later.
The general idea of using factory functions for transport creation, even for the "core transports" (like sockets, SSL and pipes) sounds good to me. I've expressed some concerns previously about the breadth of the event loop class API, and this would go a long way towards alleviating them.
+1 Eli

On Sat, Feb 2, 2013 at 8:36 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
The general idea of using factory functions for transport creation, even for the "core transports" (like sockets, SSL and pipes) sounds good to me. I've expressed some concerns previously about the breadth of the event loop class API, and this would go a long way towards alleviating them.
How would this work, exactly? The implementation of these functions will depend on the platform and event loop being used, so there still needs to be some kind of dispatching mechanism based on the current event loop. -- Greg

On 4 Feb 2013 07:45, "Greg Ewing" <greg.ewing@canterbury.ac.nz> wrote:
On Sat, Feb 2, 2013 at 8:36 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
The general idea of using factory functions for transport creation, even for the "core transports" (like sockets, SSL and pipes) sounds good to me. I've expressed some concerns previously about the breadth of the event loop class API, and this would go a long way towards alleviating them.
How would this work, exactly? The implementation of these functions will depend on the platform and event loop being used, so there still needs to be some kind of dispatching mechanism based on the current event loop.
Ben covered that in his original post: the event loop has a registry of transport factories, keyed by the factory name (e.g. "mycoolmodule.myneattransport"). The factory functions check the event loop registry for an implementation and use it if they find it. Otherwise, they fall back to a non-optimised implementation based on the standard event loop API, or they throw an error indicating that the transport doesn't support the current event loop.

It's a really elegant design that:

- provides a consistent user experience between "first party" and "third party" transport implementations, rather than switching arbitrarily from "event loop method" to "module level function" based on an implementation detail
- allows an optimised transport implementation to be added to an event loop without requiring application code to change in order to take advantage of it, and without even requiring cooperation from the event loop developers
- the transport creation API also extends cleanly to the creation of protocol stacks

A per-event-loop-implementation transport registry makes sense for the same reason the codec registry makes sense. The only differences are that:

- the different transports may offer different APIs
- the registry isn't global, but specific to each event loop implementation.

Cheers,
Nick.

Ben: I wrote a long reply, it is inline below. However it's possible that I am not seeing the use case right. Your proposal is written in very general terms; perhaps you can come up with a more specific example to defend it further? The UDP example somehow doesn't seem very compelling to me.

On Sat, Feb 2, 2013 at 12:10 PM, Ben Darnell <ben@bendarnell.com> wrote:
The event loop interface in PEP 3156 has an extensibility problem. It seems likely that it will have a method like listen_udp() by the time it's done, but suppose that doesn't make it into the first official release. Third-party event loop implementations may want to provide UDP support as an extension, but the most consistent way to provide that extension is by adding new methods on the event loop object, where various extensions risk conflicting with each other or with new methods that become standardized later.
This may be based on a misunderstanding. If a specific event loop implementation wants to offer a new transport, there is no reason to add it as a method to the event loop. The app that wants to use that transport has to import that event loop too, so the app might as well call a module-level function that's specific to that event loop, in order to instantiate the new transport.

We may have to point this out in the PEP, since it is likely that implementers wanting to offer new features will think of adding new methods to their event loop first. But that's a precious namespace (since it's shared by all event loop implementations), whereas their own implementation's module namespace is less precious.
The PEP specifies the add_reader family of methods in part so that core protocol implementations can be shared across all EventLoop implementations that support these methods. However, if those transports are implemented in a common base class (like Twisted's PosixReactorBase), there is no way for third-party transports to take advantage of a similar structure.
I'm not sure I understand this (the "there is no way" part). Is the problem that that base class is private, or that the add_reader methods may not be there, or what?
I'd like to make it possible for transports to be developed independent of a particular EventLoop implementation in a way that is consistent with the way the core transports work.
Hm, now we're talking about something else. Transports implemented *independently* from an event loop implementation should not assume more than the standardized API. This feels rather limiting unless we're talking about transports built on top of other protocols (e.g. the mythical TCP-over-HTTP transport :-).

My expectation is that most transport implementations (as opposed to protocols) are probably tied closely to an event loop implementation (and note that the various Tulip event loop implementations using select, poll, epoll and kqueue are a single implementation for this purpose -- OTOH the IocpEventLoop (in the iocp branch) is a different implementation and, indeed, has different transport implementations!)
(This is a bigger concern for tulip than it is for twisted because twisted can update PosixReactorBase more frequently than the stdlib can change)
I think what you're really getting at here is that there is no significant 3rd party cottage industry creating new transports. Transports in practice are all part of the Twisted distribution, so development is only gated by backwards compatibility requirements with user apps (APIs, once offered, must remain supported and stable), not by compatibilities with older versions of the rest of the framework (a new transport introduced in Twisted 12.1 doesn't have to work with Twisted 12.0).
I propose turning the interface around so that transport creation uses static functions that take the event loop as an argument, with a double-dispatch mechanism to allow the event loop to provide the actual implementation when it can:
def create_connection(protocol_factory, host, port, event_loop=None):
    if event_loop is None:
        event_loop = get_event_loop()
    # Note the use of a fully-qualified name in the registry
    impl = event_loop.get_implementation('tulip.create_connection')
    return impl(protocol_factory, host, port)
Hm. I don't see what this adds. It still always gets the protocol from the event loop so moving this standardized method out of the event loop class doesn't seem to buy anything. The implementation-dependent work is just moved into get_implementation(). I also don't see why we need a registry.
New third-party transports could provide fallbacks for event loops that don't have their own implementations:
if impl is None:
    # These supports_*() functions are placeholders for a to-be-determined
    # introspection interface.
    if supports_fd_interface(event_loop):
        return posix_udp_implementation(*args)
    elif supports_iocp_interface(event_loop):
        return iocp_udp_implementation(*args)
    else:
        raise Exception("This transport is not supported on this event loop")
It seems you want each transport implementation to provide its own create_connection() function, right? There's nothing wrong with that, and I don't see why the fact that 3rd party transports will be instantiated through a module-level function (in a specific module) means that the standard transports specified by the PEP (standard in semantics, not in implementation!) can't be instantiated through methods on the event loop.

Perhaps you are placing a higher value on consistency between standard and non-standard transports? To me, it is actually positive to be inconsistent here, so readers are made fully aware that a non-standard transport is being used. (The idea of introspection functions is fine, however.)
Or they could plug into the event loop's implementation registry:
LibUVEventLoop.register_implementation('mymodule.listen_udp', libuv_udp_implementation)
Yeah, but wouldn't this likely be a private affair between UV's event loop and UV's UDP transport, which are being distributed together? Users who wish to use UDP (assuming PEP 3156 ends up not specifying UDP support) are required to depend on a non-standard feature of their event loop, you can't hide that with a registry. (Note that a UDP transport has a different API than a TCP transport, and requires the protocol to implement different methods as well.)
This does introduce a little magic (is there any precedent for this kind of multiple-dispatch in the standard library?), but I like the way it keeps the event loop interface from getting too big and monolithic. Third-party transports can avoid naming conflicts without looking fundamentally different from standard ones, and there's a clean path from doing something that's platform-specific (e.g. with add_reader and friends) to supporting multiple event loops to full standardization.
I actually consider it a good thing that when a concept is standardized, the "name" of the API changes. When we adopt a 3rd party module in the stdlib we typically give it a new name too, to avoid confusion about which version is meant (since inevitably the 3rd party has more variability than the version adopted into the stdlib).

With all that said, if a particular event loop implementation prefers to add non-standard methods to their event loop object, and then starts lobbying for its adoption in the standard, I can't stop them, and they may even have a good shot at getting adopted in the next Python version. But they should be aware of the risk they run, that the next version of the stdlib might expose a different API under their chosen name. It will be easier for their users to transition if they choose a way to spell their extension that is *not* likely to be standardized, e.g. a function in their own module, or an event loop method name with a custom prefix like uv_listen_udp().

I'm looking forward to explanations of why I am preventing the development of 3rd party transports with this response....

--
--Guido van Rossum (python.org/~guido)

On Sun, Feb 3, 2013 at 12:30 PM, Guido van Rossum <guido@python.org> wrote:
Ben: I wrote a long reply, it is inline below. However it's possible that I am not seeing the use case right. Your proposal is written in very general terms; perhaps you can come up with a more specific example to defend it further? The UDP example somehow doesn't seem very compelling to me.
UDP is a real-life example from tornado - we don't have any built-in support for UDP, but people who need it have been able to build it without touching tornado itself. The same argument would apply to pipes or any number of other (admittedly much more esoteric) network protocols. I'll elaborate on the UDP example below.
On Sat, Feb 2, 2013 at 12:10 PM, Ben Darnell <ben@bendarnell.com> wrote:
The event loop interface in PEP 3156 has an extensibility problem. It seems likely that it will have a method like listen_udp() by the time it's done, but suppose that doesn't make it into the first official release. Third-party event loop implementations may want to provide UDP support as an extension, but the most consistent way to provide that extension is by adding new methods on the event loop object, where various extensions risk conflicting with each other or with new methods that become standardized later.
This may be based on a misunderstanding. If a specific event loop implementation wants to offer a new transport, there is no reason to add it as a method to the event loop. The app that wants to use that transport has to import that event loop too, so the app might as well call a module-level function that's specific to that event loop, in order to instantiate the new transport.
We may have to point this out in the PEP, since it is likely that implementers wanting to offer new features will think of adding new methods to their event loop first. But that's a precious namespace (since it's shared by all event loop implementations), whereas their own implementation's module namespace is less precious.
Right. Third-party extensions to the event loop interface are inherently problematic, so we'll have to provide them in some other way. I'm proposing a pattern for that "some other way" and then realizing that I like it even for first-party interfaces.
The PEP specifies the add_reader family of methods in part so that core protocol implementations can be shared across all EventLoop implementations that support these methods. However, if those transports are implemented in a common base class (like Twisted's PosixReactorBase), there is no way for third-party transports to take advantage of a similar structure.
I'm not sure I understand this (the "there is no way" part). Is the problem that that base class is private, or that the add_reader methods may not be there, or what?
Suppose twisted did not have UDP support built in. Most reactor implementations subclass PosixReactorBase (with IOCPReactor as the notable exception). Twisted can add UDP support and implement listenUDP in PosixReactorBase and IOCPReactor, and suddenly most reactors (even third-party ones like TornadoReactor) support UDP for free. Those that don't (a hypothetical LibUVReactor?) can implement it themselves and interoperate with everything else.

If a third party wanted to add UDP support separately from twisted's release schedule, they can't do so with an interface that is generically usable across all reactors. They could make a static function listenUDP() that works with any IReactorFDSet, and maybe special-case IOCPReactor, but then there'd be no way for a third-party LibUVReactor to participate.
I'd like to make it possible for transports to be developed independent of a particular EventLoop implementation in a way that is consistent with the way the core transports work.
Hm, now we're talking about something else. Transports implemented *independently* from an event loop implementation should not assume more than the standardized API. This feels rather limiting unless we're talking about transports built on top of other protocols (e.g. the mythical TCP-over-HTTP transport :-).
My expectation is that most transport implementations (as opposed to protocols) are probably tied closely to an event loop implementation (and note that the various Tulip event loop implementations using select, poll, epoll and kqueue are a single implementation for this purpose -- OTOH the IocpEventLoop (in the iocp branch) is a different implementation and, indeed, has different transport implementations!)
add_reader is not very limiting except for its platform-specificity. It's possible to have a generic protocol across all posixy event loops and then special-case the small number of interesting non-posixy ones (or maybe there is some other class of methods that could be standardized for other platforms? Is there some set of methods analogous to add_reader that multiple IOCP-based loops could share?)
(This is a bigger concern for tulip than it is for twisted because twisted can update PosixReactorBase more frequently than the stdlib can change)
I think what you're really getting at here is that there is no significant 3rd party cottage industry creating new transports. Transports in practice are all part of the Twisted distribution, so development is only gated by backwards compatibility requirements with user apps (APIs, once offered, must remain supported and stable), not by compatibilities with older versions of the rest of the framework (a new transport introduced in Twisted 12.1 doesn't have to work with Twisted 12.0).
I was thinking more about release schedules. Twisted has several releases a year, but the standard library moves much more slowly.
I propose turning the interface around so that transport creation uses static functions that take the event loop as an argument, with a double-dispatch mechanism to allow the event loop to provide the actual implementation when it can:
def create_connection(protocol_factory, host, port, event_loop=None):
    if event_loop is None:
        event_loop = get_event_loop()
    # Note the use of a fully-qualified name in the registry
    impl = event_loop.get_implementation('tulip.create_connection')
    return impl(protocol_factory, host, port)
Hm. I don't see what this adds. It still always gets the protocol from the event loop so moving this standardized method out of the event loop class doesn't seem to buy anything. The implementation-dependent work is just moved into get_implementation(). I also don't see why we need a registry.
This version doesn't change much, it's mainly to set the stage for the following variations. However, it does have a few nice properties - it keeps the (public) event loop interface small and manageable, and callers don't need to touch actual event loop objects unless they want to have more than one. From a stylistic perspective I like this style of interface more than using dozens of methods on the event loop object itself (even if those dozens of methods are still there but hidden as an implementation detail).
New third-party transports could provide fallbacks for event loops that don't have their own implementations:
if impl is None:
    # These supports_*() functions are placeholders for a to-be-determined
    # introspection interface.
    if supports_fd_interface(event_loop):
        return posix_udp_implementation(*args)
    elif supports_iocp_interface(event_loop):
        return iocp_udp_implementation(*args)
    else:
        raise Exception("This transport is not supported on this event loop")
It seems you want each transport implementation to provide its own create_connection() function, right? There's nothing wrong with that, and I don't see why the fact that 3rd party transports will be instantiated through a module-level function (in a specific module) means that the standard transports specified by the PEP (standard in semantics, not in implementation!) can't be instantiated through methods on the event loop.
Perhaps you are placing a higher value on consistency between standard and non-standard transports? To me, it is actually positive to be inconsistent here, so readers are made fully aware that a non-standard transport is being used.
When third-party modules get absorbed into the standard library, it's often possible to support both just by trying different imports until one works (unittest.mock vs mock, json vs simplejson, etc). Sometimes a module's interface gets cleaned up and rearranged in the process, but that seems to be less common. It would be nice if a third-party transport could get standardized and the only thing callers would need to change is their imports. However, this is a minor concern; as I wrote up this design I realized I liked it for first-party work even if there were no third-party modules to be consistent with.
(The idea of introspection functions is fine, however.)
Or they could plug into the event loop's implementation registry:
LibUVEventLoop.register_implementation('mymodule.listen_udp', libuv_udp_implementation)
Yeah, but wouldn't this likely be a private affair between UV's event loop and UV's UDP transport, which are being distributed together?
But libuv doesn't necessarily contain the transport creation function. The idea is that someone can propose a transport interface in a third-party module (mymodule.listen_udp in this example), implement it themselves for some event loop implementations, and other event loops can declare themselves compatible with it. (And in an admittedly far-fetched scenario, if there were two third-party UDP interfaces and LibUVEventLoop implemented one of them, yet another party could build a bridge between the two, and then they'd plug it in with register_implementation)
Users who wish to use UDP (assuming PEP 3156 ends up not specifying UDP support) are required to depend on a non-standard feature of their event loop, you can't hide that with a registry. (Note that a UDP transport has a different API than a TCP transport, and requires the protocol to implement different methods as well.)
Yes, third-party transports will be non-standard, but in practice the add_reader family makes it easy to get broad coverage in a quasi-standard way, and the registry makes it possible to fill in the gaps.
This does introduce a little magic (is there any precedent for this kind of multiple-dispatch in the standard library?), but I like the way it keeps the event loop interface from getting too big and monolithic. Third-party transports can avoid naming conflicts without looking fundamentally different from standard ones, and there's a clean path from doing something that's platform-specific (e.g. with add_reader and friends) to supporting multiple event loops to full standardization.
I actually consider it a good thing that when a concept is standardized, the "name" of the API changes. When we adopt a 3rd party module in the stdlib we typically give it a new name too, to avoid confusion about which version is meant (since inevitably the 3rd party has more variability than the version adopted into the stdlib).
The *module* gets a new name (and that is indeed a good thing), but the functions and classes within (usually) stay the same.
With all that said, if a particular event loop implementation prefers to add non-standard methods to their event loop object, and then starts lobbying for its adoption in the standard, I can't stop them, and they may even have a good shot at getting adopted in the next Python version. But they should be aware of the risk they run, that the next version of the stdlib might expose a different API under their chosen name. It will be easier for their users to transition if they choose a way to spell their extension that is *not* likely to be standardized, e.g. a function in their own module, or an event loop method name with a custom prefix like uv_listen_udp().
Agreed.
I'm looking forward to explanations of why I am preventing the development of 3rd party transports with this response....
I don't think the status quo prevents the development of third-party transports, but it does implicitly encourage two bad habits: A) adding methods to the event loop, inviting name collisions, or B) just building on add_reader and friends without thinking about non-posix platforms. Of course, no one expects a groundswell of third-party development at the event loop and transport level, so this could just be so much overengineering, but I like it from a stylistic perspective even without the third-party benefits. -Ben
-- --Guido van Rossum (python.org/~guido)

I'm going to try and snip as much as I can to get to the heart of this...

On Sun, Feb 3, 2013 at 11:55 AM, Ben Darnell <ben@bendarnell.com> wrote:
UDP is a real-life example from tornado - we don't have any built-in support for UDP, but people who need it have been able to build it without touching tornado itself. The same argument would apply to pipes or any number of other (admittedly much more esoteric) network protocols. I'll elaborate on the UDP example below.
Hm. UDP is relatively easy because it uses sockets. Pipes are harder -- testing for the presence of add_reader (etc. -- I will leave this off from now on) isn't enough, because select on Windows does not support pipes.

Thinking about what you could mean by "more esoteric protocols", there's really not much at the level of TCP and UDP that comes to mind. UNIX-domain sockets, and perhaps the (root-only) protocol for sniffing packets (raw sockets?).

A new feature just landed in Tulip (I still have to update PEP 3156) where you can pass a pre-constructed socket object to create_connection() and start_serving(), which will make it a little easier to support esoteric ways of setting up the socket; however, create_connection() is still limited to sockets that implement a byte stream, because of the way the transport/protocol API works.
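[From the caller's side, passing a pre-constructed socket looks roughly like the sketch below; it uses the sock= keyword as the feature eventually appeared in asyncio's create_connection(), so the exact Tulip spelling at the time may have differed.]

import asyncio
import socket

class Printer(asyncio.Protocol):
    def connection_made(self, transport):
        transport.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')

    def data_received(self, data):
        print(data[:80])

async def main():
    loop = asyncio.get_running_loop()
    # the application sets up the socket in whatever esoteric way it likes...
    sock = socket.create_connection(('example.com', 80))
    sock.setblocking(False)
    # ...and hands the connected socket over instead of host/port
    await loop.create_connection(Printer, sock=sock)
    await asyncio.sleep(2)   # give the response time to arrive

asyncio.run(main())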
Right. Third-party extensions to the event loop interface are inherently problematic, so we'll have to provide them in some other way. I'm proposing a pattern for that "some other way" and then realizing that I like it even for first-party interfaces.
Glad that is out of the way. But I'm still skeptical -- first, as I explained before, I am actually in favor of using different styles for 1st and 3rd party interfaces, so the status of the interface used is obvious to the reader (and the coder, in case they are copy-pasting recipes :-); second, I don't expect there will be too many opportunities to put the pattern to work.

Note that I'm only talking about 3rd party *interfaces* -- if a 3rd party module implements a 1st party interface (e.g. the stream transport/protocol interface specified in PEP 3156) it can just use the event loop's create_connection method (assuming it is also implementing a new event loop -- otherwise what would be the point of the 3rd party code?).

And note that even UDP requires a different interface between transport and protocol -- e.g. the protocol method called should be packet_received() rather than data_received(), and the protocol should probably not implement write/writelines but send and send_multiple. And the signatures of these methods will be different because (depending on how you use UDP) you have to have a parameter for the peer address.

And yet, implementing UDP as pure 3rd party code using add_reader is simple, as long as the event loop supports add_reader. You just can't use create_connection or start_serving -- but those are really just convenience methods that are easily reimplemented. (We could refactor the standard implementations to have more reusable parts, but we'd run into the same problem as with add_reader -- while most UNIXy event loops will easily support such refactorings, that's not the case with event loops based on IOCP or other libraries that don't naturally offer add_reader functionality. Not sure if that's the case for libuv.)

All this makes me skeptical that a single API should be used to register "transports". At the very least you will need different registries for each distinct transport/protocol interface; in addition, custom transports (even if they implement the same transport/protocol interface) may have different constructor arguments (e.g. consider plain TCP vs. SSL in Tulip).
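[To make the two points above concrete -- the distinct datagram protocol interface, and how little of the event loop a UDP implementation really needs -- here is a standalone sketch; packet_received and serve_udp are illustrative names, and a bare select() call stands in for the event loop's add_reader() machinery.]

import select
import socket

class EchoProtocol:
    # a datagram protocol would get packet_received(data, addr),
    # not the stream-oriented data_received(data)
    def packet_received(self, data, addr):
        print('received %r from %r' % (data, addr))
        return data          # echo it back

def serve_udp(protocol, host='127.0.0.1', port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)
    sock.bind((host, port))
    # A real transport would just register the fd with event_loop.add_reader();
    # the select() loop below plays that role so the sketch runs on its own.
    while True:
        readable, _, _ = select.select([sock], [], [])
        if readable:
            data, addr = sock.recvfrom(4096)
            reply = protocol.packet_received(data, addr)
            if reply:
                sock.sendto(reply, addr)

if __name__ == '__main__':
    serve_udp(EchoProtocol())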
Suppose twisted did not have UDP support built in. Most reactor implementations subclass PosixReactorBase (with IOCPReactor as the notable exception). Twisted can add UDP support and implement listenUDP in PosixReactorBase and IOCPReactor, and suddenly most reactors (even third-party ones like TornadoReactor) support UDP for free.
If you say so. I don't know enough about Twisted's internals to verify this claim. Depending on how things were factored I could easily imagine something in PosixReactorBase making the assumption of a stream protocol somewhere. In a stream protocol like TCP, it is safe to collapse two consecutive sends into one, and to split one send into multiples. But not for datagram protocols like UDP. In an ideal world, knowledge of all this is completely left out of the reactor. But, in a hypothetical world where Twisted only supported streams, who knows whether that is done?
Those that don't (a hypothetical LibUVReactor?) can implement it themselves and interoperate with everything else.
In practice I suspect that the number of 3rd party event loop implementations that support add_reader and let a different 3rd party's UDP implementation succeed will be vanishingly small. Even smaller if you don't count the ones that are essentially clones or subclasses of Tulip's UNIX support.
If a third party wanted to add UDP support separately from twisted's release schedule, they can't do with an interface that is generically usable across all reactors. They could make a static function listenUDP() that works with any IReactorFDSet, and maybe special-case IOCPReactor, but then there'd be no way for a third-party LibUVReactor to participate.
This sounds unavoidable no matter how you refactor the interface. There are potentially event loop implementations that don't use socket objects at all. (And yes, those will have to reject the 'sock' argument to create_connection and start_serving; and I have to change start_serving's return type to be something other than a socket.)

When you implement a new 3rd party transport, you are pretty much inevitably limiting yourself to a subset of event loops. That subset won't be empty, and it will be sufficient for your purpose, but the ideal of portability across all (or even most) event loops, including ones that haven't been written yet, is unattainable. I certainly haven't seen an indication that your proposed registry will address this.
add_reader is not very limiting except for its platform-specificity. It's possible to have a generic protocol across all posixy event loops and then special-case the small number of interesting non-posixy ones (or maybe there is some other class of methods that could be standardized for other platforms? Is there some set of methods analogous to add_reader that multiple IOCP-based loops could share?)
Not exactly analogous -- the whole point of IOCP is that it is not "ready-based" but "completion-based". The sock_recv (etc.) methods on the event loop are my attempt to suggest a way for other completion-based event loops to open themselves up for new transport implementations, but this is much more limiting than add_reader -- e.g. I suspect that IOCP will let you read from a named pipe, but you must use a different library call than for receiving from a socket; even receiving a packet from UDP will require a different method. This issue doesn't exist in the same way for add_reader, because the system call to do the read is not made by the event loop, it is made by the transport.
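[For comparison, here is what going through the event loop's sock_* methods looks like from a coroutine, written against the signatures that eventually shipped in asyncio (loop.sock_connect/sock_sendall/sock_recv); a completion-based loop can implement these calls just as well as a readiness-based one.]

import asyncio
import socket

async def fetch(host, port, request):
    loop = asyncio.get_running_loop()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    # every I/O step below is delegated to the event loop's sock_* methods
    await loop.sock_connect(sock, (host, port))
    await loop.sock_sendall(sock, request)
    chunks = []
    while True:
        chunk = await loop.sock_recv(sock, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    sock.close()
    return b''.join(chunks)

request = b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n'
print(asyncio.run(fetch('example.com', 80, request))[:200])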
This version doesn't change much, it's mainly to set the stage for the following variations. However, it does have a few nice properties - it keeps the (public) event loop interface small and manageable, and callers don't need to touch actual event loop objects unless they want to have more than one.
Not quite -- the call_soon(), call_later() etc. functionality is also exposed as event loop methods.
From a stylistic perspective I like this style of interface more than using dozens of methods on the event loop object itself (even if those dozens of methods are still there but hidden as an implementation detail).
You can't argue about style. :-)
When third-party modules get absorbed into the standard library, it's often possible to support both just by trying different imports until one works (unittest.mock vs mock, json vs simplejson, etc). Sometimes a module's interface gets cleaned up and rearranged in the process, but that seems to be less common. It would be nice if a third-party transport could get standardized and the only thing callers would need to change is their imports. However, this is a minor concern; as I wrote up this design I realized I liked it for first-party work even if there were no third-party modules to be consistent with.
You can't argue about style. :-)
But libuv doesn't necessarily contain the transport creation function.
But a libuv-based PEP 3156-compliant event loop implementation must.
The idea is that someone can propose a transport interface in a third-party module (mymodule.listen_udp in this example), implement it themselves for some event loop implementations, and other event loops can declare themselves compatible with it.
Aha! This is the executive summary of your proposal, or at least your goal for it. This is hypothesizing rather a lot of goodwill and coordination between different 3rd party developers.

And the registry offered by the event loop comes down to not much more than a dictionary with keys that follow a certain convention (e.g. fully-qualified package+module name plus some identifier for the feature) and nothing can be said about what the items stored in the registry are (since a packet transport and a stream transport are not interchangeable, and even two stream transports may not be).

Given that for each 3rd party transport the details of how to implement a compatible version of it will vary hugely, both depending on what the transport is trying to do and how the event loop works, I expect that the market for this registry will be rather small. And when a particular 3rd party transport wants to enable other 3rd party event loops to support them, they can implement their own registry, which the other 3rd party could then plug into. (But see below.)
(And in an admittedly far-fetched scenario, if there were two third-party UDP interfaces and LibUVEventLoop implemented one of them, yet another party could build a bridge between the two, and then they'd plug it in with register_implementation)
I do see one argument in favor of having a standard registry on the event loop, even if it's just a dict with register/lookup APIs and a naming convention, and no semantics assigned to the items registered. That argument is to make 3rd party transport implementers aware of the possibility that some other 3rd party might want to offer a compatible implementation aimed at an event loop that's not supported natively by the (former) 3rd party transport. And I could even be convinced that the standard protocols should use this registry so that the source code serves as an example of best practices.

Still, it's a pretty weak argument IMO -- I don't expect there to be a significant cottage industry cranking out 3rd party protocol implementations, assuming we add UDP to PEP 3156, which is my plan. And I don't think that create_connection() should be used to create UDP connections -- the signature of the protocol factory passed in would be quite different, for starters, and the set of options needed to configure the transport is also different (assuming we want to support both connected and connection-less UDP).
I don't think the status quo prevents the development of third-party transports, but it does implicitly encourage two bad habits: A) adding methods to the event loop, inviting name collisions, or B) just building on add_reader and friends without thinking about non-posix platforms. Of course, no one expects a groundswell of third-party development at the event loop and transport level, so this could just be so much overengineering, but I like it from a stylistic perspective even without the third-party benefits.
Yeah, so that's the rub: I'm not so keen on adding extra machinery to the PEP that I don't expect to be used much. I have more important fish to fry (such as adding UDP :-). And adding a registry one whole Python release cycle later (e.g. in 3.5, assuming PEP 3156 is standardized and included in 3.4) doesn't strike me as such a bad thing -- I don't think we're painting ourselves into much of a corner by not having a registry right from the start. -- --Guido van Rossum (python.org/~guido)

On Mon, Feb 4, 2013 at 2:02 PM, Guido van Rossum <guido@python.org> wrote:
I'm going to try and snip as much as I can to get to the heart of this...
Me too...
Thinking about what you could mean by "more esoteric protocols", there's really not much at the level of TCP and UDP that comes to mind. UNIX-domain sockets, and perhaps the (root-only) protocol for sniffing packets (raw sockets?).
Looking at twisted as an example, the things that are supported across both PosixReactorBase and IOCPReactor are TCP, UDP, SSL, multicast, and subprocesses. Unix domain sockets aren't that interesting here since any system that supports them will (presumably?) support add_reader and things will just work.
A new feature just landed in Tulip (I still have to update PEP 3156) where you can pass a pre-constructed socket object to create_connection() and start_serving(), which will make it a little easier to support esoteric ways of setting up the socket; however, create_connection() is still limited to sockets that implement a byte stream, because of the way the transport/protocol API works.
Cool. That's definitely useful (especially on the server side), but we'll still need a separate interface for datagrams.
Right. Third-party extensions to the event loop interface are inherently problematic, so we'll have to provide them in some other way. I'm proposing a pattern for that "some other way" and then realizing that I like it even for first-party interfaces.
Glad that is out of the way. But I'm still skeptical -- first, as I explained before, I am actually in favor of using different styles for 1st and 3rd party interfaces, so the status of the interface used is obvious to the reader (and the coder, in case they are copy-pasting recipes :-); second, I don't expect there will be too many opportunities to put the pattern to work.
You can't argue about style. :-)
The idea is that someone can propose a transport interface in a third-party module (mymodule.listen_udp in this example), implement it themselves for some event loop implementations, and other event loops can declare themselves compatible with it.
Aha! This is the executive summary of your proposal, or at least your goal for it.
This is hypothesizing rather a lot of goodwill and coordination between different 3rd party developers.
Yeah, history is unfortunately not very supportive of the idea that developers of asynchronous frameworks will coordinate on this kind of thing :)
And the registry offered by the event loop comes down to not much more than a dictionary with keys that follow a certain convention (e.g. fully-qualified package+module name plus some identifier for the feature) and nothing can be said about what the items stored in the registry are (since a packet transport and a stream transport are not interchangeable, and even two stream transports may not be).
Given that for each 3rd party transport the details of how to implement a compatible version of it will vary hugely, both depending on what the transport is trying to do and how the event loop works, I expect that the market for this registry will be rather small. And when a particular 3rd party transport wants to enable other 3rd party event loops to support them, they can implement their own registry, which the other 3rd party could then plug into. (But see below.)
True.
(And in an admittedly far-fetched scenario, if there were two third-party UDP interfaces and LibUVEventLoop implemented one of them, yet another party could build a bridge between the two, and then they'd plug it in with register_implementation)
I do see one argument in favor of having a standard registry on the event loop, even if it's just a dict with register/lookup APIs and a naming convention, and no semantics assigned to the items registered. That argument is to make 3rd party transport implementers aware of the possibility that some other 3rd party might want to offer a compatible implementation aimed at an event loop that's not supported natively by the (former) 3rd party transport. And I could even be convinced that the standard protocols should use this registry so that the source code serves as an example of best practices.
Still, it's a pretty weak argument IMO -- I don't expect there to be a significant cottage industry cranking out 3rd party protocol implementations, assuming we add UDP to PEP 3156, which is my plan.
Yeah, as long as we get TCP, UDP, SSL, and pipes (at least for subprocesses), I'm hard pressed to imagine anything else that would be in so much demand it would need to be supported by all event loops.
And I don't think that create_connection() should be used to create UDP connections -- the signature of the protocol factory passed in would be quite different, for starters, and the set of options needed to configure the transport is also different (assuming we want to support both connected and connection-less UDP).
Of course. (I may have been unclear somewhere along the way, but it was never my intention for create_connection to work for both TCP and UDP)
I don't think the status quo prevents the development of third-party transports, but it does implicitly encourage two bad habits: A) adding methods to the event loop, inviting name collisions, or B) just building on add_reader and friends without thinking about non-posix platforms. Of course, no one expects a groundswell of third-party development at the event loop and transport level, so this could just be so much overengineering, but I like it from a stylistic perspective even without the third-party benefits.
Yeah, so that's the rub: I'm not so keen on adding extra machinery to the PEP that I don't expect to be used much. I have more important fish to fry (such as adding UDP :-). And adding a registry one whole Python release cycle later (e.g. in 3.5, assuming PEP 3156 is standardized and included in 3.4) doesn't strike me as such a bad thing -- I don't think we're painting ourselves into much of a corner by not having a registry right from the start.
Fair enough. I may use this pattern when/if I retrofit Tornado to be IOCP-friendly since we currently create transports with static functions, but for Tulip let's wait and see if the problem this proposal is trying to solve ever materializes. -Ben
-- --Guido van Rossum (python.org/~guido)

On Mon, Feb 4, 2013 at 8:50 PM, Ben Darnell <ben@bendarnell.com> wrote:
Fair enough. I may use this pattern when/if I retrofit Tornado to be IOCP-friendly since we currently create transports with static functions, but for Tulip let's wait and see if the problem this proposal is trying to solve ever materializes.
Great! -- --Guido van Rossum (python.org/~guido)

On 5 February 2013 04:50, Ben Darnell <ben@bendarnell.com> wrote:
Still, it's a pretty weak argument IMO -- I don't expect there to be a significant cottage industry cranking out 3rd party protocol implementations, assuming we add UDP to PEP 3156, which is my plan.
Yeah, as long as we get TCP, UDP, SSL, and pipes (at least for subprocesses), I'm hard pressed to imagine anything else that would be in so much demand it would need to be supported by all event loops.
There are two things that come to *my* mind whenever this sort of debate comes up. I'll freely admit that they are 100% theoretical in terms of my actual requirements, but neither is particularly implausible.

- Synchronisation primitives like Windows event objects - wanting to integrate code to be run when an event is set into an event loop seems relatively reasonable.
- GUI input events - I don't know about Unix, but Windows GUI events are a separate notification stream from network or pipe data, and it's very plausible that someone would want to integrate GUI events into an async app. Twisted, for instance, has GUI event loop integration facilities, I believe.

(I know that "write your own event loop" is always a solution. But I'm not sure that doesn't set the bar a bit too high - it depends on how easy it is to reuse and extend the implementation of the standard event loops, which is something I'm not really clear on yet).

Paul

On Tue, Feb 5, 2013 at 12:32 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 5 February 2013 04:50, Ben Darnell <ben@bendarnell.com> wrote:
Still, it's a pretty weak argument IMO -- I don't expect there to be a significant cottage industry cranking out 3rd party protocol implementations, assuming we add UDP to PEP 3156, which is my plan.
Yeah, as long as we get TCP, UDP, SSL, and pipes (at least for subprocesses), I'm hard pressed to imagine anything else that would be in so much demand it would need to be supported by all event loops.
There are two things that come to *my* mind whenever this sort of debate comes up. I'll freely admit that they are 100% theoretical in terms of my actual requirements, but neither is particularly implausible.
- Synchronisation primitives like Windows event objects - wanting to integrate code to be run when an event is set into an event loop seems relatively reasonable.
Since this is about interacting with the threading world, you can always wrap that in a concurrent.futures.Future, and then wrap that in eventloop.wrap_future().
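[A minimal sketch of that wrapping, using run_in_executor (which PEP 3156 also specifies) to turn the blocking wait into a future the coroutine can wait on -- shown here with the API as it later landed in asyncio.]

import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    event = threading.Event()
    # something in the threaded world will set the event later
    threading.Timer(1.0, event.set).start()
    # the blocking Event.wait() runs in the default executor;
    # the coroutine simply awaits the resulting future
    await loop.run_in_executor(None, event.wait)
    print('event was set')

asyncio.run(main())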
- GUI input events - I don't know about Unix, but Windows GUI events are a separate notification stream from network or pipe data, and it's very plausible that someone would want to integrate GUI events into an async app. Twisted, for instance, has GUI event loop integration facilities, I believe.
That's way too large a topic to try to anticipate without thorough research. And it's not very likely that you can do this in a portable way either -- the best you can probably hope for is have a PEP 3156-compliant event loop that lets you use portable async networking APIs (transports and protocols) while also letting you write platform-*specific* GUI code. The best route here will probably be the PEP 3156 bridge that Twisted is going to develop once the PEP and Twisted's Python 3 port stabilize. (Unfortunately, wxPython is not ported to Python 3.)
(I know that "write your own event loop" is always a solution. But I'm not sure that doesn't set the bar a bit too high - it depends on how easy it is to reuse and extend the implementation of the standard event loops, which is something I'm not really clear on yet).
Check the most recent changes that landed in Tulip. The EventLoop class has been refactored to bits and pieces, with exactly that purpose in mind. (The PEP is a bit behind ATM, and I just broke the ProactorEventLoop's ability to act as a server, but I'm on that.) -- --Guido van Rossum (python.org/~guido)

Thanks for the responses. I think you're right on both counts - in particular, I'd forgotten the ability to wrap futures for the event case.

On 5 February 2013 21:28, Guido van Rossum <guido@python.org> wrote:
Check the most recent changes that landed in Tulip. The EventLoop class has been refactored to bits and pieces, with exactly that purpose in mind. (The PEP is a bit behind ATM, and I just broke the ProactorEventLoop's ability to act as a server, but I'm on that.)
I'll try to get a chance to look. I haven't really checked the code for a while, as I've been busy. Did you ever look at my subprocess code? It was in a bitbucket repo rather than Rietveld, my apologies if that was a problem, I can investigate how to use Rietveld if needed.

Paul

On Tue, Feb 5, 2013 at 2:08 PM, Paul Moore <p.f.moore@gmail.com> wrote:
Did you ever look at my subprocess code? It was in a bitbucket repo rather than Rietveld, my apologies if that was a problem, I can investigate how to use Rietveld if needed.
I glanced at it, but didn't really review it in depth. Do you think it's ready for integration? I do like the idea of using as much from the subprocess module as possible. We also need to look at supporting the same API on Windows (since subprocess supports it). I have access to a Windows box now.

If you think you have something that's ready to integrate (even if just UNIX), please do use Rietveld (codereview.appspot.com). The best strategy is to leave your changes uncommitted in a current checkout of Tulip, and run the upload.py script that you can download here: https://codereview.appspot.com/static/upload.py and which is documented here: http://code.google.com/p/rietveld/wiki/UploadPyUsage

(You are already a PSF contributor, right?)

--
--Guido van Rossum (python.org/~guido)

On 5 February 2013 22:39, Guido van Rossum <guido@python.org> wrote:
On Tue, Feb 5, 2013 at 2:08 PM, Paul Moore <p.f.moore@gmail.com> wrote:
Did you ever look at my subprocess code? It was in a bitbucket repo rather than reitveld, my apologies if that was a problem, I can investigate how to use reitveld if needed.
I glanced at it, but didn't really review it in depth. Do you think it's ready for integration? I do like the idea of using as much from the subprocess module as possible. We also need to look at supporting the same API on Windows (since subprocess supports it). I have access to a Windows box now.
Functionally, I'm happy with the patch (on Unix, it needs something for Windows but I don't know IOCP very well, so I'm not sure how much I can do there, ironically). I'm a bit concerned with the usability of the API in the coroutine style (it's fine using callbacks, which I'm comfortable with, but I'm struggling to get my head round coroutines, so I've no intuition as to whether it feels natural in that style).
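[For what it's worth, this is roughly how subprocess use ended up looking in coroutine style once asyncio shipped; asyncio.create_subprocess_exec is the later stdlib API, not Paul's patch.]

import asyncio
import sys

async def main():
    # spawn a child process and read its output line by line
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c', 'print("hello from the child")',
        stdout=asyncio.subprocess.PIPE)
    while True:
        line = await proc.stdout.readline()
        if not line:
            break
        print('child said:', line.decode().rstrip())
    await proc.wait()

asyncio.run(main())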
If you think you have something that's ready to integrate (even if just UNIX), please do use Rietveld (codereview.appspot.com). The best strategy is to leave your changes uncommitted in a current checkout of Tulip, and run the upload.py script that you can download here: https://codereview.appspot.com/static/upload.py and which is documented here: http://code.google.com/p/rietveld/wiki/UploadPyUsage
OK, I'll give it a try - probably not for a week or two, as I'm away a lot at the moment...
(You are already a PSF contributor, right?)
TBH, I'm not entirely sure. I *should* be, as there's code in Python from me, but I don't recall ever sending in a form. I'll send a new form in just to be certain. Paul.

On 2/5/2013 6:56 PM, Paul Moore wrote:
On 5 February 2013 22:39, Guido van Rossum <guido@python.org> wrote:
(You are already a PSF contributor, right?)
TBH, I'm not entirely sure. I *should* be, as there's code in Python from me, but I don't recall ever sending in a form. I'll send a new form in just to be certain.
This is now recorded in user records on the tracker. As for you, Paul, http://bugs.python.org/user301 indicates that there is no current record. If you ever did send a paper form, it got lost a few years ago like mine and many others. You can now send a scan by email: http://www.python.org/psf/contrib/

People with contributor agreements are also indicated by '*' after their name in issue messages. Other past or prospective contributors who are not sure of their recorded contributor status can check either way and send a form if needed.

--
Terry Jan Reedy

Any others wanting to toy with tulip on windows can have the _overlapped.pyd I painfully made for python3.3-32bit.

https://www.dropbox.com/s/20ljdyafpe25ekh/_overlapped.pyd

BTW I needed to require Windows Vista for it to compile...

#define _WIN32_WINNT 0x0600
#define NTDDI_VERSION 0x06000000

Yuval Greenfield

Great! Are you keeping that up to date? Richard is checking changes at a furious pace. :-)

On Wed, Feb 6, 2013 at 11:19 AM, Yuval Greenfield <ubershmekel@gmail.com> wrote:
Any others wanting to toy with tulip on windows can have the _overlapped.pyd I painfully made for python3.3-32bit.
https://www.dropbox.com/s/20ljdyafpe25ekh/_overlapped.pyd
BTW I needed to require windows vista for it to compile...
#define _WIN32_WINNT 0x0600
#define NTDDI_VERSION 0x06000000
Yuval Greenfield
-- --Guido van Rossum (python.org/~guido)

On Wed, Feb 6, 2013 at 9:29 PM, Guido van Rossum <guido@python.org> wrote:
Great! Are you keeping that up to date? Richard is checking changes at a furious pace. :-)
I updated the file so the same link has the new pyd that is up to date as of http://code.google.com/p/tulip/source/detail?r=65c456e2c20ece3adabc6d5f37d78... ("Fix return type of SetFromWindowsErr().")

E:\Dropbox\dev\python\tulip>c:\python33\python runtests.py
....s......s............s.............sss......ss.....sss...s..........sss..........................................................
----------------------------------------------------------------------
Ran 132 tests in 8.513s

OK (skipped=15)

Strangely, the first time I run the tests I get a big pile of output exceptions though the tests do pass. E.g.

.........sss......ss.ERROR:root:Exception in task
Traceback (most recent call last):
  File "E:\Dropbox\dev\python\tulip\tulip\tasks.py", line 96, in _step
    result = coro.send(value)
  File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 235, in create_connection
    raise exceptions[0]
  File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 226, in create_connection
    yield self.sock_connect(sock, address)
  File "c:\python33\lib\unittest\mock.py", line 846, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "c:\python33\lib\unittest\mock.py", line 901, in _mock_call
    raise effect
OSError
.ERROR:root:Exception in task
[...]

Indeed, it's hard to keep up.

On 06/02/2013 9:23pm, Yuval Greenfield wrote:
Strangely, the first time I run the tests I get a big pile of output exceptions though the tests do pass. E.g.
.........sss......ss.ERROR:root:Exception in task Traceback (most recent call last): File "E:\Dropbox\dev\python\tulip\tulip\tasks.py", line 96, in _step result = coro.send(value) File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 235, in create_connection raise exceptions[0] File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 226, in create_connection yield self.sock_connect(sock, address) File "c:\python33\lib\unittest\mock.py", line 846, in __call__ return _mock_self._mock_call(*args, **kwargs) File "c:\python33\lib\unittest\mock.py", line 901, in _mock_call raise effect OSError .ERROR:root:Exception in task [...]
I see this sometimes too. It seems that these are expected errors caused by using self.assertRaises(...). Why these errors are logged only sometimes I don't understand. -- Richard

On 6 February 2013 21:48, Richard Oudkerk <shibturn@gmail.com> wrote:
On 06/02/2013 9:23pm, Yuval Greenfield wrote:
Strangely, the first time I run the tests I get a big pile of output exceptions though the tests do pass. E.g.
I just tried a build on Windows 7 64-bit with Python 3.3 (i.e., not a 3.4 checkout). I got a lot of these types of error as well. I also got a couple of genuine ones. One was a build problem - PY_ULONG_MAX doesn't exist. I changed it to ULONG_MAX which got the compile to work - but it may not be correct, I guess. The other was:

ERROR: test_sock_accept (tulip.events_test.ProactorEventLoopTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Work\Scratch\tulip\tulip\events_test.py", line 309, in test_sock_accept
    conn, addr = self.event_loop.run_until_complete(f)
  File "C:\Work\Scratch\tulip\tulip\base_events.py", line 104, in run_until_complete
    return future.result()  # May raise future.exception().
  File "C:\Work\Scratch\tulip\tulip\futures.py", line 148, in result
    raise self._exception
  File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 132, in _poll
    value = callback()
  File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 83, in finish_accept
    listener.fileno())
OSError: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call

Interestingly, I saw the same OSError occurring in some of the "ERROR:root:Accept failed" logging stuff. I don't know if that's of any use...

Paul

On 06/02/2013 10:20pm, Paul Moore wrote:
I just tried a build on Windows 7 64-bit with Python 3.3 (i.e., not a 3.4 checkout). I got a lot of these types of error as well. I also got a couple of genuine ones. One was a build problem - PY_ULONG_MAX doesn't exist. I changed it to ULONG_MAX which got the compile to work - but it may not be correct, I guess. The other was:
ERROR: test_sock_accept (tulip.events_test.ProactorEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Work\Scratch\tulip\tulip\events_test.py", line 309, in test_sock_accept conn, addr = self.event_loop.run_until_complete(f) File "C:\Work\Scratch\tulip\tulip\base_events.py", line 104, in run_until_complete return future.result() # May raise future.exception(). File "C:\Work\Scratch\tulip\tulip\futures.py", line 148, in result raise self._exception File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 132, in _poll value = callback() File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 83, in finish_accept listener.fileno()) OSError: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call
Interestingly, I saw the same OSError occurring in some of the "ERROR:root:Accept failed" logging stuff.
I don't know if that's of any use...
Thanks. It should be fixed now, but I have not tested it on a 64 bit build. (My 64 bit setup seems to be all screwed up.) -- Richard

I can test it when I get home... On Thursday, 7 February 2013, Richard Oudkerk wrote:
On 06/02/2013 10:20pm, Paul Moore wrote:
I just tried a build on Windows 7 64-bit with Python 3.3 (i.e., not a 3.4 checkout). I got a lot of these types of error as well. I also got a couple of genuine ones. One was a build problem - PY_ULONG_MAX doesn't exist. I changed it to ULONG_MAX which got the compile to work - but it may not be correct, I guess. The other was:
ERROR: test_sock_accept (tulip.events_test.ProactorEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Work\Scratch\tulip\tulip\events_test.py", line 309, in test_sock_accept conn, addr = self.event_loop.run_until_complete(f) File "C:\Work\Scratch\tulip\tulip\base_events.py", line 104, in run_until_complete return future.result() # May raise future.exception(). File "C:\Work\Scratch\tulip\tulip\futures.py", line 148, in result raise self._exception File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 132, in _poll value = callback() File "C:\Work\Scratch\tulip\tulip\windows_events.py", line 83, in finish_accept listener.fileno()) OSError: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call
Interestingly, I saw the same OSError occurring in some of the "ERROR:root:Accept failed" logging stuff.
I don't know if that's of any use...
Thanks.
It should be fixed now, but I have not tested it on a 64 bit build. (My 64 bit setup seems to be all screwed up.)
-- Richard

On 7 February 2013 10:13, Paul Moore <p.f.moore@gmail.com> wrote:
On Thursday, 7 February 2013, Richard Oudkerk wrote:
It should be fixed now, but I have not tested it on a 64 bit build. (My 64 bit setup seems to be all screwed up.)
I can test it when I get home...
Yep, works fine now on 64-bit Windows 7.

Ran 132 tests in 9.048s

OK (skipped=15)

No errors, no spurious logging of exceptions. Thanks!

Paul

On Wed, Feb 6, 2013 at 1:48 PM, Richard Oudkerk <shibturn@gmail.com> wrote:
On 06/02/2013 9:23pm, Yuval Greenfield wrote:
Strangely, the first time I run the tests I get a big pile of output exceptions though the tests do pass. E.g.
.........sss......ss.ERROR:root:Exception in task Traceback (most recent call last): File "E:\Dropbox\dev\python\tulip\tulip\tasks.py", line 96, in _step result = coro.send(value) File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 235, in create_connection raise exceptions[0] File "E:\Dropbox\dev\python\tulip\tulip\base_events.py", line 226, in create_connection yield self.sock_connect(sock, address) File "c:\python33\lib\unittest\mock.py", line 846, in __call__ return _mock_self._mock_call(*args, **kwargs) File "c:\python33\lib\unittest\mock.py", line 901, in _mock_call raise effect OSError .ERROR:root:Exception in task [...]
I see this sometimes too.
It seems that these are expected errors caused by using self.assertRaises(...). Why these errors are logged only sometimes I don't understand.
Me neither. :-(

The message "Exception in task" means that it is a task that raises an exception. I used to ignore these; now I log them always, but ideally they should only be logged when whoever waits for the Task doesn't catch them (or, better, when nobody waits for the task). I tried to implement that part but I couldn't get it to work (yet) -- I will have to get back to this at some point, because accurate exception logging (never silent, but not too spammy either) is very important for a good user experience.

But it remains a mystery why they sometimes show and not other times. It suggests there's some indeterminate timing in some tests. If it happens only the first time when the tests are run this usually points to a timing behavior that's different when the source code is parsed as opposed to read from a .pyc file.

-- --Guido van Rossum (python.org/~guido)
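To make the intended behaviour concrete, here is a rough sketch of the bookkeeping being described (hypothetical code, not tulip's actual Task class): the task remembers whether anyone retrieved its exception, and logs it only if the object is finalized with the exception still unconsumed.

import logging

class TaskSketch:
    """Sketch of 'log only exceptions nobody retrieved'."""

    def __init__(self):
        self._exception = None
        self._retrieved = False

    def set_exception(self, exc):
        self._exception = exc

    def exception(self):
        # Whoever waits on the task and asks for the exception has
        # consumed it, so it should not be logged again later.
        self._retrieved = True
        return self._exception

    def __del__(self):
        if self._exception is not None and not self._retrieved:
            logging.error('Exception in task never retrieved: %r',
                          self._exception)

With something like this, the "Exception in task" message would only show up for tasks whose failure nobody ever looked at, rather than for every task that happened to raise.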

On 06/02/2013 10:43pm, Guido van Rossum wrote:
Me neither.:-(
The message "Exception in task" means that it is a task that raises an exception. I used to ignore these; now I log them always, but ideally they should only be logged when whoever waits for the Task doesn't catch them (or, better, when nobody waits for the task). I tried to implement that part but I couldn't get it to work (yet) -- I will have to get back to this at some point, because accurate exception logging (never silent, but not too spammy either) is very important for a good user experience.
But it remains a mystery why they sometimes show and not other times. It suggests there's some indeterminate timing in some tests. If it happens only the first time when the tests are run this usually points to a timing behavior that's different when the source code is parsed as opposed to read from a .pyc file.
Commenting out the one use of suppress_log_errors() makes the "expected errors" appear on Linux too. But I would have thought that that would only affect the test which uses suppress_log_errors().

diff -r 65c456e2c20e tulip/events_test.py
--- a/tulip/events_test.py Wed Feb 06 19:08:14 2013 +0000
+++ b/tulip/events_test.py Wed Feb 06 23:51:32 2013 +0000
@@ -508,7 +508,7 @@
         self.assertFalse(sock.close.called)

     def test_accept_connection_exception(self):
-        self.suppress_log_errors()
+        #self.suppress_log_errors()
         sock = unittest.mock.Mock()
         sock.accept.side_effect = socket.error

-- Richard

On Wed, Feb 6, 2013 at 3:58 PM, Richard Oudkerk <shibturn@gmail.com> wrote:
On 06/02/2013 10:43pm, Guido van Rossum wrote:
Me neither.:-(
The message "Exception in task" means that it is a task that raises an exception. I used to ignore these; now I log them always, but ideally they should only be logged when whoever waits for the Task doesn't catch them (or, better, when nobody waits for the task). I tried to implement that part but I couldn't get it to work (yet) -- I will have to get back to this at some point, because accurate exception logging (never silent, but not too spammy either) is very important for a good user experience.
But it remains a mystery why they sometimes show and not other times. It suggests there's some indeterminate timing in some tests. If it happens only the first time when the tests are run this usually points to a timing behavior that's different when the source code is parsed as opposed to read from a .pyc file.
Commenting out the one use of suppress_log_errors() makes the "expected errors" appear on Linux too. But I would have thought that that would only affect the test which uses suppress_log_errors().
diff -r 65c456e2c20e tulip/events_test.py --- a/tulip/events_test.py Wed Feb 06 19:08:14 2013 +0000 +++ b/tulip/events_test.py Wed Feb 06 23:51:32 2013 +0000 @@ -508,7 +508,7 @@ self.assertFalse(sock.close.called)
def test_accept_connection_exception(self): - self.suppress_log_errors() + #self.suppress_log_errors()
sock = unittest.mock.Mock() sock.accept.side_effect = socket.error
Good catch! What's going on is that the super().tearDown() call was missing from EventLoopTestsMixin. I'll fix that -- then we'll have to separately add suppress_log_errors() calls to various tests (I'll do that at a slower pace). -- --Guido van Rossum (python.org/~guido)
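A sketch of the save-and-restore pattern involved may help explain the symptom (hypothetical helper names; tulip's actual test utilities may differ): suppress_log_errors() raises the logging threshold for the current test, the base class's tearDown() restores it, and a mixin that overrides tearDown() without calling super().tearDown() skips that restore, so one test's suppression leaks into every test that runs afterwards.

import logging
import unittest

class LogTestCase(unittest.TestCase):
    """Sketch of a base class that can hide expected ERROR records."""

    def setUp(self):
        self._old_level = logging.getLogger().level

    def suppress_log_errors(self):
        # Hide expected ERROR records for the current test only.
        logging.getLogger().setLevel(logging.CRITICAL)

    def tearDown(self):
        # Restore the original level after each test.
        logging.getLogger().setLevel(self._old_level)

class EventLoopMixinSketch:
    def tearDown(self):
        # ... event-loop specific cleanup would go here ...
        # Forgetting this call would skip the log-level restore above,
        # which is exactly the kind of leak described in this thread.
        super().tearDown()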

On Wed, Feb 6, 2013 at 4:11 PM, Guido van Rossum <guido@python.org> wrote:
On Wed, Feb 6, 2013 at 3:58 PM, Richard Oudkerk <shibturn@gmail.com> wrote:
On 06/02/2013 10:43pm, Guido van Rossum wrote:
Me neither.:-(
The message "Exception in task" means that it is a task that raises an exception. I used to ignore these; now I log them always, but ideally they should only be logged when whoever waits for the Task doesn't catch them (or, better, when nobody waits for the task). I tried to implement that part but I couldn't get it to work (yet) -- I will have to get back to this at some point, because accurate exception logging (never silent, but not too spammy either) is very important for a good user experience.
But it remains a mystery why they sometimes show and not other times. It suggests there's some indeterminate timing in some tests. If it happens only the first time when the tests are run this usually points to a timing behavior that's different when the source code is parsed as opposed to read from a .pyc file.
Commenting out the one use of suppress_log_errors() makes the "expected errors" appear on Linux too. But I would have thought that that would only affect the test which uses suppress_log_errors().
diff -r 65c456e2c20e tulip/events_test.py --- a/tulip/events_test.py Wed Feb 06 19:08:14 2013 +0000 +++ b/tulip/events_test.py Wed Feb 06 23:51:32 2013 +0000 @@ -508,7 +508,7 @@ self.assertFalse(sock.close.called)
def test_accept_connection_exception(self): - self.suppress_log_errors() + #self.suppress_log_errors()
sock = unittest.mock.Mock() sock.accept.side_effect = socket.error
Good catch! What's going on is that the super().tearDown() call was missing from EventLoopTestsMixin. I'll fix that -- then we'll have to separately add suppress_log_errors() calls to various tests (I'll do that at a slower pace).
Should all be fixed now. (Please check on Windows, I can't check it right now.) -- --Guido van Rossum (python.org/~guido)

2013/2/5 Guido van Rossum <guido@python.org>
- GUI input events - I don't know about Unix, but Windows GUI events are a separate notification stream from network or pipe data, and it's very plausible that someone would want to integrate GUI events into an async app. Twisted, for instance, has GUI event loop integration facilities, I believe.
That's way too large a topic to try to anticipate without thorough research. And it's not very likely that you can do this in a portable way either -- the best you can probably hope for is have a PEP 3156-compliant event loop that lets you use portable async networking APIs (transports and protocols) while also letting you write platform-*specific* GUI code. The best route here will probably be the PEP 3156 bridge that Twisted is going to develop once the PEP and Twisted's Python 3 port stabilize. (Unfortunately, wxPython is not ported to Python 3.)
That's not exactly true: http://wiki.wxpython.org/ProjectPhoenix#Links It's still a work in progress, but the basic interface is unlikely to change. -- Amaury Forgeot d'Arc

On Wed, Feb 6, 2013 at 2:37 PM, Amaury Forgeot d'Arc <amauryfa@gmail.com> wrote:
2013/2/5 Guido van Rossum <guido@python.org>
(Unfortunately, wxPython is not ported to Python 3.)
That's not exactly true: http://wiki.wxpython.org/ProjectPhoenix#Links It's still a work in progress, but the basic interface is unlikely to change.
Glad to hear it! -- --Guido van Rossum (python.org/~guido)
participants (10)
- Amaury Forgeot d'Arc
- Ben Darnell
- Eli Bendersky
- Greg Ewing
- Guido van Rossum
- Nick Coghlan
- Paul Moore
- Richard Oudkerk
- Terry Reedy
- Yuval Greenfield