Guido van Rossum wrote: [SNIP]
On Mon, Oct 29, 2012 at 4:12 PM, Steve Dower wrote:
The whole blocking coroutine model works really well with callback-based unblocks (whether they call Future.set_result or unblock_task), so I don't think there's anything to worry about here. Compatibility-wise, it should be easy to make programs portable, and since we can have completely separate implementations for Linux/Mac/Windows it will be possible to get good, if not excellent, performance out of each.
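For concreteness, a minimal sketch of a callback-based unblock using Future.set_result (this shows only the Future flavour; unblock_task is Steve's own API and isn't reproduced here):

```python
# A callback-based unblock: some I/O callback calls Future.set_result(),
# which fires the done callbacks and wakes whatever was waiting.
from concurrent.futures import Future

fut = Future()
fut.add_done_callback(lambda f: print("unblocked with", f.result()))

# Later, the I/O callback fires and unblocks the waiter:
fut.set_result(42)
print(fut.result())
```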
Right. Did you see my call_in_thread() yet? http://code.google.com/p/tulip/source/browse/scheduling.py#210 http://code.google.com/p/tulip/source/browse/polling.py#481
Yes, and it really stood out as one of the similarities between our work. I don't have an equivalent function, since writing "yield thread_pool.submit(...)" is sufficient (because submit() already returns a Future), but I haven't actually made the thread pool a property of the current scheduler. I think there's value in doing that.
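To illustrate why "yield thread_pool.submit(...)" is enough, here is a toy trampoline (the scheduler and task names are illustrative, not from either codebase) that steps a generator and waits on whatever Future it yields:

```python
# Minimal sketch: a coroutine yields a concurrent.futures.Future and the
# trampoline resumes it with the Future's result. A real scheduler would
# resume via add_done_callback instead of blocking on fut.result().
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

def run(coro):
    """Step a generator-based coroutine to completion."""
    result = None
    while True:
        try:
            fut = coro.send(result)   # first send(None) starts the generator
        except StopIteration:
            return result
        result = fut.result()         # blocking here only for brevity

def task():
    # submit() already returns a Future, so no wrapper is needed.
    value = yield pool.submit(lambda: 6 * 7)
    print("got", value)

run(task())
```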
What will make a difference is ready vs. complete notifications: most async Windows APIs signal when they are complete (for example, when the data has been read from the file), unlike many (most? all?) Linux APIs, which signal when they are ready. It is possible to paper over this difference by making all APIs notify on completion; if we don't, user code may be less portable, which I'd hate to see. This doesn't directly relate to IOCP, but it is an important consideration for good cross-platform libraries.
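One way to wrap the difference up, sketched with hypothetical names: do the read on the caller's behalf once the readiness signal arrives, so the caller always sees a completion-style Future carrying the data itself:

```python
# Turning a Unix-style "ready" notification into a Windows-style
# "complete" notification: the helper waits for readability, performs
# the recv() itself, and delivers the data via a Future.
import select
import socket
from concurrent.futures import Future

def read_completion(sock, nbytes):
    """Return a Future that completes with the data, not a readiness flag."""
    fut = Future()
    # A real event loop would register sock with its poller; we block
    # on select() here only for brevity.
    select.select([sock], [], [])        # readiness: sock is readable
    fut.set_result(sock.recv(nbytes))    # completion: data delivered
    return fut

a, b = socket.socketpair()
b.sendall(b"hello")
print(read_completion(a, 5).result())
```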
I wonder if this could be done by varying the transports by platform? Not too many people are going to write new transports -- there just aren't that many options. And those that do might be doing something platform-specific anyway. It shouldn't be that hard to come up with a transport abstraction that lets protocol implementations work regardless of whether it's a UNIX style transport or a Windows style transport. UNIX systems with IOCP support could use those too.
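A rough sketch of that abstraction (all class names here are illustrative): the protocol only ever sees data_received(), so the same protocol works whether the transport is driven UNIX-style (poll, then read) or Windows-style (the OS hands over finished reads):

```python
# The protocol is transport-agnostic: both transport styles deliver
# bytes through the same data_received() method.
class EchoProtocol:
    def __init__(self):
        self.received = []
    def data_received(self, data):
        self.received.append(data)

class UnixStyleTransport:
    """Simulates readiness: 'polls' a source, reads, then notifies."""
    def __init__(self, protocol, source):
        self.protocol, self.source = protocol, source
    def pump(self):
        for chunk in self.source:          # pretend select() said "ready"
            self.protocol.data_received(chunk)

class IOCPStyleTransport:
    """Simulates completion: the (simulated) OS hands us finished reads."""
    def __init__(self, protocol):
        self.protocol = protocol
    def read_completed(self, data):        # called when the read is done
        self.protocol.data_received(data)

p1, p2 = EchoProtocol(), EchoProtocol()
UnixStyleTransport(p1, [b"a", b"b"]).pump()
IOCPStyleTransport(p2).read_completed(b"ab")
```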
I feel like a bit of a tease now, since I still haven't posted my code (it's coming, but I also have day work to do [also Python related]), but I've left this side of things out of my definition completely in favour of allowing schedulers to "unblock" known functions. For example, (library) code that needs a socket to be ready can ask the current scheduler whether it can do "select([sock], [], [])"; if the scheduler can, it gives the library code a Future. How the scheduler implements the asynchronous select is entirely up to it, and if it can't do it at all, the caller can fall back to its own approach (which probably means using a thread pool as a last resort).

What I would expect this to produce is a set of platform-specific default schedulers that do common operations well, plus other (third-party) schedulers that do particular things really well. So if you want high-performance single-threaded sockets, you replace the default scheduler with another one - but if Windows doesn't support the optimized scheduler, you can use the default scheduler without your code breaking.

Writing this now, it seems even clearer that we've approached the problem differently, which should mean there's room to share parts of the designs and come up with a really solid result. I'm looking forward to it.

Cheers,
Steve
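[A sketch of the "ask the scheduler" idea described above, with hypothetical names throughout - library code queries the scheduler for an async select and falls back to a thread pool when the scheduler can't provide one:]

```python
# Capability query with graceful fallback: the scheduler may or may not
# know how to turn select() into a Future.
import select
import socket
from concurrent.futures import Future, ThreadPoolExecutor

class DefaultScheduler:
    """Knows how to turn select() into a Future."""
    def async_select(self, rlist):
        fut = Future()
        r, _, _ = select.select(rlist, [], [])   # blocking only for brevity
        fut.set_result(r)
        return fut

class MinimalScheduler:
    """Doesn't support async select at all."""
    async_select = None

_pool = ThreadPoolExecutor(max_workers=1)

def wait_readable(sock, scheduler):
    if getattr(scheduler, "async_select", None):
        return scheduler.async_select([sock])
    # Last resort: run select() in a thread pool.
    return _pool.submit(lambda: select.select([sock], [], [])[0])

a, b = socket.socketpair()
b.sendall(b"x")
print(wait_readable(a, DefaultScheduler()).result())
```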