On Tue, Oct 23, 2012 at 12:34 AM, Jim Jewett <jimjjewett@gmail.com> wrote:
On 10/21/12, Guido van Rossum <guido@python.org> wrote:
On Sun, Oct 21, 2012 at 1:07 PM, Steve Dower <Steve.Dower@microsoft.com> wrote:
It has synchronisation which is _aware_ of threads, but it never creates, requires or uses them. It simply ensures thread-safe reentrancy, which will be required for any general solution unless it is completely banned from interacting across CPU threads.
I don't see it that way. Any time you acquire a lock, you may be blocked for a long time. In a typical event loop that's an absolute no-no. Typically, to wait for another thread, you give the other thread a callback that adds a new event for *this* thread.
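Roughly, the pattern looks like the toy sketch below (all names invented, and a blocking queue stands in for the real loop, which would be polling I/O as well): the worker thread never touches the loop's state directly, it just posts a callback onto a thread-safe queue that the loop drains in its own thread.

    import queue, threading

    ready = queue.Queue()            # the loop's thread-safe "call me later" queue

    def call_soon(callback, *args):  # safe to call from any thread
        ready.put((callback, args))

    def worker():
        result = 6 * 7               # pretend this is slow, blocking work
        # don't call back directly; post an event for the loop's thread
        call_soon(print, "worker done:", result)

    threading.Thread(target=worker).start()

    # the "event loop" runs entirely in this thread and never holds a lock
    # while running user code; a real loop would also be waiting on I/O here
    callback, args = ready.get()
    callback(*args)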
That (with or without rescheduling this thread to actually process the event) is a perfectly reasonable solution, but I'm not sure how obvious it is. People willing to deal with the conventions and contortions of Twisted are likely to just use Twisted.
I think part of my point is that we can package all this up in a way that is a lot less scary than Twisted's reputation. And remember, there are many other frameworks that use similar machinery. There's Tornado, Monocle (which runs on top of Tornado *or* Twisted), and of course the stdlib's asyncore, which is antiquated but still much used -- AFAIK Zope is still built around it.
A general API should have a straightforward way to wait for a result; even explicitly calling wait() may be too much to ask if you want to keep assuming that other events will cooperate.
Here I have some real world relevant experience: NDB, App Engine's new Datastore API (which I wrote). It is async under the hood (yield + its own flavor of Futures), and users who want the most performance from their app are encouraged to use the async APIs directly -- but users who don't care can ignore their existence completely. There are thousands of users, and I've seen people explain the async stuff to each other on StackOverflow, so I think it is quite accessible.
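For readers who haven't seen NDB, the flavor is roughly like the toy sketch below -- not NDB's actual API, and the driver here cheats by blocking on each Future where a real scheduler would attach a callback instead. The point is that user code reads sequentially while yielding Futures.

    import concurrent.futures

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)  # stand-in for the RPC layer

    def fetch(key):
        # returns a Future right away instead of blocking
        return pool.submit(str.upper, key)

    def handler():
        # a "tasklet": looks sequential, yields Futures
        a = yield fetch("spam")
        b = yield fetch("eggs")
        return a + " " + b

    def run(tasklet):
        # toy driver; a real scheduler would hook a callback on each
        # Future rather than block on .result()
        gen = tasklet()
        try:
            fut = next(gen)
            while True:
                fut = gen.send(fut.result())
        except StopIteration as stop:
            return stop.value

    print(run(handler))   # "SPAM EGGS"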
Agreed. I don't see much use for the cancellation stuff and all the extra complexity that adds to the interface.
wait_for_any may well be launching different strategies to solve the same problem, and intending to ignore all but the fastest. It makes sense to go ahead and cancel the slower strategies. (That said, I agree that the API shouldn't guarantee that other tasks are actually cancelled, let alone that they are cancelled before side effects occur.)
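A sketch of that racing pattern, purely for illustration (the strategy functions and delays are invented, and concurrent.futures stands in for whatever the event loop would provide): wait for the first result, then issue best-effort cancellations for the rest.

    import concurrent.futures, time

    def strategy(name, delay):
        time.sleep(delay)        # stand-in for real work
        return name

    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(strategy, "fast", 0.1),
                   pool.submit(strategy, "slow", 0.5)]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for fut in pending:
            fut.cancel()         # best effort: a task that already started keeps running
        print(next(iter(done)).result())   # "fast"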
Agreed. And it's not hard to implement a custom cancellation mechanism either.

--
--Guido van Rossum (python.org/~guido)
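For illustration only, one way such a custom mechanism could look is a cooperative flag that the task checks at safe points; nothing here comes from an existing framework, and side effects that happen before a check are not undone.

    class CancelToken:
        def __init__(self):
            self.cancelled = False
        def cancel(self):
            self.cancelled = True

    def task(token, items):
        results = []
        for item in items:
            if token.cancelled:   # work done before this check still happened
                break
            results.append(item * 2)
        return results

    token = CancelToken()
    token.cancel()
    print(task(token, range(5)))  # []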