Expressiveness of coroutines versus Deferred callbacks (or possibly promises, futures)
Still working my way through zillions of messages on this thread, trying to find things worth responding to, I found this, from Guido:
[Generators are] more flexible [than Deferreds], since it is easier to catch different exceptions at different points (...) In the past, when I pointed this out to Twisted aficionados, the responses usually were a mix of "sure, if you like that style, we got it covered, Twisted has inlineCallbacks," and "but that only works for the simple cases, for the real stuff you still need Deferreds." But that really sounds to me like Twisted people just liking what they've got and not wanting to change.
If you were actually paying attention, we did explain what "the real stuff" is, and why you can't do it with inlineCallbacks. ;-) (Or perhaps I should say, why we prefer to do it with Deferreds explicitly.)

Managing parallelism is easy with the when-this-then-that idiom of Deferreds, but challenging with the sequential this-then-this-then-this idiom of generators. The examples in the quoted message were all sequential workflows, which are roughly equivalent in both styles. As soon as a for loop gets involved, though, yield-based coroutines have a harder time expressing the kind of parallelism that a lot of applications *should* use, so it's easy to become accidentally sequential (and therefore less responsive) even if you don't need to be. For example, using some hypothetical generator coroutine library, the idiomatic expression of a loop across several request/responses would be something like this:

@yield_coroutine
def something_async():
    values = yield step1()
    results = set()
    for value in values:
        results.add(step3((yield step2(value))))
    return_(results)

Since it's a set, the order of 'results' doesn't actually matter; but this code has to sit and wait for each result to come back in order; it can't perform any processing on the ones that are already ready while it's waiting. You would express this with Deferreds:

def something_deferred():
    return step1().addCallback(
        lambda values: gatherResults(
            [step2(value).addCallback(step3) for value in values]
        )).addCallback(set)

In addition to being a roughly equivalent amount of code (fewer lines, but denser), that will run step2() and step3() on demand, as results are ready from the set of Deferreds from step1. That means that your program will automatically spread out its computation, which makes better use of time as results may be arriving in any order.
The problem is that it is difficult to express laziness with generator coroutines: you've already spent the generator-ness of the function on responding to events, so there's no longer any syntactic support for laziness. (There's another problem where sometimes you can determine that work needs to be done as it arrives; that's an even trickier abstraction than Deferreds, though, and I'm still working on it. I think I've mentioned <http://tm.tl/1956> already in one of my previous posts.)

Also, this is not at all a hypothetical or academic example. This pattern comes up all the time in e.g. web-spidering and chat applications. To be fair, you could express this in a generator-coroutine library like this:

@yield_coroutine
def something_async():
    values = yield step1()
    thunks = []
    @yield_coroutine
    def do_steps(value):
        return_(step3((yield step2(value))))
    for value in values:
        thunks.append(do_steps(value))
    return_(set((yield multi_wait(thunks))))

but that seems bizarre and not very idiomatic; to me, it looks like the confusing aspects of both styles. David Reid also wrote up some examples of how Deferreds can express sequential workflows more nicely as well (also indirectly as a response to Guido!) on his blog, here: <http://dreid.org/2012/03/30/deferreds-are-a-dataflow-abstraction>.
Which I understand -- I don't want to change either. But I also observe that a lot of people find bare Twisted-with-Deferreds too hard to grok, so they use Tornado instead, or they build a layer on top of either (like Monocle),
inlineCallbacks (and the even-earlier deferredGenerator) predates Monocle. That's not to say Monocle has no value; it is a portability layer between Twisted and Tornado that does the same thing inlineCallbacks does but allows you to do it even if you're not using Deferreds, which will surely be useful to some people.

I don't want to belabor this point, but it bugs me a little bit that we get so much feedback from the broader Python community along the lines of "Why doesn't Twisted do X? I'd use it if it did X, but it's all weird and I don't understand Y that it forces me to do instead, that's why I use Z" when, in fact:

1. Twisted does do X
2. It's done X for years
3. It actually invented X in the first place
4. There are legitimate reasons why we (Twisted core developers) suggest and prefer Y for many cases, but you don't need to do it if you don't want to follow our advice
5. Thing Z that is being cited as doing X actually explicitly mentions Twisted as an inspiration for its implementation of X

It's fair, of course, to complain that we haven't explained this very well, and I'll cop to that unless I can immediately respond with a pre-existing URL that explains things :).

One other comment that's probably worth responding to:
I suppose on systems that support both networking and GUI events, in my design these would use different I/O objects (created using different platform-specific factories) and the shared reactor API would sort things out based on the type of I/O object passed in to it.
In my opinion, it is a mistake to try to harmonize or unify all GUI event systems, unless you are also harmonizing the GUI itself (i.e. writing a totally portable GUI toolkit that does everything). And I think we can all agree that writing a totally portable GUI toolkit is an impossibly huge task that is out of scope for this (or, really, any other) discussion. GUI systems can already dispatch their events to user code just fine; interposing a Python reactor API between the GUI and the event registration adds additional unnecessary work, and may not even be possible in some cases. See, for example, the way that Xcode (formerly Interface Builder) and the Glade interface designer work: the name of the event handler is registered inside a somewhat opaque blob, which is data and not code, and then hooked up automatically at runtime based on reflection. The code itself never calls any event-registration APIs.

Also, modeling all GUI interaction as a request/response conversation is limiting and leads to bad UI conventions. Consider: the UI element that most readily corresponds to a request/response is a modal dialog box. Does anyone out there really like applications that consist mainly of popping up dialog after dialog to prompt you for the answers to questions?

-g
On Mon, Oct 15, 2012 at 11:08 AM, Glyph <glyph@twistedmatrix.com> wrote:
Still working my way through zillions of messages on this thread, trying to find things worth responding to, I found this, from Guido:
[Generators are] more flexible [than Deferreds], since it is easier to catch different exceptions at different points (...) In the past, when I pointed this out to Twisted aficionados, the responses usually were a mix of "sure, if you like that style, we got it covered, Twisted has inlineCallbacks," and "but that only works for the simple cases, for the real stuff you still need Deferreds." But that really sounds to me like Twisted people just liking what they've got and not wanting to change.
If you were actually paying attention, we did explain what "the real stuff" is, and why you can't do it with inlineCallbacks. ;-)
And yet the rest of your email could be paraphrased by those two quoted phrases. :-) But seriously, thanks for repeating the explanation for my benefit.
(Or perhaps I should say, why we prefer to do it with Deferreds explicitly.)
Managing parallelism is easy with the when-this-then-that idiom of Deferreds, but challenging with the sequential this-then-this-then-this idiom of generators. The examples in the quoted message were all sequential workflows, which are roughly equivalent in both styles. As soon as a for loop gets involved, though, yield-based coroutines have a harder time expressing the kind of parallelism that a lot of applications *should* use, so it's easy to become accidentally sequential (and therefore less responsive) even if you don't need to be. For example, using some hypothetical generator coroutine library, the idiomatic expression of a loop across several request/responses would be something like this:
@yield_coroutine
def something_async():
    values = yield step1()
    results = set()
    for value in values:
        results.add(step3((yield step2(value))))
    return_(results)
Since it's in a set, the order of 'results' doesn't actually matter; but this code needs to sit and wait for each result to come back in order; it can't perform any processing on the ones that are already ready while it's waiting. You express this with Deferreds:
def something_deferred():
    return step1().addCallback(
        lambda values: gatherResults(
            [step2(value).addCallback(step3) for value in values]
        )).addCallback(set)
In addition to being a roughly equivalent amount of code (fewer lines, but denser), that will run step2() and step3() on demand, as results are ready from the set of Deferreds from step1. That means that your program will automatically spread out its computation, which makes better use of time as results may be arriving in any order.
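For illustration, the contrast between the two styles above can be sketched with modern asyncio (which postdates this thread); step1/step2/step3 here are stand-ins, not part of any real library:

```python
# Sketch: the sequential coroutine-loop style vs. the
# gatherResults-like concurrent style, using asyncio as a
# modern analog. step1/step2/step3 are invented stand-ins.
import asyncio

async def step1():
    return [1, 2, 3]

async def step2(value):
    # Simulate request/response latency that differs per value.
    await asyncio.sleep(0.01 * value)
    return value * 10

def step3(value):
    return value + 1

async def sequential():
    # Coroutine-loop style: each step2() must finish before
    # the next one even starts.
    values = await step1()
    results = set()
    for value in values:
        results.add(step3(await step2(value)))
    return results

async def concurrent():
    # gatherResults-like style: all step2() calls run at once.
    values = await step1()
    results = await asyncio.gather(*(step2(v) for v in values))
    return set(step3(r) for r in results)

print(asyncio.run(sequential()))  # {11, 21, 31}
print(asyncio.run(concurrent()))  # {11, 21, 31}
```

Both produce the same set, but the concurrent version's total latency is bounded by the slowest step2() rather than the sum of all of them.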
The problem is that it is difficult to express laziness with generator coroutines: you've already spent the generator-ness of the function on responding to events, so there's no longer any syntactic support for laziness.
I see your example as a perfect motivation for adding some kind of map() primitive. In NDB there is one for the specific case of mapping over query results (common in NDB because it's primarily a database client). That map() primitive takes a callback that is either a plain function or a tasklet (i.e. something returning a Future). map() itself is also async (returning a Future) and all the tasklets results are waited for and collected only when you wait for the map(). It also handles the input arriving in batches (as they do for App Engine Datastore queries). IOW it exploits all available parallelism. While the public API is tailored for queries, the underlying mechanism can support a few different ways of collecting the results, supporting filter() and even reduce() (!) in addition to map(); and most of the code is reusable for other (non-query) contexts. I feel it would be possible to extend it to support "stop after the first N results" and "stop when this predicate says so" too. In general, whenever you want parallelism in Python, you have to introduce a new function, unless you happen to have a suitable function lying around already; so I don't feel I am contradicting myself by proposing a mechanism using callbacks here. It's the callbacks for sequencing that I dislike.
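A rough sketch of such a map() primitive (not NDB's actual API; the names here are invented, and asyncio is used as a modern stand-in) that applies a callback to each result as soon as it arrives:

```python
# Hypothetical async map() that consumes results in completion
# order, so each callback runs as soon as its input is ready,
# exploiting all available parallelism.
import asyncio

async def map_async(callback, awaitables):
    results = []
    # as_completed yields futures in the order they finish,
    # not the order they were submitted.
    for fut in asyncio.as_completed(list(awaitables)):
        results.append(callback(await fut))
    return results

async def fetch(n):
    await asyncio.sleep(0.01 * n)  # stand-in for a query batch
    return n * 2

async def main():
    return await map_async(lambda r: r + 1, [fetch(3), fetch(1), fetch(2)])

print(asyncio.run(main()))  # completion order: [3, 5, 7]
```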
(There's another problem where sometimes you can determine that work needs to be done as it arrives; that's an even trickier abstraction than Deferreds, though, and I'm still working on it. I think I've mentioned <http://tm.tl/1956> already in one of my previous posts.)
NDB's map() does this.
Also, this is not at all a hypothetical or academic example. This pattern comes up all the time in e.g. web-spidering and chat applications.
Of course. In App Engine, fetching multiple URLs in parallel is the hello-world of async operations.
To be fair, you *could* express this in a generator-coroutine library like this:
@yield_coroutine
def something_async():
    values = yield step1()
    thunks = []
    @yield_coroutine
    def do_steps(value):
        return_(step3((yield step2(value))))
    for value in values:
        thunks.append(do_steps(value))
    return_(set((yield multi_wait(thunks))))
but that seems bizarre and not very idiomatic; to me, it looks like the confusing aspects of both styles.
Yeah, you need a map() operation:

@yield_coroutine
def something_async():
    values = yield step1()
    @yield_coroutine
    def do_steps(value):
        return step3((yield step2(value)))
    return set(yield map_async(do_steps, values))

Or maybe map_async()'s Future's result should be a set?
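One way the hypothetical map_async() could behave if its result were a set, sketched here with asyncio.gather as a modern stand-in (none of these names belong to a real library of the time):

```python
# Hedged sketch of a set-returning map_async() built on
# asyncio.gather; step2/step3/do_steps are invented stand-ins.
import asyncio

async def map_async(corofunc, values):
    # gather preserves input order, but wrapping in a set
    # discards it, matching the "result should be a set" idea.
    return set(await asyncio.gather(*(corofunc(v) for v in values)))

async def step2(value):
    await asyncio.sleep(0)        # pretend network round-trip
    return value * 10

def step3(value):
    return value + 1

async def something_async():
    values = [1, 2, 3]            # stand-in for `yield step1()`
    async def do_steps(value):
        return step3(await step2(value))
    return await map_async(do_steps, values)

print(asyncio.run(something_async()))  # {11, 21, 31}
```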
David Reid also wrote up some examples of how Deferreds can express sequential workflows more nicely as well (also indirectly as a response to Guido!) on his blog, here: <http://dreid.org/2012/03/30/deferreds-are-a-dataflow-abstraction>.
Which I understand -- I don't want to change either. But I also observe that a lot of people find bare Twisted-with-Deferreds too hard to grok, so they use Tornado instead, or they build a layer on top of either (like Monocle),
inlineCallbacks (and the even-earlier deferredGenerator) predates Monocle. That's not to say Monocle has no value; it is a portability layer between Twisted and Tornado that does the same thing inlineCallbacks does but allows you to do it even if you're not using Deferreds, which will surely be useful to some people.
I don't want to belabor this point, but it bugs me a little bit that we get so much feedback from the broader Python community along the lines of "Why doesn't Twisted do X?
I don't think I quite said that. But I suspect it happens because Twisted is hard to get into. I suspect anything using higher-order functions this much has that problem; I feel this way about Haskell's Monads. I wouldn't be surprised if many Twisted lovers are also closet (or not) Haskell lovers.
I'd use it if it did X, but it's all weird and I don't understand Y that it forces me to do instead, that's why I use Z" when, in fact:
1. Twisted does do X 2. It's done X for years 3. It actually invented X in the first place 4. There are legitimate reasons why we (Twisted core developers) suggest and prefer Y for many cases, but you don't need to do it if you don't want to follow our advice 5. Thing Z that is being cited as doing X actually explicitly mentions Twisted as an inspiration for its implementation of X
It's fair, of course, to complain that we haven't explained this very well, and I'll cop to that unless I can immediately respond with a pre-existing URL that explains things :).
One other comment that's probably worth responding to:
I suppose on systems that support both networking and GUI events, in my design these would use different I/O objects (created using different platform-specific factories) and the shared reactor API would sort things out based on the type of I/O object passed in to it.
In my opinion, it is a mistake to try to harmonize or unify all GUI event systems, unless you are also harmonizing the GUI itself (i.e. writing a totally portable GUI toolkit that does everything). And I think we can all agree that writing a totally portable GUI toolkit is an impossibly huge task that is out of scope for this (or, really, any other) discussion. GUI systems can already dispatch their events to user code just fine; interposing a Python reactor API between the GUI and the event registration adds additional unnecessary work, and may not even be possible in some cases. See, for example, the way that Xcode (formerly Interface Builder) and the Glade interface designer work: the name of the event handler is registered inside a somewhat opaque blob, which is data and not code, and then hooked up automatically at runtime based on reflection. The code itself never calls any event-registration APIs.
Also, modeling all GUI interaction as a request/response conversation is limiting and leads to bad UI conventions. Consider: the UI element that most readily corresponds to a request/response is a modal dialog box. Does anyone out there really like applications that consist mainly of popping up dialog after dialog to prompt you for the answers to questions?
I don't feel very strongly about integrating GUI systems. IIRC Twisted has some way to integrate with certain GUI event loops. I don't think we should desire any more than that (but neither less).

--
--Guido van Rossum (python.org/~guido)
On Oct 15, 2012, at 6:51 PM, Guido van Rossum <guido@python.org> wrote:
(...) But seriously, thanks for repeating the explanation for my benefit.
Glad it was useful. To be fair, I think this is the first time I've actually written the whole thing down. And I didn't even get the whole thing down; I missed the following important bit:
I see your example as a perfect motivation for adding some kind of map() primitive. (...)
You're correct, of course; technically, a map() primitive resolves all the same issues. It's possible to do everything with generator coroutines that it's possible to do with callbacks explicitly; I shouldn't have made the case for sequencing callbacks on the basis that the behavior can't be replicated. And, modulo any of my other suggestions, a "map" primitive is a good idea; Twisted implements such a primitive with 'gatherResults' (although, of course, it works on any Deferred, not just those returned by inlineCallbacks).

The real problem with generator coroutines is that if you make them the primitive, you have an abstraction inversion if you want to have callbacks (which, IMHO, are simply more straightforward in many cases). By using a generator scheduler, you're still using callbacks to implement the sequencing. At some point, you have to have some code calling x.next(), x.send(...), x.close(), and raising StopIteration(), but they are obscured by syntactic sugar. You still need a low-level callback-scheduling API to integrate with the heart of the event loop.

One area where this abstraction inversion bites you is performance. Now, my experience might be dated here; I haven't measured in a few years, but as nice as generators can be for structuring complex event flows, that abstraction comes with a non-trivial performance cost. Exceptions in Python are much better than they used to be, but in CPython they're still not free. Every return value being replaced with a callback trampoline is bad, but replacing it instead with a generator being advanced, an exception being raised, and a callback trampoline is worse. Of course, maybe inlineCallbacks is just badly implemented, but reviewing the implementation now it looks reasonably minimal.

I don't want to raise the specter of premature optimization here; I'm not claiming that the implementation of the scheduler needs to be squeezed for every ounce of performance before anyone implements anything.
But, by building in the requirement for these unnecessary gyrations to support syntax sugar for every request/response event-driven operation, one precludes the possibility of low-level optimizations for performance-sensitive event coordination later.

Now, if a PyPy developer wants to chime in and tell me I'm full of crap, and that either now or in the future StopIteration exceptions will be free, and will actually send your CPU back in time, as well as giving a pet kitten as a present to a unicorn every time you 'raise', I'll probably believe it and happily retire this argument forever. But I doubt it. I'll also grant that it's possible that I'm just the equivalent of a crotchety old assembler programmer here, claiming that we can't afford these fancy automatic register allocators and indirect function calls and run-time linking because they'll never be fast enough for real programs. But I will note that, rarely as you need it, assembler does still exist at some layer of the C compiler stack, and you can write it yourself if you really want to; nothing will get in your way.

So that's mainly the point I'm trying to make about a Deferred-like abstraction. Aside from matters of taste and performance, you need to implement your generator coroutines in terms of something, and it might as well be something clean and documented that can be used by people who feel they need it. This will also help if some future version of Python modifies something about the way that generators work, similar to the way .send() opened the door for non-ugly coroutines in the first place. Perhaps some optimized version of 'return' with a value? If the coroutine scheduler is written firmly in terms of some other eventual-result API (Deferreds, Futures, Promises), then adding support to that scheduler for @yield_coroutine_v2 should be easy; as would adding support for other things I don't like, like tasklets and greenlets ;).
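The abstraction-inversion point can be made concrete with a toy trampoline (invented names, nothing like Twisted's actual implementation): even under the syntactic sugar, some code must call .send() on the generator, catch StopIteration, and sequence the steps with callbacks.

```python
# Minimal sketch: a generator-coroutine scheduler implemented
# with callbacks. The callbacks don't go away; they're just
# hidden behind the yield syntax.

class Future:
    """A bare-bones eventual result with a single callback."""
    def __init__(self):
        self._callback = None
        self._result = None
        self._fired = False

    def add_callback(self, cb):
        self._callback = cb
        if self._fired:
            cb(self._result)

    def fire(self, result):
        self._fired = True
        self._result = result
        if self._callback is not None:
            self._callback(result)

def run_coroutine(gen):
    """Drive `gen`, which yields Futures; return a Future of its result."""
    done = Future()

    def step(value):
        try:
            yielded = gen.send(value)      # the hidden x.send(...)
        except StopIteration as stop:      # the hidden exception
            done.fire(stop.value)
            return
        yielded.add_callback(step)         # callback-based sequencing
    step(None)
    return done

# Usage: a coroutine awaiting two already-fired Futures.
def make_ready(value):
    f = Future()
    f.fire(value)
    return f

def adder():
    a = yield make_ready(1)
    b = yield make_ready(2)
    return a + b                           # becomes StopIteration.value

result = []
run_coroutine(adder()).add_callback(result.append)
print(result[0])  # 3
```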
It also handles the input arriving in batches (as they do for App Engine Datastore queries). (...)
... I think I've mentioned <http://tm.tl/1956> already in one of my previous posts. ... NDB's map() does this.
I'm curious as to how this works. If you are getting a Future callback, don't you only get that once? How do you re-sequence all of your generators to run the same step again when more data is available?
In general, whenever you want parallelism in Python, you have to introduce a new function, unless you happen to have a suitable function lying around already;
I'm glad we agree there, at least :).
so I don't feel I am contradicting myself by proposing a mechanism using callbacks here. It's the callbacks for sequencing that I dislike.
Earlier I was talking about implementing event sequencing as callbacks, which you kind of have to do either way. Separately, there's the issue of presenting event sequencing as control flow. While this is definitely useful for high-level applications - at my day job, about half the code I write is decorated with @inlineCallbacks - these high-level applications depend on a huge amount of low-level code (protocol parsers, database bindings, thread pools) being written and exhaustively tested, whose edge cases are much easier to flesh out with explicit callbacks. When you need to test a portion of the control flow, there's no need to fool a generator into executing down to a specific branch point; you just pull out the callback to a top-level name rather than a closure and call it directly.

Also, correct usage of generator coroutines depends on a previous understanding of event-driven programming. This is why Twisted core maintainers are not particularly sanguine about inlineCallbacks and generally consider it a power tool for advanced users rather than an introductory facility to make things easier. In our collective experience helping people understand both Deferreds and inlineCallbacks, there are different paths to enlightenment.

When learning Deferreds, someone with no previous event-driven experience will initially be disgusted: why does their code have to look like such a mess? Then they'll come to terms with the problem being solved and accept it, but move on to being perplexed: what the heck are these Deferreds doing, anyway? Then they start to understand what's happening, move on to depending on the reactor too much, and are somewhat baffled by callbacks never being called. Finally they realize they should start testing their code by firing Deferreds synchronously and inspecting results, and everything starts to come together.
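The testing pattern described above can be sketched with a tiny Deferred-like stub (not Twisted's real class; names invented here): because firing is synchronous, a test can fire the result and inspect the outcome immediately, and a callback extracted to a top-level name can be called directly without driving any event loop.

```python
# Toy stand-in for a Deferred, just enough to show synchronous
# firing in tests. Not Twisted's implementation.

class FakeDeferred:
    def __init__(self):
        self.callbacks = []
        self.result = None
        self.fired = False

    def addCallback(self, cb):
        if self.fired:
            self.result = cb(self.result)
        else:
            self.callbacks.append(cb)
        return self

    def callback(self, result):
        self.fired = True
        self.result = result
        for cb in self.callbacks:
            self.result = cb(self.result)

# The callback under test is a top-level name, not a closure:
def extract_title(response):
    return response["title"].strip()

# Direct call, no event loop needed:
assert extract_title({"title": "  hello "}) == "hello"

# Synchronous firing in a test, result inspected immediately:
d = FakeDeferred()
d.addCallback(extract_title)
d.callback({"title": " spam "})
print(d.result)  # 'spam'
```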
Keep in mind, as you read the following, that I probably couldn't do my job as effectively without inlineCallbacks, and I am probably its biggest fan on the Twisted team, too :).

When learning with inlineCallbacks, someone with no previous event-driven experience will usually be excited. The 'yield's are weird, but almost exciting - they make the code feel more advanced somehow, and they sort of understand the concurrency implications, but not really. It's also convenient! They just sprinkle in a 'yield' any time they need to make a call that looks like maybe it'll block sometimes. Everything works okay for a while, and then (inevitably, it seems) they happen across some ordering bug that they just absolutely cannot figure out, which causes state corruption (because they blithely stuck a 'yield' between two things that really needed to be in an effective critical section) or hangs (generators hanging around waiting on un-fired Deferreds, so you don't even get the traceback out of GC closing them because something's keeping a reference to them; harder to debug even than "normal" unfired Deferreds, because they're not familiar with how to inspect or trace the flow of event execution, since the code looked "normal").

Now, this is easier to back out of than a massive multithreaded mess, because the code does at least have a finite number of visible task-switch points, and it's usually possible to track it down with some help. But the experience is not pleasant, because by this point there are usually 10-deep call stacks of generator-calling-a-generator-calling-a-generator and, especially in the problematic cases, it's not clear what got started from where. inlineCallbacks is a great boon to promoting Twisted usage, because some people never make it out of the "everything works okay for a while" phase, and it's much easier to get started.
We certainly support it as best we can - optimize it, add debugging information to it - because we want people to have the best experience possible. So it's not like it's unmaintained or anything. But, without Deferreds to fall back down to in order to break sequencing into super-explicit individual steps, without any potentially misleading syntactic sugar, I don't know how we'd help these folks.

I have a few thoughts on how our experiences have differed here, since I'm assuming you don't hear these sorts of complaints about NDB. One is that Twisted users are typically dealing with a truly bewildering diversity of events, whereas NDB is, as you said, mostly a database client. It's not entirely unusual for a Twisted application to be processing events from a serial port, a USB device, some PTYs, a couple of server connections, some timed events, some threads (usually database connections), and some HTTP client connections. Another is that we only hear from users with problems. Maybe there are millions of successful users of inlineCallbacks who have architected everything from tiny scripts to massive distributed systems without ever needing to say so much as a how-do-you-do to the Twisted mailing list or IRC channel. (Somehow I doubt this is completely accurate, but maybe it accounts for some of our perspective.) Nevertheless, I feel that the strategy of backing a generator out into lower-level, discrete callback-sequenced operations is a very important tool in the debugging toolbox.
Or maybe map_async()'s Future's result should be a set?
Well really it ought to be a dataflow of some kind so you can enumerate it as it's going :). But I think if the results arrive in some order you ought to be able to see that order in application code, even if you usually don't care.
I don't want to belabor this point, but it bugs me a little bit that we get so much feedback from the broader Python community along the lines of "Why doesn't Twisted do X?
I don't think I quite said that.
Sorry, I didn't mean to say that you did. I raised the point because people who do say things like that tend to cite your opinions that e.g. Monocle is something new and different as reasons why they thought that Twisted didn't do what it did. (I certainly sympathize with the pressure that comes along with everyone scrutinizing every word one says and trying to discover hidden meaning; I'm sure that in a message as long as this one, someone will draw at least five wrong conclusions from me, too.)
But I suspect it happens because Twisted is hard to get into.
Part of it is a marketing issue. Like, if we just converted all of our examples to inlineCallbacks and let people trip over the problems we've seen later on, I'm sure we would get more adoption, and possibly not even a backlash later; people with bugs in their programs tend to think that there's a bug in their programs. They only blame the tools when the programs are hard to write in the first place.

Part of it is a background issue. GUI programmers and people who have worked with multiplayer games instantly recognize what Deferreds are for and are usually up and running within minutes. People whose experience is primarily with databases and web servers - a pretty big audience, in this day and age - are usually mystified.

But there are intractable parts of it, too. The Twisted culture is all about extreme reliability and getting a good reputation for systems built using it, and I guess we've made some compromises about expanding our audience in service of that goal.
I suspect anything using higher-order functions this much has that problem; I feel this way about Haskell's Monads.
I've heard several people who do know Haskell say things like "Deferreds are just a trivial linearization of the I/O eigenfunctor over the monadic category of callbacks" and it does worry me. I still think they're relatively straightforward - I invented them in one afternoon when I was about 20 and they have changed relatively little since then - but if they're actually a homomorphism of the lambda calculus over the event manifold as it approaches the monad limit (or whatever: does anyone else feel like Haskell people have great ideas, but they have sworn a solemn vow to only describe them in a language that can only be translated by using undiscovered stone tablets buried on the dark side of the moon?) then I can understand why some users have a hard time.
I wouldn't be surprised if many Twisted lovers are also closet (or not) Haskell lovers.
There are definitely some appealing concepts there. Their 'async' package, for example, does everything in the completely wrong, naive but apparently straightforward way that Java originally did (asynchronous exceptions? communication via shared mutable state? everything's a thread? no event-driven I/O?) but I/O is so limited and the VM is so high tech that it might actually be able to work. I suppose I can best summarize my feelings as <https://glyph.im/blob/EC0C1BF9-F79E-4F3D-A876-9273933E2E78.jpeg>. Anyway, back on topic...
I don't feel very strongly about integrating GUI systems. IIRC Twisted has some way to integrate with certain GUI event loops. I don't think we should desire any more (but neither, less).
Yeah, all we do is dispatch Twisted events from the GUI's loop, usually using the GUI's built-in support for sockets. So your GUI app runs as a normal app. You can, of course, return a Deferred from a function that prompts the user for input, and fire it from a GUI callback, and that'll all work fine: Deferreds don't actually depend on the reactor at all, so you can use them from any callback (they are only in the 'internet' package where the event loop goes for unfortunate historical reasons). -glyph
On Tue, Oct 16, 2012 at 11:15 AM, Glyph <glyph@twistedmatrix.com> wrote:

[lots]

It'll be days before I digest all of that. But thank you very much for writing it all up. You bring up all sorts of interesting issues. I think I would like to start discovering some of the issues by writing an extensive prototype using Greg Ewing's model -- it is the most radical, but therefore most worthy of some serious prototyping before either adopting or rejecting it.

--
--Guido van Rossum (python.org/~guido)
Glyph wrote:
The real problem with generator coroutines is that if you make them the primitive, you have an abstraction inversion if you want to have callbacks
Has anyone suggested making generator coroutines "the primitive", whatever that means? Guido seems to have made it clear that he wants the interface to the event loop layer to be based on plain callbacks. To plug in a generator coroutine, you install a callback that wakes up the coroutine. So using generators with the event loop will be entirely optional.
I haven't measured in a few years, but as nice as generators can be for structuring complex event flows, that abstraction comes with a non-trivial performance cost. ... Every return value being replaced with a callback trampoline is bad, but replacing it instead with a generator being advanced, an exception being raised *and* a callback trampoline is worse.
This is where we expect yield-from to help a *lot*, by removing almost all of that overhead. A return to the trampoline is only needed when a task wants to yield the CPU, instead of every time it makes a function call to a subgenerator. Returns are still a bit more expensive due to the StopIterations, but raising and catching an exception in C code is still fairly efficient compared to doing it in Python. (Although not quite as super-efficient as it was in Python 2.x, unfortunately, due to tracebacks being attached to exceptions, so that we can't instantiate exceptions lazily any more.) -- Greg
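Greg's point can be illustrated with a small example (hypothetical names, trivial hand-rolled driver standing in for an event loop): with `yield from`, the call into a subgenerator never bounces through the scheduler; only the real waits do, and the subgenerator's return value flows back via StopIteration, handled in C.

```python
# Sketch: `yield from` delegation. Only the two actual "I/O"
# waits reach the driver; the subgenerator call itself does not.

def read_header():               # subgenerator: two "I/O" waits
    first = yield "want-line"
    second = yield "want-line"
    return first + "/" + second  # carried by StopIteration.value

def handle_request():
    # Delegation: no trampoline round-trip for this call.
    header = yield from read_header()
    return "handled:" + header

# A trivial driver standing in for the event loop's trampoline:
gen = handle_request()
print(gen.send(None))        # 'want-line' (first wait)
print(gen.send("GET"))       # 'want-line' (second wait)
try:
    gen.send("HTTP/1.1")
except StopIteration as stop:
    print(stop.value)        # 'handled:GET/HTTP/1.1'
```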
participants (3):
- Glyph
- Greg Ewing
- Guido van Rossum