Re: [Python-Dev] PEP 343 and __with__

At 12:37 PM 10/3/2005 -0400, Jason Orendorff wrote:
Which is why it's proposed to add __enter__/__exit__ to locks, and somewhat more controversially, file objects. (Guido objected on the basis that people might reuse the file object, but reusing a closed file object results in a sensible error message and so doesn't seem like a problem to me.)
You didn't offer any reasons why this would be useful and/or good.
Because this multiplies the difficulty of implementing context managers in C. It's easy to define a pair of C methods for __enter__ and __exit__, but an iterator requires creating another class in C. The yield-based syntax is just syntactic sugar, not the essence of the proposal.
Considering your argument that locks should be context managers, it would seem like a good idea for C implementations to be easy. :)
My apologies if this is redundant or unwelcome at this date.
Since the PEP is accepted and has patches for both its implementation and a good part of its documentation, a major change like this would certainly need a better rationale. If your idea was that __with__ would somehow make it easier for locks to be context managers, it's based on a flawed premise. All that's required now is to have __enter__ and __exit__ call acquire() and release(). At this point, it's simply an open issue as to which stdlib objects will be context managers, and which will have helper functions or classes to serve as context managers. The actual API used to implement them has little or no bearing on that issue.
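(Concretely, all that's being proposed for locks amounts to something like this minimal sketch; illustrative only, not the actual patch:)

    import threading

    # A minimal sketch: wrap acquire()/release() in __enter__/__exit__,
    # exactly as described above. Names here are made up for the example.
    class Lock(object):
        def __init__(self):
            self._lock = threading.Lock()

        def acquire(self):
            self._lock.acquire()

        def release(self):
            self._lock.release()

        def __enter__(self):
            self.acquire()
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            self.release()    # runs whether or not the block raised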

"Phillip J. Eby" <pje@telecommunity.com> writes:
Though given the amount of interest said patch has attracted (none at all), perhaps no one cares very much and the proposal should be dropped. Which would be a shame, given the time I spent on it and all the hot air here on python-dev...

Cheers, mwh (who still likes PEP 343 and doesn't particularly like Jason's suggested changes).

--
Gevalia is undrinkable low-octane see-through only slightly roasted bilge water. Compared to .us coffee it is quite drinkable. -- Måns Nilsson, asr

For the record, I very much want PEPs 342 and 343 implemented. I haven't had the time to look at the patch and don't expect to find the time any time soon, but it's not for lack of desire to see this feature implemented. I don't like Jason's __with__ proposal, and I like his idea to drop __enter__ and __exit__ even less (I think that would just make it harder to provide efficient implementations in C). I'm all for adding __enter__ and __exit__ to locks. I'm even considering that it might be a good idea to add them to files. For the record, here at Elemental we write a lot of Java code that uses database connections in a pattern that would have greatly benefited from a similar construct in Java. :) --Guido
-- --Guido van Rossum (home page: http://www.python.org/~guido/)

At 07:02 PM 10/3/2005 +0100, Michael Hudson wrote:
Actually, I have been reading the patch and meant to comment on it. I was perplexed by the odd stack behavior of the new opcode until I realized that it's try/finally that's weird. :) I was planning to look into whether that could be cleaned up as well, when I got distracted and didn't go back to it.
perhaps no one cares very much and the proposal should be dropped.
I care an awful lot, as 'with' is another framework-dissolving tool that makes it possible to do more things in library form, without needing to resort to template methods. It also enables more context-sensitive programming, in that "global" states can be set and restored in a structured fashion. It may take a while to feel the effects, but it's going to be a big improvement to Python, maybe as big as new-style classes, and certainly bigger than decorators.
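(To make the "set and restore global state in a structured fashion" point concrete, a minimal sketch; the contextmanager decorator is the one PEP 343 proposes, and its module location here is an assumption:)

    import sys
    from contextlib import contextmanager  # location assumed; decorator per PEP 343

    @contextmanager
    def redirected_stdout(stream):
        saved = sys.stdout
        sys.stdout = stream          # set the "global" state
        try:
            yield stream
        finally:
            sys.stdout = saved       # restored even if the block raises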

"Phillip J. Eby" <pje@telecommunity.com> writes:
Oh, good.
I was perplexed by the odd stack behavior of the new opcode until I realized that it's try/finally that's weird. :)
:)
I was planning to look into whether that could be cleaned up as well, when I got distracted and didn't go back to it.
I see. I don't know whether trying to clean up the stack protocol around exceptions is worth the amount of pain it causes in the head (anyone still thinking about removing the block stack?).
I think 'as big as new-style classes' is probably an exaggeration, but I'm glad my troll caught a few people :) Cheers, mwh -- Those who have deviant punctuation desires should take care of their own perverted needs. -- Erik Naggum, comp.lang.lisp

Michael Hudson wrote:
I think 'as big as new-style classes' is probably an exaggeration, but I'm glad my troll caught a few people :)
I was planning on looking at your patch too, but I was waiting for an answer from Guido about the fate of the ast-branch for Python 2.5. Given that we have patches for PEP 342 and PEP 343 against the trunk, but ast-branch still isn't even passing the Python 2.4 test suite, I'm wondering if it should be bumped from the feature list again.

Cheers, Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://boredomandlaziness.blogspot.com

On 10/4/05, Nick Coghlan <ncoghlan@gmail.com> wrote:
What do you want me to say about the AST branch? It's not my branch, I haven't even checked it out, I'm just patiently waiting for the folks who started it to finally finish it. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum wrote:
It was a question I asked a few weeks back [1] that didn't get any response (even from Brett!), to do with the fact that for Python 2.4 there was a deadline for landing the ast-branch that was a month or two in advance of the deadline for 2.4a1. I thought you'd set that deadline, but now that I look for it, I can't actually find any evidence of that. The only thing I can find is Jeremy's email saying it wasn't ready in time [2] (Jeremy's concern about reference leaks in ast-branch when it encounters compile errors is one I share, btw).

Anyway, the question is: what do we want to do with ast-branch? Finish bringing it up to Python 2.4 equivalence, make it the HEAD, and only then implement the approved PEPs (308, 342, 343) that affect the compiler? Or implement the approved PEPs on the HEAD, and move the goalposts for ast-branch to include those features as well?

I believe the latter is the safe option in terms of making sure 2.5 is a solid release, but doing it that way suggests to me that the ast compiler would need to be held over until 2.6, which would be somewhat unfortunate. Given that I don't particularly like that answer, I'd love for someone to convince me I'm wrong ;)

Cheers, Nick.

[1] http://mail.python.org/pipermail/python-dev/2005-September/056449.html
[2] http://mail.python.org/pipermail/python-dev/2004-June/045121.html

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://boredomandlaziness.blogspot.com

On 10/5/05, Nick Coghlan <ncoghlan@gmail.com> wrote:
Given the total lack of response, I have a different suggestion. Let's *abandon* the AST-branch. We're fooling ourselves believing that we can ever switch to that branch, no matter how theoretically better it is. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

To answer Nick's email here, I didn't respond to that initial email because it seemed specifically directed at Guido and not me. On 10/5/05, Guido van Rossum <guido@python.org> wrote:
Since the original people who have done the majority of the work (Jeremy, Tim, Neal, Nick, logistix, and myself) have fallen so far behind, this probably is not a bad decision. Obviously I would like to see the work pan out, but since I personally just have not found the time to shuttle the branch the rest of the way, I really am in no position to say much in terms of objecting to its demise. Maybe I can come up with a new design and get my dissertation out of it. =) -Brett

[Brett]
To answer Nick's email here, I didn't respond to that initial email because it seemed specifically directed at Guido and not me.
Fair enough. I think I was actually misremembering the sequence of events leading up to 2.4a1, so the question was less appropriate for Guido than I thought :) [Guido]
[Brett]
If we kill the branch for now, then anyone that wants to bring up the idea again can write a PEP first, not only to articulate the benefits of switching to an AST compiler (Jeremy has a few notes scattered around the web on that front), but also to propose a solid migration strategy. We tried the "develop in parallel, switch when done" approach; it doesn't seem to have worked, due to the way it split developer effort between the branches, and both the HEAD and ast-branch ended up losing out.
Maybe I can come up with a new design and get my dissertation out of it. =)
A strategy that may work out better is to develop something independent of the Python core that can:

1. Produce an ASDL-based AST structure from:
   - Python source code
   - CPython 'AST'
   - CPython bytecode
2. Parse an ASDL-based AST structure and produce:
   - Python source code
   - CPython 'AST'
   - CPython bytecode

That is, initially develop an enhanced replacement for the compiler package, rather than aiming directly to replace the actual CPython compiler. Then the folks who want to do serious bytecode hacking can reverse compile the bytecode on the fly ;)

Cheers, Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://boredomandlaziness.blogspot.com

Nick Coghlan <ncoghlan@gmail.com> wrote:
If we kill the branch for now, then anyone that wants to bring up the idea again can write a PEP first
I still have some (very) small hope that it can be finished. If we don't get it done soon then I fear that it will never happen. I had hoped that a SoC student would pick up the task or someone would ask for a grant from the PSF. Oh well.
A strategy that may work out better is [...]
Another thought I've had recently is that most of the complexity seems to be in the CST to AST translator. Perhaps having a parser that provided a nicer CST might help. Neil
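(As an illustration of why the CST is painful to work with, a quick sketch using the stdlib parser module of that era; even a trivial statement yields deeply nested tuples of numeric grammar-symbol codes that the CST-to-AST translator must fold away:)

    import parser, pprint

    # parser.suite() returns the concrete syntax tree for a module;
    # totuple() flattens it into nested (symbol_code, ...) tuples.
    cst = parser.suite("x = 1").totuple()
    pprint.pprint(cst)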

On 10/6/05, Neil Schemenauer <nas@arctrix.com> wrote:
Dream on, Neil... Adding more work won't make it more likely to happen. The only alternative to abandoning it that I see is to merge it back into main NOW, using the time that remains to us until the 2.5 release to make it robust. That way, everybody can help out (and it may motivate more people). Even if this is a temporary regression (e.g. PEP 342), it might be worth it -- but only if there are at least two people committed to help out quickly when there are problems. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On 10/6/05, Guido van Rossum <guido@python.org> wrote:
You're both right. The CST-to-AST translator is fairly complex; it would be better to parse directly to an AST. On the other hand, the AST translator seems fairly complete and not particularly hard to write. I'd love to see a new parser in 2.6.
I'm sorry I didn't respond earlier. I've been home with a new baby for the last six weeks and haven't been keeping a close eye on my email. (I didn't see Nick's earlier email until his most recent post.) It would take a few days of work to get the branch ready to merge to the head. There are basic issues like renaming newcompile.c to compile.c and the like. I could work on that tomorrow and Monday. I did do a little work on the ast branch earlier this week. The remaining issues feel pretty manageable, so you can certainly count me as one of the two people committed to help out. I'll make a point of keeping a closer eye on python-dev email, in addition to writing some code. Jeremy

Jeremy Hylton <jeremy@alum.mit.edu> writes:
Unless I'm missing something, we would need to merge HEAD to the AST branch once more to pick up the changes in MAIN since the last merge, and then make sure everything in the AST branch is passing the test suite. Otherwise we risk having MAIN broken for a while following a merge. Finally, we can then merge the diff of HEAD to AST back into MAIN.

If we try to merge the entire AST branch since its inception, we will re-apply to MAIN those changes made in MAIN which have already been merged to the AST branch, and it will be difficult to sort out all the conflicts. If we try to merge the AST branch from its last merge tag to its head, we will miss the work done on AST prior to that merge. Let me know at kbk@shore.net if you want to do this. -- KBK

IMO, merging to the head is a somewhat dangerous strategy that doesn't have any benefits. Whether done on the head or in the branch, the same amount of work needs to be done. If the stability of the head is disrupted, it may impede other maintenance efforts because it is harder to test bug fixes when the test suites are not passing.

[Kurt]
[Raymond]
Well, at some point it will HAVE to be merged into the head. The longer we wait, the more painful it will be. If we suffer a week of instability now, I think that's acceptable, as long as all developers are suitably alerted, and as long as the AST team works towards resolving the issues ASAP.

I happen to agree with Kurt that we should first merge the head into the branch; then the AST team can work on making sure the entire test suite passes; then they can merge back into the head. BUT this should only be done with a serious commitment from the AST team (I think Neil and Jeremy are offering this -- I just don't know how much time they will have available, realistically).

My main point is, we should EITHER abandon the AST branch, OR force a quick resolution. I'm willing to suffer a week of instability in head now, or in a week or two -- but I'm not willing to wait again.

Let's draw a line in the sand. The AST team (which includes whoever will help) has up to three weeks to get the AST branch into a position where it passes all the current unit tests merged in from the head. Then they merge it into the head, after which we can accept at most a week of instability in the head. After that, the AST team must remain available to resolve remaining issues quickly.

How does this sound to the non-AST-branch developers who have to suffer the inevitable post-merge instability? I think it's now or never -- waiting longer isn't going to make this thing easier (not with several more language changes approved: with-statement, extended import, what else...)

What does the AST team think? -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On 10/6/05, Guido van Rossum <guido@python.org> wrote:
So basically we have until November 1 to get all tests passing? For anyone who wants a snapshot of where things stand, http://www.python.org/sf/1191458 lists the tests that are currently failing (read the comments to get the current list; the count is at 14). All AST-related tracker items are under the AST group, so filtering to just AST stuff is easy. I am willing to guess a couple of those tests will start passing as soon as http://www.python.org/sf/1246473 is dealt with (this is just based on some of the failure output seeming to be off by one).

As of right now the lnotab only has statement granularity, when it really needs expression granularity. That requires tweaking all instances where an expression node is created to also take in the line number of where the expression occurs. This fix is one of the main reasons I have not touched the AST branch; it is not difficult, but it is not exactly fun or small either. =)
Well, I have homework this weekend, a midterm two weeks from tomorrow (so the preceding weekend will be studying), and October 23 is my birthday, so I will be busy that entire weekend visiting family. In other words, Python time is at a premium this month. But I will try to squeeze in what time I can, and I think the three-week time frame is reasonable to light the fire under our asses to get this thing done (especially if it inspires people to jump in and help out; as always, people interested in joining in, check out the branch and read Python/compile.txt). -Brett

Guido van Rossum <guido@python.org> writes:
I can be available to do this again. It would involve freezing the AST branch for a day. Once the AST branch is stable, we would need to freeze everything, merge MAIN to AST one more time to pick up the last few changes in MAIN, and then merge the AST head back to MAIN. By doing these merges from MAIN to AST we would have effectively moved the AST branch point along MAIN to HEAD. So the final join is HEAD to AST, conducted from MAIN. I'll run a local experiment to verify this concept is workable. -- KBK

At 09:50 AM 10/4/2005 +0100, Michael Hudson wrote:
(anyone still thinking about removing the block stack?).
I'm not any more. My thought was that it would be good for performance, by reducing the memory allocation overhead for frames enough to allow pymalloc to be used instead of the platform malloc. After more investigation, however, I realized that was a dumb idea, because for a typical application the amortized allocation cost of frames approaches zero as the program runs and allocates as many frames as it will ever use, as large as it will ever use them, and just recycles them on the free list. And all of the ways I came up with for removing the block stack were a lot more complex than leaving it as-is.

Clearly, the cost of function calls in Python lies somewhere else, and I'd probably look next at parameter tuple allocation and other frame initialization activities. I seem to recall that Armin Rigo once supplied a patch that sped up calls at the cost of slowing down recursive or re-entrant ones, and I seem to recall that it was based on preinitializing frames, not just preallocating them:

    http://mail.python.org/pipermail/python-dev/2004-March/042871.html

However, the patch was never applied because of its increased memory usage as well as the slowdown for recursion.

Every so often, in blue-sky thinking about alternative Python VM designs, I think about making frames virtual, in the sense of not even having "real" frame objects except for generators, sys._getframe(), and tracebacks. I suspect, however, that doing this in a way that doesn't mess with the current C API is non-trivial. And for many "obvious" ways to simplify the various stacks, locals, etc., the downside could be more complexity for generators, and probably less speed as well.

For example, we could use a single "stack" arena in the heap for parameters, locals, cells, and blocks, rather than doing all the various sub-allocations within the frame. But then creating a frame would involve copying data off the top of this pseudo-stack, and doing all the offset computations and perhaps some other trickery as well. And resuming a generator would have to either copy it back, or have some sane way to make calls out to a new stack arena when calling other functions - thus making those operations slower.

The real problem, of course, with any of these ideas is that we are at best shaving a few percentage points here, a few points there, so it's comparatively speaking rather expensive to do the experiments to see if they help anything.
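(A toy model of that free-list amortization argument, in Python rather than the C the real frame allocator is written in; all names here are made up:)

    # Toy model, not CPython code: after warm-up, "allocating" a frame
    # is just a pop from the free list, so malloc cost amortizes to ~zero.
    free_list = []

    class Frame(object):
        def __init__(self, size):
            self.slots = [None] * size

    def frame_alloc(size):
        if free_list:
            frame = free_list.pop()            # steady state: reuse
            if len(frame.slots) < size:        # grow only when needed
                frame.slots.extend([None] * (size - len(frame.slots)))
            return frame
        return Frame(size)                     # cold start: real allocation

    def frame_free(frame):
        free_list.append(frame)                # recycle, never really free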

On 10/5/05, Phillip J. Eby <pje@telecommunity.com> wrote:
I did something similar to reduce the frame size to under 256 bytes (don't recall if I made a patch or not) and it had no overall effect on perf.
I think that's a big part of it. This patch shows C calls getting sped up primarily by avoiding tuple creation:

    http://python.org/sf/1107887

I hope to work on that and get it into 2.5. I've also been thinking about avoiding tuple creation when calling Python functions. The change I have in mind would probably have to wait until p3k, but could yield some speed ups.

Warning: half baked idea follows. My thoughts are to dynamically allocate the Python stack memory (e.g., void *stack = malloc(128MB)). Then all calls within each thread use that thread's own stack. So things would be pushed onto the stack like they are currently, but we wouldn't need to create a tuple to pass to a method; the values could just be used directly. Basically, more closely simulate the way it currently works in hardware.

This would mean all the PyArg_ParseTuple()s would have to change. It may be possible to fake it out, but I'm not sure it's worth it, which is why it would be easier to do this for p3k.

The general idea is to allocate the stack in one big hunk and just walk up/down it as functions are called/returned. This only means incrementing or decrementing pointers. This should allow us to avoid a bunch of copying and tuple creation/destruction. Frames would hopefully be the same size, which would help. Note that even though there is a free list for frames, there could still be frequent PyObject_GC_Resize()s (or unused memory). With my idea, hopefully there would be better memory locality, which could speed things up.

n

Neal Norwitz wrote:
One issue with argument tuples on the stack (or some sort of stack) is that functions may hold onto argument tuples longer:

    def foo(*args):
        global last_args
        last_args = args

I considered making true tuple objects (i.e. with ob_type etc.) on the stack, but this possibility breaks it.

Regards, Martin

Neal Norwitz <nnorwitz@gmail.com> writes:
Hey, me too! I also came to the same conclusion. Cheers, mwh -- The ultimate laziness is not using Perl. That saves you so much work you wouldn't believe it if you had never tried it. -- Erik Naggum, comp.lang.lisp

Phillip J. Eby wrote:
Clearly, the cost of function calls in Python lies somewhere else, and I'd probably look next at parameter tuple allocation,
For simple calls where there aren't any *args or other such complications, it seems like it should be possible to just copy the args from the calling frame straight into the called one. Or is this already done these days?

--
Greg Ewing, Computer Science Dept, University of Canterbury,
Christchurch, New Zealand | greg.ewing@canterbury.ac.nz
"A citizen of NewZealandCorp, a wholly-owned subsidiary of USA Inc."

Phillip J. Eby writes:
You didn't offer any reasons why this would be useful and/or good.
It makes it dramatically easier to write Python classes that correctly support 'with'. I don't see any simple way to do this under PEP 343; the only sane thing to do is write a separate @contextmanager generator, as all of the examples do (a sketch of such a helper appears below). Consider:

    # decimal.py
    class Context:
        ...
        def __enter__(self):
            ???
        def __exit__(self, t, v, tb):
            ???

    DefaultContext = Context(...)

Kindly implement __enter__() and __exit__(). Make sure your implementation is thread-safe (not easy, even though decimal.getcontext/.setcontext are thread-safe!). Also make sure it supports nested 'with DefaultContext:' blocks (I don't mean lexically nested, of course; I mean nested at runtime). The answer requires thread-local storage and a separate stack of saved context objects per thread. It seems a little ridiculous to me. Whereas:

    class Context:
        ...
        def __with__(self):
            old = decimal.getcontext()
            decimal.setcontext(self)
            try:
                yield
            finally:
                decimal.setcontext(old)

As for the second proposal, I was thinking we'd have one mental model for context managers (block template generators), rather than two (generators vs. enter/exit methods). Enter/exit seemed superfluous, given the examples in the PEP.
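(For comparison, the separate PEP 343-style helper mentioned above might look like the following sketch; the decorator is the one PEP 343 proposes, its module location is an assumption, and the helper name is made up:)

    import decimal
    from contextlib import contextmanager  # location assumed; decorator per PEP 343

    @contextmanager
    def using_context(ctx):
        # Hypothetical module-level helper in the style of the PEP's examples.
        old = decimal.getcontext()
        decimal.setcontext(ctx)
        try:
            yield ctx
        finally:
            decimal.setcontext(old)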
[T]his multiplies the difficulty of implementing context managers in C.
Nonsense.

    static PyObject *
    lock_with(PyObject *self)
    {
        return PyContextManager_FromCFunctions(self, lock_acquire,
                                               lock_release);
    }

There probably ought to be such an API even if my suggestion is in fact garbage (as, admittedly, still seems the most likely thing).

Cheers, -j

Jason Orendorff wrote:
Hmm, it's kind of like the iterable/iterator distinction. Being able to do:

    class Whatever(object):
        def __iter__(self):
            for item in self.stuff:
                yield item

is a very handy way of defining "this is how you iterate over this class". The only cost is that actual iterators then need to define an __iter__ method that returns 'self' (which isn't much of a cost, and is trivial to do even for iterators written in C).

If there was a __with__ slot, then we could consider that as identifying a "manageable context", with three methods to identify an actual context manager:

    __with__ (which returns self)
    __enter__
    __exit__

Then the explanation of what a with statement does would simply look like:

    abc = EXPR.__with__()   # This is the only change
    exc = (None, None, None)
    VAR = abc.__enter__()
    try:
        try:
            BLOCK
        except:
            exc = sys.exc_info()
            raise
    finally:
        abc.__exit__(*exc)

And the context management for decimal.Context would look like:

    class Context:
        ...
        @contextmanager
        def __with__(self):
            old = decimal.getcontext()
            new = self.copy()   # Make this nesting and thread safe
            decimal.setcontext(new)
            try:
                yield new
            finally:
                decimal.setcontext(old)

And for threading.Lock it would look like:

    class Lock:
        ...
        def __with__(self):
            return self
        def __enter__(self):
            self.acquire()
            return self
        def __exit__(self, *exc):
            self.release()

Also, any class could make an existing independent context manager (such as 'closing') its native context manager as follows:

    class SomethingCloseable:
        ...
        def __with__(self):
            return closing(self)
Try to explain the semantics of the with statement without referring to the __enter__ and __exit__ methods, and then see if you still think they're superfluous ;)

The @contextmanager generator decorator is just syntactic sugar for writing duck-typed context managers - the semantics of the with statement itself can only be explained in terms of the __enter__ and __exit__ methods. Indeed, explaining how the @contextmanager decorator itself works requires recourse to the __enter__ and __exit__ methods of the actual context manager object the decorator produces.

However, I think the idea of having a distinction between manageable contexts and context managers, similar to the distinction between iterables and iterators, is one well worth considering.

Cheers, Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://boredomandlaziness.blogspot.com
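(Worth noting: under this proposal, client code looks exactly as it does in PEP 343; the extra __with__() call happens inside the expansion. A self-contained sketch, with the __with__ method per the proposal above:)

    import threading

    class Lock(object):
        """The Lock example above, made self-contained for the sketch."""
        def __init__(self):
            self._lock = threading.Lock()
        def __with__(self):          # proposed: the statement calls this first
            return self
        def __enter__(self):
            self._lock.acquire()
            return self
        def __exit__(self, *exc):
            self._lock.release()

    # Client code is unchanged by the extra __with__() step:
    lock = Lock()
    with lock:    # expands to lock.__with__().__enter__() ... __exit__()
        pass      # critical section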

The argument I am going to try to make is that Python coroutines need a more usable API.
That's not true. It can certainly use the coroutine API instead. Now... as specified in PEP 342, the coroutine API can be used to implement 'with', but it's ugly. I think this is a problem with the coroutine API, not the idea of using coroutines per se. Actually I think 'with' is a pretty tame use case for coroutines. Other Python objects (dicts, lists, strings) have convenience methods that are strictly redundant but make them much easier to use. Coroutines should, too.

This:

    with EXPR as VAR:
        BLOCK

expands to this under PEP 342:

    _cm = contextmanager(EXPR)
    VAR = _cm.next()
    try:
        BLOCK
    except:
        try:
            _cm.throw(*sys.exc_info())
        except:
            pass
        raise
    finally:
        try:
            _cm.next()
        except StopIteration:
            pass
        except:
            raise
        else:
            raise RuntimeError

Blah. But it could look like this:

    _cm = (EXPR).__with__()
    VAR = _cm.start()
    try:
        BLOCK
    except:
        _cm.throw(*excinfo)
    else:
        _cm.finish()

I think that looks quite nice. Here is the proposed specification for start() and finish():

    class coroutine:  # pseudocode
        ...
        def start(self):
            """Convenience method -- exactly like next(), but assert
            that this coroutine hasn't already been started."""
            if self.__started:
                raise ValueError  # or whatever
            return self.next()

        def finish(self):
            """Convenience method -- like next(), but expect the
            coroutine to complete without yielding again."""
            try:
                self.next()
            except (StopIteration, GeneratorExit):
                pass
            else:
                raise RuntimeError("coroutine didn't finish")

Why is this good?

- Makes coroutines more usable for everyone, not just for implementing 'with'.
- For example, if you want to feed values to a coroutine, call start() first and then send() repeatedly. Quite sensible.
- Single mental model for 'with' (always uses a coroutine or lookalike object).
- No need for "contextmanager" wrapper.
- Harder to implement a context manager object incorrectly (it's quite easy to screw up with __begin__ and __end__).

-j
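(A sketch of that "feed values to a coroutine" pattern, using PEP 342 yield-expressions; start() is the convenience method proposed above, so today's spelling would be next():)

    def running_total():
        total = 0
        while True:
            value = yield total   # PEP 342 yield-expression
            total += value

    tally = running_total()
    tally.start()                 # proposed convenience; equivalent to tally.next()
    tally.send(10)                # total is now 10
    tally.send(32)                # total is now 42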

Right after I sent the preceding message I got a funny feeling I'm wasting everybody's time here. I apologize. Guido's original concern about a speedy C implementation for locks stands. I don't see a good way around it. By the way, my expansion of 'with' using coroutines (in the previous message) was incorrect. The corrected version is shorter; see below. -j

This:

    with EXPR as VAR:
        BLOCK

would expand to this under PEP 342 and my proposal:

    _cm = (EXPR).__with__()
    VAR = _cm.next()
    try:
        BLOCK
    except:
        _cm.throw(*sys.exc_info())
    finally:
        try:
            _cm.next()
        except (StopIteration, GeneratorExit):
            pass
        else:
            raise RuntimeError("coroutine didn't finish")

On 10/4/05, Jason Orendorff <jason.orendorff@gmail.com> wrote:
OK. Our messages crossed, so you can ignore my response. Let's spend our time implementing the PEPs as they stand, then see what else we can do with the new APIs. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

Just a quick note. Nick convinced me that adding __with__ (without losing __enter__ and __exit__!) is a good thing, especially for the decimal context manager. He's got a complete proposal for PEP changes which he'll post here. After a brief feedback period I'll approve his changes and he'll check them into the PEP. My apologies to Jason for missing the point he was making; thanks to Nick for getting it and turning it into a productive change proposal. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On 10/4/05, Jason Orendorff <jason.orendorff@gmail.com> wrote:
Where in the world do you get this idea? The translation is as follows, according to PEP 343:

    abc = EXPR
    exc = (None, None, None)
    VAR = abc.__enter__()
    try:
        try:
            BLOCK
        except:
            exc = sys.exc_info()
            raise
    finally:
        abc.__exit__(*exc)

PEP 342 doesn't touch on the expansion of with-statements at all. I think I know where you're coming from, but please do us a favor and don't misrepresent the PEPs.

If anything, your proposal is more complicated; it requires four new APIs instead of two, and requires an extra call to set up (__with__() followed by start()). Proposals like yours (and every other permutation) were brought up during the initial discussion. We picked one. Don't create more churn by arguing for a different variant. Spend your efforts on implementing it so you can actually use it and see how bad it is (I predict it won't be bad at all). -- --Guido van Rossum (home page: http://www.python.org/~guido/)
participants (13)
- "Martin v. Löwis"
- Brett Cannon
- Greg Ewing
- Guido van Rossum
- Jason Orendorff
- Jeremy Hylton
- kbk@shore.net
- Michael Hudson
- Neal Norwitz
- Neil Schemenauer
- Nick Coghlan
- Phillip J. Eby
- Raymond Hettinger