> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, an explicit syntax did not catch on and would require a
lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this would be easy to implement, but more confusing situations can arise:
what should this print? The situation does not lead to a canonical
solution the way class def scopes do.
from foo import *
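For readers coming to this later: the resolution Python 3 eventually adopted sidesteps the whole question, because exec became a function that writes into an explicit namespace rather than into the enclosing function's locals. A sketch of that modern behavior (not the 2.x semantics under discussion here):

```python
# In Python 3, exec() writes into the namespace you hand it; it cannot
# rebind the enclosing function's locals or the cells seen by nested
# functions.
def f():
    y = 3
    ns = {}
    exec("y = 2", {}, ns)   # lands in ns, not in f's locals

    def g():
        return y            # closes over f's y, which is still 3

    return g(), ns["y"]

print(f())  # (3, 2)
```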
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I don't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I should insist that without explicit syntax, IMO, raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
of principle and of the impression it raises.
IMO import * in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
From: Samuele Pedroni [mailto:firstname.lastname@example.org]
> the first candidate would be a generalization of 'class'
> (although that make it redundant with 'class' and meta-classes)
> so that
> KEYW-TO-BE kind name [ '(' expr,... ')' ] [ maybe  extended syntax ]:
> would be equivalent to
> name = kind(name-as-string,(expr,...),dict-populated-executing-suite)
[fixed up to exclude the docstring, as per the followup message]
I like this - it's completely general, and easy to understand. Then again,
I always like constructs defined in terms of code equivalence, it seems to
be a good way to make the semantics completely explicit.
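The code equivalence being proposed is exactly the pattern the existing class statement already follows, with `type` playing the role of "kind". A minimal illustration (the class names here are invented for the example):

```python
# The existing class statement already fits the proposed equivalence:
#     class Name(Base): suite
# behaves roughly like
#     Name = type("Name", (Base,), namespace_built_by_executing_suite)

class Greeter(object):
    prefix = "hello, "
    def greet(self, who):
        return self.prefix + who

# The same class built by calling the "kind" (here, type) directly:
Greeter2 = type("Greeter2", (object,), {
    "prefix": "hello, ",
    "greet": lambda self, who: self.prefix + who,
})

print(Greeter().greet("world") == Greeter2().greet("world"))  # True
```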
The nice thing, to me, is that it solves the immediate problem (modulo a
suitable "kind" to work for properties), as well as being extensible to
allow it to be used in more general contexts.
The downside may be that it's *too* general - I've no feel for how it would
look if overused - it might feel like people end up defining their own
> the remaining problem would be to pick a suitable KEYW-TO-BE
Someone, I believe, suggested reusing "def" - this might be nice, but IIRC
it won't work because of the grammar's strict lookahead limits. (If it does
work, then "def" looks good to me).
If def won't work, how about "define"? The construct is sort of an extended
form of def. Or is that too cute?
By the way, can I just say that I am +1 on Michael Hudson's original patch
for [...] on definitions. Even though it doesn't solve the issue of
properties, I think it's a nice solution for classmethod and staticmethod,
and again I like the generality.
I tried adding a variety of new instructions to the PVM, initially with a
code compression goal for the bytecodes, and later with a performance goal.
LOAD_FAST_N: accesses to locals with an index < 16 using a one-byte instruction (no oparg)
LOAD_CONST_N: accesses to consts with an index < 16 using a one-byte instruction (no oparg)
STORE_FAST_N: stores to locals with an index < 16 using a one-byte instruction (no oparg)
SHORT_CMP: compare ops using a one-byte instruction (no oparg)
PyStone score for best of 10 runs:

unmodified 2.3a2                                             22200
using enum (compacting the opcode numeric space
  using an enum instead of #defines)                         22200
USING_LOAD_FAST_N, USING_LOAD_CONST_N                        22350
USING_LOAD_FAST_N, USING_STORE_FAST_N                        22000
USING_LOAD_FAST_N, USING_LOAD_CONST_N, USING_STORE_FAST_N    22200
USING_LOAD_FAST_N, USING_LOAD_CONST_N, USING_STORE_FAST_N, USING_SHORT_CMP
While reducing the size of compiled bytecodes by about 1%, the proposed
modifications at best increase performance by 2%, and at worst reduce
performance by 3%.
Enabling all of the proposed opcodes results in a 1% performance loss.
In general, it would seem that adding opcodes in bulk, even if many opcodes
switch to the same labels, results in a minor performance loss.
Running PyStone under windows results in a fairly large variation in
results. A zip file containing the source files I modified can be found at
If someone would like to try this code on their systems, I would be grateful
to know what kind of results they achieve.
The various proposed opcodes are controlled by a set of #defines in the file
The results of my static analysis indicate that the indices used on
LOAD_FAST, LOAD_CONST, STORE_FAST are almost always small. There may be some
benefit to optimising these instructions to use single byte opargs.
The results of my static and dynamic analysis indicate that the (COMPARE_OP,
JUMP_IF_FALSE, POP_TOP) pattern is heavily used. I'm looking at what changes
would need to be made to the compiler to remove the need for this sequence.
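Both observations can be spot-checked from Python itself with the dis module. (On a modern interpreter the exact opcode names and the compare/jump encoding differ from 2.3, so treat this as illustrative rather than a reproduction of the original analysis.)

```python
import dis

def sample(a, b):
    c = a + b
    if c < 10:          # comparisons compile to COMPARE_OP plus a jump
        return c
    return b

names = ("LOAD_FAST", "STORE_FAST", "LOAD_CONST")
args = [ins.arg for ins in dis.Bytecode(sample)
        if ins.opname in names and ins.arg is not None]

# Small functions use only small indices into locals/consts:
print(all(x < 16 for x in args))
# The compare-then-branch pattern shows up as COMPARE_OP in the bytecode:
print(any(ins.opname == "COMPARE_OP" for ins in dis.Bytecode(sample)))
```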
On Tuesday, Feb 25, 2003, at 15:04 US/Eastern,
> python23.zip is good for end users of programs written in Python, but
> not so good for Python programmers: AFAIK it won't show source lines
> in tracebacks for modules loaded from the zip file.
Speaking entirely from a point of ignorance, why are the source line numbers
not shown for frames of modules loaded from a zip archive?
Assuming the ZIP archive could be exactly identical to what one might
find in /usr/lib/python2.3/, couldn't the zip contain all the .py + .pyc
files as found in the normal library?
As such, it would be trivial for the developer to unzip the zip into--
for example-- /tmp/ for reference purposes. Assuming the developer
has a copy of the 2.3 source lying around and has the zip with just the
PYC, the line numbers are still very useful.
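For what it's worth, this can be checked directly on a current CPython: when the .py files ship inside the archive, the zipimport loader's get_source lets tracebacks show the offending source line. A self-contained sketch (the module name demo_mod is invented for the example):

```python
import os
import sys
import tempfile
import traceback
import zipfile

# Build a zip archive containing a module's source.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "lib.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("demo_mod.py",
                "def boom():\n    raise ValueError('from zip')\n")

sys.path.insert(0, archive)
import demo_mod

try:
    demo_mod.boom()
except ValueError:
    tb = traceback.format_exc()

# Because the .py is present in the archive, the traceback includes the
# offending source line, not just a file name and line number.
print("raise ValueError" in tb)
```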
All things considered, I would think it would be highly desirable for
the developer's Python development environment to be as much like a
stock deployment environment as possible. Java made a grave mistake
in this regard -- the whole class loader mechanism can cause massive
problems-- very annoying and hard to debug problems-- when moving code
from a development environment into deployment if the class loader that
loads a particular class changes between the two environments.
I got a problem report for Stackless today, that
it seems to leak with tracebacks.
After trying other Python versions, I found out
that this is a "feature" of Python and not related
to Stackless. The problem becomes only more visible,
since people are keeping thousands of threads alive.
Here is the problem:
When an exception has been raised in a frame, and
it already is handled in an except clause, the
exception is not cleared out from tstate and also
stays alive in the frame object.
Only when the frame is left does eval_frame call the routines
which clear all these, breaking cycles.
Does this need to be so, and for what reason?
Would it be equivalent if I cleared the error info
in the context of a finally: clause?
If not, please give me advice on how to solve this
problem. It exists in all long-running frames
which have seen exceptions.
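For later readers: this behavior was eventually changed, and in modern CPython the exception state is popped as soon as the except block is left, so a long-running frame no longer pins the traceback and everything it references. A sketch that checks this with a weakref (the generator is just a convenient way to keep a frame alive):

```python
import weakref

class Payload:
    """Stand-in for the objects a lingering traceback used to keep alive."""

def long_running():
    obj = Payload()
    ref = weakref.ref(obj)
    try:
        raise ValueError("handled in this frame")
    except ValueError:
        pass                # exception state is popped when this block exits
    del obj                 # drop our own reference
    yield ref               # the frame itself stays alive here

gen = long_running()
ref = next(gen)
# The frame is still alive, but the handled exception no longer pins Payload:
print(ref() is None)  # True
```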
Thanks a lot - chris
Christian Tismer :^) <mailto:email@example.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/
I'm not a python-dev regular, so sorry if this is a FAQ. What's the status
of defining a syntax for function attributes (PEP 232)? I'm using __doc__
to carry metadata about methods right now, but would very much like to use
function attributes. However, without a specialized syntax, I'm stuck
doing things like
VeryLongMethodName.MetadataName = "foo"
which is fine if it's a one-off, but I'd like others to use the code, and
this isn't exactly a friendly mechanism. The proposals in the PEP would be
fine; I was thinking something like
"""this is the docstring"""
.this_is_a_function_attribute = 1
but that's just off the top of my head.
I'm happy to do some work writing a PEP if there's some consensus about
what syntax would be preferable.
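Absent new syntax, the attributes can at least be attached in one visible place right next to the def; later Python versions also grew decorators, which give a reasonably friendly spelling with no change to the attribute machinery of PEP 232. A sketch (all of the names here are invented):

```python
# A small decorator that sets function attributes in one visible place.
def with_metadata(**meta):
    def decorate(fn):
        fn.__dict__.update(meta)   # plain function attributes per PEP 232
        return fn
    return decorate

@with_metadata(category="network", priority=3)
def handler(request):
    """Process one request."""
    return request

print(handler.category, handler.priority)  # network 3
```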
>>>The last point is probably compiler dependent. GCC has the tendency
>>>to use the same layout for the assembler code as you use in the
>>>C source code, so placing often used code close to the top
>>>results in better locality (at least on my machines).
>> My experience with gcc (on x86) is that it uses a lookup table
>> for contiguous switch statements rather than a long chain of
>> compares/branches. A quick look at the assembler output from ceval.c
>> suggests it's using a lookup table.
>Right, but the code for the case implementations itself is
>ordered (more or less) in the order you use in the C file. At
>least that was the case at the time (which must have been GCC
>2.95.x or even earlier).
Yeah - I think I must have been reading too fast - on second reading,
you clearly said "locality".
Andrew McNamara, Senior Developer, Object Craft
>The general problem with the ceval switch statement is that it
>is too big. Adding new opcodes will only make it bigger, so I doubt
>that much can be gained in general by trying to come up with new opcodes.
>The last point is probably compiler dependent. GCC has the tendency
>to use the same layout for the assembler code as you use in the
>C source code, so placing often used code close to the top
>results in better locality (at least on my machines).
My experience with gcc (on x86) is that it uses a lookup table
for contiguous switch statements rather than a long chain of
compares/branches. A quick look at the assembler output from ceval.c
suggests it's using a lookup table. What architecture did you observe this on?
Andrew McNamara, Senior Developer, Object Craft
I've just seen the Introducing Python video, found in
This is a very interesting video, at least after you stop laughing. :-))
Jokes apart, it's indeed interesting to know what your mailing list
partners/programming partners/benevolent dictators/friends/whatever
look like when they're not ASCII characters. I'd recommend it to anyone
who is part of that community and isn't able to get to meetings
and similar events.
Btw, Tim, your <wink>s will have a special meaning to me from
now on. ;-)
[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]