> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a
lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? maybe they will more naturally expect 2
> > as outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this can be easy to implement, but more confusing situations can
arise. What should this print? Unlike the class def case, the situation
does not admit a canonical solution.
from foo import *
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse then the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I don't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect)
but I must insist that, without explicit syntax, IMO raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
about principles and the impression we give.
IMO import * in an inner scope should end up being an error,
not sure about 'exec's.
We will need a final BDFL statement.
regards, Samuele Pedroni.
I'm aware that I've not been great about keeping people up to date
about my timing plans for the last couple of releases, so I'd like to
be a little more organized this time.
My plan goes roughly as follows:
April 8 ~1200 GMT: freeze begins.
Over the next 24 hours I'll do tests, write
Misc/NEWS, update version numbers, etc.
April 9 ~1200 GMT: freeze ends, release begins.
This is when Fred, Tim and Jack do their magic,
and check any changes they have to make into the
I'd really like to get the Mac release done at the
same time as the others.
During this time, I'll draft changes to python.org
and an announcement.
April 10 ~1200 GMT: release ends
By now, F, T & J have done their bits, uploaded
files to creosote and sf (or pointed me to where I
can get them), etc.
I twiddle pages on creosote, fiddle with sf, tag
the tree, cut the tarball, compute md5s, etc.
The cunning will notice that this doesn't require me to be in the
office after half past five...
Does this plan sound reasonable to everyone?
I don't have any special knowledge of all this. In fact, I made all
the above up, in the hope that it corresponds to reality.
-- Mark Carroll, ucam.chat
PEP 279 proposes three separate things. Comments on each:
1. New builtin: indexed()
I like the idea of having some way to iterate over a sequence and
its index set in parallel. It's fine for this to be a builtin.
I don't like the name "indexed"; adjectives do not make good
function names. Maybe iterindexed()?
I don't like the start and stop arguments. If I saw code like
for i, j in iterindexed("abcdefghij", 5, 10): print i, j
I would expect it to print
while the spec in the PEP would print
Very confusing. I propose to remove the start/stop arguments, *or*
change the spec to:
def iterindexed(sequence, start=0, stop=None):
    i = start
    while stop is None or i < stop:
        item = sequence[i]
        yield (i, item)
        i += 1
This reduces the validity to only sequences (as opposed to all
iterable collections), but has the advantage of making
iterindexed(x, i, j) iterate over x[i:j] while reporting the index
sequence range(i, j) -- not so easy otherwise.
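Rendered in modern syntax (and with an IndexError guard added for the
stop=None case, which is my assumption rather than part of the spec),
the revised version behaves like this:

```python
def iterindexed(sequence, start=0, stop=None):
    # Revised spec: index the sequence directly, so that
    # iterindexed(x, i, j) walks x[i:j] while reporting range(i, j).
    i = start
    while stop is None or i < stop:
        try:
            item = sequence[i]
        except IndexError:   # assumption: end cleanly when stop is None
            return
        yield (i, item)
        i += 1

pairs = list(iterindexed("abcdefghij", 5, 10))
# pairs == [(5, 'f'), (6, 'g'), (7, 'h'), (8, 'i'), (9, 'j')]
```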
The simplified version is still attractive because it allows
arbitrary iterators to be passed in:
def iterindexed(collection):
    i = 0
    it = iter(collection)
    while 1:
        yield (i, it.next())
        i += 1
2. Generator comprehensions
I don't think it's worth the trouble. I expect it will take a lot
of work to hack it into the code generator: it has to create a
separate code object in order to be a generator. List
comprehensions are inlined, so I expect that the generator
comprehension code generator can't share much with the list
comprehension code generator. And this for something that's not
that common and easily done by writing a 2-line helper function.
IOW the ROI isn't high enough.
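The two-line helper alluded to is presumably something like this (the
name and example are mine):

```python
def gen_results(seq):
    # Plays the role a generator comprehension would: lazily yield a
    # transformed stream without materializing a list first.
    for x in seq:
        yield x * x

squares = list(gen_results(range(5)))
# squares == [0, 1, 4, 9, 16]
```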
3. Generator exception passing
This is where the PEP seems weakest. There's no real motivation
("This is a true deficiency" doesn't count :-). There's no hint as
to how it should be implemented. The example has a "return log"
statement in the generator body which is currently illegal, and I
can't figure out to where this value would be returned. The
example looks like it doesn't need a generator, and if it did, it
would be easy to stop the generator by setting a global "please
stop" flag and calling next() once more. (If you don't like
globals, make the generator a method of a class and make the stop
flag an instance variable.)
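The class-based workaround described above could look like this sketch
(modern syntax, invented names):

```python
class LogScanner:
    # The "please stop" flag lives on the instance instead of in a global.
    def __init__(self):
        self.stop = False

    def lines(self):
        n = 0
        while not self.stop:
            yield n        # stand-in for producing a real log line
            n += 1

scanner = LogScanner()
it = scanner.lines()
first = next(it)           # generator running normally
scanner.stop = True        # set the flag...
try:                       # ...and call next() once more to let it finish
    next(it)
    stopped = False
except StopIteration:
    stopped = True
```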
--Guido van Rossum (home page: http://www.python.org/~guido/)
http://python.org/sf/518846 reports that new-style classes cannot be
used as exceptions. I think it is desirable that this is fixed, but I
also believe that it conflicts with string exceptions. So I would like
to propose that string exceptions are deprecated for Python 2.3, in
order to remove them in Python 2.4, simultaneously allowing arbitrary
objects as exceptions.
Should the use of PyArg_NoArgs() be deprecated?
There are many uses (53) throughout Modules/*.c. It seems that this
check is not useful anymore if the MethodDef is set to METH_NOARGS.
Is this correct? If so, I can make a patch.
I offer the following PEP for review by the community. If it receives
a favorable response, it will be implemented in Python 2.3.
A long discussion has already been held in python-dev about this PEP;
most things you could bring up have already been brought up there.
The head of the thread there is:
I believe that the review questions listed near the beginning of the
PEP are the main unresolved issues from that discussion.
This PEP is also on the web, of course, at:
If you prefer to look at code, here's a reasonably complete
implementation (in C; it may be slightly out of date relative to the
--Guido van Rossum (home page: http://www.python.org/~guido/)
Title: Adding a bool type
Version: $Revision: 1.12 $
Last-Modified: $Date: 2002/03/30 05:37:02 $
Author: guido(a)python.org (Guido van Rossum)
Type: Standards Track
Post-History: 8-Mar-2002, 30-Mar-2002
This PEP proposes the introduction of a new built-in type, bool,
with two constants, False and True. The bool type would be a
straightforward subtype (in C) of the int type, and the values
False and True would behave like 0 and 1 in most respects (for
example, False==0 and True==1 would be true) except repr() and
str(). All built-in operations that conceptually return a Boolean
result will be changed to return False or True instead of 0 or 1;
for example, comparisons, the "not" operator, and predicates like
isinstance().
I'm particularly interested in hearing your opinion about the
following three issues:
1) Should this PEP be accepted at all.
2) Should str(True) return "True" or "1": "1" might reduce
backwards compatibility problems, but looks strange to me.
(repr(True) would always return "True".)
3) Should the constants be called 'True' and 'False'
(corresponding to None) or 'true' and 'false' (as in C++, Java
and C99)?
Most other details of the proposal are pretty much forced by the
backwards compatibility requirement; e.g. True == 1 and
True+1 == 2 must hold, else reams of existing code would break.
Minor additional issues:
4) Should we strive to eliminate non-Boolean operations on bools
in the future, through suitable warnings, so that e.g. True+1
would eventually (e.g. in Python 3000) be illegal. Personally,
I think we shouldn't; 28+isleap(y) seems totally reasonable to
me.
5) Should operator.truth(x) return an int or a bool. Tim Peters
believes it should return an int because it's been documented
as such. I think it should return a bool; most other standard
predicates (e.g. issubtype()) have also been documented as
returning 0 or 1, and it's obvious that we want to change those
to return a bool.
Most languages eventually grow a Boolean type; even C99 (the new
and improved C standard, not yet widely adopted) has one.
Many programmers apparently feel the need for a Boolean type; most
Python documentation contains a bit of an apology for the absence
of a Boolean type. I've seen lots of modules that defined
constants "False=0" and "True=1" (or similar) at the top and used
those. The problem with this is that everybody does it
differently. For example, should you use "FALSE", "false",
"False", "F" or even "f"? And should false be the value zero or
None, or perhaps a truth value of a different type that will print
as "true" or "false"? Adding a standard bool type to the language
resolves those issues.
Some external libraries (like databases and RPC packages) need to
be able to distinguish between Boolean and integral values, and
while it's usually possible to craft a solution, it would be
easier if the language offered a standard Boolean type.
The standard bool type can also serve as a way to force a value to
be interpreted as a Boolean, which can be used to normalize
Boolean values. Writing bool(x) is much clearer than "not not x"
and much more concise than
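The normalization described here, illustrated in a Python that has
bool:

```python
# bool(x) collapses any truth value onto exactly two constants,
# which is the normalization the paragraph describes.
values = [0, 1, [], [7], "", "spam", None]
normalized = [bool(v) for v in values]

assert all(b is True or b is False for b in normalized)
assert bool("spam") is True
assert bool([]) is False
assert bool([]) == (not not [])   # same truth value as the "not not" idiom
```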
Here are some arguments derived from teaching Python. When
showing people comparison operators etc. in the interactive shell,
I think this is a bit ugly:
>>> a = 13
>>> b = 12
>>> a > b
1
If this was:
>>> a > b
True
it would require one millisecond less thinking each time a 0 or 1
was printed.
There's also the issue (which I've seen puzzling even experienced
Pythonistas who had been away from the language for a while) that if
>>> cmp(a, b)
1
>>> cmp(a, a)
0
you might be tempted to believe that cmp() also returned a truth
value. If ints are not (normally) used for Boolean results, this
would stand out much more clearly as something completely
different.
The following Python code specifies most of the properties of the
bool type:
class bool(int):
    def __new__(cls, val=0):
        # This constructor always returns an existing instance
        if val:
            return True
        return False
    def __repr__(self):
        if self:
            return "True"
        return "False"
    __str__ = __repr__
    def __and__(self, other):
        if isinstance(other, bool):
            return bool(int(self) & int(other))
        return int.__and__(self, other)
    __rand__ = __and__
    def __or__(self, other):
        if isinstance(other, bool):
            return bool(int(self) | int(other))
        return int.__or__(self, other)
    __ror__ = __or__
    def __xor__(self, other):
        if isinstance(other, bool):
            return bool(int(self) ^ int(other))
        return int.__xor__(self, other)
    __rxor__ = __xor__
# Bootstrap truth values through sheer willpower
False = int.__new__(bool, 0)
True = int.__new__(bool, 1)
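Since this proposal was ultimately adopted, the specified semantics can
be checked directly in a current interpreter; every line below follows
from the spec above, not from extra assumptions:

```python
# &, | and ^ stay within bool only when both operands are bools
assert (True & False) is False
assert (True | False) is True
assert (True ^ True) is False
mixed = True & 1
assert mixed == 1 and not isinstance(mixed, bool)   # falls back to int

# bool is a straightforward subtype of int
assert isinstance(True, int)
assert True == 1 and False == 0
assert True + 1 == 2

# only repr()/str() differ from the int behaviour
assert str(True) == "True" and repr(False) == "False"
```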
The values False and True will be singletons, like None; the C
implementation will not allow other instances of bool to be
created. At the C level, the existing globals Py_False and
Py_True will be appropriated to refer to False and True.
All built-in operations that are defined to return a Boolean
result will be changed to return False or True instead of 0 or 1.
In particular, this affects comparisons (<, <=, ==, !=, >, >=, is,
is not, in, not in), the unary operator 'not', the built-in
functions callable(), hasattr(), isinstance() and issubclass(),
the dict method has_key(), the string and unicode methods
endswith(), isalnum(), isalpha(), isdigit(), islower(), isspace(),
istitle(), isupper(), and startswith(), the unicode methods
isdecimal() and isnumeric(), and the 'closed' attribute of file
objects.
Note that subclassing from int means that True+1 is valid and
equals 2, and so on. This is important for backwards
compatibility: because comparisons and so on currently return
integer values, there's no way of telling what uses existing
applications make of these values.
Because of backwards compatibility, the bool type lacks many
properties that some would like to see. For example, arithmetic
operations with one or two bool arguments are allowed, treating
False as 0 and True as 1. Also, a bool may be used as a sequence
index.
I don't see this as a problem, and I don't want to evolve the
language in this direction either; I don't believe that a stricter
interpretation of "Booleanness" makes the language any clearer.
Another consequence of the compatibility requirement is that the
expression "True and 6" has the value 6, and similarly the
expression "False or None" has the value None. The "and" and "or"
operators are usefully defined to return the first argument that
determines the outcome, and this won't change; in particular, they
don't force the outcome to be a bool. Of course, if both
arguments are bools, the outcome is always a bool. It can also
easily be coerced into being a bool by writing for example
"bool(x and y)".
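The behaviour described here is easy to verify in a Python with bool:

```python
# "and"/"or" return the argument that decided the outcome, unchanged
assert (True and 6) == 6          # not a bool
assert (False or None) is None    # not a bool either
assert (True and False) is False  # both bools -> result is a bool
assert bool(True and 6) is True   # explicit coercion when wanted
```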
Because the repr() or str() of a bool value is different from an
int value, some code (for example doctest-based unit tests, and
possibly database code that relies on things like "%s" % truth)
may fail. How much of a backwards compatibility problem this will
be, I don't know. If this turns out to be a real problem, we
could change the rules so that str() of a bool returns "0" or
"1", while repr() of a bool still returns "False" or "True".
Other languages (C99, C++, Java) name the constants "false" and
"true", in all lowercase. In Python, I prefer to stick with the
example set by the existing built-in constants, which all use
CapitalizedWords: None, Ellipsis, NotImplemented (as well as all
built-in exceptions). Python's built-in module uses all lowercase
for functions and types only. But I'm willing to consider the
lowercase alternatives if enough people think it looks better.
It has been suggested that, in order to satisfy user expectations,
for every x that is considered true in a Boolean context, the
expression x == True should be true, and likewise if x is
considered false, x == False should be true. This is of course
impossible; it would mean that e.g. 6 == True and 7 == True, from
which one could infer 6 == 7. Similarly, [] == False == None
would be true, and one could infer [] == None, which is not the
case. I'm not sure where this suggestion came from; it was made
several times during the first review period. For truth testing
of a value, one should use "if", e.g. "if x: print 'Yes'", not
comparison to a truth value; "if x == True: print 'Yes'" is not
only wrong, it is also strangely redundant.
An experimental, but fairly complete implementation in C has been
uploaded to the SourceForge patch manager:
This document has been placed in the public domain.
I recently came up with a fix for thread support in Python
under Cygwin. Jason Tishler and Norman Vine are looking it
over, but I'm pretty sure something similar should be used
for the Cygwin Python port.
This is easily done--simply add a few lines to thread.c
and create a new thread_cygwin.h (context diff and new file
But there is a larger issue:
The thread interface code in thread_pthread.h uses mutexes
and condition variables to emulate semaphores, which are
then used to provide Python "lock" and "sema" services.
I know this is a common practice since those two thread
synchronization primitives are defined in "pthread.h". But
it comes with quite a bit of overhead. (And in the case of
Cygwin causes race conditions, but that's another matter.)
POSIX does define semaphores, though. (In fact, it's in
the standard just before Mutexes and Condition Variables.)
According to POSIX, they are found in <semaphore.h> and
_POSIX_SEMAPHORES should be defined if they work as POSIX
If they are available, it seems like providing direct
semaphore services would be preferable to emulating them
using condition variables and mutexes.
thread_posix.h.diff-c is a context diff that can be used
to convert thread_pthread.h into a more general POSIX
version that will use semaphores if available.
thread_cygwin.h would no longer be needed then, since all
it does is use POSIX semaphores directly rather than
mutexes/condition vars. Changing the interface to POSIX
threads should bring a performance improvement to any
POSIX platform that supports semaphores directly.
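For illustration only, here is the mutex-plus-condition-variable
emulation pattern the post objects to, transcribed into Python; this
is a sketch of the idea, not CPython's thread_pthread.h:

```python
import threading

class EmulatedSemaphore:
    """A counting semaphore built from a mutex and a condition variable,
    the same construction thread_pthread.h uses when no native POSIX
    semaphores are available."""

    def __init__(self, value=1):
        self._cond = threading.Condition(threading.Lock())
        self._value = value

    def acquire(self):
        with self._cond:                 # lock the mutex
            while self._value == 0:      # predicate re-check on wakeup is
                self._cond.wait()        # part of the overhead mentioned
            self._value -= 1

    def release(self):
        with self._cond:
            self._value += 1
            self._cond.notify()          # wake one waiter

sem = EmulatedSemaphore(2)
sem.acquire()
sem.acquire()        # counter now 0; a third acquire would block
sem.release()
sem.acquire()        # succeeds again after the release
```

A native sem_wait/sem_post pair does this bookkeeping in one kernel
primitive, which is the saving the post argues for.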
Does this sound like a good idea? Should I create a
more thorough set of patch files and submit them?
(I haven't been accepted to the python-dev list yet, so
please CC me. Thanks.)
-O Gerald S. Williams, 22Y-103GA : mailto:email@example.com O-
-O AGERE SYSTEMS, 555 UNION BLVD : office:610-712-8661 O-
-O ALLENTOWN, PA, USA 18109-3286 : mobile:908-672-7592 O-
> Subject: Re: [Python-Dev] Re: PEP 279
> From: Guido van Rossum <guido(a)python.org>
> Date: Fri, 29 Mar 2002 01:41:05 -0500
[GvR] > I like the idea of having some way to iterate over a sequence and
[GvR] > its index set in parallel. It's fine for this to be a builtin.
[RDH] > I like itercount() or enumerate(). The reason is that this
[RDH] > function can work with any iterable including those that
[RDH] > do not have numeric indices (such as dictionaries).
[RDH] > Counting or enumeration is what is really happening.
[GvR] I don't like either of those; you don't seem to like iterindexed(), so
[GvR] we'll have to think more about a name.
[Just] I quite like the name enumerate. Hate itercount. I'm neutral on
[RDH] > 3. itercount(collection) # good enough
[GvR] I really hate the alternate count option, so let's agree to pick 3
[GvR] (with a different name).
Though my tastes are a little different, iterindexed() works just fine.
I'm agreed (agreeing with dictators is good for one's health :-) to
option three as you wrote it:
[GvR] def iterindexed(collection):
[GvR]     i = 0
[GvR]     it = iter(collection)
[GvR]     while 1:
[GvR]         yield (i, it.next())
[GvR]         i += 1
Thank you. I'll put it back in the PEP just like this and mark it
accepted.
> > > 2. Generator comprehensions
[GvR] If the only way to get you to stop asking for this is a -1 from me,
[GvR] I'll give it a -1.
Okay, I'll mark this one as rejected. The rationale for the rejection
will be that the implementation and maintenance complexities
exceed the added value. The added value would be minimal
because it's already easy to code the generator directly.
[RDH] > Several commenters wanted this one quite a bit. An excerpt: "This
[RDH] > rules. You rock."
[GvR] Yeah, if I left Python's design to Ping, it would become quite the
[GvR] clever hack. :-)
Poor Ping, getting a little public ribbing for foolishly supporting
my proposal <grin> when the comment actually came from Kragen Sitaker :)
> > > 3. Generator exception passing
[RDH] > I need help from others on py-dev who can articulate the need.
[RDH] > For me, it's as plain as day and I don't know what to say to convey
[RDH] > the message better than it is expressed in the PEP.
[GvR] Too bad. This one gets a big fat -1 until there's a good motivational
Okay, let's defer this one until the case for or against becomes stronger.
I'll move it to join the separate PEP for generator parameter passing.
Putting that one in a separate PEP was necessary because it wasn't
yet ready for pronouncement. I'll mark the two (exception passing
and parameter passing) as being proposed for 2.4 or later and note
that the case is not currently strong enough to warrant acceptance.
1. iterindexed(collection) --> accepted
2. gen comprehensions --> rejected
3. gen exception passing --> deferred, needs case building
4. gen parameter passing --> deferred, needs alternatives explored
Everyone, thank you for your time and thoughtful comments. We're done.