> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that this
> will cause.
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a
lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? maybe they will more naturally expect 2
> > as outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this can be easy to implement, but more confusing situations can
arise. What should this print? Unlike class def scopes, the situation
admits no canonical solution:
from foo import *
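For what it's worth, later Python versions resolved the exec ambiguity
debated above by making exec a function that cannot rebind an enclosing
function's locals; a minimal sketch of that modern behavior (variable
names mirror the fragment above):

```python
# Sketch of how later Python versions resolved the ambiguity debated
# above: exec() writes into an explicit namespace and cannot rebind
# the enclosing function's locals, so nested scopes stay predictable.
y = 3

def f():
    ns = {}
    exec("y = 2", ns)   # lands in ns, not in f's local scope

    def g():
        return y        # resolves through nested scopes to global y

    return g(), ns["y"]
```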
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse then the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I don't say anything about pulling nested scopes (I don't think my
opinion can change things in this respect), but I should insist that
without explicit syntax, IMO, raising the bar has too high an
implementation cost (both performance and complexity) or creates
confusing rules.
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issued warnings today and
implemented nested scopes issuing errors tomorrow. But this is simply
a statement about principles and the impressions raised.
IMO import * in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
I was watching file modification times on my Windows box (strange
hobby, I know :-), and I noticed that after a fresh install of Python,
the .pyc files seem to be written when the first code that imports the
corresponding module runs, rather than all of the .pyc files being
compiled at once by the installer. Wasn't there code in the installer
that precompiles all modules? I know the Unix install does this, and
I vaguely remember that the Windows installer did this too -- or was
it only the Win32all installer??? If there's code to do that in the
Windows installer now, it seems it's not working. If there isn't such
code, perhaps there should be?
--Guido van Rossum (home page: http://www.python.org/~guido/)
this is something we discussed with Guido, and also Moshe Zadka at Europython.
Guido thought it seems reasonable enough, if the details can be nailed.
I have written it down so the idea doesn't get lost; for the moment it
is more a matter of whether it can get a number, and then it can go
dormant for a while.
- * -
Title: Resource-Release Support for Generators
Author: Samuele Pedroni <pedronis(a)python.org>
Type: Standards Track
Generators allow for natural coding and abstraction of traversal
over data. Currently if external resources needing proper timely
release are involved, generators are unfortunately not adequate.
The typical idiom for timely release is not supported: a yield
statement is not allowed in the try clause of a try-finally
statement inside a generator, and execution of the finally clause
can be neither guaranteed nor enforced.
This PEP proposes that generators support a close method and
destruction semantics, such that the restriction can be lifted,
expanding the applicability of generators.
Python generators allow for natural coding of many data traversal
scenarios. Their instantiation produces iterators, i.e. first-class
objects abstracting traversal (with all the advantages of first-
classness). In this respect they match in power and offer some
advantages over the approach using iterator methods taking a
(smalltalkish) block. On the other hand, given current limitations
(no yield allowed in a try clause of a try-finally inside a
generator) the latter approach seems better suited to encapsulating
not only traversal but also exception handling and proper resource
acquisition and release.
Let's consider an example (for simplicity, files in read-mode are
used):

    def all_lines(index_path):
        for path in file(index_path, "r"):
            for line in file(path.strip(), "r"):
                yield line
this is short and to the point, but the try-finally for timely
closing of the files cannot be added. (While instead of a path,
a file, whose closing then would be responsibility of the caller,
could be passed in as argument, the same is not applicable for the
files opened depending on the contents of the index).
If we want timely release, we have to sacrifice the simplicity and
directness of the generator-only approach: (e.g.)

    class AllLines:

        def __init__(self, index_path):
            self.index_path = index_path
            self.index = None
            self.document = None

        def __iter__(self):
            self.index = file(self.index_path, "r")
            for path in self.index:
                self.document = file(path.strip(), "r")
                for line in self.document:
                    yield line
                self.document.close()
                self.document = None

        def close(self):
            if self.index:
                self.index.close()
            if self.document:
                self.document.close()

to be used as:

    all_lines = AllLines("index.txt")
    try:
        for line in all_lines:
            ...
    finally:
        all_lines.close()
The more convoluted solution implementing timely release seems
to offer a precious hint. What we have done is encapsulate our
traversal in an object (iterator) with a close method.
This PEP proposes that generators should grow such a close method
with such semantics that the example could be rewritten as:
    def all_lines(index_path):
        index = file(index_path, "r")
        try:
            for path in index:
                document = file(path.strip(), "r")
                try:
                    for line in document:
                        yield line
                finally:
                    document.close()
        finally:
            index.close()

to be used as:

    all = all_lines("index.txt")
    try:
        for line in all:
            ...
    finally:
        all.close()
PEP 255  disallows yield inside a try clause of a try-finally
statement, because the execution of the finally clause cannot be
guaranteed as required by try-finally semantics. The semantics of
the proposed close method should be such that, while the finally
clause execution still cannot be guaranteed, it can be enforced
when required. The semantics of generator destruction on the
other hand should be extended in order to implement a best-effort
policy for the general case. This seems a reasonable
compromise, the resulting global behavior being similar to that of
files and closing.
A close() method should be implemented for generator objects.
1) If a generator is already terminated, close should be a no-op.
Otherwise: (two alternative solutions)
(Return Semantics) The generator should be resumed, and generator
execution should continue as if the instruction at the re-entry point
were a return. Consequently, finally clauses surrounding the re-entry
point would be executed, in the case of a then-allowed yield inside a
try-finally.
Issues: is it important to be able to distinguish forced
termination by close, normal termination, and exception propagation
from the generator or generator-called code? In the normal case it
seems not: finally clauses should be there to work the same in
all these cases. Still, this semantics could make such a distinction
possible.
Except clauses, as with a normal return, are not executed; such
clauses in legacy generators expect to be executed for exceptions
raised by the generator or by code called from it. Not executing
them in the close case seems correct.
OR (Exception Semantics) The generator should be resumed and
execution should continue as if a special-purpose exception
(e.g. CloseGenerator) had been raised at the re-entry point. The
close implementation should consume this exception and not propagate
it further.
Issues: should StopIteration be reused for this purpose? Probably
not. We would like close to be a harmless operation for legacy
generators, which could contain code catching StopIteration to deal
with other generators/iterators.
In general, with exception semantics, it is unclear what to do
if the generator does not terminate or we do not receive the
special exception propagated back. Other, different exceptions
should probably be propagated, but consider this possible legacy
generator code:

    try:
        ...
        yield ...
        ...
    except: # or except Exception:, etc
        raise Exception(...)
If close is invoked with the generator suspended after the
yield, the except clause would catch our special-purpose
exception, so we would get a different exception propagated
back, which in this case ought to be consumed and ignored but in
general should be propagated; separating these scenarios seems
hard.
The exception approach has the advantage of letting the generator
distinguish between termination cases and of giving it more control.
On the other hand, clear-cut semantics seem harder to define.
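For comparison, the exception semantics sketched above is essentially
what Python eventually adopted (PEP 342, Python 2.5): close() raises
GeneratorExit at the suspension point, so finally clauses run. A
minimal illustration in today's Python (names are illustrative):

```python
# Illustration of the "exception semantics" variant as later adopted
# in Python 2.5 (PEP 342): close() raises GeneratorExit at the yield
# where the generator is suspended, so the finally clause runs.
def lines(events):
    events.append("open")          # stands in for acquiring a resource
    try:
        while True:
            yield "line"
    finally:
        events.append("close")     # guaranteed a chance to run

events = []
g = lines(events)
next(g)       # suspend inside the try
g.close()     # GeneratorExit raised at the yield; finally executes
```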
2) Generator destruction should invoke close method behavior.
If this proposal is accepted, it should become common practice
to document whether a generator acquires resources, so that its
close method ought to be called. If a generator is no longer
used, calling close should be harmless.
On the other hand, in the typical scenario the code that
instantiated the generator should call close if required by it,
generic code dealing with iterators/generators instantiated
elsewhere should typically not be littered with close calls.
The rare case of code that has acquired ownership of, and needs to
properly deal with, all of iterators, generators, and generators
acquiring resources that need timely release, is easily solved:

    if hasattr(iterator, 'close'):
        iterator.close()
Definitive semantics ought to be chosen, and implementation issues
should be explored.
The idea that the yield placement limitation should be removed
and that generator destruction should trigger execution of finally
clauses has been proposed more than once. Alone it cannot
guarantee that timely release of resources acquired by a generator
can be enforced.
PEP 288  proposes a more general solution, allowing custom
exception passing to generators.
 PEP 255 Simple Generators
 PEP 288 Generators Attributes and Exceptions
This document has been placed in the public domain.
So, in an attempt to garner comments (now that we have 2.3 off the
chopping block) I'm reposting my PEP proposal (with minor updates).
Comments would be appreciated, of course (nudges Barry slightly after
he got me to write this on my only free Sunday in months ;)
Title: Be Honest about LC_NUMERIC (to the C library)
Version: $Revision: 1.9 $
Last-Modified: $Date: 2002/08/26 16:29:31 $
Author: Christian R. Reis <kiko at async.com.br>
Type: Standards Track
Content-Type: text/plain
Support in Python for the LC_NUMERIC locale category is currently
implemented only in Python-space, which causes inconsistent behavior
and thread-safety issues for applications that use extension modules
and libraries implemented in C. This document proposes a plan for
removing this inconsistency by providing and using substitute
locale-agnostic functions as necessary.
Python currently provides generic localization services through the
locale module, which among other things allows localizing the
display and conversion process of numeric types. Locale categories,
such as LC_TIME and LC_COLLATE, allow configuring precisely what
aspects of the application are to be localized.
The LC_NUMERIC category specifies formatting for non-monetary
numeric information, such as the decimal separator in float and
fixed-precision numbers. Localization of the LC_NUMERIC category is
currently implemented only in Python-space; the C libraries are
unaware of the application's LC_NUMERIC setting. This is done to
avoid changing the behavior of certain low-level functions that are
used by the Python parser and related code.
However, this presents a problem for extension modules that wrap C
libraries; applications that use these extension modules will
inconsistently display and convert numeric values.
James Henstridge, the author of PyGTK, has additionally pointed
out that the setlocale() function also presents thread-safety
issues, since a thread may call the C library setlocale() outside of
the GIL, and cause Python to function incorrectly.
The inconsistency between Python and C library localization for
LC_NUMERIC is a problem for any localized application using C
extensions. The exact nature of the problem will vary depending on
the application, but it will most likely occur when parsing or
formatting a numeric value.
The initial problem that motivated this PEP is related to the
GtkSpinButton  widget in the GTK+ UI toolkit, wrapped by PyGTK.
The widget can be set to numeric mode, and when this occurs,
characters typed into it are evaluated as a number.
Because LC_NUMERIC is not set in libc, float values are displayed
incorrectly, and it is impossible to enter values using the
localized decimal separator (for instance, `,' for the Brazilian
locale pt_BR). This small example demonstrates reduced usability
for localized applications using this toolkit when coded in Python.
Martin v. Löwis commented on the initial constraints for an
acceptable solution to the problem on python-dev:
- LC_NUMERIC can be set at the C library level without breaking
  the parser.
- float() and str() stay locale-unaware.
The following seems to be the current practice:
- locale-aware str() and float() [XXX: atof(), currently?]
stay in the locale module.
An analysis of the Python source suggests that the following
functions currently depend on LC_NUMERIC being set to the C locale:
[XXX: still need to check if any other occurrences exist]
The proposed approach is to implement LC_NUMERIC-agnostic functions
for converting from (strtod()/atof()) and to (snprintf()) float
formats, using these functions where the formatting should not vary
according to the user-specified locale.
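The behavior of such a locale-agnostic conversion can be sketched in
Python (the name g_ascii_strtod echoes the glib function referenced
later in this PEP; the regular expression is illustrative, not the
proposed C implementation):

```python
import re

# A locale-agnostic strtod sketch: '.' is always the decimal
# separator, and parsing stops at the first unrecognized character,
# mimicking C strtod under the "C" locale.
_NUMBER = re.compile(r'[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?')

def g_ascii_strtod(text):
    m = _NUMBER.match(text)
    return float(m.group(0)) if m else 0.0
```

Under a German LC_NUMERIC, C's strtod stops at the '.' in "0.001";
the sketch above never does, whatever the process locale.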
This change should also solve the aforementioned thread-safety
problems.
Potential Code Contributions
This problem was initially reported as a problem in the GTK+
libraries ; since then it has been correctly diagnosed as an
inconsistency in Python's implementation. However, in a fortunate
coincidence, the glib library implements a number of
LC_NUMERIC-agnostic functions (for an example, see ) for reasons
similar to those presented in this paper. In the same GTK+ problem
report, Havoc Pennington has suggested that the glib authors would
be willing to contribute this code to the PSF, which would simplify
implementation of this PEP considerably.
[I'm checking if Alex Larsson is willing to sign the PSF
contributor agreement  to make sure the code is safe to
integrate; XXX: what would be necessary to sign here?]
There may be cross-platform issues with the provided locale-agnostic
functions. This needs to be tested further.
Martin has pointed out potential copyright problems with the
contributed code. I believe we will have no problems in this area as
members of the GTK+ and glib teams have said they are fine with
relicensing the code.
An implementation is being developed by Gustavo Carneiro
<gjc at inescporto.pt>. It is currently attached to Sourceforge.net
bug 744665 
[XXX: The SF.net tracker is horrible 8(]
 PEP 1, PEP Purpose and Guidelines, Warsaw, Hylton
 Python locale documentation for embedding,
 PyGTK homepage, http://www.daa.com.au/~james/pygtk/
 GtkSpinButton screenshot (demonstrating problem),
 GNOME bug report, http://bugzilla.gnome.org/show_bug.cgi?id=114132
 Code submission of g_ascii_strtod and g_ascii_dtostr (later
renamed g_ascii_formatd) by Alex Larsson,
 PSF Contributor Agreement,
 Python bug report, http://www.python.org/sf/774665
This document has been placed in the public domain.
Christian Reis, Senior Engineer, Async Open Source, Brazil.
http://async.com.br/~kiko/ | [+55 16] 261 2331 | NMFL
I've been working on a set of Python tools that use readline, but want to
keep history separate between different interaction modes. Unfortunately,
this really needs to be able to access readline's clear_history(), as
read_history_file() leaves existing history intact.
I'd be happy to whip up a patch to add this (as readline.clear_history()),
but I was wondering if perhaps the reason it's not currently exported by
the readline module is a compatibility issue for older readline
implementations that are officially supported. Thanks.
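A sketch of how the proposed readline.clear_history() would be used
to keep per-mode histories separate (later versions of the module did
grow clear_history(); the helper name here is mine):

```python
import readline

# Switch to another interaction mode's history: drop the current
# in-memory history, then load the other mode's file if it exists.
def switch_history(histfile):
    readline.clear_history()
    try:
        readline.read_history_file(histfile)
    except FileNotFoundError:
        pass    # first use of this mode; start with empty history
```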
Over in the spambayes project, we get reports of database corruption from
people using Sleepycat bsddb. The most recent comment on a bug report is
Date: 2003-08-18 04:09
i did some experimenting with various bsddb3 versions:
- with db-3.3.11 the python segfaults and core is dumped
- with db-3.2.9 the database is corrupted
- with db-4.1.25 everything works as it should (no db corruption)
spambayes makes elementary use of a Berkeley DB, just accessing it via
the dict interface -- inserts, deletes and lookups, but no cursors and
no transactions.
can we "encourage" Unix weenies to use db-4.1.25? (On Windows, db-4.1.25 is
shipped with the installer.) If the problems with older versions are so
severe, maybe the Python wrapper should do a version check and refuse to run
if it finds an old version?
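A version check of the sort suggested could be as simple as the
following sketch (hypothetical; it assumes the wrapper can obtain the
linked library version as a tuple):

```python
# Hypothetical guard for the bsddb wrapper: refuse to run against
# Berkeley DB releases known to corrupt databases or segfault.
def check_db_version(version, minimum=(4, 1, 25)):
    if version < minimum:
        raise ImportError(
            "Berkeley DB %d.%d.%d is known to misbehave; "
            "please install %d.%d.%d or later"
            % (version + minimum))
    return True
```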
> So when the marshalled representation of 0.001 is loaded under
> "german" LC_NUMERIC here, we get back exactly 0.0. I'm not
> sure why.
When I call "marshal.dumps(0.1)" from AsyncDialog (or anywhere in the
Outlook code) I get "f\x030.0", which fits with what you have.
> So the obvious <wink> answers are:
(Glad you posted this - I was wading through the progress of marshalling
(PyOS_snprintf etc) and getting rapidly lost).
> 1. When LC_NUMERIC is "german", MS C's atof() stops at the first
> period it sees.
This is the case:

    f = atof("0.1");

Gives me with gcc version 3.2 20020927 (prerelease):
Gives me with Microsoft C++ Builder (I don't have Visual C++ handy, but
I suppose it would be the same):
The help file for Builder does say that this is the correct behaviour -
it will stop when it finds an unrecognised character - here '.' is
unrecognised (because we are in German), so it stops.
Does this then mean that this is a Python bug? Or, because Python
tells us not to change the C locale and we (Outlook) are, is it our
problem?
Presumably what we'll have to do for a solution is just what Mark is
doing now - find the correct place to put a call that (re)sets the C
locale to English.
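That workaround can be sketched as a wrapper that saves and restores
LC_NUMERIC around the locale-sensitive call (illustrative; it has the
same process-wide thread-safety caveats discussed in this thread):

```python
import locale

# Run func with LC_NUMERIC forced back to "C", so strtod/atof-style
# conversions see '.' as the decimal separator, then restore the
# caller's locale. Not thread-safe: setlocale is process-wide.
def with_c_numeric(func, *args):
    saved = locale.setlocale(locale.LC_NUMERIC)
    locale.setlocale(locale.LC_NUMERIC, "C")
    try:
        return func(*args)
    finally:
        locale.setlocale(locale.LC_NUMERIC, saved)
```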
I am building Python 2.3 in an heterogenous environment where I want
all of the platform-independent code to be shared and the platform
specific code kept segregated. In particular I have a structure like
for the platform independent files, and
for platform-specific directories and files (bin/, include/, and lib/).
Getting this to work is easily accomplished with --prefix and
--exec-prefix when running configure on each platform:
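The per-platform invocation would then look something like this (a
sketch; the platform directory name is a placeholder, and only the
shared prefix comes from the layout described above):

```shell
# Hypothetical configure invocation for one platform; $PLATFORM
# stands for whatever per-platform directory name is in use.
./configure --prefix=/foo/bar/python \
            --exec-prefix=/foo/bar/python/$PLATFORM
make && make install
```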
These builds are put under source control and are read-only once they
are checked in. Hence the problem:
If /foo/bar/python/include and its contents are read-only, then
subsequent installs fail to put pyconfig.h (a platform-dependent file)
into the platform-specific include directory because the
$(INSTALL_DATA) in the 'inclinstall' target fails and the final line
in that target (to install pyconfig.h) never runs.
Clear as mud?
The main thing I want to do is avoid reinstalling the shared files
after the first time, since I'm building this on many different
platforms.
pyconfig.h from the bottom of the inclinstall target to before the
loop that installs the shared headers. Then that step will succeed and
it doesn't matter whether or not the other INSTALL_DATA calls
fail. Make sense?
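The proposed reordering, sketched against the shape of the inclinstall
target (abbreviated and hypothetical, not a literal patch):

```make
inclinstall:
	# Install the platform-specific header first, so that a
	# read-only shared include directory cannot abort the target
	# before pyconfig.h is copied.
	$(INSTALL_DATA) pyconfig.h $(DESTDIR)$(CONFINCLUDEDIR)/pyconfig.h
	-for i in $(srcdir)/Include/*.h; \
	do \
		$(INSTALL_DATA) $$i $(DESTDIR)$(INCLUDEDIR); \
	done
```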
This calls for another install target, which just installs the items that
end up in the exec-prefix directories. Then on each platform I can
just install the platform-specific code.
Thanks in advance,
Tom Emerson Basis Technology Corp.
Software Architect http://www.basistech.com
"Beware the lollipop of mediocrity: lick it once and you suck forever"
Compiling python-2.3 on various platforms, I needed the attached patch.
the first two patches to Makefile.pre.in and Modules/makesetup are needed
if source dir and compile dir are not the same.
The Modules/resource.c patch fixes a problem for Solaris 2.5.1, which
doesn't define _SC_PAGE_SIZE but _SC_PAGESIZE in unistd.h.
Other than those small patches, python-2.3 compiled fine
on the following systems using either gcc-2.95.3 or gcc-3.3:
"I hope to die ___ _____
before I *have* to use Microsoft Word.", 0--,| /OOOOOOO\
Donald E. Knuth, 02-Oct-2001 in Tuebingen. <_/ / /OOOOOOOOOOO\
Harald Koenig \/\/\/\/\/\/\/\/\/
science+computing ag // / \\ \
koenig(a)science-computing.de ^^^^^ ^^^^^