I have uploaded a port of Python 2.1 to the incoming directories of
the Hobbes (http://hobbes.nmsu.edu) and LEO (http://archiv.leo.org/)
OS/2 software archives.
This port supports the case-sensitive module import semantics
introduced in Python 2.1 for case-insensitive but case-preserving
file systems (such as those used by the MS-Windows family of OSes, as
well as OS/2).
The distributed archives are:
python-2.1-os2emx-bin-010617.zip (binary installation package, 2.9MB)
python-2.1-os2emx-src-010617.zip (source patches and makefiles, 108kB)
More info available at http://www.pcug.org.au/~andymac/software.html;
the above archives are also available there if you can't find them
at Hobbes or LEO.
OS/2 users enjoy!
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andrew.macintyre(a)aba.gov.au  (work) | Snail: PO Box 370
        andymac(a)bullseye.apana.org.au (play) |        Belconnen ACT 2616
        andymac(a)pcug.org.au          (play2) |        Australia
With a sigh of relief I announce Python 2.0.1c1 -- the first Python
release in a long time whose license is fully compatible with the GPL:
I thank Moshe Zadka who did almost all of the work to make this a
useful bugfix release, and then went incommunicado for several weeks.
(I hope you're OK, Moshe!)
Note that this is a release candidate. We don't expect any problems,
but we're being careful nevertheless. We're planning to do the final
release of 2.0.1 a week from now; expect it to be identical to the
release candidate except for some dotted i's and crossed t's.
Python 2.0 users should be able to replace their 2.0 installation with
the 2.0.1 release without any ill effects; apart from the license
change, we've only fixed bugs that didn't require us to make feature
changes. The SRE package (regular expression matching, used by the
"re" module) was brought in line with the version distributed with
Python 2.1; this is stable feature-wise but much improved bug-wise.
For the full scoop, see the release notes on SourceForge:
Python 2.1 users can ignore this release, unless they have an urgent
need for a GPL-compatible Python version and are willing to downgrade.
Rest assured that we're planning a bugfix release there too: I expect
that Python 2.1.1 will be released within a month, with the same
GPL-compatible license. (Right, Thomas?)
We don't intend to build RPMs for 2.0.1. If someone else is
interested in doing so, we can link to them.
--Guido van Rossum (home page: http://www.python.org/~guido/)
You can view an HTML version of PEP 255 here:
Discussion should take place primarily on the Python Iterators list:
If replying directly to this message, please remove (at least) Python-Dev
from the recipient list.
Title: Simple Generators
Version: $Revision: 1.3 $
Author: nas(a)python.ca (Neil Schemenauer),
tim.one(a)home.com (Tim Peters),
magnus(a)hetland.org (Magnus Lie Hetland)
Type: Standards Track
This PEP introduces the concept of generators to Python, as well
as a new statement used in conjunction with them, the "yield"
statement.
When a producer function has a hard enough job that it requires
maintaining state between values produced, most programming languages
offer no pleasant and efficient solution beyond adding a callback
function to the producer's argument list, to be called with each value
produced.
For example, tokenize.py in the standard library takes this approach:
the caller must pass a "tokeneater" function to tokenize(), called
whenever tokenize() finds the next token. This allows tokenize to be
coded in a natural way, but programs calling tokenize are typically
convoluted by the need to remember between callbacks which token(s)
were seen last. The tokeneater function in tabnanny.py is a good
example of that, maintaining a state machine in global variables, to
remember across callbacks what it has already seen and what it hopes to
see next. This was difficult to get working correctly, and is still
difficult for people to understand. Unfortunately, that's typical of
callback-based code.
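The callback style the PEP criticizes can be illustrated with a toy sketch (not the real tokenize.py API; `produce_tokens`, `eater` and the whitespace "tokenizer" are invented for illustration). The producer drives the loop and hands each value to a caller-supplied function, so any cross-value state must live outside the callback:

```python
# Hypothetical sketch of the callback style: the producer owns the
# loop, and the caller's state must live in external/mutable scope.

def produce_tokens(text, tokeneater):
    # A toy "tokenizer": split on whitespace and report each word
    # together with its position to the caller-supplied callback.
    for pos, word in enumerate(text.split()):
        tokeneater(pos, word)

# The caller must keep state outside the callback (here a list) to
# remember what came before -- the inconvenience the PEP describes.
seen = []
def eater(pos, word):
    seen.append((pos, word))

produce_tokens("a b c", eater)
# seen is now [(0, 'a'), (1, 'b'), (2, 'c')]
```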
An alternative would have been for tokenize to produce an entire parse
of the Python program at once, in a large list. Then tokenize clients
could be written in a natural way, using local variables and local
control flow (such as loops and nested if statements) to keep track of
their state. But this isn't practical: programs can be very large, so
no a priori bound can be placed on the memory needed to materialize the
whole parse; and some tokenize clients only want to see whether
something specific appears early in the program (e.g., a future
statement, or, as is done in IDLE, just the first indented statement),
and then parsing the whole program first is a severe waste of time.
Another alternative would be to make tokenize an iterator,
delivering the next token whenever its .next() method is invoked. This
is pleasant for the caller in the same way a large list of results
would be, but without the memory and "what if I want to get out early?"
drawbacks. However, this shifts the burden on tokenize to remember
*its* state between .next() invocations, and the reader need only
glance at tokenize.tokenize_loop() to realize what a horrid chore that
would be. Or picture a recursive algorithm for producing the nodes of
a general tree structure: to cast that into an iterator framework
requires removing the recursion manually and maintaining the state of
the traversal by hand.
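The burden of the hand-written iterator alternative can be sketched with a deliberately simple example (a hypothetical `CountdownIterator`, not from the standard library). Even here, all loop state must be captured in instance attributes so each call can resume where the last one stopped; for tokenize or a tree traversal, that bookkeeping becomes the "horrid chore" described above:

```python
# A minimal hand-rolled iterator: the producer's loop state must be
# stored explicitly in instance attributes between calls.

class CountdownIterator:
    def __init__(self, n):
        # The only state here is a counter; a real producer would
        # need far more bookkeeping (position, lookahead, stack...).
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):  # spelled .next() in the Python of this era
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

result = list(CountdownIterator(3))
# result is [3, 2, 1]
```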
A fourth option is to run the producer and consumer in separate
threads. This allows both to maintain their states in natural ways,
and so is pleasant for both. Indeed, Demo/threads/Generator.py in the
Python source distribution provides a usable synchronized-communication
class for doing that in a general way. This doesn't work on platforms
without threads, though, and is very slow on platforms that do
(compared to what is achievable without threads).
A final option is to use the Stackless variant implementation of
Python instead, which supports lightweight coroutines. This has much
the same programmatic benefits as the thread option, but is much more
efficient. However, Stackless is a controversial rethinking of the
Python core, and it may not be possible for Jython to implement the
same semantics. This PEP isn't the place to debate that, so suffice it
to say here that generators provide a useful subset of Stackless
functionality in a way that fits easily into the current CPython
implementation, and is believed to be relatively straightforward for
other Python implementations.
That exhausts the current alternatives. Some other high-level
languages provide pleasant solutions, notably iterators in Sather,
which were inspired by iterators in CLU; and generators in Icon, a
novel language where every expression "is a generator". There are
differences among these, but the basic idea is the same: provide a
kind of function that can return an intermediate result ("the next
value") to its caller, while maintaining the function's local state so
that the function can be resumed again right where it left off. A
very simple example:
    def fib():
        a, b = 0, 1
        while 1:
            yield b
            a, b = b, a+b
When fib() is first invoked, it sets a to 0 and b to 1, then yields b
back to its caller. The caller sees 1. When fib is resumed, from its
point of view the yield statement is really the same as, say, a print
statement: fib continues after the yield with all local state intact.
a and b then become 1 and 1, and fib loops back to the yield, yielding
1 to its invoker. And so on. From fib's point of view it's just
delivering a sequence of results, as if via callback. But from its
caller's point of view, the fib invocation is an iterable object that
can be resumed at will. As in the thread approach, this allows both
sides to be coded in the most natural ways; but unlike the thread
approach, this can be done efficiently and on all platforms. Indeed,
resuming a generator should be no more expensive than a function call.
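The caller's side of this can be sketched by driving fib by hand (repeating the definition for completeness, and using the modern builtin next() spelling rather than the .next() method of this era):

```python
# The fib generator, resumed explicitly by the caller: each next()
# runs the body up to the following yield, with local state intact.

def fib():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

it = fib()
first_five = [next(it) for _ in range(5)]
# first_five is [1, 1, 2, 3, 5]
```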
The same kind of approach applies to many producer/consumer functions.
For example, tokenize.py could yield the next token instead of invoking
a callback function with it as argument, and tokenize clients could
iterate over the tokens in a natural way: a Python generator is a kind
of Python iterator, but of an especially powerful kind.
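The same toy word "tokenizer" idea sketched as a generator (hypothetical names, not the real tokenize.py interface) shows how the roles invert: the caller now keeps its state in ordinary local variables and simply iterates:

```python
# Hypothetical generator-style producer: the caller pulls values and
# keeps any cross-token state in plain local variables.

def generate_tokens(text):
    for pos, word in enumerate(text.split()):
        yield (pos, word)

tokens = list(generate_tokens("a b c"))
# tokens is [(0, 'a'), (1, 'b'), (2, 'c')]
```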
A new statement is introduced:
yield_stmt: "yield" expression_list
"yield" is a new keyword, so a future statement is needed to phase
this in. [XXX spell this out]
The yield statement may only be used inside functions. A function that
contains a yield statement is called a generator function.
When a generator function is called, the actual arguments are bound to
function-local formal argument names in the usual way, but no code in
the body of the function is executed. Instead a generator-iterator
object is returned; this conforms to the iterator protocol, so in
particular can be used in for-loops in a natural way. Note that when
the intent is clear from context, the unqualified name "generator" may
be used to refer either to a generator-function or a generator-
iterator.
Each time the .next() method of a generator-iterator is invoked, the
code in the body of the generator-function is executed until a yield
or return statement (see below) is encountered, or until the end of
the body is reached.
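A small sketch of this behavior (using a hypothetical gen() and the modern builtin next() spelling): calling the generator function runs none of the body; only the first next() advances execution to the first yield.

```python
# Calling a generator function only builds the generator-iterator;
# the body does not run until next() is invoked.

calls = []

def gen():
    calls.append("started")
    yield 1
    calls.append("resumed")
    yield 2

g = gen()              # nothing in the body has run yet
before = list(calls)   # []
first = next(g)        # body runs up to the first yield
after = list(calls)    # ["started"]
```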
If a yield statement is encountered, the state of the function is
frozen, and the value of expression_list is returned to .next()'s
caller. By "frozen" we mean that all local state is retained,
including the current bindings of local variables, the instruction
pointer, and the internal evaluation stack: enough information is
saved so that the next time .next() is invoked, the function can
proceed exactly as if the yield statement were just another external
call.
A generator function can also contain return statements of the form:

    "return"
Note that an expression_list is not allowed on return statements
in the body of a generator (although, of course, they may appear in
the bodies of non-generator functions nested within the generator).
When a return statement is encountered, nothing is returned, but a
StopIteration exception is raised, signalling that the iterator is
exhausted. The same is true if control flows off the end of the
function. Note that return means "I'm done, and have nothing
interesting to return", for both generator functions and non-generator
functions.
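A sketch of the exhaustion rule (hypothetical `limited` generator, modern next() spelling): a bare return, or control flowing off the end of the body, raises StopIteration in the caller rather than returning a value.

```python
# A bare return inside a generator raises StopIteration in the
# caller, signalling "I'm done, and have nothing interesting to
# return".

def limited(n):
    for i in range(n):
        if i >= 2:
            return      # no expression_list allowed here (in 2.1-era generators)
        yield i

values = list(limited(10))   # list() absorbs the StopIteration
# values is [0, 1]

exhausted = False
try:
    g = limited(10)
    next(g); next(g); next(g)   # the third call hits the return
except StopIteration:
    exhausted = True
```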
Example

    # A binary tree class.
    class Tree:

        def __init__(self, label, left=None, right=None):
            self.label = label
            self.left = left
            self.right = right

        def __repr__(self, level=0, indent="    "):
            s = level*indent + `self.label`
            if self.left:
                s = s + "\n" + self.left.__repr__(level+1, indent)
            if self.right:
                s = s + "\n" + self.right.__repr__(level+1, indent)
            return s

        def __iter__(self):
            return inorder(self)

    # Create a Tree from a list.
    def tree(list):
        n = len(list)
        if n == 0:
            return []
        i = n / 2
        return Tree(list[i], tree(list[:i]), tree(list[i+1:]))

    # A recursive generator that generates Tree labels in in-order.
    def inorder(t):
        if t:
            for x in inorder(t.left):
                yield x
            yield t.label
            for x in inorder(t.right):
                yield x

    # Show it off: create a tree.
    t = tree("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    # Print the nodes of the tree in in-order.
    for x in t:
        print x,
    print

    # A non-recursive generator.
    def inorder(node):
        stack = []
        while node:
            while node.left:
                stack.append(node)
                node = node.left
            yield node.label
            while not node.right:
                try:
                    node = stack.pop()
                except IndexError:
                    return
                yield node.label
            node = node.right

    # Exercise the non-recursive generator.
    for x in t:
        print x,
    print
Q & A
Q. Why a new keyword? Why not a builtin function instead?
A. Control flow is much better expressed via keyword in Python, and
yield is a control construct. It's also believed that efficient
implementation in Jython requires that the compiler be able to
determine potential suspension points at compile-time, and a new
keyword makes that easy.
A preliminary patch against the CVS Python source is available.
Footnotes and References
 PEP 234, http://python.sf.net/peps/pep-0234.html
 PEP 219, http://python.sf.net/peps/pep-0219.html
 "Iteration Abstraction in Sather"
 Murer, Omohundro, Stoutamire and Szyperski
 The concept of iterators is described in PEP 234
This document has been placed in the public domain.
Announce: PyClimate 1.1, 13 June 2001
We present version 1.1 of our package PyClimate: http://www.pyclimate.org
It is dedicated to the analysis of atmospheric and oceanic data sets
using Python. It makes heavy use of NumPy and some of our own
C extensions to speed up the code.
It is distributed according to the GNU General Public License Version 2
(it is free).
The updates from version 1.0 to 1.1 comprise:
a) Some minor bugs corrected.
b) Addition of functions and classes to perform Canonical
Correlation Analysis (CCA) in the EOF space.
c) Additional functions and scaling conventions for EOFs. The interface
for EOFs and SVD is much easier than in the previous version. The
routines accept arbitrarily shaped fields as input and reshape their
outputs accordingly.
d) Updated algorithms (much faster) for the random selection of
subsamples in Monte Carlo tests on eigenvectors.
e) New algorithms for differential operators on the sphere. The current
version handles periodic boundary conditions in longitude and
arbitrarily shaped scalar or vector fields organized according to the
COARDS conventions, arranged as (T,Z,lat,lon).
f) Exception strings have been converted to classes.
g) The reference file has grown a lot. It is not currently distributed with
the package, but can still be accessed from the Web server.
We have done our best to make all these changes backwards compatible.
In some cases (exception strings -> classes) they may break some code,
but we were forced to make those changes by Python's evolving design.
We would appreciate feedback by users.
Jon Saenz, jsaenz(a)wm.lc.ehu.es
Jesus Fernandez, chus(a)wm.lc.ehu.es
Juan Zubillaga, juan(a)zubi.net
<P><A HREF="http://www.pyclimate.org">PyClimate 1.1</A> -
Analysis of Atmospheric and Oceanic Data Sets using Python.
I am pleased to announce the release of pgnotify 0.1.
pgnotify is a PostgreSQL client-side asynchronous notification handler for
Python.
Typically, asynchronous notification is used to communicate the message "I
changed this table, take a look at it to see what's new" from one PostgreSQL
client to other interested PostgreSQL clients.
pgnotify is developed on this platform:
- FreeBSD 4.0
- Python 2.1
- PostgreSQL 7.1.1
- PyGreSQL 3.2
At present, pgnotify works with PyGreSQL only. It should work with PoPy and
psycopg when those modules provide Pythonic interfaces to additional
necessary PostgreSQL client-side functions, as described in the README.
Get pgnotify here:
As usual, feedback is welcome.
Ng Pheng Siong <ngps(a)post1.com> * http://www.post1.com/home/ngps
Quidquid latine dictum sit, altum videtur.
I've made a new packaging of the ZODB from Digital Creations. This
release contains the code from Zope 2.3.2 and the most recent ZEO
release, ZEO 1.0beta3.
The release is available from:
The full list of changes in this release runs as follows:
* Removed top-level setup.py, and changed the README accordingly.
* Resynced with ZEO 1.0b3
* Resynced with BerkeleyStorage 1.0b2
* Resynced with Zope 2.3.2.
* Removed SearchIndex and Catalog packages; they're tangential to
the ZODB itself.
Version 0.3 of the Quixote Web development toolkit is now available.
Quixote uses a Python package to store all the code and HTML for a
Web-based application. PTL, the Python Template Language, is used to mix
HTML with Python code; the basic syntax looks just like Python, but
expressions are converted to strings and appended to the output.
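The "expressions are converted to strings and appended to the output" idea can be sketched in plain Python (this is not actual PTL syntax; `greeting_template` is invented, and PTL performs these appends implicitly when compiling a template function):

```python
# Hypothetical plain-Python rendering of what a PTL template does
# implicitly: each expression's value is converted to a string and
# appended to an output buffer that the function finally returns.

def greeting_template(name):
    out = []
    out.append("<h1>")        # in PTL, bare expressions append implicitly
    out.append(str(name))
    out.append("</h1>")
    return "".join(out)

html = greeting_template("World")
# html is '<h1>World</h1>'
```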
Notable changes in this version are:
* Now supports Python 2.1.
* Names of the form __*__ are reserved for Python, and 2.1 is
beginning to enforce this rule. Accordingly the Quixote special
methods have been renamed:
__access__ -> _q_access
__exports__ -> _q_exports
__getname__ -> _q_getname
index -> _q_index
* Massive changes to quixote.publisher and quixote.config, to make
the publishing loop more flexible and more easily changed by
applications. For example, it's now possible to catch the
ZODB's ConflictErrors and retry an operation.
* quixote.publish can now gzip-compress its output if the browser
claims to support it. Only the 'gzip' and 'x-gzip' content
encodings are supported; 'deflate' isn't because we couldn't get
it to work reliably. Compression can be enabled by setting the
'compress_pages' config option to true.
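The negotiation described in that last item can be sketched as follows (a minimal illustration, not Quixote's actual code; `maybe_compress` is invented, and only the standard gzip module is used):

```python
# Sketch: compress the response body only when the client's
# Accept-Encoding header claims gzip (or x-gzip) support and the
# compress_pages option is enabled.
import gzip

def maybe_compress(body, accept_encoding, compress_pages=True):
    # 'compress_pages' mirrors the config option mentioned above.
    encodings = [e.strip() for e in accept_encoding.lower().split(",")]
    if compress_pages and ("gzip" in encodings or "x-gzip" in encodings):
        return gzip.compress(body), "gzip"
    return body, None          # 'deflate' deliberately unsupported

data, enc = maybe_compress(b"hello" * 100, "gzip, deflate")
plain, no_enc = maybe_compress(b"hello", "identity")
```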
As usual, some of these changes are incompatible with the previous
version.
The Quixote home page is at:
The code can be downloaded from:
Discussion of Quixote occurs on the quixote-users mailing list:
Sketch 0.6.11 - A vector drawing program
Sketch is a vector drawing program for Linux and other unices. It's
intended to be a flexible and powerful tool for illustrations, diagrams
and other purposes.
It has advanced features like gradients, text along a path and clip
masks and is fully scriptable due to its implementation in a combination
of Python and C.
More information, sources and binaries are available at the website:
Summary of the Changes since 0.6.10:
* Fix another Python 2.1 related bug
* Updated Spanish translation by Esteban Manchado Velázquez
* a few other bug fixes.
Sketch is released under the GNU Library General Public License.
By popular demand, I have created a Win32 binary for Python 2.1 in addition
to the one already present for Python 2.0. You can find it at the following
URL:
For those who are wondering what cx_Oracle is, it is a module which provides
a Python DB-API 2.0 compliant method for accessing Oracle databases. Please
see www.computronix.com/utilities/ReadMe.txt for more information.