> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require
a lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this can be easy to implement, but more confusing situations can arise:
what should this print? Unlike class def scopes, the situation does not
lead to a canonical solution.
from foo import *
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I must insist that without explicit syntax, IMO, raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
about principles and the impression raised.
IMO import * in an inner scope should end up being an error,
not sure about 'exec's.
We will need a final BDFL statement.
regards, Samuele Pedroni.
Recently an issue has come up on the C++-sig which I think merits a
little attention here. To boil it down, the situation looks like this:
Shared library Q uses threading but not Python. It supplies an
interface by which users can supply callback functions. Some of these
callbacks will be invoked directly in response to external calls into
Q; others will be invoked on threads started by calls into Q.
Python extension module A calls shared library Q, but doesn't use its
callback interface. It works fine by itself.
Python extension module B calls shared library Q and uses Q's callback
interface. Because some of the callbacks need to use the Python API,
and *might* be invoked by threads, they must all acquire the GIL.
Because they also might be invoked by direct calls into Q, B must
always release the GIL before calling anything in Q.
Problem: using B while A is loaded breaks A: because B has installed
callbacks in Q that acquire the GIL, A must also release the GIL
before calling into Q.
Notice that the author of A may have had no reason to believe anyone
would install Python callbacks in Q!
It seems to me that for situations like these, where a function may or
may not be called on Python's main thread, it would be helpful if
Python supplied a "recursive mutex" GIL acquisition/release pair, for
which acquisition and release on the main thread would simply bump a
counter. Is this something that was considered and rejected?
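The counted acquire/release being asked for is essentially a recursive mutex. At the Python level, threading.RLock already has exactly these semantics; a minimal sketch of the idea (this illustrates the requested behaviour, not an existing GIL API):

```python
import threading

class RecursiveGuard:
    """Counted acquire/release: re-acquiring on the owning thread
    just bumps a counter instead of deadlocking."""
    def __init__(self):
        self._lock = threading.RLock()  # RLock counts recursive acquires
        self.depth = 0

    def acquire(self):
        self._lock.acquire()
        self.depth += 1

    def release(self):
        self.depth -= 1
        self._lock.release()

guard = RecursiveGuard()
guard.acquire()
guard.acquire()            # same thread: no deadlock, counter goes to 2
depth_inside = guard.depth
guard.release()
guard.release()
print(depth_inside)        # → 2
```

With such a guard around the GIL, a callback could acquire unconditionally whether or not it was invoked from a thread that already holds the lock.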
dave(a)boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution
Title: Support for System Upgrades
Version: $Revision: 0.0 $
Author: mal(a)lemburg.com (Marc-André Lemburg)
Type: Standards Track
This PEP proposes strategies to allow the Python standard library
to be upgraded in parts without having to reinstall the complete
distribution or having to wait for a new patch level release.
Python currently does not allow overriding modules or packages in
the standard library by default. Even though this is possible by
defining a PYTHONPATH environment variable (the paths defined in
this variable are prepended to the Python standard library path),
there is no standard way of achieving this without changing the
Since Python's standard library is starting to host packages which
are also available separately, e.g. the distutils, email and PyXML
packages, which can also be installed independently of the Python
distribution, it is desirable to have an option to upgrade these
packages without having to wait for a new patch level release of
the Python interpreter to bring along the changes.
This PEP proposes two different but not necessarily conflicting solutions:
1. Adding a new standard search path to sys.path:
$stdlibpath/system-packages just before the $stdlibpath
entry. This complements the already existing entry for site
add-ons $stdlibpath/site-packages which is appended to the
sys.path at interpreter startup time.
To make use of this new standard location, distutils will need
to grow support for installing certain packages in
$stdlibpath/system-packages rather than the standard location
for third-party packages $stdlibpath/site-packages.
2. Tweaking distutils to install directly into $stdlibpath for the
system upgrades rather than into $stdlibpath/site-packages.
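A rough sketch of what solution 1 would mean for the interpreter's path setup, using hypothetical directory names (the real change would live in the interpreter's startup code, not in user code):

```python
def add_system_packages(path_entries, stdlibpath):
    # Insert $stdlibpath/system-packages just before the $stdlibpath
    # entry, mirroring how site-packages is appended after it.
    result = []
    for entry in path_entries:
        if entry == stdlibpath:
            result.append(stdlibpath + "/system-packages")
        result.append(entry)
    return result

# Hypothetical sys.path before the change:
before = ["", "/usr/lib/python2.3", "/usr/lib/python2.3/site-packages"]
print(add_system_packages(before, "/usr/lib/python2.3"))
```

Because the new entry precedes the stdlib entry, a package installed there shadows the bundled copy, which is what makes upgrades (and clean removal) possible.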
The first solution has a few advantages over the second:
* upgrades can be easily identified (just look in $stdlibpath/system-packages)
* upgrades can be deinstalled without affecting the rest
of the interpreter installation
* modules can be virtually removed from packages; this is
due to the way Python imports packages: once it finds the
top-level package directory it stays in this directory for
all subsequent package submodule imports
* the approach has an overall much cleaner design than the
hackish install on top of an existing installation approach
The only advantages of the second approach are that the Python
interpreter does not have to be changed and that it works with
older Python versions.
Both solutions require changes to distutils. These changes can
also be implemented by package authors, but it would be better to
define a standard way of switching on the proposed behaviour.
Solution 1: Python 2.3 and up
Solution 2: all Python versions supported by distutils
This document has been placed in the public domain.
CEO eGenix.com Software GmbH
eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/
[Originally posted to comp.lang.python with no response; asking here
before filing a bug report]
Is garbage collection supposed to run when Python exits? The following
program does not print any output, unless I uncomment the gc.collect()
(or add a for loop that forces GC after creating the cycle):
import gc

class A: pass                 # class definitions reconstructed; originals were lost
class B:
    def __del__(self):
        print("collected")    # presumably the output in question

a = A()
b = A()
a.b = b
b.a = a
a.x = B()
# gc.collect()                # uncommenting reportedly makes the output appear
Aahz (aahz(a)pythoncraft.com) <*> http://www.pythoncraft.com/
"I disrespectfully agree." --SJM
> Modified Files:
> Log Message:
> Try to get compilation working for cygwin
> Index: _randommodule.c
And then, for example, before:
> ! PyObject_GenericGetAttr, /*tp_getattro*/
> ! 0, /*tp_getattro*/
> + Random_Type.tp_getattro = PyObject_GenericGetAttr;
> + Random_Type.tp_alloc = PyType_GenericAlloc;
> + Random_Type.tp_free = _PyObject_Del;
in the module init function.
Please don't make this kind of change -- it makes the code so much harder to
follow. If this is needed for Cygwin, then, e.g., do
#define DEFERRED(x) 0 /* some boxes can't resolve addresses at compile-time */
and make the "after" line
DEFERRED(PyObject_GenericGetAttr), /*tp_getattro*/
IOW, the type slots should be readable on their own, as a static unit.
I humbly submit this PEP for your dissection.
As I am not subscribed to python-dev, please make sure that your
comments are CC:ed to <bellman+pep-divmod(a)lysator.liu.se>, so I
can see them.
I have also posted this to comp.lang.python.
I have written an implementation also, but I need to check it
some more to see if I've got the reference counting correct
before I dare post it. :-)
Title: Extend divmod() for Multiple Divisors
Version: $Revision: 1.2 $
Last-Modified: $Date: 2002/12/31 16:02:49 $
Author: Thomas Bellman <bellman+pep-divmod(a)lysator.liu.se>
Type: Standards Track
This PEP describes an extension to the built-in divmod() function,
allowing it to take multiple divisors, chaining several calls to
divmod() into one.
The built-in divmod() function would be changed to accept multiple
divisors, changing its signature from divmod(dividend, divisor) to
divmod(dividend, *divisors). The dividend is divided by the last
divisor, giving a quotient and a remainder. The quotient is then
divided by the second to last divisor, giving a new quotient and
remainder. This is repeated until all divisors have been used,
and divmod() then returns a tuple consisting of the quotient from
the last step, and the remainders from all the steps.
A Python implementation of the new divmod() behaviour could look like this:
def divmod(dividend, *divisors):
    modulos = ()
    q = dividend
    while divisors:
        q, r = q.__divmod__(divisors[-1])
        modulos = (r,) + modulos
        divisors = divisors[:-1]
    return (q,) + modulos
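A stand-in for the proposed behaviour (renamed chained_divmod here to avoid shadowing the builtin) can be exercised directly on the seconds-to-weeks case:

```python
def chained_divmod(dividend, *divisors):
    # Same algorithm as the proposal: divide by the last divisor
    # first, collecting remainders right-to-left.
    modulos = ()
    q = dividend
    while divisors:
        q, r = divmod(q, divisors[-1])  # builtin two-argument divmod
        modulos = (r,) + modulos
        divisors = divisors[:-1]
    return (q,) + modulos

# 1000000 seconds -> weeks, days, hours, minutes, seconds
print(chained_divmod(1000000, 7, 24, 60, 60))  # → (1, 4, 13, 46, 40)
```

Note that the remainders come back least-significant-last, matching the order in which the divisors are written.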
Occasionally one wants to perform a chain of divmod() operations,
calling divmod() on the quotient from the previous step, with
varying divisors. The most common case is probably converting a
number of seconds into weeks, days, hours, minutes and seconds.
This would today be written as:
m,s = divmod(seconds, 60)
h,m = divmod(m, 60)
d,h = divmod(h, 24)
w,d = divmod(d, 7)
This is tedious and easy to get wrong each time you need it.
If instead the divmod() built-in is changed according the proposal,
the code for converting seconds to weeks, days, hours, minutes and
seconds then becomes
w,d,h,m,s = divmod(seconds, 7, 24, 60, 60)
which is easier to type, easier to type correctly, and easier to read.
Other applications are:
- Astronomical angles (declination is measured in degrees, minutes
  and seconds; right ascension is measured in hours, minutes and seconds)
- Old British currency (1 pound = 20 shilling, 1 shilling = 12 pence)
- Anglo-Saxon length units: 1 mile = 1760 yards, 1 yard = 3 feet,
1 foot = 12 inches.
- Anglo-Saxon weight units: 1 long ton = 160 stone, 1 stone = 14
pounds, 1 pound = 16 ounce, 1 ounce = 16 dram
- British volumes: 1 gallon = 4 quart, 1 quart = 2 pint, 1 pint
= 20 fluid ounces
The idea comes from APL, which has an operator that does this. (I
don't remember what the operator looks like, and it would probably
be impossible to render in ASCII anyway.)
The APL operator takes a list as its second operand, while this
PEP proposes that each divisor should be a separate argument to
the divmod() function. This is mainly because it is expected that
the most common uses will have the divisors as constants right in
the call (as the 7, 24, 60, 60 above), and adding a set of
parentheses or brackets would just clutter the call.
Requiring an explicit sequence as the second argument to divmod()
would seriously break backwards compatibility. Making divmod()
check its second argument for being a sequence is deemed to be too
ugly to contemplate. And in the case where one *does* have a
sequence that is computed other-where, it is easy enough to write
divmod(x, *divs) instead.
Requiring at least one divisor, i.e. rejecting divmod(x), has been
considered, but no good reason to do so has come to mind, so it
is allowed in the name of generality.
Calling divmod() with no divisors should still return a tuple (of
one element). Code that calls divmod() with a varying number of
divisors, and thus gets a return value with an "unknown" number of
elements, would otherwise have to special case that case. Code
that *knows* it is calling divmod() with no divisors is considered
to be too silly to warrant a special case.
Processing the divisors in the other direction, i.e. dividing with
the first divisor first, instead of dividing with the last divisor
first, has been considered. However, the result comes with the
most significant part first and the least significant part last
(think of the chained divmod as a way of splitting a number into
"digits", with varying weights), and it is reasonable to specify
the divisors (weights) in the same order as the result.
The inverse operation:
def inverse_divmod(seq, *factors):
    product = seq[0]
    for x, y in zip(factors, seq[1:]):
        product = product * x + y
    return product
could also be useful. However, writing
seconds = (((((w * 7) + d) * 24 + h) * 60 + m) * 60 + s)
is less cumbersome both to write and to read than the chained
divmods. It is therefore deemed to be less important, and its
introduction can be deferred to its own PEP. Also, such a
function needs a good name, and the PEP author has not managed to
come up with one yet.
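For concreteness, a round-trip check using stand-in implementations of both directions (names hypothetical, not part of the proposal):

```python
def chained_divmod(dividend, *divisors):
    # Stand-in for the proposed divmod(dividend, *divisors).
    modulos = ()
    q = dividend
    while divisors:
        q, r = divmod(q, divisors[-1])
        modulos = (r,) + modulos
        divisors = divisors[:-1]
    return (q,) + modulos

def inverse_divmod(seq, *factors):
    # Reassemble a value from its "digits" and weights.
    product = seq[0]
    for x, y in zip(factors, seq[1:]):
        product = product * x + y
    return product

parts = chained_divmod(1000000, 7, 24, 60, 60)
print(inverse_divmod(parts, 7, 24, 60, 60))  # → 1000000
```

The two functions are exact inverses as long as the same weights are used in both calls.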
Calling divmod("spam") does not raise an error, despite strings
supporting neither division nor modulo. However, unless we know
the other object too, we can't determine whether divmod() would
work or not, and thus it seems silly to forbid it.
Any module that replaces the divmod() function in the __builtin__
module may cause other modules using the new syntax to break. It
is expected that this is very uncommon.
Code that expects a TypeError exception when calling divmod() with
anything but two arguments will break. This is also expected to
be very uncommon.
No other issues regarding backwards compatibility are known.
Not finished yet, but it seems a rather straightforward
new implementation of the function builtin_divmod() in
This document has been placed in the public domain.
Thomas Bellman, Lysator Computer Club, Linköping University, Sweden
"Adde parvum parvo magnus acervus erit" ! bellman @ lysator.liu.se
(From The Mythical Man-Month) ! Make Love -- Nicht Wahr!
Here are some problems I found in the stdlib.
I already fixed one problem in ConfigParser, but I'm not sure the fix is correct.
Lib/cgitb.py:202: No global (path) found
Lib/cgitb.py:204: No global (path) found
Lib/ConfigParser.py:581: Invalid arguments to (__init__), got 1, expected 4
Lib/imaplib.py:444: No global (root) found
Lib/pprint.py:118: Invalid arguments to (format), got 3, expected 4
Lib/pprint.py:121: Invalid arguments to (format), got 3, expected 4