The documentation for __hash__ seems to be outdated. I'm happy to submit
a patch, so long as I am not misunderstanding something.
The documentation states:
If a class does not define a __cmp__() or __eq__() method it should not
define a __hash__() operation either; if it defines __cmp__() or
__eq__() but not __hash__(), its instances will not be usable as
dictionary keys. If a class defines mutable objects and implements a
__cmp__() or __eq__() method, it should not implement __hash__(), since
the dictionary implementation requires that a key’s hash value is
immutable (if the object’s hash value changes, it will be in the wrong
hash bucket).
This may have been true for old-style classes, but as new-style classes
inherit a default __hash__ from object, mutable objects *will* be
usable as dictionary keys (hashed on identity) *unless* they implement a
__hash__ method that raises a TypeError.
Shouldn't the advice be that classes that implement comparison methods
should always implement __hash__ (wasn't this nearly enforced?), and
that mutable objects should raise a TypeError in __hash__?
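For concreteness, here is a minimal sketch (mine, not from the docs) of
the advice I'm proposing: a mutable class that defines __eq__ and makes
itself explicitly unhashable:

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y          # mutable state
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def __ne__(self, other):
        return not self == other
    def __hash__(self):
        # Without this override, Point would inherit object.__hash__
        # and hash on identity, so an equal Point used as a dict key
        # would simply never be found.
        raise TypeError("Point objects are mutable and unhashable")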
Additionally, the following documentation states that __reversed__ is new
in Python 2.6, but I think it was actually new in Python 2.4 (it
certainly works in Python 2.5 and in IronPython 1, which targets 2.4...).
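For reference, the hook in question is the one the reversed() builtin
consults; a minimal illustration (mine):

class Countdown(object):
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))
    def __reversed__(self):
        # reversed(Countdown(3)) calls this and yields 2, 1, 0.
        return iter(range(self.n - 1, -1, -1))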
I was working on a recursion overflow checking bug
(http://bugs.python.org/issue2548) and, while I've managed to produce a working
patch, I've also become uncomfortable with the very idea of trying to plug all
those holes just for the sake of plugging them. I'll try to explain why, by
describing the conflicting factors I've identified:
- more and more, we are adding calls to Py_EnterRecursiveCall() and
Py_LeaveRecursiveCall() all over the interpreter, to avoid special/obscure
cases of undetected infinite recursion; this can probably be considered a good
thing.
- after a recursion error has been raised (technically a RuntimeError), usually
some code has to do cleanup after noticing the exception; this cleanup can now
very easily bump into the recursion limit again, due to the point mentioned
above (the funniest example of this is PyErr_ExceptionMatches, which makes a
call to PyObject_IsSubclass which itself increases the recursion count because
__subclasscheck__ can be recursively invoked...).
- to counter the latter problem, py3k has introduced a somewhat smarter
mechanism (which I've tracked down to a commit in the defunct p3yk branch by
Martin): when the recursion limit is exceeded, a special flag named
"overflowed" is set in the thread state structure which disables the primary
recursion check, so that cleanup code has a bit of room to increase the
recursion count a bit. A secondary recursion check exists (equal to the primary
one /plus/ 50) and, if it is reached, the interpreter aborts with a fatal error.
The "overflowed" flag is cleared when the recursion count drops below the
primary recursion limit /minus/ 50 (see the sketch after this list). Now it
looks rather smart, but:
- unfortunately, some functions inside the interpreter discard every exception
by design. The primary example is PyDict_GetItem(), which is certainly used
quite a lot :-)... When PyDict_GetItem() returns NULL, the caller can only
assume that the key isn't in the dict, it has no way to know that there was a
critical problem due to a recursion overflow.
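To make all of this concrete, here is a rough Python model (mine; the
real code is C, in the thread state and ceval.c) of the scheme and of
the PyDict_GetItem() problem:

recursion_limit = 1000
depth = 0
overflowed = False

def enter_recursive_call():
    global depth, overflowed
    depth += 1
    if overflowed:
        # Primary check is disabled: cleanup code gets some headroom.
        if depth > recursion_limit + 50:          # secondary check
            raise SystemError("fatal: unrecoverable recursion overflow")
    elif depth > recursion_limit:                 # primary check
        overflowed = True
        raise RuntimeError("maximum recursion depth exceeded")

def leave_recursive_call():
    global depth, overflowed
    depth -= 1
    if overflowed and depth < recursion_limit - 50:
        overflowed = False                        # re-arm the primary check

def pydict_getitem(d, key):
    # PyDict_GetItem() discards every error by design; if a lookup
    # overflows, the RuntimeError vanishes here while "overflowed"
    # stays set, so the *next* overflow goes straight to the secondary
    # check and aborts with the fatal error.
    try:
        return d[key]
    except Exception:
        return None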
I encountered the latter problem when trying to backport the py3k recursion
overflow algorithm to trunk. A fatal error suddenly appeared in test_cpickle,
and it turned out that the recursion count was exceeded in
PyObject_RichCompare(); the error was then cleared in PyDict_GetItem(), but the
"overflowed" flag was still set so that a subsequent recursion overflow would
trigger the secondary check and lead to the fatal error.
I guess that, if it doesn't happen in py3k, it's just by chance: the recursion
overflow is probably happening at another point where errors don't get
discarded. Indeed, the failure I got on trunk was manifesting itself when
running "regrtest.py test_cpickle" but not directly "test_cpickle.py"... which
shows how delicate the recursion mechanism has become.
My attempt to solve the latter problem while still backporting the py3k scheme
involves clearing the "overflowed" flag in PyErr_Clear(). This makes all tests
pass ok, but also means the "overflowed" flag loses a lot of its meaning...
since PyErr_Clear() is called in a lot of places (and, especially, in
PyDict_GetItem()).
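In terms of the model sketched earlier, the tweak amounts to this
(again a sketch, mine):

def pyerr_clear():
    # Backport attempt: clearing the current error also re-arms the
    # primary recursion check, so a swallowed overflow can no longer
    # push a later overflow straight into the fatal secondary check.
    global overflowed
    overflowed = False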
Also, at this point I fear that the solution to the problem is becoming,
because of its complexity, perhaps worse than the problem itself. That's why
I'm bringing it here, to have your opinion.
(I also suggest that we stop trying to fix recursion checking bugs until the
stable release, so as to give us some time to do the Right Thing later - if
there is such a thing)
I would like to get a Python script which executes all of the
interpreter's opcodes, or to know how I am supposed to create such a
script... I just need to make sure that all opcodes (as defined in
Include/opcode.h) are executed by this script.
I need this script for testing purposes while rewriting Python's ceval.c.
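Not a full answer, but a starting point (my sketch, using the modern
dis API; "my_opcode_test.py" is a hypothetical test script): you can at
least report statically which opcodes never even appear in the script's
compiled bytecode. Actually *executing* each opcode still requires
writing code for it, but this shows which ones cannot possibly be
exercised:

import dis

def opcodes_used(source, filename="<test>"):
    seen = set()
    todo = [compile(source, filename, "exec")]
    while todo:
        code = todo.pop()
        for instr in dis.get_instructions(code):
            seen.add(instr.opname)
        # Recurse into nested functions, classes, comprehensions...
        todo.extend(c for c in code.co_consts if hasattr(c, "co_code"))
    return seen

with open("my_opcode_test.py") as f:
    missing = set(dis.opmap) - opcodes_used(f.read())
print("opcodes never emitted:", sorted(missing))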
Many buildbots are running bsddb 4.7, particularly the debian/ubuntu
ones (4.7.25 which seems to be the latest). Some of them are
crashing, others are not. The max version we support in both 2.6 and
3.0 is 4.7. Should we allow this version or should we use a lower
maximum that is more likely to work (i.e., not crash)?
It looks like the Windows buildbots use 4.4.20. Unfortunately, the
Windows bots aren't in great shape either.
Additionally, there are reference leaks in both 2.6 and 3.0:
test_bsddb3 leaked [80, 80] references, sum=160 (2.6)
test_bsddb3 leaked [63, 63] references, sum=126 (3.0)
It would be nice to get as many of these things as possible fixed up before
release. Jesus, Greg, Trent, can you make some time over the next
week to stabilize bsddb support?
PS. To change the max version of bsddb supported in Unix, modify
max_db_ver in setup.py.
At 05:50 PM 8/28/2008 +0200, Michele Simionato wrote:
>On Aug 28, 5:30 pm, "Phillip J. Eby" <p...(a)telecommunity.com> wrote:
> > How is that making things easier for application programmers?
>We have different definitions of "application programmer". For me a typical
>application programmer is somebody who never fiddles with metaclasses,
>which are the realm of framework builders.
Application programmers use frameworks, and sometimes more than
one. If they're subclassing from two different frameworks, each
using a different metaclass, they will also need to multiple-inherit
the metaclasses.
This is in fact so annoying that I created a "universal metaclass" in
DecoratorTools whose sole function is to delegate metaclass __new__,
__init__, and __call__ to class-level methods (e.g. __class_new__,
__class_call__, etc.), thereby eliminating the need to have custom
metaclasses for most use cases in the first place. Now, wherever
possible, I use that single metaclass in my frameworks, so that
there's no need to mix them.
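A rough sketch (mine, not DecoratorTools' actual code) of the idea:

class Universal(type):
    def __init__(cls, name, bases, ns):
        super(Universal, cls).__init__(name, bases, ns)
        hook = getattr(cls, "__class_init__", None)
        if hook is not None:
            hook(name, bases, ns)      # a classmethod on the new class

# Created by calling the metaclass directly, which works in 2.x and 3.x.
Base = Universal("Base", (object,), {})

class Plugin(Base):
    registry = []
    @classmethod
    def __class_init__(cls, name, bases, ns):
        # Runs at class-creation time, with no custom metaclass needed
        # on this class or any of its subclasses.
        cls.registry.append(cls)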
That, IMO, would be a more useful change than getting rid of super();
it would get rid of the explicit metaclass mixing. (It would still
not remove the need for co-operative methods, as the class-delegated
methods still need to be co-operative for MI to work.)
There are, of course, other ways to create co-operative function
calls besides super(), and I've certainly created more than a few of them
in my time. (E.g. generic function method combination,
instancemethod() chains, and next-method-iterators, to name the ones
that occur to me right off.) But these are more for cases where
super() is wholly inadequate to the purpose, and none are anywhere
near as convenient as super().
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: exec() arg 1 must be a string, file, or code object, not
so what's "file" referring to here?
(the above works under 2.5, of course)
At 06:35 AM 8/28/2008 +0200, Michele Simionato wrote:
>Multiple inheritance of metaclasses is perhaps
>the strongest use case for multiple inheritance, but is it strong
>enough? I mean, in real code how many times did I need that?
>I would not mind making life harder for gurus and simpler for
>application programmers.
Then you need to leave MI and co-operation the hell alone. Right
now, an application programmer can mix metaclasses like this:
class FooBar(Foo, Bar):
    class __metaclass__(Foo.__class__, Bar.__class__): pass
Or, in 3.x:
class FooBarClass(Foo.__class__, Bar.__class__): pass
class FooBar(Foo, Bar, metaclass=FooBarClass):
    pass
Either way, this is useful in cases where Foo and Bar come from
different frameworks. That's the *only* way to get such things to
co-operate, in fact.
>I do not think removing cooperation
>would be so bad in practice. In many practical cases, one could just write
>the metaclass by hand,
How is that making things easier for application programmers?
>Maybe you would need to duplicate a couple of lines and/or to introduce
>a helper function,
...which then has to have an agreed-upon protocol that all metaclass
authors have to follow... which we already have... but which you're
proposing to get rid of... so we can re-invent it lots of
times... in mutually incompatible ways. :)
At 03:16 AM 8/27/2008 +0200, Michele Simionato wrote:
>It is just a matter of how rare the use cases really are. Cooperative
>methods were introduced 6+ years ago. In all this time surely
>they must have been used. How many compelling uses of cooperation
>can we find in real-life code? For instance, in the standard library or
>in some well known framework? This is a serious question I have been
>wanting to ask for years. I am sure people here can find some example,
>so just give me a pointer and we will see.
ISTR pointing out on more than one occasion that a major use case for
co-operative super() is in the implementation of metaclasses. The
__init__ and __new__ signatures are fixed, multiple inheritance is
possible, and co-operativeness is a must (as the base class methods
*must* be called). I'm hard-pressed to think of a metaclass
constructor or initializer that I've written in the last half-decade
or more where I didn't use super() to make it co-operative.
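A minimal example (mine, in the 3.x spelling) of what that buys you:
mix two such metaclasses and both initializers run:

class MetaA(type):
    def __init__(cls, name, bases, ns):
        super(MetaA, cls).__init__(name, bases, ns)   # co-operative
        cls.registered_with_a = True

class MetaB(type):
    def __init__(cls, name, bases, ns):
        super(MetaB, cls).__init__(name, bases, ns)   # co-operative
        cls.registered_with_b = True

class MetaAB(MetaA, MetaB):   # the mix an application programmer writes
    pass

class Widget(metaclass=MetaAB):
    pass

assert Widget.registered_with_a and Widget.registered_with_b
# Had MetaA.__init__ called type.__init__ directly instead of super(),
# MetaB.__init__ would silently never run.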
That, IMO, is a compelling use case even if there were not a single
other example of the need for super. However, I'm pretty sure I've
had other cases where it was necessary to co-operate in cases where
multiple inheritance occurred later; i.e., where it was possible for a
subclass to add a new class between parents. Remember that
subclasses of a new-style class do not always have the same MRO tail
as the original class; i.e., a subclass of "class A(B, C):" is only
constrained to have [A...B...C] in its MRO; semi-arbitrary classes
may be inserted between e.g. A and B. So, a new-style class cannot,
as a general rule, statically determine what base class
implementation of a method should be invoked. I personally consider
the rare case where I have to force such static knowledge to be an
unfortunate wart in the design (of that code, not Python).
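A small example (mine) to illustrate the MRO-tail point: a subclass can
insert a class between A and B, so A's method cannot know statically
whose implementation follows it:

class B(object):
    def ping(self):
        return ["B"]

class C(object):
    pass

class A(B, C):
    def ping(self):
        return ["A"] + super(A, self).ping()

class Mixin(B):
    def ping(self):
        return ["Mixin"] + super(Mixin, self).ping()

class D(A, Mixin):
    pass

print(A().ping())   # ['A', 'B']
print(D().ping())   # ['A', 'Mixin', 'B'] -- Mixin lands between A and B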