I would like to create 2.5.3 and 2.4.6 release candidates next week,
December 12, and final releases on December 19. If there are any open
issues that you think need to be considered, please create a bug in
the bug tracker, mark it as a release blocker, and label it with version
2.5.3 (or 2.4). Of course, a number of such issues are already in the
tracker, some already being worked on.
Remember: 2.5.3 will be the last bug fix release for Python 2.5;
afterwards, only security patches will be accepted for the 2.5
branch. The 2.4 branch is already in that state (the 2.3 branch
is not maintained anymore; 2.4 security patches will be produced
until November 2009).
About a month ago, I submitted two patches that address Pdb's and
doctest's inability to load source code from modules with custom loaders,
such as modules loaded from zip files:
The patches are very simple: basically, calls to linecache.getline()
need to be provided with the module's dict so that linecache can find
the module's __loader__.
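A minimal sketch of the mechanism being described (the helper name here is hypothetical; the three-argument form of linecache.getline() is real and uses the PEP 302 __loader__ from the globals dict when the source is not a plain file):

```python
import linecache

def get_source_line(module, lineno):
    # Passing module.__dict__ gives linecache access to __name__ and
    # __loader__, so it can fetch source from e.g. a zip archive when
    # the file path alone is not readable.
    return linecache.getline(module.__file__, lineno, module.__dict__)
```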
Is there a chance that these patches could make it to 2.6.1?
This is my first message on this list, so I would like to say
hello to everyone. I also hope that I am not breaking any rules or
guidelines for sending proposals.
I would like to ask if it would be possible to include the type name of
the object that raised the exception when an AttributeError is caught.
It is already done for functions, like:
AttributeError: 'function' object has no attribute 'getValue'
but for some objects there is only:
This is fine when you know exactly what type of object raised the
exception. But if there might be many of them, you must do one of two
things: add a print statement just before the line with the exception
and check the type, or iterate over all classes that might appear there.
Showing the class name would solve this problem and could save a lot
of debugging time.
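One common way to end up with the bare message is a __getattr__ that raises AttributeError with only the attribute name. A small sketch of the difference (class names here are made up for illustration):

```python
class Record:
    def __getattr__(self, name):
        # Raising with only the name yields the unhelpful message
        raise AttributeError(name)

class Verbose:
    def __getattr__(self, name):
        # Including the type name makes the culprit obvious
        raise AttributeError("%r object has no attribute %r"
                             % (type(self).__name__, name))

try:
    Record().getValue
except AttributeError as e:
    bare = str(e)      # "getValue"

try:
    Verbose().getValue
except AttributeError as e:
    verbose = str(e)   # "'Verbose' object has no attribute 'getValue'"
```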
I believe we are on track for releasing Python 3.0 final and 2.6.1
tomorrow. There is just one release blocker for 3.0 left -- Guido
needs to finish the What's New for 3.0.
This is bug 2306.
So that Martin can have something to work with when he wakes up
tomorrow morning, I would like to tag and branch the tree some time
today, Tuesday 02-Dec US/Eastern. Therefore I am freezing both the
2.6 and 3.0 trees, with special dispensation to Guido for the updated
What's New document.
Ping me on irc @ freenode #python-dev if you have anything else to
check in to either tree before then. As soon as I hear from Guido, or
issue 2306 is closed, I'm branching 3.0 and tagging it for release.
Great work everyone, we're almost there!
Currently, Parser/parsetok.c has a dependency on graminit.h. This can
cause headaches when rebuilding after adding new syntax to
Grammar/Grammar because parsetok.c is part of pgen, which is responsible
for *generating* graminit.h.
This circular dependency can result in parsetok.c using a different
value for encoding_decl from the one used in ast.c, which causes
PyAST_FromNode to fall over at runtime. It effectively looks something
like this:
* Grammar/Grammar is modified
* build begins -- pgen compiles, parsetok.c uses encoding_decl=X
* graminit.h is rebuilt with encoding_decl=Y
* ast.c is compiled using encoding_decl=Y
* when python runs, parsetok() emits encoding_decl nodes that
PyAST_FromNode can't recognize:
SystemError: invalid node XXX for PyAST_FromNode
A nice, easy short-term solution that doesn't require unwinding this
dependency would be to simply move encoding_decl to the top of
Grammar/Grammar and add a big warning noting that it needs to come
before everything else. This will help to ensure its value never changes
when syntax is added/removed.
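A hypothetical sketch of why moving the rule helps (rule names below are illustrative, not the real grammar): pgen numbers non-terminals in file order, so a rule inserted before encoding_decl shifts its symbol number between the pgen build and the graminit.h regeneration.

```python
def number_symbols(rules, first=256):
    # Non-terminals are numbered from 256 in file order,
    # mirroring how graminit.h assigns symbol values.
    return {name: first + i for i, name in enumerate(rules)}

before = number_symbols(["file_input", "stmt", "encoding_decl"])
after = number_symbols(["file_input", "new_stmt", "stmt", "encoding_decl"])
# encoding_decl's number shifts from 258 to 259 -- the mismatch
# that makes PyAST_FromNode reject the node at runtime.
```

Placing encoding_decl first pins its number regardless of later additions.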
I'm happy to provide a patch for this (including some additional
dependency information for files dependent on graminit.h and Python-ast.h),
but I was wondering if there were any opinions about how this should be
handled.
I encountered a weird problem using distutils.
Generally, distutils tries to use the same compiler options that were
used for building the Python interpreter,
but it looks like some of them are sometimes omitted:
- CPPFLAGS is not retrieved from the config; only the value in the environment is used.
- OPT is retrieved from the config, but it is only used when CFLAGS is set in the environment.
    if compiler.compiler_type == "unix":
        (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \
            get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS',
                            'CCSHARED', 'LDSHARED', 'SO')
        if 'CC' in os.environ:
            cc = os.environ['CC']
        if 'CXX' in os.environ:
            cxx = os.environ['CXX']
        if 'LDSHARED' in os.environ:
            ldshared = os.environ['LDSHARED']
        if 'CPP' in os.environ:
            cpp = os.environ['CPP']
        else:
            cpp = cc + " -E"  # not always
        if 'LDFLAGS' in os.environ:
            ldshared = ldshared + ' ' + os.environ['LDFLAGS']
        if 'CFLAGS' in os.environ:
            cflags = opt + ' ' + os.environ['CFLAGS']
            ldshared = ldshared + ' ' + os.environ['CFLAGS']
        if 'CPPFLAGS' in os.environ:
            cpp = cpp + ' ' + os.environ['CPPFLAGS']
            cflags = cflags + ' ' + os.environ['CPPFLAGS']
            ldshared = ldshared + ' ' + os.environ['CPPFLAGS']
        cc_cmd = cc + ' ' + cflags
            compiler_so=cc_cmd + ' ' + ccshared,
        compiler.shared_lib_extension = so_ext
Is this logic intentional, or is it a bug?
If it is intentional, what is the reason for this behavior?
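The asymmetry can be reduced to a small sketch (this is not the real distutils code; the helper and dict keys are illustrative): the configured OPT is only consulted when the environment overrides CFLAGS, and the configured CPPFLAGS are never read at all.

```python
def effective_cflags(config, env):
    # Mirrors the quoted logic: config OPT is only used when the
    # environment sets CFLAGS; config CPPFLAGS is ignored entirely.
    cflags = config['CFLAGS']
    if 'CFLAGS' in env:
        cflags = config['OPT'] + ' ' + env['CFLAGS']
    if 'CPPFLAGS' in env:
        cflags = cflags + ' ' + env['CPPFLAGS']
    return cflags

config = {'CFLAGS': '-O2 -g', 'OPT': '-O2',
          'CPPFLAGS': '-D_FORTIFY_SOURCE=2'}  # CPPFLAGS never consulted
```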
Ok, now I'm implementing __format__ support for IronPython. The format spec mini-language docs say that a presentation type of None is the same as 'g' for floating point / decimal values. But these two formats seem to differ in how they handle whole numbers:
The docs also say that 'g' prints the value in fixed-point format unless the number is too large, but that fixed-point format differs from what 'f' would print. I suppose the docs never say they'd both print it as fixed point with a precision of 6, but it seems a little unclear.
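For reference, this is what CPython produces for the whole-number case (assuming CPython's output is the intended behavior):

```python
# The empty presentation type keeps the trailing .0, 'g' strips it,
# and 'f' pads to the default precision of 6.
empty = format(1.0, '')   # '1.0'
g = format(1.0, 'g')      # '1'
f = format(1.0, 'f')      # '1.000000'
```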
Finally, providing any sign character seems to cause +1.0#INF and friends to be returned instead of inf, as is documented:
Are these just doc bugs? The inf issue is the only one that seems particularly weird to me.
Hrvoje Niksic wrote:
> A friend pointed out that running python under valgrind (simply "valgrind
> python") produces a lot of "invalid read" errors. Reading up on
> Misc/README.valgrind only seems to describe why "uninitialized reads" should
> occur, not invalid ones. For example:
> I suppose valgrind could be confused by PyFree's pool address validation
> that intentionally reads the memory just before the allocated block, and
> incorrectly attributes it to a previously allocated (and hence freed) block,
> but I can't prove that. Has anyone investigated this kind of valgrind
> report?
Did you use the suppressions file as suggested in Misc/README.valgrind?
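A minimal invocation sketch, assuming a CPython source checkout (the file name is the one Misc/README.valgrind points at):

```shell
# Run the freshly built interpreter under valgrind with the
# suppressions for pymalloc's intentional out-of-block reads.
valgrind --suppressions=Misc/valgrind-python.supp ./python -c "pass"
```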
Amaury Forgeot d'Arc