At 10:53 AM 3/17/2008 -0500, Guido van Rossum wrote:
>I don't think this should play games with scripts being overridden or
>whatever. If a bootstrap script is to be installed it should have a
>separate name. I'm not sure what the advantage is of a bootstrap
>script over "python -m bootstrap_module ..." though.
And -m also makes explicit:
1. that it's a Python-specific tool
2. which Python version it will apply to
>The PEP suggests that other package managers also benefit. How do they
>benefit if the bootstrap script installs setuptools?
Because those other package managers depend, in fact, on setuptools,
or at least pkg_resources... which was why the original proposal was
to just include pkg_resources in the first place. :)
>I'd also like to avoid the specific name "easy_install" for any of
>this. That's a "brand name" (and a misleading one if you ask me, but
>that's politics again :-).
Ok, so if someone will propose a name and API for the thing, I'll
implement it. (Assuming the proposed API is sane and reasonably
implementable, of course.)
Here is a simple proposal: make the standard Python "import"
mechanism notice eggs on the PYTHONPATH and insert them (at the
*same* relative location) into sys.path.
This eliminates the #1 problem with eggs: they don't easily work when
installed anywhere other than your site-packages, and if you allow any
of them to be installed on your system they take precedence over your
non-egg packages even if you explicitly put those other packages
earlier in your PYTHONPATH. (That latter behavior is very disagreeable
to more than a few programmers.)
This also preserves most of the value of eggs for many use cases.
This is backward-compatible with most current use cases that rely on eggs.
This is very likely forward-compatible with new schemes that are
currently being cooked up and will be deployed in the future.
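As a rough illustration of the "same location" idea (the function name and details below are my own, purely illustrative, not part of any actual proposal or setuptools API), the path expansion could look like:

```python
import os

def expand_eggs_in_place(entries):
    """Return a new path list in which any *.egg found inside a
    directory entry is inserted right AFTER that entry, so eggs keep
    the priority of the PYTHONPATH position they were found at,
    instead of jumping ahead of everything else on sys.path."""
    result = []
    for entry in entries:
        result.append(entry)
        if os.path.isdir(entry):
            # Sort for a deterministic order among sibling eggs.
            for name in sorted(os.listdir(entry)):
                if name.endswith(".egg"):
                    result.append(os.path.join(entry, name))
    return result
```

The key point is that the egg is appended immediately after its containing directory, so an explicitly earlier PYTHONPATH entry still wins.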
It appears to me that if you can make mapping mechanisms faster in
Python, you can make significant overall speed improvements. I also
think the proposed concept could add flexibility to persistence
formats and RMI interfaces.
My basic idea is to have a constant string type with an
interpreter-globally unique hash. If the original constant is created
in a manner different from string constants, it can be tracked and
handled differently by the interpreter.
Obviously most object attribute references are done with the dot
operator, so I guess the interpreter already has an efficient mapping
mechanism. But there must be a crossover with __getattr__ etc., where
a map of some sort is used. I imagine that having a global namespace
to translate attribute names into integers could be used for several
purposes by the interpreter, as well as by an application exchanging
objects with other applications.
I imagine these expressions to be supported:
* attrname(string) - creates an attrname value from the string
* int(attrname) - gets the hash value
* string(attrname) - gets the string value
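A minimal pure-Python sketch of such an interning scheme (the class name and storage below are my own invention for illustration; the real proposal would live inside the interpreter):

```python
# Illustrative sketch only: "AttrName" and its registry are invented
# names, not an actual CPython API.
class AttrName:
    """An interned attribute name: equal strings share one instance
    and one globally unique integer id."""
    _registry = {}          # str -> AttrName
    _next_id = 0

    def __new__(cls, name):
        try:
            return cls._registry[name]
        except KeyError:
            self = super().__new__(cls)
            self._name = name
            self._id = cls._next_id
            cls._next_id += 1       # hand out ids sequentially
            cls._registry[name] = self
            return self

    def __int__(self):              # int(attrname) -> the unique id
        return self._id

    def __str__(self):              # str(attrname) -> the string value
        return self._name
```

With something like this, attribute lookups could compare small integers instead of hashing full strings, and two cooperating applications could exchange the integer ids rather than the names themselves.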
Hope this makes sense; tell me if this would make little difference in Python.
It seems this subject has had quite a bit of history. Tim Peters demonstrated
the problem in 2003 in this message:
In short, Python file objects release the GIL before calling any C stdlib
function on their embedded FILE pointer. Unfortunately, if another thread
calls fclose on the FILE pointer concurrently, the contents pointed to can
become garbage and the interpreter process crashes. Just by using the same
file object in two threads running pure Python code, you can crash the
interpreter.
(Another, easier-to-solve problem is that the FILE pointer stored in the
file object could become NULL at the point it is used by another thread.
If that were the only problem, you could just store the FILE pointer in a
local variable before releasing the GIL, et voilà.)
There was some discussion at the time about the possible resolution. I've
tried to fix the problem, and I've come to what I think is a satisfying
solution, which I can sum up as the following bullet points:
* Each file object gets a dedicated counter, which is incremented before
the object releases the GIL and decremented after the GIL is taken again;
thus this counter keeps track of how many running "unlocked" sections of
code are using that particular file object. (Please note the counter
doesn't need its own lock, since it is only modified in GIL-protected
sections.)
* In the close() method, if the aforementioned counter is greater than 0,
we refuse to call fclose and instead raise an IOError.
This may seem like a worrying semantic change, but I don't think it is, for the following reasons:
1) if we closed the FILE pointer anyway, the interpreter would likely crash
because another thread would be using garbage data (that's what we are trying
to fix after all!)
2) if close() raises an IOError, it can be called again later, or at worst
fclose will be called when the file object is garbage collected
3) close() can already raise an IOError if fclose fails for whatever reason
(although that's probably very rare)
4) it doesn't seem wrong to notify the programmer that his code is unsafe
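The counter scheme described above can be sketched in pure Python (GuardedFile is my own illustrative name, and a threading.Lock stands in for the GIL-protected sections of the real C implementation):

```python
import threading

class GuardedFile:
    """Sketch of the proposal: a per-file counter of in-progress
    "unlocked" operations; close() refuses to run while it is > 0."""

    def __init__(self, path, mode="r"):
        self._f = open(path, mode)
        self._lock = threading.Lock()   # stands in for the GIL
        self._in_progress = 0           # running "unlocked" sections

    def read(self, size=-1):
        with self._lock:                # "GIL held": bump the counter
            self._in_progress += 1
        try:
            return self._f.read(size)   # would run with the GIL released
        finally:
            with self._lock:            # "GIL re-taken": drop the counter
                self._in_progress -= 1

    def close(self):
        with self._lock:
            if self._in_progress > 0:   # another thread is mid-operation
                raise IOError("close() during concurrent I/O")
            if not self._f.closed:
                self._f.close()         # safe: nobody is using the file
```

The counter is only ever touched while the lock (i.e. the GIL) is held, which is why it needs no lock of its own.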
The patch is attached at http://bugs.python.org/issue815646 . It addresses
(or at least I hope it does) all potential problems with pure Python code,
threads, and the file object. It doesn't try to fix C extensions using the
PyFile_AsFile API and doing their own dirty things with the FILE pointer. It
could be a second step if the approach is accepted, but as noted in the 2003
discussions it would probably involve a new API. Whether we want to introduce
such an API in Python 2.x while Python 3.0 has a different IO model anyway
is left open to discussion :)
I'm happy to announce that we now have available for public
consumption, the Python source code for 2.5, 2.6 and 3.0 available
under the Bazaar distributed version control system.
The current Subversion repository is still the master copy of the
source code. We have not made a decision to move to Bazaar
officially, nor have we made a decision to even move off of
Subversion. We're making these branches available exactly so that
you, the Python developer community, can kick the tires and see if it
makes sense to move to a different vcs. Nothing will happen until
after the Python 2.6/3.0 releases anyway.
All the gory details are documented here:
These branches are available both for core Python developers with
commit privileges, and the wider world of developers without commit
privileges. It's this latter group that I think will find the most
compelling immediate benefit from using Bazaar, because they will no
longer need to maintain their own changes using a mass of patch files.
For more information on Bazaar in general, see:
You will probably be most interested in the Bazaar mirrors of the
Subversion master repository. We have a cron job that updates Bazaar
from Subversion every 15 minutes. It is also possible to push changes
made in your Bazaar branches into the Subversion master, so you can
keep reasonably up-to-date and interact with the Python source code
solely via Bazaar.
Please let me know if you have any questions or if anything in the docs
referenced above isn't clear. I know I need to document the
Bazaar->Subversion workflow in more detail.
Huge thanks go out especially to Thomas Wouters who sprinted with me
yesterday on getting the whole infrastructure up and running. Thanks
also to Martin v. Loewis, Sean Reifschneider, and the folks here at
Pycon from the Bazaar project, Ian, Andrew, John, and Edwin.
FYI, I've uploaded a patch that provides for cross-compilation on Windows between 32 and 64 bit platforms - all comments invited!
From: Mark Hammond [mailto:email@example.com]
Sent: Sunday, 30 March 2008 6:01 PM
Subject: [issue2513] 64bit cross compilation on windows
New submission from Mark Hammond <mhammond(a)users.sourceforge.net>:
I've taken the liberty of adding Trent, Christian and Martin to the nosy
list as I know they are actively, if reluctantly interested in this.
This patch allows the distutils to cross-compile on Windows. It has
been tested on x86 and amd64 platforms, with both platforms successfully
able to natively and cross-compile extension modules and create binary
installers.
To cross-compile, specify '--plat-name' to the build command (valid
values are 'win32', 'win-amd64' and 'win-ia64'). This option name was
chosen to be consistent with the bdist_dumb command. I've included the
docs I added below (which are also in the patch), but note that as with
native compilation using distutils, it's not necessary to set any
environment variables or do anything else special with your environment
to make this work.
The patch also adds an x64 target to 'bdist_wininst', which
it creates as distutils/command/wininst-9.0-amd64.exe. This executable
is necessary even for bdist_wininst to work natively on x64, but is
still included here for simplicity.
To assist with testing, I've also added a distutils setup.py script to
the PC/example_nt directory. This is capable of creating bdist_wininst
executables for both native and cross platforms; 'setup.py build
--plat-name=win-amd64 bdist_wininst' will create an amd64 installer on an
x86 system.
The patch has not been tested with a Visual Studio environment without
the cross-compile tools installed - it will obviously fail, but it's not
clear how ugly this failure will be.
Below is the text I added to docs/distutils/builtdist.rst:
Cross-compiling on Windows
Starting with Python 2.6, distutils is capable of cross-compiling
between Windows platforms. In practice, this means that with the
correct tools installed, you can use a 32bit version of Windows to
create 64bit extensions and vice-versa.
To build for an alternate platform, specify the :option:`--plat-name`
option to the build command. Valid values are currently 'win32',
'win-amd64' and 'win-ia64'. For example, on a 32bit version of Windows,
you could execute::
python setup.py build --plat-name=win-amd64
to build a 64bit version of your extension. The Windows Installers
also support this option, so the command::
python setup.py build --plat-name=win-amd64 bdist_wininst
would create a 64bit installation executable on your 32bit version of
Windows.
Note that by default, Visual Studio 2008 does not install 64bit
compilers or tools. You may need to reexecute the Visual Studio setup
process and select these tools.
keywords: 64bit, patch
nosy: Trent.Nelson, ctheune, loewis, mhammond
title: 64bit cross compilation on windows
type: feature request
versions: Python 2.6
Added file: http://bugs.python.org/file9900/windows-cross-compile.patch
Just another heartbeat reminder that I intend to release the next
alphas for Python 2.6 and 3.0 this Wednesday April 2nd, at
approximately 6pm Eastern time (UTC 2200).
Current status: the buildbots for both the trunk and 3.0 look
relatively good, though a few are purple or red:
There is currently one release blocker bug for 3.0:
I'll be looking at these in more detail later today. If you have
time, feel free to comment on the bug or send a follow up about the
stable buildbots. Please try to be very conservative in your commits
to the trunk and 3.0 over the next few days. Concentrate on fixing
existing code rather than committing new features. Your release
manager thanks you for your diligence! :)
While preparing the Python-AST compilation patch, I noticed that each
class nested in a class leaks one reference (2.5 and trunk).
It wasn't found by regrtest -R because it only happens on compiling,
and it seems that all snippets compiled during the tests as opposed to
on import didn't contain such a construct.
The AST generation stage is fine; the leak happens somewhere
after that (I suspect the symtable code). It would be nice if someone
who understands more about that code than I do could fix this.
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.
So I added my first buildbot yesterday (for FreeBSD, and I hope to add a few
more different BSD ones to the fray) and I see that it is failing in the
tests. I tracked it by hand and somewhere along the test it segfaults. The
resulting coredump is not really helpful:
#0 0x281aa61f in pthread_testcancel () from /lib/libpthread.so.2
#1 0x281a2a52 in pthread_mutexattr_init () from /lib/libpthread.so.2
#2 0x28167450 in ?? ()
Anybody have any hints where I should poke around?
Jeroen Ruigrok van der Werven <asmodai(-at-)in-nomine.org> / asmodai
イェルーン ラウフロック ヴァン デル ウェルヴェン
http://www.in-nomine.org/ | http://www.rangaku.org/
Looking for the Sun that eclipsed behind black feathered wings...