Many of the distutils "commands" use distutils.util.get_platform() as the
basis for file and directory names used to package up extensions. On
Windows, this returns the value of sys.platform. On all (desktop) Windows
versions, this currently returns 'win32'.
This causes a problem when trying to create a 64-bit version of an extension.
For example, using bdist_msi, the pywin32 extensions end up with a filename
of 'pywin32-211.win32-py2.5.msi' for both the 32-bit and 64-bit versions.
This is not desirable for (hopefully) obvious reasons.
I'd like to propose that a patch similar to the following (untested, and
against 2.5) be adopted in distutils:
--- util.py     (revision 56286)
+++ util.py     (working copy)
@@ -29,8 +29,19 @@
-    For non-POSIX platforms, currently just returns 'sys.platform'.
+    For Windows, the result will be one of 'win32', 'amd64' or 'itanium'.
+    For other non-POSIX platforms, currently just returns 'sys.platform'.
     """
+    if os.name == 'nt':
+        # copied from msvccompiler - find the processor architecture
+        prefix = " bit ("
+        i = string.find(sys.version, prefix)
+        if i == -1:
+            return sys.platform
+        j = string.find(sys.version, ")", i)
+        return sys.version[i+len(prefix):j].lower()
     if os.name != "posix" or not hasattr(os, 'uname'):
         # XXX what about the architecture?  NT is Intel or Alpha,
         # Mac OS is M68k or PPC, etc.
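For reference, the sniffing above keys on the architecture marker that a
Windows CPython embeds in sys.version. On a 64-bit interpreter that looks
something like this (an illustrative string; the exact build details will
differ on your machine):

  >>> import sys
  >>> sys.version
  '2.5.1 (r251:54863, ...) [MSC v.1400 64 bit (AMD64)]'

The patch slices out the text between " bit (" and the closing ")" and
lowercases it ('amd64' here); if that marker isn't present it falls back to
sys.platform.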
This will result in the final filenames of most bdist_* installers having
the architecture in them. It also has the nice side effect of having the
temp directories used by these commands include the architecture in their
names, meaning it's possible to build multiple Windows architectures from
the same build tree, although that is not the primary motivation. Also note
that bdist_msi has 'win32' hard-coded in one place where a call to
get_platform() would be more appropriate, but I'm assuming that is a bug
(i.e., bdist_msi should use get_platform() regardless of the outcome of this
discussion about what get_platform() should return).
Note that this issue is quite different from, but ultimately impacted by,
the cross-compiling issue. It's quite different because even when building
x64 natively on x64, 'win32' is used in the generated filename, and this
patch fixes that. It is impacted by cross-compiling because it assumes the
host environment is the target environment - but so does the rest of that
function on all platforms.
At 06:23 PM 7/23/2007 -0400, Stanley A. Klein wrote:
>I've been able to generate Fedora rpms for several Python packages since
>you provided the info on the setup.cfg [install] optimize=1 option.
>I'm now having the same problem trying to generate rpms for particular
>packages in the Enthought system that use some special setuptools code in
>numpy that imports the regular setuptools and makes changes. I tried the
>setup.cfg [install] optimize=1 option and it appeared to have no effect
>(i.e., I got the old error).
>Where in setuptools is this option processed? I would like to find the
>place in the numpy-modified setuptools I need to fix to get the same
>processing as the regular setuptools.
It's not a setuptools option, it's a distutils option. It's applied
when the spec file generated by bdist_rpm runs "setup.py install" to
copy the files to the target directory.
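For reference, that option is just the standard distutils [install] section;
a minimal setup.cfg carrying it looks like this (the value shown is only an
example):

  [install]
  optimize = 1

Because the spec file ends up invoking "setup.py install", whatever install
options are in setup.cfg get picked up at that point.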
I'm using easy_install for managing automatic updates to an application.
I include the "--multi-version", "--upgrade", and "--install-dir"
options, so that only an egg is copied to the directory I specify.
I see that the current recommendation is to manually remove old
packages, since there isn't an uninstall feature of setuptools. How
would I go about detecting obsolete eggs myself, so that I can remove
them programmatically after downloading the latest version of an egg?
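To sketch the sort of thing I have in mind (my own rough idea, not an
existing setuptools feature; it assumes everything in the directory is an
egg managed by my updater, and that "obsolete" simply means "not the newest
version of its project"):

  from pkg_resources import Environment

  def obsolete_eggs(install_dir):
      env = Environment([install_dir])        # scan only my --install-dir
      old = []
      for project in env:                     # iterates project names
          dists = env[project]                # newest-to-oldest distributions
          old.extend(d.location for d in dists[1:])
      return old

Is that a reasonable approach, or is there something in easy_install itself
I should be hooking instead?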
Well, I hacked this together pretty quickly but it seems to work for
my particular use case. The Mailman source code is now kept under
the Bazaar revision control system, and I'm about to merge a branch
that converts the Mailman 3 branch from autoconf-based builds to
setuptools. So I needed Bazaar support in order to build my sdist
files and such.
Writing the plugin code was pretty easy after Michael Hudson gave me
the key bzrlib clues. Maybe this could be included by default in the
next version of setuptools?
I've added it as an attachment to SF patch #1757782.
Side note: I found it interesting and a bit annoying that setuptools
doesn't call find_files_for_bzr() recursively. It would seem like
the framework should do the recursion instead of the plugin, as I'd
think it would be the most common use case. In any event, the
setuptools documentation should probably be clear that it's up to the
plugin to recurse into subdirectories.
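For anyone curious, the hook point is the 'setuptools.file_finders' entry
point group; a plugin's setup.py registers its finder roughly like this (the
module and function names below are illustrative, the real code is in the SF
patch):

  from setuptools import setup

  setup(
      name='setuptools_bzr',
      version='0.1',
      py_modules=['setuptools_bzr'],
      entry_points={
          'setuptools.file_finders': [
              'bzr = setuptools_bzr:find_files_for_bzr',
          ],
      },
  )

setuptools then calls find_files_for_bzr(dirname) once for the top-level
directory and expects it to yield every file under revision control, which
is why the plugin has to do its own recursion, as noted above.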
I'm not sure if I've painted Enthought into a corner or not, but I can't
figure out whether there is a way to help users of a library, delivered as
an egg, know and/or maintain dependencies on extras declared in that egg.
By which I mean: if my library, called X, declares extras 1, 2, and 3; the
source for X includes all the code that implements the features in extras 1,
2, and 3; and the extras document the external dependencies X needs to make
those features work, then how does someone importing from the API in X know
whether their dependency on it should be 'X', 'X[1]', 'X[1,2]', etc.? After
all, all the API methods are already there even if they didn't install the
extras. So far the only mechanisms I can see are (a) a manual one, which
depends on people (everyone using X!) knowing the internals of X well enough
to tell that if they import symbols a, b, or c then they need extra 1, etc.,
and (b) trial-and-error iteration driven by unit / integration tests.
Am I just misusing or misunderstanding extras here?
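To make that concrete, here's a stripped-down sketch of the situation (all
of the names are made up):

  # X's setup.py
  from setuptools import setup, find_packages

  setup(
      name='X',
      version='1.0',
      packages=find_packages(),
      install_requires=['core_dep'],           # always needed
      extras_require={
          'extra1': ['feature1_dep'],          # only needed by part of X's API
          'extra2': ['feature2_dep'],
      },
  )

  # A consumer whose code happens to use the part of X backed by extra1 must
  # somehow know to declare install_requires=['X[extra1]'] rather than just
  # ['X']; nothing in X's importable API tells them that.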
BTW, what I'm trying to do is convert Enthought's old monolithic
distribution of ETS into components that could be installed
individually. The first step of this was to package each component as
an egg; the second was to ensure that cross-component dependencies were
put in place (simply declaring A requires B if code in component A
imported a symbol from packages in component B); the third was to try to
minimize these cross-component dependencies by minor refactoring and
introduction of extras to represent non-core functionality; and the
fourth is to ensure that cross-component dependencies properly include
extras as needed.
It's this last step that has got me thinking about this problem. I've
been able to mostly automate the previous steps, or at least write tools
to help me, but for this fourth step, I can't figure out how to do it
consistently. It is quite possible that I've missed some best practice
of when to create or use extras. Any advice would be greatly appreciated!
As a follow-up to my last posting, I have found a not-very-elegant
workaround, which consists of modifying the appropriate compiler flags in
the compiler's compiler_so list. These ultimately come from Python's make
and include files via the function "customize_compiler" in
"distutils.sysconfig". Is there any convenient way of altering these flags
between calls to "setup" which would avoid having to change or extend the
distutils code itself?
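One alternative that occurs to me, sketched below, would be to edit the
flags before each call to setup() rather than poking at compiler_so
afterwards. This is only a sketch: it relies on get_config_vars() returning
the live, cached dictionary that customize_compiler() later reads, and the
flag substitution shown is just an example.

  from distutils import sysconfig
  from distutils.core import setup, Extension

  cfg = sysconfig.get_config_vars()            # the cached dict, not a copy
  for var in ('OPT', 'CFLAGS'):
      if cfg.get(var):
          cfg[var] = cfg[var].replace('-O2', '-O0')   # example substitution

  setup(name='slowmod', ext_modules=[Extension('slowmod', ['slowmod.c'])])

But perhaps there is a cleaner, supported way of doing this?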
Any help would be appreciated.
Apologies for the length of this post. I am using zc.buildout to
compile a specific python version with various extra libraries (e.g.
aspell, mysql). This is all working fine, and I have it creating an
interpreter with all the required libraries via the "interpreter" parameter.
However, I then want to compile/make/make install zope2 against this
specific python interpreter via ./configure --with-python=./bin/mypython
etc... I am using the zc.recipe.cmmi recipe to do this.
This fails (it works if I use the real python executable as the
--with-python argument). I've tracked it down to a small difference between
the standard python interpreter and the one generated by the zc.recipe.egg
recipe:
- the standard python interpreter seems to add the directory of the
file you are running to the path
- the generated one does not add that directory to the path
This means that
./Python2.3/bin/python inst/configure.py runs fine,
but ./bin/mypython inst/configure.py fails with an import error, as it
cannot import anything relative to the file being run.
I don't want to add it in as an "extra_path" because it is not needed
after this step and I don't want it to be part of the standard environment.
My question is:
- is this a desirable difference in behaviour between the two
interpreters? I expected them to work exactly the same...
- what's the cleanest way to get this working consistently/in an
automated way through the buildout?
At the moment, I have manually added the following into the generated
script to recreate the behaviour:
+ import os
+ sys.argv[:] = _args
+ # mimic the stock interpreter: put the script's directory first on sys.path
+ sys.path.insert(0, os.path.dirname(os.path.abspath(sys.argv[0])))
Is there a blessed-as-stable, official release of setuptools?
Perhaps it's just me but a version number of the form '0.6c6' or '0.7a1'
just doesn't seem like the developers think it is stable yet, even if it
is being widely used. :-)
The reason I'm asking is that there is some debate going on at Enthought
about whether we should make available an egg of setuptools in our
'stable' egg repository. My personal opinion is that we shouldn't be
publishing binaries of sources and calling them 'stable' unless the
developers of that source have said 'this version is stable' in some
form. Said with a bit more detail, I think for us to call something
stable, we need to start with stable source and then do testing to
verify that our builds are being done correctly. So, I'd want someone
to say publicly that 0.6c6 (or whatever version) is a stable release of
the source before we say our build of that should go in our stable
repository.
However, there are others who think 'stable' just means that we've
found, in our own testing, that things generally work as advertised and
that really, our build process is building correctly. (I may be
paraphrasing incorrectly, but since other Enthought-ers read this list,
I'll trust them to correct me!) This would mean that we could put a
binary of setuptools 0.7a1 up in our stable repo.
Dave Peterson wrote:
> However, there are others who think 'stable' just means that we've
> found, in our own testing, that things generally work as advertised and
> that really, our build process is building correctly.
I think you'll find that this is how most software packagers (check
any Linux distribution) do it, after the package has been in the
testing or unstable branch for a reasonable amount of time.
Also, you may find you have to mark a pre-alpha package stable the day
it's released if the previous stable version has a serious security
vulnerability.
I'm writing with what is probably a very simple question about
the build_clib command in distutils. My problem is that part of a
library I am building should be compiled with no optimization (-O0)
instead of the default -O2 (with gcc). After
various attempts, I ended up subclassing the "build_clib" class
so that I could pass the "extra_preargs" argument in the call
to "self.compiler.compile" in the class's "build_libraries" method.
Unfortunately, both -O2 (the default) and -O0 now appear in
the compilation options, and the first (-O2!) seems to take
precedence. I wonder if you have come across this problem and
whether you have a solution?
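For concreteness, a simplified sketch of the kind of subclass I mean is
below; note that the compiler_so manipulation is an untested idea for
avoiding the duplicate flags, not the exact code I currently have:

  from distutils.command.build_clib import build_clib

  class build_clib_noopt(build_clib):
      def build_libraries(self, libraries):
          # strip the default optimisation flag so only -O0 is passed,
          # instead of ending up with both -O2 and -O0 on the command line
          so = getattr(self.compiler, 'compiler_so', None)
          if so is not None:
              self.compiler.compiler_so = [
                  f for f in so if not f.startswith('-O')
              ] + ['-O0']
          build_clib.build_libraries(self, libraries)

  # used via: setup(..., cmdclass={'build_clib': build_clib_noopt})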
I would really appreciate any help that you can give me.