I was installing setuptools on another machine so I could use the
egg-based plugin support in the upcoming Trac 0.9
(projects.edgewall.com/trac). It's looking really neat - you'll be
able to upload an egg via the Web and have it installed into the
currently active Trac, or just drop an egg into a directory.
Previously I had no version of setuptools installed on this machine,
so I downloaded ez_setup.py and ran it. It didn't work the first
time, because setuptools.pth got installed with the wrong permissions
(my umask is 077). The egg itself had the correct permissions; only the
.pth file was affected.
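For anyone hitting the same thing, the workaround is simply to loosen the
permissions on the .pth file after the install; a minimal sketch (the path
is illustrative - adjust for your Python version and prefix):

    import os

    # setuptools.pth was created 0600 because of the umask; a .pth file
    # must be readable by whoever runs Python for site.py to honour it.
    pth = "/usr/lib/python2.4/site-packages/setuptools.pth"
    os.chmod(pth, 0644)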
Nicholas Riley <njriley(a)uiuc.edu> | <http://www.uiuc.edu/ph/www/njriley>
This morning, I updated to Mac OS 10.4.2 and then discovered that one
of my eggs could no longer be found. Rebuilding the egg gave me the
answer: Darwin had bumped from 8.1 to 8.2.
My guess is that an egg built under Mac OS 10.4.2 will work just fine
on a 10.4.1 Mac. It would be a drag to have a separate egg for each
minor revision of the OS.
Should this be done differently? I seem to remember an option for
setting the platform, but I wouldn't want to do something that would
break how easy_install does things.
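For context, the platform tag in question comes straight from distutils;
a quick way to see what your machine reports (the comment shows roughly
what I'd expect on 10.4.2):

    # Prints the platform string that ends up in the egg's file name,
    # e.g. something like "darwin-8.2.0-Power_Macintosh".
    from distutils.util import get_platform
    print get_platform()

The option I half-remember is presumably bdist_egg's --plat-name, which
would let you stamp the egg with a looser string, though I don't know how
easy_install's platform matching would treat it.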
I'm currently looking at integrating bgen with distutils.
Bgen is a little-known part of the Python core distribution: it is
similar to swig, in that it generates C extension modules. In some
respects it is more powerful than swig, the main one being that it
reads standard C .h files instead of adorned .i files.
(Incidentally, this is also the reason it's part of core Python: it
is used to generate the MacOS API modules, so these are (almost)
automatically updated when Apple adds new functionality. At least,
that was true under MacOS 9 and will be again when I get my act
together. :-) But bgen has a lot of disadvantages when compared to
swig, the main one being that it is a rather fearsome tool to try and
master. Integration with distutils is one of the things I want to do
to lower that barrier.
But now I'm at a loss as to how to proceed. I had a look at how swig
is integrated into distutils, and I don't really like it: it smells
like a hack. And, according to the comments in the source and the
manual, the author agrees with me. :-) Swig support is basically done
in the build_ext command, by filtering out all ".i" files in the
source file list very early in the process, running swig on them, and
replacing them with the .c or .cpp equivalents.
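For reference, the gist of that filtering, paraphrased rather than copied
from the distutils source (run_swig is a stand-in for actually spawning
swig):

    import os

    def run_swig(source, target):
        # Stand-in: build_ext spawns something like
        # "swig -python -o <target> <source>" here.
        pass

    def swig_sources(sources):
        # Replace each .i file in the source list with the C file swig
        # will generate; everything else passes through untouched.
        new_sources = []
        for source in sources:
            base, ext = os.path.splitext(source)
            if ext == ".i":
                target = base + "_wrap.c"
                run_swig(source, target)
                new_sources.append(target)
            else:
                new_sources.append(source)
        return new_sources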
I can see various ways of adding bgen support, but I'm not sure which
one is the best one, and/or whether there are other options. So I'd
be interested in hearing what other people think, and how other
packages have added a preprocessor to distutils.
There's a fair amount of Python code needed to drive bgen, at least
for interfaces to complex APIs (bridging C types to Python, handling
callbacks, how to parse the specific .h files for this API, etc).
Currently that code is in two .py files but it will be put in a
class, probably modeled somewhat after Extension (but having C/C++
source files as output instead of dynamic extension modules).
What I don't know is how I'd connect this to the Extension object
that will create the extension module. Ideally I'd like the bgen
process to be optional. In other words, the distribution packager has
three options: (a) include the bgen C output in the distribution and
don't run bgen unless the end user specifically asks for it; (b)
include the bgen C output but only run bgen if the normal timestamp
dependencies require it; or (c) always run bgen.
But it doesn't seem the Extension object currently has any support
for such make-like chaining, and I'm not sure how to add it. One way
would be to allow non-strings in the sources argument, and do
something smart there. A similar mod could be used for libraries and
extra_objects to allow chaining there too. Another way would be to
add a "dependencies" argument, where those dependencies are objects
that get run early, and can add their results to sources, libraries
and extra_objects. I think this latter solution is probably better,
as such a dependency object could modify multiple arguments of
Extension in one fell swoop. As a somewhat contrived example, an
"OptionalJPEGSupport" dependency could check whether the relevant
libraries and include files are available to enable JPEG support in
an imaging package, and then add the right source files, libraries,
defines, library paths, and include paths to the relevant Extension object.
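To make the proposal concrete, here is a purely hypothetical sketch - the
dependencies argument, the establish() protocol and OptionalJPEGSupport
are all made up, and have_jpeg() is a stub for a real header/library probe:

    from distutils.core import Extension

    def have_jpeg():
        # Stub; a real probe would look for jpeglib.h and -ljpeg.
        return False

    class OptionalJPEGSupport:
        # A dependency object that build_ext would run early, so it can
        # adjust several Extension arguments in one fell swoop.
        def establish(self, ext):
            if have_jpeg():
                ext.sources.append("jpeg_support.c")
                ext.libraries.append("jpeg")
                ext.define_macros.append(("HAVE_JPEG", "1"))
                ext.include_dirs.append("/usr/local/include")

    ext = Extension("imaging", sources=["imaging.c"])
    OptionalJPEGSupport().establish(ext)   # build_ext would do this for us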
But all of this is made quite a bit more difficult (I think) by the
fact that Extension doesn't really do anything: it's only a container,
and all the logic is in build_ext. Maybe I should follow the paradigm
set by "build_clib", and add a "build_bgen" command with build_ext
picking up the results? And maybe there are better solutions that I
haven't thought of yet?
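If the command route wins, the skeleton would presumably look something
like this (the build_bgen name and the bgen_interfaces attribute are both
made up; only the Command machinery is real distutils):

    from distutils.cmd import Command

    class build_bgen(Command):
        description = "run bgen to turn interface descriptions into C sources"
        user_options = []

        def initialize_options(self):
            self.bgen_interfaces = None

        def finalize_options(self):
            # Hypothetical: setup() would pass these in, much as
            # build_clib gets its 'libraries' list.
            self.bgen_interfaces = getattr(
                self.distribution, "bgen_interfaces", None) or []

        def run(self):
            for interface in self.bgen_interfaces:
                interface.generate()   # hypothetical: writes the .c files

build_ext would then pick the generated files up from the build directory.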
Jack Jansen, <Jack.Jansen(a)cwi.nl>, http://www.cwi.nl/~jack
If I can't dance I don't want to be part of your revolution -- Emma Goldman
FYI, I've just checked in 0.6a0, which is a pretty major refactoring of
certain aspects of the pkg_resources implementation and API. The original
purpose was to make the API conform to the design spec I posted a few weeks
ago, but in the process I ended up finding lots of latent bugs that I was
able to stomp. For example, much of the setuptools/EasyInstall internals
were vulnerable to being confused by case-insensitive filesystems, and to
being fooled by symlinks into thinking that a pair of paths were different
even though they referred to the same file.
There are also a lot of new API features, mainly to do with the new
WorkingSet class, which embeds the logic needed to manage a "working set"
of active (importable) distributions. This allows you to do things like:
# List all distributions that can be imported without another
# require() call
for dist in pkg_resources.working_set:
    print dist
You can also use the 'subscribe' method of a working set to add a callback
that will be invoked for every active distribution (those already present
in the working set, plus any that are added later). This is so that
frameworks and extensible applications can register to scan plugins for new
and useful metadata.
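A minimal sketch of that hook (the callback is mine; the subscribe call is
the new API):

    import pkg_resources

    def saw_distribution(dist):
        # Invoked once per already-active distribution, then again for
        # every distribution activated later.
        print "now active:", dist

    pkg_resources.working_set.subscribe(saw_distribution)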
Anyway, because of the breadth of the changes, I'd like assistance with
testing on different platforms, Python versions, etc. before I turn this
into an official 0.6a1 release. Note that if you have any scripts that use
any pkg_resources APIs for Distribution, Requirement, or
AvailableDistributions objects, they may need to be updated; see the
changelog in setuptools.txt (in the CVS version of setuptools) for details
of the changes.
P.S. I've also implemented a stricter version of Ian Bicking's #egg syntax,
and included docs on how to use it in setuptools.txt under the heading,
"Making your package available for EasyInstall". I have not, however,
implemented a --develop option yet.
I tried doing a bit of EasyInstall evangelism, and apparently, the
Kool-Aid isn't quite sweet enough yet. ;-)
I got feedback from someone trying the very first example with SQLObject
in the documentation. He is using Debian Linux and thus does not install
anything that's not from a .deb into /usr/lib. User-installed Python
packages need to go into /usr/local/lib/python2.x/site-packages. To
encourage this, Debian's site.py is patched to add that to the list of
.pth-enabled directories. I imagine that similarly conscientious/anal
distributions do likewise.
My friend is a bit more conservative than that, even. He manages
/usr/local using GNU Stow so he can make installations as non-root. This
is very important to him. He's even willing to use tricky .pth hacks
to permit this.
Fortunately, all of his technical concerns can be addressed by adding
 As documented here:
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
I've attached a patch to detect fragments. There were a couple of ideas
bandied about; this seems the simplest. You use a link with an
#egg=ProjectName fragment; you can include a version (e.g.
#egg=ProjectName-1.0), otherwise it's unversioned. The only package
I've tested it on is "flup", listed on
http://pythonpaste.org/package_index.html -- in that case there's no
competing or versioned listings in PyPI; if there were, the entry would
simply be ignored and would be useless.
PJE also mentioned calling the packages things like flup_devel, so that
it's a completely different package from flup. The problem, then, is
that requirements can't be satisfied by the development package. It
seems better to have some magic version string for development, that is
neither more nor less than other versions (or at least it depends on
context -- it's the highest version number when using --develop [should
that be implemented], and the lowest otherwise). But that's only useful
given a --develop option to easy_install.py.
Ian Bicking / ianb(a)colorstudy.com / http://blog.ianbicking.org
RCS file: /cvsroot/python/python/nondist/sandbox/setuptools/setuptools/package_index.py,v
retrieving revision 1.11
diff -r1.11 package_index.py
> # make emacs happy: '
> EGG_FRAGMENT = re.compile(r'egg=([^=&]*)')
< path = urlparse.urlparse(url)[2]
> scheme, server, path, parameters, query, fragment = \
>     urlparse.urlparse(url)
> if fragment:
>     match = EGG_FRAGMENT.search(fragment)
>     if match:
>         return interpret_distro_name(
>             url, match.group(1), metadata)
> # Remove any fragments from the URL
> url = url.split('#', 1)[0]
I scanned the distutils mailing list archive to no avail, so hopefully
this question hasn't been asked and answered before.
Basically, my ReleaseForge application uses distutils to generate zip, tar
and rpm packages. Everything has worked perfectly up until now, but I
now want to generate a python2.4-specific rpm in addition to a
python2.3 one.
My development system is FC3 and has rpms of Python2.3 (default system
wide) and Python2.4 installed:
$ rpm -q python python2.4
When I issue the following command:
$ python setup.py bdist_rpm
the generated rpm files have references to
/usr/lib/python2.3/site-packages. This is of course expected since
python2.3 is the default.
However, what is unexpected is that when I invoke python2.4 directly to
generate the rpm:
$ python2.4 setup.py bdist_rpm
the generated rpm still has references to /usr/lib/python2.3/site-packages
rather than /usr/lib/python2.4/site-packages.
Python 2.4 (#1, Nov 30 2004, 11:25:14)
[GCC 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)] on linux2
>>> import sys
>>> sys.path
['', '/usr/lib/python24.zip', '/usr/lib/python2.4',
What I'd like is that, when I invoke "python2.4 setup.py bdist_rpm", the
files in the generated rpm reference /usr/lib/python2.4/site-packages
rather than the respective 2.3 directory.
FWIW, the 2.3 and 2.4 generated rpms will be essentially the same (w/ the
only difference being the location of site-packages).
What painfully obvious point am I missing?
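Perhaps it's bdist_rpm's --python (or --fix-python) option, which
hard-codes the interpreter path in the generated .spec file in place of
the default "python" - that would at least explain why the 2.3 paths keep
showing up. An untested guess:

$ python2.4 setup.py bdist_rpm --fix-python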
Thanks for any help,
I have been looking at how I can use eggs to manage the packages I currently
use in my Python installation. Mostly, everything has gone extremely smoothly,
and I can build eggs either from bdist_wininst executables, or from source
packages, without any problems (and without needing to modify the
packages themselves).
With one package (Python Dateutils) I noticed that the resulting egg was "not
zip safe", and I thought I'd take a look at why. It turns out to be a
relatively simple issue, as the package contains a data file which isn't
accessed via the resource manager protocol.
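For the curious, the change is essentially a one-liner; a sketch, with a
made-up file name standing in for the package's real data file:

    import pkg_resources

    # Instead of open()ing a path built from __file__, which breaks
    # inside a zipped egg, ask the resource manager for the bytes:
    data = pkg_resources.resource_string(__name__, "timezones.dat")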
Patching the source isn't hard, but it left me with a dilemma. I'd like to
submit the change back to the package author, but I'm not sure how receptive
he'd be to converting to setuptools (ie, adding a dependency on setuptools,
changing setup.py, etc). I'd like to be able to offer a minimal change, which
just switched to using the resource management protocol - people who wanted to
build eggs could change setup.py, or just use the --command-packages
option on the command line.
The problem is, how to depend on *just* the pkg_resources module. I can bundle
it with the application, which is simplest, but produces a maintenance burden.
I can just say that users must install setuptools, but this seems ugly, as
there is no obvious reason why a date utility package should depend on a
setup tool.
Having pkg_resources in the standard library is clearly the best long-term
solution, but in the shorter term, shouldn't pkg_resources be unbundled from
setuptools? In many ways, isn't that the *point* of setuptools/eggs, to ease
unbundling of packages - and so, shouldn't setuptools "practice what it
preaches", and unbundle itself...? I realise that this could cause chicken and
egg problems, where setuptools needs pkg_resources to ensure that it can
support not having pkg_resources bundled, but even so...
I've just had a look at the new documentation for setuptools. I've not
read it all in detail yet, but one thing struck me regarding the
"automatically download dependencies" feature.
It isn't going to work for people (like me) stuck behind a firewall
that Python doesn't support (Windows NTLM based firewall). Obviously,
setuptools is never going to be able to resolve a situation like this,
nor would I expect it to. But can I suggest two possible changes to
make it easier for people with limited internet access?
1. A "manual download" mode, where setuptools lists the files which it
wants you to obtain, and then leaves it to you how you get them. I'm
not sure how plausible this would be, given the necessarily iterative
process involved in resolving dependencies, but even a little help
would be useful (a report of unresolved dependencies when run with a
--no-download flag would be the most basic help).
2. A way of specifying an external command to use to download files
over HTTP. This would (for example) allow me to use curl, which does
support NTLM proxies, rather than relying on Python's built-in HTTP
support, which doesn't.
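To illustrate suggestion 2, a rough sketch of the sort of hook I mean -
nothing like this exists in easy_install today, and the function name is
invented (subprocess is Python 2.4's):

    import subprocess

    def external_download(url, filename, command="curl"):
        # Delegate the fetch to an external program; curl understands
        # NTLM proxies, which Python's built-in HTTP support doesn't.
        rc = subprocess.call([command, "-o", filename, url])
        if rc != 0:
            raise RuntimeError("download of %s failed (exit %d)" % (url, rc))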
I can't see any way to get extras_require to be installed, unless some
other package indicates an extra feature in the package that needs to be
installed.
For example, if you want HTTP support in Paste, you can install
WSGIUtils. But this isn't required. How can I get WSGIUtils installed?
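For reference, the only mechanism I can find is the one above: the
providing package declares the extra, and some *other* requirement has to
name it. Paste's real setup.py may differ; this is just the shape:

    from setuptools import setup

    setup(
        name="Paste",
        version="0.0",
        extras_require={"HTTP": ["WSGIUtils"]},
    )

    # ...and a downstream package would pull the extra in with:
    #     install_requires = ["Paste[HTTP]"]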
Or, more specifically, there's a bunch of examples distributed in the
Paste source distribution; these examples require lots of extra
software, even if Paste doesn't. How can I easily get all the examples
working?
Ian Bicking / ianb(a)colorstudy.com / http://blog.ianbicking.org