In a nutshell, here is what I want to be able to do:
1. Download several Python eggs into a directory, including multiple
versions of some eggs.
2. Add that directory to sys.path at runtime.
3. Use pkg_resources.require() to select versions of the eggs.
4. 'import Foo' for each package "Foo".
Python Eggs are great, but I have a major problem with setuptools: Where I
work, I am not allowed to alter PYTHONPATH or to install into site-packages.
I can add paths to sys.path at runtime, but normally we load modules via
sys.meta_path. We have a large database of modules by many different
developers, and we have per-project configuration files which select
versions of these modules to be accessible via sys.meta_path. I'd like to
continue to select our own modules via the configuration files, but I'd like
to load Eggs from PyPI via the procedure above.
This may seem impossible, but I have a fairly simple solution: Put an Egg
importer on sys.meta_path. I'll outline the solution:
First, realize that no site.py or .pth files will be processed in this usage.
Second, when a version range is specified via pkg_resources.require(),
register that specification into a global Egg importer (attached to the
setuptools module). Also, append that importer to the end of sys.meta_path,
if that has not already been done. This Egg importer should follow the
protocol of PEP 302.
Third, when a module registered as an Egg is imported, Python will first
look for an importer on sys.meta_path. If not intercepted by another
meta_path importer, eventually our new Egg importer will be tried. The Egg
importer should search for the Egg by scanning sys.path, in order. It
should look not only for Eggs, but also for directories which might contain
Eggs. It should accept the first Egg discovered which matches the
registered criteria and load it; otherwise, it should raise an ImportError.
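The three steps above can be sketched as a meta_path finder. This is a hypothetical sketch, not setuptools code: the `EggImporter` name, the `require()` method, and the filename-only egg matcher are all illustrative. It is written against the modern `find_spec()` protocol; the PEP 302 protocol of the era used `find_module()`/`load_module()`, but the control flow is the same.

```python
import os
import sys
import zipimport
from importlib.machinery import ModuleSpec

class EggImporter:
    """Sketch of the proposed egg importer (illustrative, not setuptools API)."""

    def __init__(self):
        self._required = {}  # top-level package name -> version predicate

    def require(self, name, predicate=lambda version: True):
        """Register a requirement (the second step of the outline)."""
        self._required[name] = predicate
        if self not in sys.meta_path:
            sys.meta_path.append(self)  # end of sys.meta_path, as proposed

    def find_spec(self, fullname, path=None, target=None):
        top = fullname.partition('.')[0]
        if top not in self._required:
            return None  # unregistered: fall through to normal imports (note 2)
        for entry in sys.path:  # scan sys.path, in order
            egg = self._matching_egg(entry, top)
            if egg is not None:
                loader = zipimport.zipimporter(egg)
                return ModuleSpec(fullname, loader, origin=egg)
        # Registered but unsatisfiable: the error surfaces at import time (note 3)
        raise ImportError("no egg on sys.path satisfies %r" % top)

    def _matching_egg(self, entry, top):
        """Placeholder matcher: a real importer would parse EGG-INFO metadata
        and apply the registered version predicate; this one matches zipped
        eggs by filename only, and also peeks into directories that might
        contain eggs."""
        name = os.path.basename(entry)
        if name.endswith('.egg') and name.startswith(top + '-'):
            return entry
        if os.path.isdir(entry):
            for candidate in sorted(os.listdir(entry)):
                if candidate.endswith('.egg') and candidate.startswith(top + '-'):
                    return os.path.join(entry, candidate)
        return None
```

Note how backward compatibility falls out of the first check in `find_spec()`: anything not registered via `require()` is simply ignored by this finder.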
Some things to note about this approach:
1. sys.path_hooks is ignored. This is because the call to require()
implies a change in import semantics.
2. If an egg is not registered via 'require', then the Egg importer on
sys.meta_path will not look for it, but the egg can still be loaded if it
was 'easy_installed' and if the site.py and .pth files were properly read.
In other words, we have complete backward compatibility.
3. If an appropriate egg cannot be found, the ImportError is raised at
'import' time, not when 'require()' is called. For this reason, we may need
a different 'require' function, maybe 'eggs.require()'. Or, sys.path could
be scanned inside 'require()'.
4. We have eliminated the O(n^2) look-up cost of a large number of eggs
listed on sys.path directly. In fact, we don't have to put any eggs on
sys.path, but if we do, they will take precedence over later eggs or
directories containing eggs, as a user would expect.
I hope that everyone takes this request seriously. The current system is
almost completely broken for me. The only way I can use eggs is to add each
one to sys.path explicitly at runtime. That's a huge PITA and a big
maintenance problem. I'm very disappointed by the whole PyPI system at the
moment. Something that could be extremely easy is instead very difficult.
I downloaded ETSProjectTools and used "python setup.py bdist_rpm" to build
the rpms. I did this on both Fedora 7 and Centos 5.2.
In both cases I had the rpm for python-configobj installed.
I don't know why, but during the builds the setup went out to the net to
find configobj. It found it (although I don't know what it did with it)
and the builds completed properly. (The setup.cfg had the proper
statements in it for doing bdist_rpm and putting files in the right
places, including docs.)
However, after I installed (using rpm) and went to run ets, I got a
traceback complaining about not being able to find configobj.
I was able to fix it by going into the egg-info directory and editing
requires.txt to comment out configobj.
I don't know if this is a problem in the ETS setup.py or in
distutils/setuptools, but there ought to be a way to fix it without having
to hack the files in the egg-info directory. Either whatever implements the
"requires" check ought to be able to see that configobj.py was sitting
right there in site-packages, or there ought to be a way to tell the
setup.py call to ignore the "requires" entry for configobj.
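The gap described here is that the dependency machinery resolves requirements against installed package *metadata* (egg-info), not against what is importable, so a configobj installed by an RPM without egg-info satisfies `import configobj` but not a requires entry. A small sketch of the distinction, using stdlib `importlib.metadata` as a stand-in for pkg_resources' egg-info lookup (function names are illustrative):

```python
# Illustrates the mismatch: a module can be importable while the
# dependency machinery, which looks for installed metadata
# (egg-info / dist-info), still cannot see it.
import importlib.metadata
import importlib.util

def importable(module_name):
    """True if a plain 'import module_name' would find the module."""
    return importlib.util.find_spec(module_name) is not None

def has_metadata(project_name):
    """True if installed package metadata exists for the project,
    which is what requirement resolution actually checks."""
    try:
        importlib.metadata.version(project_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False
```

A requirement fails whenever `has_metadata()` is False, regardless of what `importable()` says.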
Hello Phillip J. Eby,
I am having trouble setting up an installer package on Mac OS 10.5
that installs setuptools and nose testing for Python 2.5. The
installer .pkg drops the payload as a temporary file with the
distributions and runs a short postinstall script to do the installation
of the packages. The scripts I have running are fairly simple:
easy_install -v nose
When I run the equivalent script as an executable outside of the
Installer it works great:
postinstall ; exit;
Corbie:~ chris$ /Users/chris/Documents/PyGraphics/Installer/Nose/
resources/postinstall ; exit;
Copying setuptools-0.6c8-py2.5.egg to /Library/Frameworks/
setuptools 0.6c8 is already the active version in easy-install.pth
Installing easy_install script to /Library/Frameworks/
Installing easy_install-2.5 script to /Library/Frameworks/
Processing dependencies for setuptools==0.6c8
Finished processing dependencies for setuptools==0.6c8
Searching for nose
Best match: nose 0.10.3
nose 0.10.3 is already the active version in easy-install.pth
Installing nosetests script to /Library/Frameworks/Python.framework/
changing mode of /Library/Frameworks/Python.framework/Versions/2.5/
bin/nosetests to 755
Processing dependencies for nose
Finished processing dependencies for nose
Note that this method has the correct default install location.
/Library/Frameworks/Python.framework/Versions/2.5/bin for scripts and /
packages for the modules.
When the SAME script is run as part of the installer package I get:
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
[Errno 2] No such file or directory: '/Library/Python/2.5/site-
The installation directory you specified (via --install-dir, --
the distutils default setting) was:
This directory does not currently exist. Please create it and try
choose a different installation directory (using the -d or --install-
It seems like a different default directory is being used and I can't
figure out why or how to change it. On top of that, when I try
forcing it in various ways (including the .cfg files) suggested in the
setuptools documentation, it says that those directories aren't on the
Python path, as my $PYTHONPATH variable is ''. This is strange, because
sys.path DOES come up with the correct paths.
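One way to narrow down a discrepancy like this is to dump what the Installer-spawned interpreter actually inherits and compare it against an interactive shell run. A diagnostic sketch (not part of the poster's script):

```python
# Diagnostic: print the environment and path the running interpreter
# actually sees, for comparison between Installer and shell contexts.
import os
import sys

print("PYTHONPATH =", repr(os.environ.get("PYTHONPATH", "")))
print("sys.prefix =", sys.prefix)
for entry in sys.path:
    print("sys.path entry:", entry)
```

Differences in `sys.prefix` between the two runs would explain why a different default install directory gets chosen.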
I tried fooling setuptools by assigning the directory to PYTHONPATH
and calling the installer then. It manages to get things in the right
place, but somehow fails to configure itself properly and throws the following:
Corbie:~ chris$ easy_install nose
Traceback (most recent call last):
easy_install", line 5, in <module>
from pkg_resources import load_entry_point
Extras/lib/python/pkg_resources.py", line 2559, in <module>
Extras/lib/python/pkg_resources.py", line 518, in resolve
raise DistributionNotFound(req) # XXX put more info here
I don't understand why. Is there some quirk of shells that I don't
understand? Does it have something to do with different
environment variables? Is there something in the behaviour of
setuptools.egg that can explain this? Can you think of a way around it?
Sorry for the massive email! and thanks!
At 01:41 PM 8/4/2008 -0400, Alexander Michael wrote:
>Again, attempting to offer up practical solutions. Edit the
>setup.cfg's to drop the dev option in the release branches and update
>the trunk to the next version (i.e. 3.1.dev-rXXXXX)? That way,
>checkouts of the release branches will be 3.0-rXXXXX (a post release
>of 3) and the trunk will be a post of a pre-release that is newer than
>anything else in the repository. Just a thought...
This is basically what I do, except I don't bother having release
branches or tags, and instead of editing the setup.cfg, I just use my
"release" alias (which maps to 'egg_info -RDb""').
So, when I did two back-to-back releases of BytecodeAssembler today
(due to finding a bug after the first release), my command sequence was:
# start: version in setup.py is 0.4, release on PyPI is 0.3
# do development of version 0.4 w/periodic checkins
setup.py wikiup # upload wiki pages
setup.py release sdist bdist_egg upload
# ... edit version number from 0.4 to 0.5 and check in
# ... find bug, fix it, check it in
setup.py wikiup # upload wiki pages
setup.py release sdist bdist_egg upload
# ... edit version number from 0.5 to 0.6 and check in
# end: version in setup.py is 0.6, release on PyPI is 0.5
Of course, a more robust procedure would probably be to use x.1
versions (e.g. 0.4.1 instead of 0.5), and then bump to the next
non-bugfix version number when development begins on the next
release. If you have release branches, then I guess you'd do the
bugfix bump on the branch, and a non-bugfix increment on the trunk.
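The "release" alias above uses ordinary setuptools alias machinery: in egg_info, -R disables the svn-revision tag, -D disables the date tag, and -b "" sets an empty build tag. A setup.cfg along these lines (an assumed layout, not the author's actual file) reproduces the workflow:

```ini
# Assumed setup.cfg sketch reproducing the workflow above.
[egg_info]
# Everyday builds get a .dev tag plus the svn revision,
# e.g. 0.4.dev-r65223, so checkouts never masquerade as releases.
tag_build = .dev
tag_svn_revision = 1

[aliases]
# 'setup.py release sdist bdist_egg upload' strips the dev tagging
# (-R: no svn revision, -D: no date, -b "": empty build tag).
release = egg_info -RDb ""
```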
Am I missing something or is the following a bug whereby adding the
'.dev' tag is doing something weird?
>>> from pkg_resources import parse_version as pv
>>> pv('1.0a1.dev') < pv('1.0a1')
>>> pv('1.0a1') < pv('1.0')
>>> pv('1.0a1.dev') < pv('1.0.dev')
>>> pv('1.0a1') < pv('1.0.dev')
>>> import setuptools
This is mainly causing us problems when projects try to track alpha and
beta level bumps in dependencies, such as when project Foo requires
project Bar version 3.0b1 via a requirement like 'Bar >= 3.0b1' (which
means we don't want the development prereleases of Bar's first beta
release, but anything after that should be okay.) But then when we
actually want to release Bar 3.0 and change the version number to just
'3.0', suddenly installs fail while we try to run the last set of tests
because '3.0.dev' is older than '3.0b1'.
If it is not a bug, how do you handle situations where you want to run
that last round of testing prior to tagging and building releases? I'd
rather do that AFTER making all source changes, and not have to change
the version number after the testing.
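For reference, this is the ordering later standardized in PEP 440, where a `.dev` release of X.Y sorts before every pre-release of X.Y; the old pkg_resources parse_version gave the same relative ordering for these inputs. A check with the third-party `packaging` library (assumed available; note `.dev` normalizes to `.dev0`):

```python
# PEP 440 ordering check mirroring the session above.
from packaging.version import Version

print(Version("1.0a1.dev0") < Version("1.0a1"))     # True: dev of a prerelease sorts first
print(Version("1.0a1") < Version("1.0"))            # True: prereleases sort before the final
print(Version("1.0a1.dev0") < Version("1.0.dev0"))  # False: 1.0.dev0 sorts before 1.0a1.dev0
print(Version("1.0a1") < Version("1.0.dev0"))       # False: the complaint above
```

The last comparison is exactly the Bar case: '3.0.dev' sorts before '3.0b1', so a 'Bar >= 3.0b1' requirement rejects the pre-release-testing checkout.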
Mike Orr wrote:
> Alberto, this is a wonderful idea. It may help with some issues I'm
> currently facing, running Pylons on App Engine and in Py2exe,
> distributing projects to assistant developers, and in training
> sessions for new developers.
> My organization has developers on all three platforms (Linux, Windows,
> and Mac), so we'd definitely be interested in all-platform
> or cross-platform builds if they become feasible.
I've implemented "universal" installer support in the latest release, 0.1.4.
The support is limited, however, by setuptools' inability to
distinguish between different builds (UCS2 vs. UCS4 on Linux; Fink vs.
MacPorts on OS X; etc.), though this might be fixed in the future if
someone more knowledgeable than me on these issues wants to lend a hand :)
A sample universal installer for TurboGears2 which supports linux-i686,
win32 and macosx-10.3-i386 (with the mentioned limitations) is here:
P.S.: These cross-posted replies are getting out of control. Since I'm the
one who began all this madness by cross-posting the announcement, I
should be the one trying to stop it: let's continue this thread on
distutils-sig, please. Sorry about this.
New submission from Philip Jenvey <pjenvey(a)underboss.org>:
(Extracted from #9)
This patch disables failing tests on Jython. Per our last conversation on
distutils-sig about the failing functionality -- it isn't actually being used,
nosy: pje, pjenvey
title: [PATCH] Skip failing tests on Jython
Added file: http://bugs.python.org/setuptools/file13/jython_failures-r65223.diff
Setuptools tracker <setuptools(a)bugs.python.org>
In command/sdist.py, line 90 refers to a global log function which does not exist. Thus, instead of logging a warning about not understanding Subversion, setuptools crashes.
The default setup scripts for Pylons (and probably others) use svn revision tagging. Thus, the following causes a crash:
paster create -t Pylons proj
And, since the user hasn't done any config (or anything but follow a tutorial), he isn't able to easily figure out what went wrong. Also, since setuptools was likely installed automatically for him via the egg in the cheeseshop, he isn't aware of anything that may be in dev for setuptools. A new user's likely response is to get frustrated for 15 minutes, then move on to another web framework / language.
Even if you don't provide svn 1.5 support, can you at least post an egg that won't crash in the presence of svn 1.5?
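The crash described here is just a NameError: the code calls a bare `log(...)` that was never defined at module level. A sketch of the shape of the fix, using the stdlib logging module for illustration (setuptools itself would route the call through distutils.log; the function name below is hypothetical):

```python
import logging

log = logging.getLogger("setuptools.sdist")

def warn_unrecognized_entries(path):
    """Stand-in for the broken call site in command/sdist.py: the
    original called an undefined global log(...), raising NameError
    instead of warning; routing through a defined logger lets the
    sdist command degrade gracefully when it can't parse svn data."""
    log.warning("unrecognized .svn/entries format in %s; "
                "svn revision tagging disabled", path)
```

With a change like this, an unrecognized Subversion 1.5 working copy produces a warning and an untagged build instead of a traceback.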
New submission from Dave Peterson <dpeterson(a)enthought.com>:
It would be convenient to be able to upload pre-built eggs using the 'python
setup.py upload' command as the owner/author of a project may not have machines
available to cover the platforms for which pre-built binaries are desired.
Having this feature would allow the project owner / maintainer to accept
pre-built eggs from others in their project community and get them onto PyPI.
I've put together and attached a patch that adds a new '--bdist-path' (or '-b')
option to the upload command that allows calls like 'python setup.py upload -b
dist/foo-1.0-py2.5-win32.egg' to work.
The patch is actually for the distutils.command.upload module as I've found that
in setuptools-0.6c8, even though there is a setuptools/command/upload module, it
isn't used at all. I'm assuming this is because there isn't much difference
between the setuptools and distutils versions of this command but that something
was intended eventually. I'm making this a setuptools ticket because I'd like
to see this improvement in setuptools :-)
title: [PATCH] ability to upload a pre-built egg to PyPi
Added file: http://bugs.python.org/setuptools/file11/upload.py.patch
Setuptools tracker <setuptools(a)bugs.python.org>
Benji York wrote:
>> In case anybody's wondering how this complies with our "no removal of any
>> release whatsoever" policy , be assured that a 3.4dev-r73090 thing isn't
>> a release by our standards. This version number not only contains the 'dev'
>> marker, meaning it must have come from a development branch (possibly the
>> trunk), it also contains the -rXXX suffix meaning it was made right from a
>> subversion checkout without having created a tag first (why else would you
>> want to include the revision number).
> Still, it's likely that someone was using it and their buildouts are now
> broken. We should have instead generated a proper release with a higher
> version number and left the dev release alone.
This is silly.
Mistakes happen. Buildout and/or setuptools should be tolerant of
accidental releases that are then removed from PyPI.
What currently happens in cases like this?
Simplistix - Content Management, Zope & Python Consulting