Well... there seems to be a bug in setuptools.
The variable log is accessed at line 98, in the method entries_finder:
    log.warn("unrecognized .svn/entries format in %s", dirname)
which fails with:
    NameError: global name 'log' is not defined
So at the top of the file please add:
    from distutils import log
--- command/sdist.py 2008-10-20 16:02:09.000000000 +0530
+++ sdist_old.py 2008-10-20 16:03:45.000000000 +0530
@@ -1,6 +1,5 @@
 from distutils.command.sdist import sdist as _sdist
 from distutils.util import convert_path
-from distutils import log
 import os, re, sys, pkg_resources
I've renamed pyinstall to pip (last renaming, I promise). It now uses
commands like "pip install something". This will make it easier to add
new commands in the future, with entirely different option signatures.
New site: http://pip.openplans.org
Ian Bicking : ianb(a)colorstudy.com : http://blog.ianbicking.org
Hi again.. :)
>> Recipes work fine for your use case afaict.
So, I tried removing the lines which pointed at my various parts' eggs and
develop options collectively in the [buildout] section, and now I get a
BadStatusLine from within setuptools trying to install my dev egg.
Speaking of which, I ran into this BadStatusLine issue with some other eggs
too; it seems that setuptools could deal with the situation more gracefully
and be more informative, though perhaps that is harder than it looks.
Best, and TIA!
Robert Half - "When one teaches, two learn."
Hi! Sorry for the crosspost, but this mail concerns both lists.
There are two points that need to be discussed to finish the work on
the PyPI proposal started in D.C. (http://wiki.python.org/PEP_374):
- the fail-over mechanism
- merging several indexes
PyPI will provide a static page that lists all its mirrors. Each line
of this file describes a mirror.
It provides the root url, followed by the relative url of:
- the index: the root of the package index
- the last-modified page : a static text file that gives the date of
the last sync.
- the local stats page: a static text file that gives the number of
downloads of a file, per package, per user-agent
- the global stats page, calculated by PyPI, that gives the grand
total of all downloads (the sum of PyPI's local stats and the mirrors'
local stats)
- the mirrors page, that lists all mirrors
(see the proposal doc for more info)
This mirror list says, for example, that a mirror is available at
http://example.com/pypi/index, and that its last-modified
date is available at http://example.com/pypi/last-modified.
On the client side, this makes it possible to list the mirrors of a given
package index and implement a fail-over mechanism; moreover, it makes it
possible to select the nearest mirror.
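To make the fail-over idea concrete, here is a rough client-side sketch. The whitespace-separated layout of the mirror list is an assumption for illustration, since the exact format is still being discussed.

```python
# Sketch of client-side fail-over over the mirror list described above.
# Assumed line layout: "<root-url> <index-path> <last-modified-path>".
from urllib.request import urlopen
from urllib.error import URLError

def parse_mirror_list(text):
    """Parse the static mirrors page into index/last-modified URL pairs."""
    mirrors = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            root, index, last_modified = fields[:3]
            mirrors.append({"index": root + index,
                            "last_modified": root + last_modified})
    return mirrors

def first_working_index(mirrors, timeout=5):
    """Return the index URL of the first mirror that answers."""
    for mirror in mirrors:
        try:
            urlopen(mirror["last_modified"], timeout=timeout)
            return mirror["index"]
        except URLError:
            continue  # mirror is down: fail over to the next one
    raise RuntimeError("no mirror reachable")
```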
*Merging several indexes*
Besides fail-over, another thing needs to be implemented on the client
side: the ability to use several indexes.
This is an obvious missing feature: we don't want to push all our
customers' packages to PyPI.
At the same time, we do want to use tools like distutils, setuptools,
etc. the same way with any kind of package.
So it needs to be easy to use private package indexes alongside PyPI.
It is now possible in Python 2.6, with the new .pypirc file, to define
several indexes. From there, software like PloneSoftwareCenter allows
developers to work with indexes other than PyPI.
But tools like setuptools need to evolve the same way.
Each one of these indexes can have its own mirrors, as defined
previously, but the client needs to combine all the different
indexes into a "super" index.
This can be implemented by working with a sorted list of indexes. When a
client is looking for a package, it can look in each index and pick the
first package that fits.
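The lookup described above can be sketched as follows; representing each index as a plain name-to-URL mapping is a simplification for illustration.

```python
# "Super" index sketch: walk a sorted list of indexes, first hit wins.
def find_package(name, indexes):
    """indexes: ordered list of {package_name: url} mappings."""
    for index in indexes:
        if name in index:
            return index[name]  # first index that has the package wins
    raise LookupError("package %r not found in any index" % name)
```

For example, find_package("ourpkg", [private_index, pypi_index]) would prefer the private index over PyPI.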
Any comments?
Tarek Ziadé | Association AfPy | www.afpy.org
Blog FR | http://programmation-python.org
Blog EN | http://tarekziade.wordpress.com/
New submission from Zooko O'Whielacronx <zooko(a)zooko.com>:
If you follow the install instructions using Safari on Mac OS X, you
will probably end up with the egg accidentally renamed to
setuptools-0.6c9-py2.5.egg.sh, since Safari does that for you
automatically without really pointing out what it is doing.
Then if you run it, it will say "./setuptools-0.6c9-py2.5.egg.sh is not the
correct name for this egg file. Please rename it back to
setuptools-0.6c9-py2.5.egg and try again.".
Why does it matter what the name is?
I ask because I don't want to advise my Macintosh users to follow
http://pypi.python.org/pypi/setuptools/0.6c9#cygwin-mac-os-x-linux-other
if it is going to lead to these sorts of issues for them, nor do I want
to explain to them how to download a file without letting Safari rename it.
title: "Please rename it back to setuptools-0.6c9-py2.5.egg and try again."
Setuptools tracker <setuptools(a)bugs.python.org>
Using vanilla distutils with vanilla Python 2.5, I would like to pass
different extra_compile_args to the setup() call based on the compiler
in use. In fact, if I'm using GCC (whether on Windows, Linux or Mac), I
have a few options to specify; if I'm using MSVC (on Windows), I have
others.
This looks like a basic need, but I couldn't find a way to achieve it.
Most existing scripts in the wild seem to simply check the platform,
which neglects the fact that one can specify --compiler=mingw32 to
build with MinGW GCC on Windows.
Any suggestion on how to achieve this?
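One common recipe (a sketch, not the only way) is to subclass build_ext and choose the flags after the compiler has been resolved, so that --compiler=mingw32 is honored; the flag sets below are only examples.

```python
# Sketch: pick extra_compile_args from the resolved compiler type.
try:
    from setuptools.command.build_ext import build_ext
except ImportError:
    from distutils.command.build_ext import build_ext

# Example flag sets keyed by distutils compiler type; adjust to taste.
EXTRA_FLAGS = {
    "unix": ["-O3", "-fno-strict-aliasing"],     # GCC on Linux/Mac
    "mingw32": ["-O3", "-fno-strict-aliasing"],  # GCC via --compiler=mingw32
    "msvc": ["/O2", "/EHsc"],
}

def extra_args_for(compiler_type):
    return EXTRA_FLAGS.get(compiler_type, [])

class build_ext_with_flags(build_ext):
    def build_extensions(self):
        # self.compiler is resolved by now, unlike at setup() call time.
        extra = extra_args_for(self.compiler.compiler_type)
        for ext in self.extensions:
            ext.extra_compile_args = list(ext.extra_compile_args or []) + extra
        build_ext.build_extensions(self)
```

Hook it up with setup(..., cmdclass={'build_ext': build_ext_with_flags}).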
Here's the list of tasks we are going to work on. They are simple.
- PyPI: write a patch to enforce (or display a warning) that the source
distribution is uploaded, so that if a binary distribution or a zipped
egg is uploaded, we are sure the source is provided as well.
- Documentation: write a glossary for the distutils/setuptools/PyPI
terminology on the python.org wiki
- PyPI mirroring: write a PEP to implement a mirroring protocol, where
mirrors can register at PyPI. Then, when a package is uploaded, mirrors
will be pinged via RPC so they know they can eventually get synced.
- setuptools: finish the patch for the multiple index support, with a
CPAN-like mechanism on the client side, with a socket timeout
- distutils: code cleanup: better test coverage, remove logging, etc.
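The RPC ping in the mirroring task could look roughly like this; the method name mirror.package_updated is invented for illustration, pending the PEP.

```python
# Hypothetical sketch of PyPI pinging its registered mirrors after an upload.
import xmlrpc.client

def ping_mirrors(mirror_urls, package, version):
    """Notify each mirror; unreachable mirrors just catch up on next sync."""
    notified = []
    for url in mirror_urls:
        try:
            proxy = xmlrpc.client.ServerProxy(url)
            proxy.mirror.package_updated(package, version)  # invented method
            notified.append(url)
        except OSError:
            continue  # mirror unreachable; skip it
    return notified
```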
Tarek Ziadé | Association AfPy | www.afpy.org
Blog FR | http://programmation-python.org
Blog EN | http://tarekziade.wordpress.com/
So I have a question for all the developers on this list. Philip thinks
that using symlinks will drive adoption better than an API to access
package data. I think an API will have better adoption than a symlink
hack. But the real question is what do people who maintain packages
think? Since Philip's given his reasoning, here's mine:
1) Philip says that with symlinks distributions will likely have to
submit patches to the build scripts to tag various files as belonging to
certain categories. If you, as an upstream, are going to accept a patch
to your build scripts to place files in a different place wouldn't you
also accept a patch to your source code to use a well defined API to
pull files from a different source? This is a distribution's bread and
butter and if there's a small, useful, well-liked, standard API for
accessing data files you will start receiving patches from distributions
that want to help you help them.
2) Symlinks cannot be used universally. Although it might not be common
to want an FHS style install in such an environment, it isn't unheard
of. At one time in the distant past I had to use cygwin so I know that
while this may be a corner case, it does exist.
3) The primary argument for symlinks is that symlinks are compatible
with __file__. But this compatibility comes at a cost -- symlinks can't
do anything extra. In a different subthread Philip argues that
setuptools provides more than distutils and that's why people switch and
that the next generation tool needs to provide even more than
setuptools. Symlinks cannot do that.
4) In contrast an API can do more: It can deal with writable files. On
Unix, persistent, per user storage would go in the user's home
directory, on other OS's it would go somewhere else. This is
abstractable using an API at runtime but not using symlinks at install time.
5) Cross-package data. Using __file__ to detect file location is
inherently not suitable for crossing package boundaries. Egg
Translations would not be able to use a symlink based backend to do its
work for this reason.
6) Zipped eggs. These require an API, so moving to symlinks is
actually a regression.
7) Philip says that the reason pkg_resources does not see widespread
adoption is that the developer cost of using an API is too high compared
to __file__. I don't believe that the difference between file and API
is that great. An example of using an API could be something like this:
icondirectory = os.path.join(os.path.dirname(__file__), 'icons')
versus:
icondirectory = pkgdata.resource(pkg='setuptools', category='icons')
Instead I think the data handling portion of pkg_resources is not more
widely adopted for these reasons:
* pkg_resources's package handling is painful for the not-infrequent
corner cases. So people who have encountered the problems with
require() not overriding a default, or not selecting the proper version
when multiple packages specify overlapping version ranges, already have
a negative impression of the library before they even get to the data
handling.
* pkg_resources does too much: loading libraries by version really has
nothing to do with loading data for use by a library. This is a
drawback because people think of and promote pkg_resources as a way to
enable easy_install rather than a way to enable abstraction of data
access.
* The only benefit (at least, the one being promoted in the documentation) is to
allow zipped eggs to work. Distributions have no reason to create
zipped eggs so they have no reason to submit patches to upstream to
support the pkg_resources api.
* Distributions, further, don't want to install all-in-one egg
directories on the system. The pkg_resources API just gets in the way
of doing things correctly in a distribution. I've had to patch code to
not use pkg_resources if data is installed in the FHS mandated areas.
Far from encouraging distributions to send patches upstream to make
modules use pkg_resources this makes distributions actively discourage
upstreams from using it.
* The API isn't flexible enough. EggTranslations places its data within
the metadata store of eggs instead of within the data store. This is
because the metadata can be read from outside the package in which it
is included, while the package data can only be accessed from within
the package itself.
8) To a distribution, symlinks are just a hack. We use them for things
like php web apps when the web application is hardcoded to accept only
one path for things (like the writable state files being intermixed with
the program code). Managing a symlink farm is not something
distributions are going to get excited about, so adoption of this as the
way to work with files won't happen until upstreams move on their own.
Further, since the install tool is being proposed as a separate project
from the metadata to mark files, the expectation is that the
distributions are going to want to write an install tool that manages
this symlink farm. For that to happen, you have to get distributions to
be much more than simply neutral about the idea of symlinks, you have to
have them enthused enough about using symlinks that they are willing to
spend time writing a tool to do it.
So once again, I think this boils down to these questions: if we have a
small library whose sole purpose is to abstract a data store so you can
find out where a particular non-code file lives on this system will you
use it? If a distribution packager sends you a patch so the data files
are marked correctly and the code can retrieve their location instead of
hardcoding an offset against __file__ will you commit it?
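As a strawman for the "small library" in question (the resource() name and signature are hypothetical, echoing the pkgdata example earlier), the whole point is one indirection that a distribution could remap without touching calling code:

```python
# Strawman data-access API: one function a distribution could remap
# (e.g. to FHS paths) without patching every caller's __file__ math.
import importlib
import os

def resource(pkg, category, name=""):
    """Return the path of data file `name` in `category` for package `pkg`.

    Default policy: resolve next to the package source. A distribution
    could swap this body out to look under /usr/share instead.
    """
    module = importlib.import_module(pkg)
    base = os.path.dirname(module.__file__)
    return os.path.join(base, category, name)
```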
I'm intrigued...
Why don't the scripts buildout produces use pkg_resources? They all add
each egg to sys.path manually and import script entry points manually.
It would seem more consistent to use pkg_resources for this.
I can imagine there might be a good reason, such as making absolutely
sure the correct egg is at the top of sys.path. This is another example
of the TIMTOWTDI malaise that is rife in anything to do with Python
packaging.
Stephen Pascoe +44 (0)1235 445980
British Atmospheric Data Centre
Rutherford Appleton Laboratory
At 04:44 PM 10/19/2008 -0700, Garrett Cooper (garrcoop) wrote:
> > -----Original Message-----
> > From: Phillip J. Eby [mailto:email@example.com]
> > Sent: Sunday, October 19, 2008 4:31 PM
> > To: Garrett Cooper (garrcoop); distutils-sig(a)python.org
> > Subject: Re: [Distutils] Potential issue with multiple easy_install
> > instances and single easy_install.pth
> >
> > At 07:14 PM 10/18/2008 -0700, Garrett Cooper (garrcoop) wrote:
> > > Hi Python folks,
> > >     As part of a build system I work with, my group installs
> > > multiple Python packages via source using easy_install. One such
> > > issue I've seen before in the past is that when using multiple
> > > easy_install instances (via multiple make jobs), the last instance
> > > that opened up easy_install.pth records its changes; the file
> > > should contain entries for all packages installed by easy_install.
> >
> > easy_install doesn't support simultaneous parallel installations to
> > the same target directory, and there are no plans at the moment to
> > add that support.
> >
> > Note, however, that if you use the -m (--multi-version) option, then
> > easy_install will not save the .pth file unless there was already a
> > default version of the target package present. However, if you begin
> > by deleting the .pth file altogether, then using -m will avoid
> > creating or updating it.
> >
> > Of course, the downside of -m is that you will not be able to access
> > the installed packages except via setuptools-built scripts or by
> > using explicit require() calls.
> Do you or anyone else know where I would need to look in the
> distutils source to implement this enhancement?
It's setuptools, not distutils, and the module in question is
easy_install.
> I foresee using Python's version of flock, possibly with the use of a
> simple semaphore.
Note that you can probably also do this in your makefile, or in a
script wrapper around easy_install. For that matter, you can create
an easy_install subclass that does this, and then register it under a
new command name and script. See the setuptools manual for more info.
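The makefile/wrapper approach might look like this sketch (Unix-only, using fcntl.flock; the lock file path and the easy_install command line are examples):

```python
# Sketch: serialize parallel install runs with an advisory file lock.
import fcntl
import subprocess

def run_locked(cmd, lockfile="/tmp/easy_install.lock"):
    """Run `cmd`, but only while holding an exclusive lock on `lockfile`."""
    with open(lockfile, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until other runs finish
        try:
            return subprocess.call(cmd)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

Each make job would then call, e.g., run_locked(["easy_install", "SomePackage"]) instead of invoking easy_install directly.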