At 11:12 AM 4/22/2009 +0200, Tarek Ziadé wrote:
>We worked during Pycon on version comparisons. Distutils has one but
>it is a bit strict; setuptools has another one, but it's a bit loose.
>We would like to propose the inclusion in Python 2.7 of a new version
>comparison algorithm, based on the discussion
>Fedora, Ubuntu and Python people had. The plan would be to deprecate
>the current one (which is not really used anyway)
>and provide and promote this one.
>Trent Mick took the lead on this work at the end of Pycon, and worked
>on a prototype.
>It's explained here, and there's an implementation (I've put it at
>the top of the page for convenience):
I don't see how it can manage, e.g., a development version of a
postrelease, with an SVN rev or date stamp on it. Such versions
might not be found on PyPI or in RPMs, but would be needed in development.
(Btw, the wiki page pseudo-regex doesn't match what the code actually does.)
I have reworked the PEP a little bit based on people's feedback.
It needs more feedback: http://svn.python.org/projects/peps/trunk/pep-0376.txt
- install/uninstall script
I think the best solution is not to provide an install script, since
third-party tools do it. Furthermore,
the simplest install script is already available today: you can
run "python setup.py install" on a given package
so it gets installed.
So what about adding just a global uninstall feature that
uninstalls the files installed for a package, using the RECORD
file, and letting third-party tools provide better features?
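A global uninstall along those lines could be sketched like this (the file layout is an assumption, not the PEP's final format: one installed path per line, optionally followed by comma-separated hash/size fields):

```python
import os

def uninstall_from_record(record_path, dry_run=True):
    """Remove every file listed in a RECORD-style file (sketch only,
    assuming one path per line with optional hash/size columns)."""
    with open(record_path) as f:
        for line in f:
            # Keep only the path; ignore any hash/size columns.
            path = line.strip().split(",")[0]
            if not path:
                continue
            if dry_run:
                print("would remove:", path)
            elif os.path.isfile(path):
                os.remove(path)
```

A third-party tool could then layer dependency checks or prompts on top of this bare file removal.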
- MANIFEST (SOURCES.txt)
Ronald pointed out that it is not necessary to have a MANIFEST file
included if we are going to have a RECORD
file, as described in the PEP.
The MANIFEST (which is roughly equivalent to the SOURCES.txt that pip or
easy_install adds) is the list of source files,
whereas the RECORD is the list of installed files (which might
include more elements in case of compilation).
What would be the point of having the list of source files in egg-info?
Tarek Ziadé | http://ziade.org
Distutils allows you to use a handy --rpath option to the build_ext
command to add an RPATH value into the linked object. However some of
the semantics are not great IMHO.
For the first issue, some background first. On SysV-like systems
(systems using the ELF binary format) the RPATH field was created in
the .dynamic section to allow a shared object to specify extra
locations to look for its required shared objects. However, this
overrode LD_LIBRARY_PATH, and that was no good for
administrators, so another new field was added: RUNPATH. This is
consulted *after* LD_LIBRARY_PATH instead of before it, solving the
problem. When both fields are present the runtime linker ignores
RPATH (it is impossible to create a shared object with a RUNPATH but no RPATH).
The Solaris linker decided to always encode RUNPATH into the shared
object whenever an RPATH is specified (using the -R option). This is
very sensible and most likely the correct behaviour. The GNU
linker, however, decided that backwards compatibility was more important:
to make it add the RUNPATH field you have to pass
--enable-new-dtags to the linker; if you use -R or -rpath on its own
you will only get an RPATH, no RUNPATH.
Now, when using the --rpath option, the build_ext command
uses some heuristics to figure out which option to pass to the compiler to
get the runpath in (i.e. "-R" or "-Wl,-R" etc). I'm going to argue
that this needs to be extended to pass -Wl,--enable-new-dtags,-R if the
GNU linker is used, so that the newer and better RUNPATH gets put into
the shared objects all the time. (I don't yet know how to detect the
GNU linker, but would like a consensus on the desired behaviour before
looking into this.)
Note that this is not a complete disaster: thanks to
distutils.sysconfig.customize_compiler() adding the contents of the
LDFLAGS environment variable to the command line of the linker
invocation, you can currently work around this.
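The flag selection being argued for could be sketched as follows (the function name is hypothetical, and detecting the GNU linker is the open question):

```python
def runpath_link_args(dirs, gnu_ld):
    """Return linker args that embed a run-time library search path.
    With the GNU linker, also pass --enable-new-dtags so the newer
    RUNPATH (DT_RUNPATH) tag is emitted rather than bare RPATH.
    Sketch only; gnu_ld detection is left to the caller."""
    joined = ":".join(dirs)
    if gnu_ld:
        return ["-Wl,--enable-new-dtags,-R" + joined]
    # e.g. the Solaris linker already emits RUNPATH for plain -R
    return ["-R" + joined]
```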
The second issue with build_ext --rpath is on AIX. Again, some
background on AIX shared objects: AIX is not SysV-like and uses the
XCOFF binary format instead of ELF. Therefore it has no RPATH
or RUNPATH, but it does have a thing called LIBPATH which does
something similar. The difference between XCOFF's LIBPATH and ELF's
RUNPATH is that AIX's runtime linker does not have a default search
path, hence the full search path needs to be encoded into the LIBPATH
of the shared object.
Now I would propose for build_ext --rpath to encode the LIBPATH when
used on AIX, since that is the correct thing to do IMHO.
(Implementation note: since this is done by passing
-blibpath:/opt/example.com/lib:/usr/lib:/lib to the linker, note that
distutils.unixccompiler.UnixCCompiler.link() would have to be changed
not to strip out /usr/lib and /lib from the runtime_library_dirs on AIX,
and would have to use -blibpath:... or -Wl,-blibpath as appropriate.)
Again, this can currently be circumvented by using the LDFLAGS
environment variable.
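The AIX behaviour could be sketched the same way (function name hypothetical); /usr/lib and /lib are appended explicitly because AIX's runtime linker has no default search path:

```python
def aix_libpath_args(dirs):
    """Return the AIX linker option embedding the full LIBPATH (sketch).
    The system directories must be listed explicitly on AIX, since the
    runtime linker has no built-in defaults."""
    return ["-Wl,-blibpath:" + ":".join(list(dirs) + ["/usr/lib", "/lib"])]
```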
Do these improvements sound sensible? And if so should I create one
patch for each (and two bug reports) or combine them into one patch?
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org
More and more Python projects are "switching" to buildout.
The one I face directly is Plone.
I also administer servers and really love using their package management
tools: if I use Fedora (resp. Debian, Gentoo, ...), I insist on using yum/rpm
(resp. apt/dpkg, emerge, ...) to install software, and if the software
needs some tuning, I act on the source package, then compile+install my own build.
The question is whether the current "fashion" of switching to buildout
will make the distribution packager's work easier or harder.
Taking the example of Perl and CPAN, it seems there is no big problem, and
there seems to be (in Debian at least) a specific tool to package
Perl modules. Do you think it will evolve that way for Python "buildouted"
projects?
Chef de projet chez Vectoris
Phone: +261 33 11 207 36
System: xUbuntu 8.10 with almost all from package install
Setuptools' non-support for Python 3 is currently a serious hindrance
to Python 3 acceptance. I'm trying to figure out what to do as a
next step in the Python 3 support for setuptools. And I have
encountered some obstacles. The first one is that setuptools requires
itself for installing and running tests. That makes it hard to install
it under Python 3. There are various solutions to this, but the next
obstacle I encounter in choosing the right solution is that the code
is hard to understand, and it makes me want to just rip it out and
start over, or in even more frustrated moments, avoid the problems by
not using setuptools at all. But the third obstacle for that is that I
don't actually know what features of setuptools people use.
I personally use setuptools for these reasons:
1. When I create projects with paster, it uses setuptools.
2. Setuptools makes it possible to specify requirements, which is then
used by buildout.
3. Namespace packages require pkg_resources?
4. The test command.
What are the other major reasons people use setuptools?
Is there any good reason to not extract the namespace package support
into a separate package?
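For reference, here is a minimal setup.py touching the listed features (all project and package names are hypothetical; the keyword arguments are kept in a dict so the sketch can be inspected without actually running a build — a real setup.py would end with setup(**metadata)):

```python
from setuptools import find_packages

# Arguments one would pass to setuptools.setup() (names hypothetical):
metadata = dict(
    name="example",
    version="0.1",
    packages=find_packages(),
    install_requires=["somelib>=1.0"],       # 2. requirement specs, consumed by buildout
    extras_require={"extra": ["otherlib"]},  # optional features installed on request
    namespace_packages=["example.ns"],       # 3. relies on pkg_resources at runtime
    test_suite="example.tests",              # 4. enables "python setup.py test"
)
```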
Lennart Regebro: Python, Zope, Plone, Grok
+33 661 58 14 64
At 03:03 AM 4/23/2009 +0000, Chad wrote:
>I feel that the above is a bug. '.' should be treated as a path not a package
>name, since '.' is not a valid package name.
Actually, it is.
> i haven't looked at the code yet,
>but it seems that a regular expression should be able to determine between a
>valid package name, an http address, and a path on disk. it seems
>like '.' was
>added as a special case, because other relative paths don't seem to be supported:
>$ cd /path/to/mypackage
>$ easy_install ../mypackage[extraFeature]
>error: Not a URL, existing file, or requirement spec: '../mypackage[extraFeature] '
As it says in the error message, arguments are checked for a URL,
existing file, or a requirement spec -- and in that order. URL-ness
is checked by regex. Filename-ness is checked by checking for the
existence of the file or directory, not by parsing. If the file
doesn't exist, an attempt is made to parse it as a requirement
spec. '.[foo]' is neither a URL nor an existing file, but it *is* a
valid requirement spec, for a package named '.'.
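The triage described above can be sketched as follows (simplified; the real URL check and requirement parsing are more involved):

```python
import os
import re

def classify_argument(arg):
    """Mimic easy_install's argument ordering: URL first, then existing
    file or directory, then requirement spec (simplified sketch)."""
    if re.match(r"^[a-zA-Z][a-zA-Z0-9+.\-]*://", arg):
        return "url"
    if os.path.exists(arg):
        return "file"
    # '.[extraFeature]' lands here: not a URL, and no such path exists.
    return "requirement"
```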
I don't know of any way to change any of this without changing the
command-line API. The recommended workaround for building a local
package with extras is:
easy_install . packagename[extras]
as this will first install the local package, then resolve
packagename[extras] against the now-installed package, pulling in the
extras' dependencies.
New submission from Chad <chadrik(a)gmail.com>:
1) easy_install can be used to install "extra" dependencies, using the following
syntax (undocumented on the PEAK website, as far as I can tell):
$ easy_install mypackage[extraFeature]
where "extraFeature" is a feature specified by the package's extras_require
keyword in setup.py.
2) easy_install can be used to install a package in the current directory (as of
recent versions):
$ easy_install .
however, these two features can't be combined:
$ easy_install .[extraFeature]
Searching for .[extraFeature]
No local packages or download links found for .[extraFeature]
error: Could not find suitable distribution for Requirement.parse('.[extraFeature]')
I feel that the above is a bug: '.' should be treated as a path, not a package
name, since '.' is not a valid package name. I haven't looked at the code yet,
but it seems that a regular expression should be able to distinguish between a
valid package name, an http address, and a path on disk. It seems like '.' was
added as a special case, because other relative paths don't seem to be supported:
$ cd /path/to/mypackage
$ easy_install ../mypackage[extraFeature]
error: Not a URL, existing file, or requirement spec: '../mypackage[extraFeature] '
I'd be glad to submit a patch, if you just point me to the proper repository to
check out and the proper procedures for getting it integrated into the main branch.
title: install local package with extras with easy_install
Setuptools tracker <setuptools(a)bugs.python.org>
At 04:52 PM 4/22/2009 +0200, Lennart Regebro wrote:
>On Wed, Apr 22, 2009 at 16:18, P.J. Eby <pje(a)telecommunity.com> wrote:
> > Er, no. It only means that you need Python 2 to be installed *while porting
> > a package* to Python 3.
>No. It means it needs to be installed when installing the package from
>a source distribution. Which is the normal way of distributing Python packages.
I don't understand you. Here is what I understand:
1. Setuptools requires setuptools
2. Setuptools doesn't run on Python 3 (yet)
3. There needs to be a way to build a Py3 version of setuptools in
order to fix #2
Therefore, adding a new setuptools command to do #3, that runs under
either Py2 or Py3, fixes #1 in the context of #2.
However, once setuptools *does* run on Python 3, then there is no
longer a need for the build process to run exclusively under...
Aha! Now I (finally) get what you're talking about! In order for
this to work, there'd have to be a separate Py3 source distro for
setuptools, or else setup.py would need to have a (non-setuptools
depending) way to build its own Python3 version.
Okay, now that I actually understand the problem, I will give it some
more thought. I see now that what I was proposing works only for the
porting process and for non-self-dependent packages, but not for
distribution of self-dependent packages like setuptools. Either the
sdist would need to ship with a Python3 version already included (or
have a distinct Py3 sdist), or there'd need to be a
non-setuptools-dependent bootstrap process.
I'll have to think about this one a bit more.
At 08:27 AM 4/22/2009 +0200, Lennart Regebro wrote:
>On Tue, Apr 21, 2009 at 19:57, P.J. Eby <pje(a)telecommunity.com> wrote:
> > At 04:06 PM 4/21/2009 +0200, Lennart Regebro wrote:
> >> On Tue, Apr 21, 2009 at 15:03, P.J. Eby <pje(a)telecommunity.com> wrote:
> >> > python2 setup.py 2to3 test
> >> Well, yes, but it should be
> >> python3 setup.py 2to3 test
> >> Otherwise it can't reasonably have any idea of which python to use.
> > Why not? The 2to3 command could simply take an option for the python3
> > executable, and be set from the standard config files (e.g. setup.cfg).
>Because that would mean that Python 2 needs to be installed to use
>Python 3. It also means all programs that do any sort of installing
>need to either know the position of the Python 3 executable to use
>when installing, and be run with Python 2, or they need to be run with
>Python 3 and know the position of a Python 2 interpreter to run 2to3.
Er, no. It only means that you need Python 2 to be installed *while
porting a package* to Python 3.