Hi all --
at long last, I have fixed two problems that a couple of people noticed:
* I folded in Amos Latteier's NT patches almost verbatim -- just
changed an `os.path.sep == "/"' to `os.name == "posix"' and added
some comments bitching about the inadequacy of the current library
installation model (I think this is Python's fault, but for now
Distutils is slavishly aping the situation in Python 1.5.x)
* I fixed the problem whereby running "setup.py install" without
doing anything else caused a crash (because 'build' hadn't yet
been run). Now, the 'install' command automatically runs 'build'
before doing anything; to make this bearable, I added a 'have_run'
dictionary to the Distribution class to keep track of which commands
have been run. So now not only are command classes singletons,
but their 'run' method can only be invoked once -- both restrictions
enforced by Distribution.
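The run-once bookkeeping described above could be sketched like this (a minimal illustration with hypothetical class shapes, not the actual Distutils code):

```python
class Distribution:
    def __init__(self):
        self.have_run = {}      # command name -> has its run() executed?
        self.command_obj = {}   # command name -> singleton command instance

    def get_command_obj(self, name):
        # Command classes are singletons: create each at most once.
        if name not in self.command_obj:
            self.command_obj[name] = Command(name, self)
        return self.command_obj[name]

    def run_command(self, name):
        # A command's run() is invoked at most once per distribution.
        if self.have_run.get(name):
            return
        self.get_command_obj(name).run()
        self.have_run[name] = True


class Command:
    def __init__(self, name, dist):
        self.name = name
        self.dist = dist

    def run(self):
        if self.name == "install":
            # 'install' automatically runs 'build' first.
            self.dist.run_command("build")
```

With this shape, running "install" cold transparently runs "build" first, and re-running either command is a no-op.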
The code is checked into CVS, or you can download a snapshot at
Hope someone (Amos?) can try the new version under NT. Any takers?
BTW, all parties involved in the Great "Where Do We Install Stuff?"
Debate should take a good, hard look at the 'set_final_options()' method
of the Install class in distutils/install.py; this is where all the
policy decisions about where to install files are made. Currently it
apes the Python 1.5 situation as closely as I could figure it out.
Obviously, this is subject to change -- I just don't know *what* it will change to.
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed api, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are involved.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application with links from the
unversioned package name to the versioned one. Then I just set the
pythonpath to this directory.
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under windows anyway (does windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Tim Docker timd(a)macquarie.com.au
Quantitative Applications Division
qad16:qad $ ls -l lib/python/
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
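A small script could maintain the symlink farm above; here is a sketch in Python (the helper name and paths are hypothetical, and of course this presumes a platform with symlinks):

```python
import os

def select_version(app_dir, package, versioned_name):
    """Point app_dir/<package> at ../<versioned_name>, replacing any old link."""
    link = os.path.join(app_dir, package)
    if os.path.islink(link):
        os.remove(link)  # drop the previous version's link
    os.symlink(os.path.join("..", versioned_name), link)

# e.g. make the "1.1" application directory use Fnorb 0.8:
# select_version("lib/python/1.1", "Fnorb", "Fnorb_0_8")
```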
I'm hoping for some advice. I've got a Django web app that I've used
buildout to build. It includes effectively three source trees (a Celery
task tree, a set of re-usable Django database models, and the actual
Django web application project). Yes, these could all be different repos,
and if they should be, I will make them such, it just makes early
development much easier. :) Each of them has its own setup.py since
they each have their own deployment story (celery tasks on worker
systems, shared models just about anywhere, and the web app on the web
tier). buildout's collective.recipe.template and collective.recipe.cmd
are also being used to install node modules (lessc, coffeescript, and others).
So, one option for deploying is "git clone myproj.git && cd myproj &&
buildout". I'm not a fan for a variety of reasons: it is *slow*; it is
not necessarily deterministic (even with version pinning on our own
simple index, somebody could replace a file); and it requires additional
software to be installed on production systems that I'd rather not be
installed. An ideal world would solve all of them, but I could live with
the (unlikely) non-determinism.
I'm really happy with the automation of the development environment that
buildout gives. The question is whether I can take that and turn it into
a deployable system or if I should be looking somewhere else. An ideal
world would be three RPMs (and a "meta"-RPM that installed all three)
with all the Python and Node dependencies built in.
I have seen
but, unfortunately, it is hard to follow without the various .cfg files
(well, the entire source would probably be more helpful) it references.
Is there a way to make this work, or should I just use a local buildout
as a starting point to put together RPMs (not sure how to handle scripts
with hard-coded paths; I suppose there is always sed :).
Finally, how does one get virtualenv and RPMs to play together? It
isn't, strictly speaking, necessary, since I technically only need
zc.buildout to be installed, and I could live with that in the system
packages. I definitely don't want all of my dependencies installed in
the system packages, though!
Here's a short setup.py replacement that makes setup-requires work:
https://bitbucket.org/dholth/setup-requires/src/ . I'd appreciate a review.
Use it by renaming your own package's setup.py to real-setup.py and
copying this setup.py in its place.
List only the requirements setup.py itself needs to run in the
`setup-requires =` key of the `[metadata]` section of setup.cfg,
one per line::
setup-requires = cffi
pycparser >= 2.10
(Only the name and required versions are allowed, not the full pip
syntax of URLs to specific repositories. Instead, install internal
setup-requires dependencies manually or set PIP_FIND_LINKS=... to point
to the necessary repositories.)
When run, setup-requires' setup.py checks that each distribution
listed in setup-requires is installed; if not, it installs them into
the ./setup-requires directory in a pip subprocess. Then real-setup.py
continues to execute with the same arguments.
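The setup.cfg-reading step could look roughly like this (an illustrative helper, not the actual code from the repository; it uses Python 3's configparser module name):

```python
import configparser

def parse_setup_requires(cfg_text):
    """Extract the setup-requires list from the [metadata] section
    of a setup.cfg string, one requirement per line."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    raw = cfg.get("metadata", "setup-requires", fallback="")
    return [line.strip() for line in raw.splitlines() if line.strip()]
```

After parsing, the shim would install any missing requirement into ./setup-requires with `pip install -t`, put that directory on sys.path, and then execute real-setup.py with the original arguments.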
Why a custom section in setup.cfg? Users are accustomed to editing
setup.cfg to configure random things like unit tests, bdist_wheel
etc.; this just adds a field instead of a new file. Unlike a .txt file
it should be more intuitive that setup.cfg does not support the full
pip requirements syntax.
Please note that not every package installs correctly with pip -t.
Now let's see some setup.py helper packages.
Glyph has suggested something that I've been wanting to do for a long
time: "let me use setup_requires somehow so I can have abstractions in
setup.py."
setup.py allows you to pass setup_requires = [...] to the setup() call.
Unfortunately, since setup.py needs setup_requires to be installed
before it can run, this feature is crap. It also has the unfortunate
side effect of recursively calling easy_install and installing the
listed packages even when pip or no installer is being used.
Instead, I'd like to allow a list of requirements in setup.cfg:
[a section name]
setup-requires = somepackage > 4.0
anotherpackage >= 3.2.1
As an alternative we could look for a file setup-requires.txt which
would be the same as a pip-format requirements file.
I prefer doing it in setup.cfg because people are used to that file,
and because it only accepts package names and not general pip or
easy_install command line arguments.
This would be simple and a tremendous boon to anyone considering
implementing a setup.py-emulating distutils replacement, or to anyone
who just likes abstractions.
Metadata 2.0 is not a solution for this problem. It's late, it's more
complicated, and for any "legacy" packages setup.py is the thing that
generates metadata.json - not something that can only run if you parse
a skeletal metadata.json, do what it says, and then overwrite
metadata.json when dist_info is called.
I'm sure you're all aware of this, but I wonder if there's any progress
for me to be aware of. I've got an extension that I build with
distutils. It requires numpy both to build and to run, so I have numpy
in both setup_requires and install_requires. Yet setup.py builds numpy
twice -- once for the build stage, and then again on installation. This
seems inefficient to me -- why not just build it once? Is this by design?
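For concreteness, the configuration being described is a setup() call roughly like this (project and module names are hypothetical; the kwargs are shown as a dict rather than passed to setup() directly):

```python
from setuptools import Extension

# numpy appears in both lists because it is needed at build time (for its
# C headers, via numpy.get_include()) and again at run time as an import.
SETUP_KWARGS = dict(
    name="myextension",                 # hypothetical project name
    version="0.1",
    setup_requires=["numpy"],           # build-time dependency
    install_requires=["numpy"],         # run-time dependency
    ext_modules=[Extension("myextension._core", ["src/_core.c"])],
)
```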
Toby St Clere Smithe
Just a quick report...
When I go to download something from PyPI, I always get an SSL certificate
mismatch error, e.g.:
$ wget https://pypi.python.org/packages/source/...
Resolving pypi.python.org... 184.108.40.206
Connecting to pypi.python.org|220.127.116.11|:443... connected.
ERROR: certificate common name `*.c.ssl.fastly.net' doesn't match requested
host name `pypi.python.org'.
To connect to pypi.python.org insecurely, use `--no-check-certificate'.
I don't know if it's just the mirror I'm getting or if it's a broader problem,
or indeed if this is a known issue.
Charles Cazabon <charlesc-distutils-python.org(a)pyropus.ca>
Software, consulting, and services available at http://pyropus.ca/
I occasionally receive requests from package maintainers asking to
have their PyPI package renamed (for example, renaming
"eyepea_monitoring_agent" to "tanto"). The only response I have at the
moment is to tell them to release their package under both the new and
old names in parallel, and promote only the new name, as the PyPI name
must match the name defined in setup.py.
I'd like to open up discussion to ideas about how to handle this better.
Somewhat related would be *perhaps* allowing a package named "Pillow"
to be installed when a requirement requests "PIL" via some kind of
aliasing mechanism.