Hi all --
at long last, I have fixed two problems that a couple people noticed a
while ago:
* I folded in Amos Latteier's NT patches almost verbatim -- just
changed an `os.path.sep == "/"' to `os.name == "posix"' and added
some comments bitching about the inadequacy of the current library
installation model (I think this is Python's fault, but for now
Distutils is slavishly aping the situation in Python 1.5.x)
* I fixed the problem whereby running "setup.py install" without
doing anything else caused a crash (because 'build' hadn't yet
been run). Now, the 'install' command automatically runs 'build'
before doing anything; to make this bearable, I added a 'have_run'
dictionary to the Distribution class to keep track of which commands
have been run. So now not only are command classes singletons,
but their 'run' method can only be invoked once -- both restrictions
enforced by Distribution.
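In rough outline, the idea looks like this -- a minimal sketch only; apart
from 'have_run', the names here are illustrative and may not match the
actual Distutils code:

    # Sketch of "run each command only once", enforced by the Distribution.
    COMMAND_CLASSES = {}    # filled in below

    class Distribution:
        def __init__(self):
            self.have_run = {}   # command name -> 1 once its run() has completed
            self.commands = {}   # command name -> singleton command object

        def get_command_obj(self, name):
            # Command objects are singletons per distribution.
            if name not in self.commands:
                self.commands[name] = COMMAND_CLASSES[name](self)
            return self.commands[name]

        def run_command(self, name):
            # Each command's run() is invoked at most once.
            if self.have_run.get(name):
                return
            self.get_command_obj(name).run()
            self.have_run[name] = 1

    class Build:
        def __init__(self, dist):
            self.dist = dist
        def run(self):
            pass    # ...compile/copy everything into the build directory...

    class Install:
        def __init__(self, dist):
            self.dist = dist
        def run(self):
            # 'install' now makes sure 'build' has happened first.
            self.dist.run_command('build')
            # ...copy the built files into their final locations...

    COMMAND_CLASSES = {'build': Build, 'install': Install}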
The code is checked into CVS, or you can download a snapshot at
http://www.python.org/sigs/distutils-sig/distutils-19990607.tar.gz
Hope someone (Amos?) can try the new version under NT. Any takers for
Mac OS?
BTW, all parties involved in the Great "Where Do We Install Stuff?"
Debate should take a good, hard look at the 'set_final_options()' method
of the Install class in distutils/install.py; this is where all the
policy decisions about where to install files are made. Currently it
apes the Python 1.5 situation as closely as I could figure it out.
Obviously, this is subject to change -- I just don't know to *what* it
will change!
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all,
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed api, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are considered.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application with links from the
unversioned package name to the versioned one. Then I just set
PYTHONPATH to this directory.
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under Windows anyway (does Windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Cheers,
Tim
--------------------------------------------------------------
Tim Docker timd(a)macquarie.com.au
Quantative Applications Division
Macquarie Bank
--------------------------------------------------------------
qad16:qad $ ls -l lib/python/
total 110
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
total 30
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
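For what it's worth, a rough sketch of how such a link directory might be
generated; the directory names and version table are just taken from the
listing above, and the script itself is illustrative only:

    # Build the unversioned -> versioned links for one application.
    import os

    lib_dir = "lib/python"                   # versioned packages live here
    app_dir = os.path.join(lib_dir, "1.1")   # per-application link directory

    wanted = {
        "chart":  "chart_1_1",
        "Fnorb":  "Fnorb_0_8",
        "OpenGL": "OpenGL_1_5_1",
        "PIL":    "PIL_0_3",
        "Pmw":    "Pmw_0_7",
    }

    if not os.path.isdir(app_dir):
        os.makedirs(app_dir)

    for name, versioned in wanted.items():
        link = os.path.join(app_dir, name)
        if os.path.islink(link):
            os.remove(link)
        os.symlink(os.path.join("..", versioned), link)

    # then run the application with PYTHONPATH set to lib/python/1.1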
(Doing my bi-monthly perusal of Distutils activity, I find...)
[Mark]
> .... Specifically, Greg Stein and Gordon McMillan (and plenty of
> people before them :-) seem to have what is considered "state of the
> art" in where this is heading.
[Jim Ahlstrom]
> I don't know of techniques which aren't Windows specific.
> Greg?? Gordon??
Mark is referring to Greg's imputil.py
(http://www.lyra.org/greg/small/) and some of the places I've taken
it. My Win32-specific installer
(ftp://ftp.python.org/pub/python/contrib/System/Installer_r_01.exe)
makes use of this, but not all of it is Windows specific. (I
originally packaged it so the cross-platform stuff was available
separately, but there was no apparent interest from non-Windows
users.)
Greg's imputil.py basically makes it possible to create a chain of
importers, with the standard mechanism pushed to the end. Writing an
importer is easy. You set up the chain in site.py.
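The chaining idea, very roughly -- this is *not* imputil's real interface,
just an illustration of pushing the standard mechanism to the end of a
chain of importers:

    import builtins          # '__builtin__' in 1.5-era Python

    class ChainedImporter:
        def find_module(self, name):
            return None      # subclasses return a module object, or None to decline

        def install(self):
            previous = builtins.__import__
            def hook(name, *args, **kwds):
                module = self.find_module(name)
                if module is not None:
                    return module
                return previous(name, *args, **kwds)   # standard mechanism last
            builtins.__import__ = hook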
I created a way of building archives of .pyc's (or .pyo's, though
I've never worked with them). These are compressed with zlib. The
standard library fits in about 500K and (subjectively) it is no
slower and perhaps faster than the regular method. The mechanism
handles modules and packages, and the building of an archive can be
done in all kinds of ways (including using modulefinder from freeze).
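The basic shape of such an archive, as a rough sketch -- the actual .pyz
format and build tools are different; this layout (a marshalled dict of
zlib-compressed code objects) is made up purely to show the idea:

    import marshal, types, zlib

    def build_archive(archive_path, sources):
        """sources maps module names to .py file paths."""
        table = {}
        for name, path in sources.items():
            code = compile(open(path).read(), path, 'exec')
            table[name] = zlib.compress(marshal.dumps(code))
        f = open(archive_path, 'wb')
        marshal.dump(table, f)
        f.close()

    def load_module(archive_path, name):
        f = open(archive_path, 'rb')
        table = marshal.load(f)
        f.close()
        code = marshal.loads(zlib.decompress(table[name]))
        module = types.ModuleType(name)
        exec(code, module.__dict__)   # a real importer would also update sys.modules
        return module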
I also created another kind of archive that can package up arbitrary
stuff, and is fairly easily unpacked from C code.
All of the above is cross-platform.
On Windows, this means you can have a complete Python installation
(independent of any other Python installation) in a single directory:
myPython/
python.exe (and/or pythonw.exe)
python15.dll (completely vanilla)
py_lib.pyz
exceptions.py (from the std distr)
site.py (hacked to load all the .pyz's)
[any other .pyd's or .dll's you want]
[more .pyz's if you want]
[more .py's if you want]
(The Windows installation / standalone stuff goes further than this.)
These .pyz's do work on Linux, but the steps Python goes through to
determine where it lives are simpler on Windows. I'm not sure what it
would take to get a single-directory Python installation
(independent of any other installed Pythons) working on *nix.
- Gordon
My installer package now has a homepage on my starship site:
http://starship.python.net/crew/gmcm/install.html
This page gives a pretty complete writeup. The package itself has not
yet been updated (either with Robin Dunn's enhancements, or a couple
bug fixes I have planned).
Also from this page you can download a tar.gz of just the archiving
stuff. For you Unix-weenies, I've even stripped the Windows line
endings!
- Gordon
[I'm only subscribed to the digest; please bear with me if I'm out
of sync with the current status of the discussion]
Greg Ward wrote:
> Sooner or
> later (probably sooner), somebody is going to want to say:
>
> if compiler == 'gcc' and platform == 'linux-x86':
> compiler_options.append ('-funroll-loops')
>
> ...and that's probably only the beginning. ...
and, later on, suggested variables like OS class, OS name, OS version,
OS vendor, architecture, etc. etc.
IIRC, Marc-Andre's suggestion pointed in a somewhat different
direction: that the properties of a certain [instance of an] OS
would be described by a set of variables indicating system
"behaviour", rather than the formal vendor/version/architecture
description.
I would second this idea (and, IIDNRC, first it <wink>); maybe there
are some situations where I'd like to know if a certain system is a
SuSE or RedHat Linux, but usually I'd rather be interested in whether
it has SysV or Simple init (and where the init.d directory is), if
ps and friends want BSD or SysV style operands, if we have cc or gcc
or egcs, and the like.
The "intelligence" about the peculiarities of a vendor-x/version-y
installation should IMO not be built into a tool like distutils; it
would fail anyway for mangled or personalized systems (e.g. a SuSE
Linux updated with a few RH and some home-grown rpm's).
All the vendor-, installation- and configuration-specific data should
be specified as individual variable assignments, probably in several
files which are processed in a hierarchical order (like /etc/system.inst:
/var/lib/inst/*.inst:~/.inst/*.inst:.instrc, just to give a silly
example). They could be provided by (contributors to) distutils as a
first start, but ideally by the vendors or distribution builders
themselves <0.99 dream>.
I'd even go so far as to suggest shell variable assignment format for
these files (name=value or name="value with blanks" etc.); it's far
from ideal, but since it could be used from non-Python tools there
would be an incentive for others to support and provide these files.
Python programs can, of course, parse these files, although it's not
trivial. I've written a ShellAssignment parser which understands "",
'', $var, ${var}, and \ escapes, at least in the most common cases; I
wish there were a string.parsequotedstring() function ...
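For what it's worth, a minimal sketch of the reading side: it handles only
plain and quoted values via shlex, does no $var expansion, omits the glob
patterns, and the file names are roughly those from the silly example above:

    import os, shlex

    def read_assignments(paths):
        values = {}
        for path in paths:
            if not os.path.exists(path):
                continue
            for line in open(path):
                line = line.strip()
                if not line or line.startswith('#') or '=' not in line:
                    continue
                name, rest = line.split('=', 1)
                parts = shlex.split(rest)        # strips quotes, honours \ escapes
                values[name.strip()] = parts[0] if parts else ''
        return values    # later files override earlier ones

    config = read_assignments(
        ['/etc/system.inst', os.path.expanduser('~/.instrc'), '.instrc'])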
Detlef
Hi there,
Recently in the discussion on autoconf I've seen it mentioned that lots
of Python extensions build on external non-Python packages (that may be
configured by autoconf or in many other ways).
A developer using these external packages will obviously have them
installed, and it's not too strange to expect co-developers to download
these packages and install them as well.
However, now we get to the two other audiences of distutils: the
distributor/packager, who prepackages everything for one or more
particular platforms, and most importantly the user who doesn't want to
know about anything, just wants to run the program.
If I'm a plain user and want to try out Python-Powered XML-database
Warpdrive Enhancer version 4.2, I don't want to be required to download
and install the Warpdrive package, and the XML package, etc, if I can
avoid it. I just want to install and go.
For many systems of course the separate install is unavoidable; we can't
package Oracle for download, for instance. But for many smaller packages
(such as libraries) that may or may not be on the user's system it
becomes more important.
So, as a reminder, how are we going to handle this? Some random
requirements and ramblings:
* The packager/distributor would like a standard way to somehow include
these packages (or at least check for them). This doesn't mean
*building* these packages, but it does mean a standard way to pack them
up and install them.
* We don't have any control over these external packages. Still, we
don't want everybody to grow their own way to deal with particular
packages.
* If two Python packages are installed that both use FooLib, we don't
want the distutils to install FooLib twice. Same if FooLib is already
there. That's also why we need a standard way to handle these.
* An idea I aired previously was to provide some kind of distutils wrapper
for common packages. For instance, if FooLib is often used by software
written in Python, we make a wrapper for it. The wrapper calls FooLib's
installation/configuration methods (which may for instance be rpm
commands, or a Windows installer program) where necessary (a rough
sketch of such a wrapper follows below):
  * Install FooLib.
  * If this is not possible without manual intervention, give the user
    some way to find out what to do in easy steps.
  * Check if FooLib is already there (and what properties it has).
  * Also check if it is already registered with a Distutils Wrapper; if
    not, try to add the wrapper so that the next pyapps that get
    installed won't need to go through this process again.
  * Possibly also uninstall FooLib, though this may be far too tricky.
Eventually, if this gets popular, a Distutils Wrapper can even be
distributed with FooLib itself, but the Distutils Archive should also
offer some consistent way to get at known external package wrappers.
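To make it concrete, here's a very rough sketch of what such a wrapper
interface might look like; none of this is a real distutils API, and the
class names, methods and FooLib specifics are all made up:

    import os

    class ExternalPackageWrapper:
        name = None
        version_required = None

        def is_installed(self):
            """Return true if the package is already on this system."""
            raise NotImplementedError

        def install(self):
            """Install the package, or raise an error pointing at manual steps."""
            raise NotImplementedError

        def manual_instructions(self):
            return "See the %s documentation for installation steps." % self.name

    class FooLibWrapper(ExternalPackageWrapper):
        name = "FooLib"
        version_required = "1.2"

        def is_installed(self):
            # e.g. look for a known header or shared library (paths made up)
            return os.path.exists("/usr/include/foolib.h")

        def install(self):
            # e.g. delegate to the platform's native packaging tool
            if os.system("rpm -Uvh foolib-1.2-1.i386.rpm") != 0:
                raise RuntimeError(self.manual_instructions())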
All of this may be simply too hard, as packages can vary in many ways,
but I think there are least *some* common things one can standardize.
The idea is to hide all the nonstandard ways behind some standard
interface, as far as possible, to help packagers and developers.
It occurs to me that all of this is somewhat analogous to autoconf
again; autoconf checks out capabilities of the system, in particular of
the C compiler, and libraries. But as far as I know autoconf isn't
modular in the way this package approach would be.
Now you all start shooting this down. :)
Regards,
Martijn
Greg,
Only a few comments:
1) There's no command line interface on the Mac, but the compiler they
use (Metrowerks) can be controlled through an external "event"
interface (Apple's response to COM, except Apple had it long before
COM even existed :-). So there's no reason why it couldn't be done on
the Mac.
2) I'm not sure if you're trying to add a method for each of the
typical cc options (I recognized -I, -D, -U, etc.). Looks like you're
missing -R, which is like -L but works at runtime, and is needed if
you're using shared libraries that live in non-standard places.
3) I strongly prefer shared lib over dynamic lib (on Windows everyone
calls them "DLL" anyway).
4) Do you really not want to support cross-compiling? The Windows CE
folks will regret that ;-) But I don't think you need to support -I
(without a directory) for that; typically the compiler has a different
name and all is well. I bet -I without args is intended for people
who want to experiment with a hacked set of system includes; they can
add it to the compiler command name if they want to.
5) When creating a shared lib from libraries only, you need to specify
an entry point. Is this useful to support?
6) I haven't been following distutils very closely, so forgive me if
the above makes no sense :-)
--Guido van Rossum (home page: http://www.python.org/~guido/)
Hmmm, just as well I was away last week: I *think* I agree with almost
all the points-of-view expressed concerning accessing autoconf-like
features from distutils. I would have had a hard time participating in
that discussion, agreeing with *everyone*. For my next trick, I'll try
to figure out a position that's not inherently self-contradictory. ;-)
But seriously; I think Marc-Andre's initial post had two points:
* testing compiler/header/library features, existence, etc. (the
bread and butter of autoconf, but only the beginning of what it
is used for)
* getting a more detailed platform description (compiler/linker name
and version, and an OS/hardware description more detailed than
os.name)
The autoconf-ish stuff would be fun to do in Python, and a good test of
how general my compiler framework is. It wouldn't be as hairy as
autoconf itself, because all the horrible things about shell programming
magically disappear. As Fred pointed out, it would also be more
portable than the M4-generated shell script approach. But it would
probably be trickier than I like to admit, and not necessary in most
cases.
Consider: many (most?) Python extensions are probably thin glue layers
to larger C libraries. It is the large C libraries that have configure
scripts to adapt themselves to a particular platform; given the
existence of Python, a C compiler and library, and the big C library
being wrapped, the Python glue extension should build trouble-free. I
suspect a similar situation in Perl-land; I've just posted a question to
perl-xs(a)perl.org to see if the experts over there agree with my
assessment that autoconf-like features are occasionally handy, but not
necessary for most extensions.
(I further suspect that most Perl extension developers who need
something autoconf-ish have rolled their own in Makefile.PL; if we do
nothing autoconf-ish in the distutils, then I suspect that Python
extension developers will do similarly in setup.py. It would be nice to
have some basic "Does this header file exist?" "Does this library define
that symbol?" type functionality, though.)
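Something like the following, perhaps -- a quick sketch only, which shells
out to a Unix-ish cc instead of going through the CCompiler classes, and
which is purely illustrative:

    # Autoconf-style checks, sketched: compile (and link) a tiny test
    # program and see whether the compiler succeeds.
    import os, tempfile

    def _write_source(text):
        path = tempfile.mktemp('.c')
        f = open(path, 'w')
        f.write(text)
        f.close()
        return path

    def have_header(header, cc='cc'):
        src = _write_source('#include <%s>\nint main(void) { return 0; }\n' % header)
        return os.system('%s -c %s -o %s.o 2>/dev/null' % (cc, src, src)) == 0

    def have_symbol(symbol, library, cc='cc'):
        # the classic autoconf trick: declare the symbol, reference it, try to link
        src = _write_source('char %s(); int main(void) { %s(); return 0; }\n'
                            % (symbol, symbol))
        return os.system('%s %s -l%s -o %s.out 2>/dev/null'
                         % (cc, src, library, src)) == 0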
Bottom line is, I suspect it's not essential for building basic Python
extensions, and would be a layer on top of the platform-specific
CCompiler subclasses... so it's not an immediate concern.
Regarding Marc-Andre's other wish: yes, this is important! Sooner or
later (probably sooner), somebody is going to want to say:
if compiler == 'gcc' and platform == 'linux-x86':
    compiler_options.append ('-funroll-loops')
...and that's probably only the beginning. The CCompiler framework only
has a barebones notion of what's needed to compile and link Python
extensions (and will know about binary executables, in order to build a
new static Python). I'm not even sure of how to handle optimization/
debugging options in a portable way, much less something like -Wall or
-funroll-loops (considering only one popular compiler). Ultimately some
sort of "Oh, here, just add your own damn compiler options" cop-out will
be needed, and the only way to make that workable is to allow the
extension developer to make decisions based on the platform and
compiler.
Perhaps a distutils.platform module is called for, which could expose
all these things in a standard way. How's this sound for a start:
* OS class ('posix', 'windows') (aka os.name)
* OS ('linux', 'solaris', 'irix', 'winnt')
* OS version ('2.2.10', '2.7', '6.1', '4.0')
* OS vendor ('redhat', 'suse', 'sun', 'sgi', 'microsoft')
* architecture ('x86', 'sparc', 'mips', 'alpha', 'ppc')
* processor ('586', '686', ... ???)
* compiler ('gcc', 'mipspro', 'sunpro', 'msvc')
* compiler version ('2.8.1', '7.5', '??', '5.0') (does Sun even have
version numbers on their compiler?)
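For concreteness, a rough sketch of what gathering some of this might look
like on posix systems; no such module exists yet, the uname-based guesses
are just that, and vendor/processor/compiler are deliberately left open:

    # Collect a crude platform description from what Python already exposes.
    # Vendor, processor and compiler/version would need extra per-platform
    # detective work (/etc/*-release, 'cc --version', ...).
    import os, sys

    def platform_info():
        info = {'os_class': os.name}            # 'posix', 'nt', 'mac', ...
        if hasattr(os, 'uname'):
            sysname, node, release, version, machine = os.uname()
            info['os'] = sysname.lower()        # e.g. 'linux', 'sunos', 'irix64'
            info['os_version'] = release        # e.g. '2.2.10'
            info['architecture'] = machine      # e.g. 'i686', 'sun4u', 'alpha'
        else:
            info['os'] = sys.platform           # e.g. 'win32'
        return info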
OK, flame away. I'm sure someone will tell me that this is a good
start, but not nearly detailed enough, and Greg Stein will tell me to
stop being so obsessive about finicky little details. I'm inclined to
the latter... anyone want to try their hand at hacking up such a module?
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Arcege wrote:
> Gordon,
> I haven't looked at your own installer (I've been meaning to). But
> the one issue I have with it is about the compression. When
> compiling the Python distribution, zlib is not standard. This
> means that pyz files aren't altogether portable.
> BTW, what again is the URL of your installer? I have my own that I
> had written (spamcan) and wanted to compare it.
zlib is standard on Windows and comes in the RH RPMs. That's close
enough for me <wink>.
It's on the contrib site under System, but that's in the
Windows-only form (an exe). It will soon have a homepage on starship.
I'll again offer the inner pieces as tar.gz, but when I did that
before (on my now defunct corporate site) I didn't get any takers.
[BTW, I'm not on this SIG, I just check it periodically].
- Gordon