Hi again --
[cc'd to Paul Dubois: you said you weren't following the distutils sig
anymore, but this directly concerns NumPy and I'd like to get your
input!]
here's that sample setup.py for NumPy. See below for discussion (and
questions!).
------------------------------------------------------------------------
#!/usr/bin/env python
# Setup script example for building the Numeric extension to Python.
# This does successfully compile all the .dlls. Nothing happens
# with the .py files currently.
# Move this file to the Numerical directory of the LLNL numpy
# distribution and run as:
# python numpysetup.py --verbose build_ext
#
# created 1999/08 Perry Stoll
__rcsid__ = "$Id: numpysetup.py,v 1.1 1999/09/12 20:42:48 gward Exp $"
from distutils.core import setup
setup (name = "numerical",
       version = "0.01",
       description = "Numerical Extension to Python",
       url = "http://www.python.org/sigs/matrix-sig/",
       ext_modules = [
           ('_numpy', {'sources': ['Src/_numpymodule.c',
                                   'Src/arrayobject.c',
                                   'Src/ufuncobject.c'],
                       'include_dirs': ['./Include'],
                       'def_file': 'Src/numpy.def'}),
           ('multiarray', {'sources': ['Src/multiarraymodule.c'],
                           'include_dirs': ['./Include'],
                           'def_file': 'Src/multiarray.def'}),
           ('umath', {'sources': ['Src/umathmodule.c'],
                      'include_dirs': ['./Include'],
                      'def_file': 'Src/umath.def'}),
           ('fftpack', {'sources': ['Src/fftpackmodule.c', 'Src/fftpack.c'],
                        'include_dirs': ['./Include'],
                        'def_file': 'Src/fftpack.def'}),
           ('lapack_lite', {'sources': ['Src/lapack_litemodule.c',
                                        'Src/dlapack_lite.c',
                                        'Src/zlapack_lite.c',
                                        'Src/blas_lite.c',
                                        'Src/f2c_lite.c'],
                            'include_dirs': ['./Include'],
                            'def_file': 'Src/lapack_lite.def'}),
           ('ranlib', {'sources': ['Src/ranlibmodule.c',
                                   'Src/ranlib.c',
                                   'Src/com.c',
                                   'Src/linpack.c'],
                       'include_dirs': ['./Include'],
                       'def_file': 'Src/ranlib.def'}),
       ]
      )
------------------------------------------------------------------------
First, what d'you think? Too clunky and verbose? Too much information
for each extension? I kind of think so, but I'm not sure how to reduce
it elegantly. Right now, the internal data structures needed to compile
a module are pretty obviously exposed: is this a good thing? Or should
there be some more compact form for setup.py that will be expanded later
into the full glory we see above?
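For instance, a compact form might list just (name, sources) pairs, with
the shared 'include_dirs' and a naming convention for the "def" files
factored out, and a helper could expand it into the verbose dictionaries
above. Purely hypothetical -- nothing like this exists yet, and the
function name is made up:

```python
def expand_extensions(extensions, include_dirs):
    # Expand compact (name, sources) pairs into the verbose
    # build_info dictionaries, filling in the shared include
    # directories and assuming "Src/<name>.def" for the def file.
    expanded = []
    for name, sources in extensions:
        expanded.append((name, {'sources': sources,
                                'include_dirs': include_dirs,
                                'def_file': 'Src/%s.def' % name}))
    return expanded
```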
I've already made one small step towards reducing the amount of cruft by
factoring 'include_dirs' out and supplying it directly as a parameter to
'setup()'. (But that needs code not in the CVS archive yet, so I've
left the sample setup.py the same for now.)
The next thing I'd like to do is get that damn "def_file" out of there.
To support it in MSVCCompiler, there's already an ugly hack that
unnecessarily affects both the UnixCCompiler and CCompiler classes, and
I want to get rid of that. (I refer to passing the 'build_info'
dictionary into the compiler classes, if you're familiar with the code
-- that dictionary is part of the Distutils extension-building system,
and should not propagate into the more general compiler classes.)
But I don't want to give these weird "def file" things standing on the
order of source files, object files, libraries, etc., because they seem
to me to be a bizarre artifact of one particular compiler, rather than
something present in a wide range of C/C++ compilers.
Based on the NumPy model, it seems like there's a not-too-kludgy way to
handle this problem. Namely:
    if building extension "foo":
        if file "foo.def" found in same directory as "foo.c":
            add "/def:foo.def" to MSVC command line
this will of course require some platform-specific code in the build_ext
command class, but I figured that was coming eventually, so why put it
off? ;-)
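In Python, that check might look something like this (just a sketch --
the function name and its exact home in build_ext are made up):

```python
import os

def msvc_def_file_args(ext_name, sources):
    # If a "<ext_name>.def" file sits beside the first source file,
    # return the extra MSVC linker argument for it; otherwise nothing.
    if not sources:
        return []
    source_dir = os.path.dirname(sources[0])
    def_file = os.path.join(source_dir, ext_name + ".def")
    if os.path.isfile(def_file):
        return ["/def:" + def_file]
    return []
```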
To make this hack work with NumPy, one change would be necessary: rename
Src/numpy.def to Src/_numpy.def to match Src/_numpy.c, which implements
the _numpy module. Would this be too much to ask of NumPy? (Paul?)
What about other module distributions that support MSVC++ and thus ship
with "def" files? Could they be made to accomodate this scheme?
Thanks for your feedback --
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all --
at long last, I found the time to hack in the ability to compile
extension modules to the Distutils. Mainly, this meant adding a
'build_ext' command which uses a CCompiler instance for all its dirty
work. I also had to add a few methods to CCompiler (and, of course,
UnixCCompiler) to make this work.
And I added a new module, 'spawn', which takes care of running
sub-programs more efficiently and robustly (no shell involved) than
os.system. That's needed, obviously, so we can run the compiler!
If you're in the mood for grubbing over raw source code, then get the
latest from CVS or download a current snapshot. See
http://www.python.org/sigs/distutils-sig/implementation.html
for a link to the code snapshot.
I'm still waiting for more subclasses of CCompiler to appear. At the
very least, we're going to need MSVCCompiler to build extensions on
Windows. Any takers? Also, someone who knows the Mac, and how to run
compilers programmatically there, will have to figure out how to write a
Mac-specific concrete CCompiler subclass.
The spawn module also needs a bit of work to be portable. I suspect
that _win32_spawn() (the intended analog to my _posix_spawn()) will be
easy to implement, if it even needs to go in a separate function at all.
From the Python Library documentation for 1.5.2, it looks like the
os.spawnv() function is all we need, but it's a bit hard to figure out
just what's needed. Windows wizards, please take a look at the
'spawn()' function and see if you can make it work on Windows.
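The basic shape might be something like this sketch -- the real spawn()
needs better error reporting, and the os.spawnv() branch is exactly the
untested Windows guess discussed above:

```python
import os

def spawn(cmd):
    # Run a program (argv-style list) without a shell; return its exit code.
    if os.name == "posix":
        pid = os.fork()
        if pid == 0:                     # child: become the program
            try:
                os.execvp(cmd[0], cmd)
            finally:
                os._exit(127)            # only reached if exec failed
        pid, status = os.waitpid(pid, 0)
        return os.WEXITSTATUS(status)
    elif os.name == "nt":
        # P_WAIT makes spawnv block until the program finishes
        return os.spawnv(os.P_WAIT, cmd[0], cmd)
    raise OSError("don't know how to spawn programs on " + os.name)
```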
As for actually compiling extensions: well, if you can figure out the
build_ext command, go ahead and give it a whirl. It's a bit cryptic
right now, since there's no documentation and no example setup.py. (I
have a working example at home, but it's not available online.) If you
feel up to it, though, see if you can read the code and figure out
what's going on. I'm just hoping *I'll* be able to figure out what's
going on when I get back from the O'Reilly conference next week... ;-)
Enjoy --
Greg
Hi all --
at long last, I have fixed two problems that a couple people noticed a
while ago:
* I folded in Amos Latteier's NT patches almost verbatim -- just
changed an `os.path.sep == "/"' to `os.name == "posix"' and added
some comments bitching about the inadequacy of the current library
installation model (I think this is Python's fault, but for now
Distutils is slavishly aping the situation in Python 1.5.x)
* I fixed the problem whereby running "setup.py install" without
doing anything else caused a crash (because 'build' hadn't yet
been run). Now, the 'install' command automatically runs 'build'
before doing anything; to make this bearable, I added a 'have_run'
dictionary to the Distribution class to keep track of which commands
have been run. So now not only are command classes singletons,
but their 'run' method can only be invoked once -- both restrictions
enforced by Distribution.
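The bookkeeping described above amounts to roughly this (a sketch with
stand-in command classes, not the real Distutils API):

```python
class Distribution:
    def __init__(self, command_classes):
        self.command_classes = command_classes   # name -> class
        self.command_objs = {}                   # name -> singleton instance
        self.have_run = {}                       # name -> already executed?

    def get_command_obj(self, name):
        # command objects are singletons: created once, then reused
        if name not in self.command_objs:
            self.command_objs[name] = self.command_classes[name](self)
        return self.command_objs[name]

    def run_command(self, name):
        # each command's 'run' is only ever invoked once
        if self.have_run.get(name):
            return
        self.get_command_obj(name).run()
        self.have_run[name] = 1


class Build:
    def __init__(self, dist):
        self.runs = 0
    def run(self):
        self.runs = self.runs + 1


class Install:
    def __init__(self, dist):
        self.dist = dist
        self.runs = 0
    def run(self):
        # install always makes sure build has happened first
        self.dist.run_command("build")
        self.runs = self.runs + 1
```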
The code is checked into CVS, or you can download a snapshot at
http://www.python.org/sigs/distutils-sig/distutils-19990607.tar.gz
Hope someone (Amos?) can try the new version under NT. Any takers for
Mac OS?
BTW, all parties involved in the Great "Where Do We Install Stuff?"
Debate should take a good, hard look at the 'set_final_options()' method
of the Install class in distutils/install.py; this is where all the
policy decisions about where to install files are made. Currently it
apes the Python 1.5 situation as closely as I could figure it out.
Obviously, this is subject to change -- I just don't know to *what* it
will change!
Greg
Hi all,
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed api, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are considered.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application with links from the
unversioned package name to the versioned one. Then I just set the
pythonpath to this directory.
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under windows anyway (does windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Cheers,
Tim
--------------------------------------------------------------
Tim Docker timd(a)macquarie.com.au
Quantitative Applications Division
Macquarie Bank
--------------------------------------------------------------
qad16:qad $ ls -l lib/python/
total 110
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
total 30
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
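[The symlink arrangement shown above could be set up programmatically
along these lines -- a sketch with made-up names, and of course it
relies on symlinks existing, so no Windows:]

```python
import os

def link_version(app_lib_dir, package_name, versioned_name):
    # Point the unversioned name at a versioned install one level up,
    # e.g. lib/python/1.1/Fnorb -> ../Fnorb_0_8.
    link = os.path.join(app_lib_dir, package_name)
    if os.path.islink(link):
        os.remove(link)              # repoint an existing link
    os.symlink(os.path.join("..", versioned_name), link)
```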
From: Sean Reifschneider [mailto:jafo@tummy.com]
> On Tue, Feb 27, 2001 at 09:30:13AM -0700, Evelyn Mitchell wrote:
> >But it will also discover and resolve dependences in your perl
> >site-packages, and automatically fetch them from your closest
> >CPAN archive.
>
> Not according to my tests the night before last. I did a test CPAN
> install of "News::Newsrc", which failed because the "make test" was
> failing. I then installed the "Set::BitSet" (? Something like that)
> module and tried News::Newsrc again and it worked...
>
> Maybe this was just a fluke and News::Newsrc is the exception and/or
> isn't used enough that people have gotten the prereqs right yet. If
> anyone knows for sure, I'm curious.
There are basically a number of aspects to "CPAN", which need separating
out.
MakeMaker
---------
This is a perl module which implements a build process. You write a
Makefile.PL, which calls MakeMaker defining the module's structure and
metadata. Thanks to MakeMaker, the process for building a Perl module is
(nearly always) simply
    perl Makefile.PL
    make
    make test     <-- optional, but pretty much standard - runs module unit tests
    make install  <-- installs the module
This is, in both concept and functionality, almost identical to Distutils.
There are some areas in which distutils is better (building of
platform-specific installers, for instance) and some where MakeMaker is
better (it hooks into a very standard structure for modules, generated by
the h2xs program, which practically forces module authors to write test
suites and documentation in a standard format), but these are details.
We can consider this covered. (Although the distutils-sig could still
usefully learn from MakeMaker).
The system of FTP sites and mirrors
-----------------------------------
Frankly, this is nothing special. It has some nice features (automated
uploads for module authors, plus quite nice indexing and server multiplexing
features), but it isn't rocket science as far as I know. We could quite
happily start with a simple FTP site for this (that's what CPAN did - the
mirroring came later as popularity grew).
CPAN.pm
-------
This is a Perl module, which automates the process of downloading and
installing modules. I don't use this personally, for a number of reasons.
Frankly, I find that manual downloading and running the 4 lines noted above
is fine.
It relies for things like dependency tracking on metadata added into the
Makefile.PL which is not necessary for the straight 4-line build above. As
such, the level to which that metadata is added by module authors is
variable (to be polite). In practice, I wouldn't rely on it - accurate
dependency data seems to be the exception rather than the rule.
I *don't* regard CPAN.pm to be important to the overall CPAN "phenomenon".
But some people like it. Writing something "at least as good as" CPAN.pm
shouldn't be hard in Python - not least because the standard library is rich
enough that things like FTP client support is available out of the box
(whereas CPAN.pm won't work until you manually install libnet, and possibly
some other modules, I forget which...)
But writing a "perfect" utility for automated download-and-install, with
dependency tracking, etc etc, is going to be VERY HARD. Don't get misled -
Perl doesn't have such a beast. And we won't even have what Perl has if we
focus on perfection rather than practicality.
The h2xs program
----------------
This is VERY important. The idea is that when you write a Perl module,
either pure perl or a C (XS) extension, you run h2xs first, to generate a
template build directory. It automatically includes
* The perl module, with some basic template code and embedded POD
documentation
* The XS extension, with template code (if requested)
* A Makefile.PL shell
* A basic test script - all it does is test that the module loads,
but it includes a placeholder for your own tests
Essentially, h2xs forces a standard structure on all Perl modules. This is
important for Perl, where modules have to conform to some standards in order
to work at all. However, it brings HUGE benefits in standardisation of all
the "other" parts of the process (documentation, tests, etc).
Python is at a disadvantage here, precisely because writing a Python module
involves so little in the way of a specific structure. So people will likely
rebel against having a structure "imposed"...
A social structure
------------------
This is a bit of a chicken and egg issue, but Perl developers expect to
write modules using h2xs and MakeMaker, they expect to write tests (even if
they are minimal), they expect to fill in the sections in the POD
documentation, and they expect to submit their modules to CPAN. So this all
"just works".
Interestingly, developers probably don't "expect" to have to include
dependency information, and hence many don't - resulting in the problems you
hit. But then again, Perl users don't "expect" to be totally shielded from
dependency issues.
Python is VERY far behind here. This is a maturity issue - distutils is
still (relatively) new, and so there are LOTS of packages which don't come
with a setup.py yet. Often, adding one isn't hard, but it has yet to
happen. And when you are distributing a pure python module, as a single .py
file, it's hard to see the benefit of changing that into a .tar.gz file
containing the module, plus a setup.py. (Once you start adding test suites
and documentation, the point of the whole archive bit is clearer, but we're
not even close to that stage yet).
Things are looking better with Python 2.1, though. Included with 2.1, it
looks like there will be a standard unit testing module, and a new "pydoc"
package, which will extract documentation from module docstrings. So the
infrastructure will be there, it just becomes a case of getting developers
to use it consistently. Distutils can help with this, by "just working" if
people follow a standard structure.
Sorry, this wasn't meant to get so long...
Paul.
> I'm not sure why the "what do you have" question is needed. The "send me
> that (mandrake.rpm)" interaction is what we want.
Well, one of the things I'd like to do is integrate all this in MacPython IDE,
and the same probably holds for PythonWin and Idle. Then the "what do you
have" is pretty useful for building menus and such.
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
Unfortunately, this week has been pretty busy. I've been giving the idea
some thought again (it's been something I've wanted to work on for quite
a while), and just finally today had the chance to look at pythonsiphon.
I'm not really clear (from the code) as to what its job is to be.
It smells like it's wanting to be something that understands, given a
distutils package, how to install it.
My thoughts on that are that it's a job that's probably going to be
something subject largely to being handed off to appropriate scripts
based on the platform and some user input (for example, a user may
prefer to have packages they download installed in ~/lib/python1.5,
instead of /usr/lib/python1.5).
Ideally though, the tool should be able to deal with allowing the user
to select their preferred distribution media. I'd prefer to see an ix86
RedHat 7.0 RPM, a SRPM, and then would fall back to a distutils file or
tar file.
As far as the server side for allowing users to query packages,
dependencies, and locations...
I had actually hoped to pound out some code on this, but that just
didn't happen. I have built up a schema which I think encompasses
everything I see such a system wanting to track. Initial versions
probably won't track all that, but I believe they should be considered.
The schema is attached below.
A quick overview of the tables. I hate the name "items", but it's
the best I could come up with. That's what I'm calling the general
module (things that can be imported) or package (for example, programs
like Sketch -- why not have them in here as well) description. In
this case, let's use "sockserv". The URL would point to the main
location you can find "sockserv" module information.
"Packages" are actually meant to describe what you can download.
This database will track multiple versions, include a checksum
and information on what platform it's meant for (for binary
distributions, for example).
Mirrors are handled by a special table. The idea is that one should
be able to add mirrors without having to list all the items in that
particular mirror (again). New entries should be able to be listed
without having to list all the mirrors they might reside on. So,
I've come up with the idea that a mirror simply specifies a "prefix"
for the URL listed in the "locations" table.
To be honest, my plan is to have a special-case "null" mirror which
non-mirror members are listed under -- that would have an empty
prefix.
So, given a package, version, format, architecture, etc., the
"locations" table will produce a list of all the locations where that
file can be found.
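The prefix trick boils down to a simple join -- something like this
sketch, where the dictionaries stand in for rows from the "locations"
and "mirrors" tables:

```python
def expand_locations(locations, mirrors):
    # Join each location's URL fragment onto its mirror's baseURL prefix.
    # The special "null" mirror has an empty prefix, so non-mirrored
    # entries pass through unchanged.
    urls = []
    for loc in locations:
        prefix = mirrors.get(loc["mirrorID"], "")
        urls.append(prefix + loc["URL"])
    return urls
```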
Note that there is a dependency table which lists that one "item"
relies on another "item".
As far as the network interface goes, at the current moment I'm thinking
that queries will be submitted to a web server CGI as a POST, and the
results will be returned as the body as text/plain... I'll probably use
netstrings (as discussed before) to encode things for transport.
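A netstring is just "<length>:<data>," -- encoding and decoding are a
few lines each (a sketch of the transport format, nothing more):

```python
def netstring_encode(data):
    # Encode a byte string as a netstring: b"<length>:<data>,"
    return str(len(data)).encode("ascii") + b":" + data + b","

def netstring_decode(buf):
    # Decode one netstring from the front of buf; return (payload, rest).
    head, sep, rest = buf.partition(b":")
    if not sep:
        raise ValueError("no length prefix")
    length = int(head)
    if rest[length:length + 1] != b",":
        raise ValueError("missing terminator")
    return rest[:length], rest[length + 1:]
```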
So, any comments?
Sean
=========================
-- ISSUES:
-- This doesn't handle partial mirrors. Should it? How?
CREATE TABLE items (
    id serial primary key,
    lastUpdated datetime default 'now',
    name text,
    description text,
    type text,            -- module, package
    homepageURL text
);
CREATE TABLE packages (
    id serial primary key,
    itemID int4,
    version text,
    md5sum text,
    filesize text,
    insertedOn datetime default 'now',
    description text,     -- description about this specific file
    platformName text,
    platformVersion text,
    architecture text,
    packageFormat text
);
-- a collection of URLs that all contain the same set of files (within some
-- precision). A mirror can be added without having to add all the URLs it
-- contains. Not useful for handling partial mirrors though.
CREATE TABLE mirrors (
    id serial primary key,
    name text,
    baseURL text
);
CREATE TABLE locations (
    id serial primary key,
    packageID int4,
    mirrorID int4,        -- if found on a mirror, list it
    URL text              -- append this text to the mirror baseURL
);
CREATE TABLE depends (
    id serial primary key,
    itemID int4,
    requiresItemID int4
);
CREATE TABLE users (
    id serial primary key,
    name text,
    password text,
    lastLogin datetime default NULL,
    loginCookie text
);
CREATE TABLE maintainers (
    userID int4,
    type text,            -- package, item
    piID int4
);
CREATE TABLE ranking (
    id serial primary key,
    rank int2,
    itemID int4,
    userID int4,
    date datetime default 'now'
);
--
Millions long for immortality who don't know what to do with themselves
on a rainy Sunday afternoon. -- Heinlein
Sean Reifschneider, Inimitably Superfluous <jafo(a)tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python
Following on from my message about "CPAN", I was thinking that one extremely
useful addition to distutils would be to add a "test" action. The basic
approach would be that
python setup.py test
would run a test suite. Of course, for this to work, it would require
modules to put tests in "a standard place". There are two main options - the
first is for modules to have tests inline, via if __name__=="__main__".
While this is common, I would suggest instead that running a separate
test.py would be better. There's nothing stopping the developer having
inline tests, it just means that test.py becomes a simple wrapper
(execfile("module.py")).
If test.py does not exist, distutils should print a simple message "No tests
defined".
Do people think this is a good idea? I would be willing to look into
implementing this, although I have no experience in hacking distutils.
Paul.
Ok, I looked at the help but missed the line --help-compiler which gave the
list of compilers. I have now tried --compiler=mingw32 as suggested and that
works quite well. The documentation inside the code doesn't mention anything
about compilers except unix and msvc either -- or at least the part I looked
at.... :-) In any case, thanks for your help!
Anthony
-----Original Message-----
From: Rene Liebscher [mailto:R.Liebscher@gmx.de]
Sent: Wednesday, February 28, 2001 1:08 AM
To: Anthony Tuininga
Cc: distutils-sig(a)python.org
Subject: Re: [Distutils] Potential patch for sysconfig.py
Anthony Tuininga wrote:
>
> Part 1.1Type: Plain Text (text/plain)
> I added these lines to _init_nt() to support creating extensions under NT
> with gcc (Mingw32). I don't know a whole lot about distutils as of yet but
> these lines worked for me. Any comments?
>
> g['CC'] = "gcc"
> g['OPT'] = ""
> g['CCSHARED'] = ""
> g['LDSHARED'] = "dllwrap"
>
> Anthony Tuininga
>
Did you ever try "python setup.py build --compiler=mingw32"
with your package?
Mingw32 and cygwin have been supported for more than half a year now.
(OK, we should have updated the documentation, but using the help for
the build command isn't a problem, is it?)
Kind regards
Rene Liebscher