Hi again --
[cc'd to Paul Dubois: you said you weren't following the distutils sig
anymore, but this directly concerns NumPy and I'd like to get your
input!]
Here's that sample setup.py for NumPy. See below for discussion (and
questions!).
------------------------------------------------------------------------
#!/usr/bin/env python
# Setup script example for building the Numeric extension to Python.
# This does successfully compile all the .dlls. Nothing happens
# with the .py files currently.
# Move this file to the Numerical directory of the LLNL numpy
# distribution and run as:
# python numpysetup.py --verbose build_ext
#
# created 1999/08 Perry Stoll
__rcsid__ = "$Id: numpysetup.py,v 1.1 1999/09/12 20:42:48 gward Exp $"
from distutils.core import setup
setup (name = "numerical",
       version = "0.01",
       description = "Numerical Extension to Python",
       url = "http://www.python.org/sigs/matrix-sig/",
       ext_modules = [
           ('_numpy', {'sources': ['Src/_numpymodule.c',
                                   'Src/arrayobject.c',
                                   'Src/ufuncobject.c'],
                       'include_dirs': ['./Include'],
                       'def_file': 'Src/numpy.def'}),
           ('multiarray', {'sources': ['Src/multiarraymodule.c'],
                           'include_dirs': ['./Include'],
                           'def_file': 'Src/multiarray.def'}),
           ('umath', {'sources': ['Src/umathmodule.c'],
                      'include_dirs': ['./Include'],
                      'def_file': 'Src/umath.def'}),
           ('fftpack', {'sources': ['Src/fftpackmodule.c', 'Src/fftpack.c'],
                        'include_dirs': ['./Include'],
                        'def_file': 'Src/fftpack.def'}),
           ('lapack_lite', {'sources': ['Src/lapack_litemodule.c',
                                        'Src/dlapack_lite.c',
                                        'Src/zlapack_lite.c',
                                        'Src/blas_lite.c',
                                        'Src/f2c_lite.c'],
                            'include_dirs': ['./Include'],
                            'def_file': 'Src/lapack_lite.def'}),
           ('ranlib', {'sources': ['Src/ranlibmodule.c',
                                   'Src/ranlib.c',
                                   'Src/com.c',
                                   'Src/linpack.c'],
                       'include_dirs': ['./Include'],
                       'def_file': 'Src/ranlib.def'}),
       ])
------------------------------------------------------------------------
First, what d'you think? Too clunky and verbose? Too much information
for each extension? I kind of think so, but I'm not sure how to reduce
it elegantly. Right now, the internal data structures needed to compile
a module are pretty obviously exposed: is this a good thing? Or should
there be some more compact form for setup.py that will be expanded later
into the full glory we see above?
I've already made one small step towards reducing the amount of cruft by
factoring 'include_dirs' out and supplying it directly as a parameter to
'setup()'. (But that needs code not in the CVS archive yet, so I've
left the sample setup.py the same for now.)
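With that change, the sample above would shrink to something like this (a
sketch of the factored form, not yet working code -- and the def_file
entries remain for the moment, which brings me to the next point):

    setup (name = "numerical",
           version = "0.01",
           include_dirs = ['./Include'],   # now shared by all extensions
           ext_modules = [
               ('umath', {'sources': ['Src/umathmodule.c'],
                          'def_file': 'Src/umath.def'}),
               # ...and likewise for the other five extensions
           ])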
The next thing I'd like to do is get that damn "def_file" out of there.
To support it in MSVCCompiler, there's already an ugly hack that
unnecessarily affects both the UnixCCompiler and CCompiler classes, and
I want to get rid of that. (I refer to passing the 'build_info'
dictionary into the compiler classes, if you're familiar with the code
-- that dictionary is part of the Distutils extension-building system,
and should not propagate into the more general compiler classes.)
But I don't want to give these weird "def file" things the same standing as
source files, object files, libraries, etc., because they seem to me to be
a bizarre artifact of one particular compiler, rather than something
present in a wide range of C/C++ compilers.
Based on the NumPy model, it seems like there's a not-too-kludgy way to
handle this problem. Namely:
    if building extension "foo":
        if file "foo.def" found in same directory as "foo.c":
            add "/def:foo.def" to MSVC command line
This will of course require some platform-specific code in the build_ext
command class, but I figured that was coming eventually, so why put it
off? ;-)
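A minimal sketch of that check in Python -- the names 'ext_name' and
'source_dir' are just placeholders for whatever build_ext actually carries
around:

    import os

    def msvc_def_file_args (ext_name, source_dir):
        # if "foo.def" sits next to the sources for extension "foo",
        # pass it to the Microsoft linker; otherwise add nothing
        def_file = os.path.join (source_dir, ext_name + ".def")
        if os.path.isfile (def_file):
            return ["/def:" + def_file]
        return []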
To make this hack work with NumPy, one change would be necessary: rename
Src/numpy.def to Src/_numpy.def to match Src/_numpy.c, which implements
the _numpy module. Would this be too much to ask of NumPy? (Paul?)
What about other module distributions that support MSVC++ and thus ship
with "def" files? Could they be made to accommodate this scheme?
Thanks for your feedback --
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all --
at long last, I found the time to hack in the ability to compile
extension modules to the Distutils. Mainly, this meant adding a
'build_ext' command which uses a CCompiler instance for all its dirty
work. I also had to add a few methods to CCompiler (and, of course,
UnixCCompiler) to make this work.
And I added a new module, 'spawn', which takes care of running
sub-programs more efficiently and robustly than os.system (no shell
involved). That's needed, obviously, so we can run the compiler!
If you're in the mood for grubbing over raw source code, then get the
latest from CVS or download a current snapshot. See
http://www.python.org/sigs/distutils-sig/implementation.html
for a link to the code snapshot.
I'm still waiting for more subclasses of CCompiler to appear. At the
very least, we're going to need MSVCCompiler to build extensions on
Windows. Any takers? Also, someone who knows the Mac, and how to run
compilers programmatically there, will have to figure out how to write a
Mac-specific concrete CCompiler subclass.
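If anyone wants to take a crack at MSVCCompiler, the shape of a concrete
subclass is roughly as follows -- the method names here are only my guess
at the eventual interface, so check ccompiler.py for the real signatures:

    from distutils.ccompiler import CCompiler

    class MSVCCompiler (CCompiler):
        # skeleton only: real bodies would run "cl.exe" and "link.exe"
        def compile (self, sources, include_dirs=None):
            pass                  # invoke "cl.exe /c" on each source file
        def link_shared_object (self, objects, output_filename):
            pass                  # invoke "link.exe /DLL" to make the DLL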
The spawn module also needs a bit of work to be portable. I suspect
that _win32_spawn() (the intended analog to my _posix_spawn()) will be
easy to implement, if it even needs to go in a separate function at all.
From the Python Library documentation for 1.5.2, it looks as though the
os.spawnv() function is all we need, but it's a bit hard to figure out
just what's needed. Windows wizards, please take a look at the
'spawn()' function and see if you can make it work on Windows.
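For the curious, roughly the shape I have in mind -- a self-contained
sketch, where the posix branch mirrors what _posix_spawn() does and the
nt branch is exactly the part that needs testing:

    import os

    def spawn (cmd):
        # 'cmd' is an argv-style list, e.g. ["cc", "-c", "foo.c"]
        if os.name == "posix":
            pid = os.fork ()
            if pid == 0:                 # child: become the program
                os.execvp (cmd[0], cmd)  # (never returns on success)
            (pid, status) = os.waitpid (pid, 0)
            rc = (status & 0xff00) >> 8  # extract the exit status
        elif os.name == "nt":
            # untested guess: os.spawnv may want a full path in cmd[0]
            rc = os.spawnv (os.P_WAIT, cmd[0], cmd)
        else:
            raise OSError ("don't know how to spawn programs on " + os.name)
        if rc != 0:
            raise OSError ("command failed with exit status %d" % rc)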
As for actually compiling extensions: well, if you can figure out the
build_ext command, go ahead and give it a whirl. It's a bit cryptic
right now, since there's no documentation and no example setup.py. (I
have a working example at home, but it's not available online.) If you
feel up to it, though, see if you can read the code and figure out
what's going on. I'm just hoping *I'll* be able to figure out what's
going on when I get back from the O'Reilly conference next week... ;-)
Enjoy --
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all --
at long last, I have fixed two problems that a couple of people noticed a
while ago:
* I folded in Amos Latteier's NT patches almost verbatim -- just
changed an `os.path.sep == "/"' to `os.name == "posix"' and added
some comments bitching about the inadequacy of the current library
installation model (I think this is Python's fault, but for now
Distutils is slavishly aping the situation in Python 1.5.x)
* I fixed the problem whereby running "setup.py install" without
doing anything else caused a crash (because 'build' hadn't yet
been run). Now, the 'install' command automatically runs 'build'
before doing anything; to make this bearable, I added a 'have_run'
dictionary to the Distribution class to keep track of which commands
have been run. So now not only are command classes singletons,
but their 'run' method can only be invoked once -- both restrictions
enforced by Distribution.
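In case that's opaque, the mechanism amounts to something like this (a
sketch -- 'get_command_obj' is a hypothetical lookup, not necessarily what
the real Distribution class calls it):

    class Distribution:
        def __init__ (self):
            self.have_run = {}     # maps command name -> 1 once it has run

        def run_command (self, command):
            if self.have_run.get (command):
                return             # each command's run() fires at most once
            cmd_obj = self.get_command_obj (command)   # hypothetical lookup
            cmd_obj.run ()
            self.have_run[command] = 1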
The code is checked into CVS, or you can download a snapshot at
http://www.python.org/sigs/distutils-sig/distutils-19990607.tar.gz
Hope someone (Amos?) can try the new version under NT. Any takers for
Mac OS?
BTW, all parties involved in the Great "Where Do We Install Stuff?"
Debate should take a good, hard look at the 'set_final_options()' method
of the Install class in distutils/install.py; this is where all the
policy decisions about where to install files are made. Currently it
apes the Python 1.5 situation as closely as I could figure it out.
Obviously, this is subject to change -- I just don't know to *what* it
will change!
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all,
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed api, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are considered.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application, with links from the
unversioned package name to the versioned one. Then I just set the
PYTHONPATH to this directory.
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under Windows anyway (does Windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Cheers,
Tim
--------------------------------------------------------------
Tim Docker timd(a)macquarie.com.au
Quantitative Applications Division
Macquarie Bank
--------------------------------------------------------------
qad16:qad $ ls -l lib/python/
total 110
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
total 30
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
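(To make the scheme concrete: generating such a link directory is only a
few lines of Python. The mapping below is a made-up sample, not my real
configuration:)

    import os

    # unversioned name -> versioned install directory (sample values)
    PACKAGES = {
        "OpenGL": "OpenGL_1_5_1",
        "PIL":    "PIL_0_3",
        "Fnorb":  "Fnorb_0_8",
    }

    def make_link_dir(appdir):
        os.makedirs(appdir)
        for name, versioned in PACKAGES.items():
            # e.g. lib/python/1.1/OpenGL -> ../OpenGL_1_5_1
            os.symlink(os.path.join("..", versioned),
                       os.path.join(appdir, name))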
Following up on some IRC discussion with other folks:
There is precedent (Plone) for PyPI trove classifiers corresponding to
particular versions of a framework. So I'd like to get feedback on the idea
of expanding that, particularly in the case of Django.
The rationale here is that the ecosystem of Django-related packages is
quite large, but -- as I know all too well from a project I'm working on
literally at this moment -- it can be difficult to ensure that all of one's
dependencies are compatible with the version of Django one happens to be
using.
Adding trove classifier support at the level of individual versions of
Django would, I think, greatly simplify this: tools could easily analyze
which packages are compatible with an end user's chosen version, there'd be
far less manual guesswork, etc., and the rate of creation of new
classifiers would be relatively low (we tend to have one X.Y release/year
or thereabouts, and that's the level of granularity needed).
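For concreteness, a package's setup.py might then declare something like
the following -- the versioned classifier string is hypothetical, pending
whatever naming the PyPI maintainers prefer:

    from setuptools import setup

    setup(
        name="django-someapp",              # made-up example package
        version="1.0",
        packages=["someapp"],
        classifiers=[
            "Framework :: Django",          # the existing classifier
            "Framework :: Django :: 1.11",  # proposed per-version variant
        ],
    )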
Assuming there's consensus around the idea of doing this, what would be the
correct procedure for getting such classifiers set up and maintained?
Hello Everyone!
Google released the list of accepted organizations for GSoC 2017 and PSF is
one of them. I guess this would be a good time for me to seek feedback on
the approach I'm planning to take for my potential GSoC project. I hope
this mailing list is the right place to do so.
---
Here's my current plan of action along with reasoning for the choices made:
A separate PR will be made for each of these stages; each stage depends on
the previous ones being completed.
1. Refactor all dependency resolution responsibility in pip into a new,
separate module.
This would allow any future changes/improvements in the dependency
resolution to be added without major changes in the rest of the code-base.
As of today, the RequirementSet class within pip seems to be doing a lot
of work, and dependency resolution is a responsibility that doesn't need
to be given to it, especially when that's avoidable.
2. Implement dependency information caching.
This would allow the resolver to avoid re-computing the dependencies of a
package once they have already been computed, speeding up resolution.
(Stages 2 and 3 are sketched together just after this list.)
3. Implement a backtracking resolver.
A backtracking solver would be appropriate given that we don't have a
way to pre-compute or statically determine the dependencies for *all* the
packages - a SAT solver would not be feasible.
4. (if time permits) Move any dependency resolution code out into a
separate library.
This would make it possible for other projects (like buildout or a
future pip replacement) to reuse the dependency resolver.
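To give an idea of how stages 2 and 3 fit together, here's a rough sketch.
Everything in it is schematic: candidates(name) (candidate versions,
newest first), get_deps(name, version) (the expensive metadata fetch) and
spec.contains(version) all stand in for pip's real machinery:

    # candidates() and get_deps() are hypothetical stand-ins (see above)
    _dep_cache = {}

    def cached_deps(name, version):
        # Stage 2: compute a package's dependencies at most once.
        key = (name, version)
        if key not in _dep_cache:
            _dep_cache[key] = get_deps(name, version)
        return _dep_cache[key]

    def resolve(requirements, pinned=None):
        # Stage 3: depth-first backtracking over candidate versions.
        # 'requirements' is a list of (name, spec) pairs; returns a
        # {name: version} mapping, or None if no assignment works.
        pinned = dict(pinned or {})
        if not requirements:
            return pinned
        (name, spec), rest = requirements[0], requirements[1:]
        if name in pinned:
            ok = spec.contains(pinned[name])
            return resolve(rest, pinned) if ok else None
        for version in candidates(name):
            if not spec.contains(version):
                continue
            pinned[name] = version
            result = resolve(rest + cached_deps(name, version), pinned)
            if result is not None:
                return result
            del pinned[name]      # backtrack: try the next candidate
        return None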
By making each of the stages separate PRs, incremental improvements would
be made so that even if I leave this project midway, there will be some
work merged already if someone comes back to this problem later. That said,
I don't intend to leave this project midway.
I do intend to reuse some of the work done by Robert Collins in PR #2716 on
pip's GitHub repository.
Stages 2 and 3 are separate because I see them as distinctly different
tasks that touch very different portions of the code-base, even though
there is strong coupling between them.
I'm looking forward to the feedback. :)
Regards,
Pradyun
Hi all,
I have an installation setup of an application of mine that uses zc.buildout
(see #1). Today I had to reinstall it on a new machine, using Python
3.6. Executing its bootstrap.py installed the latest zc.buildout (2.8.0).
All the required packages are pinned to an exact version (see #2), so it
surprised me to hit the following error executing the buildout:
Version and requirements information containing transaction:
  [versions] constraint on transaction: 1.4.4
  Requirement of SoL==3.37: transaction
  Requirement of zope.sqlalchemy: transaction
  Requirement of transaction: zope.interface
  Requirement of pyramid_tm: transaction>=2.0
While:
  Installing sol.
Error: The requirement ('transaction>=2.0') is not allowed by your [versions] constraint (1.4.4)
Investigating the issue, I found that buildout was actually installing
“pyramid-tm” (with a dash), not “pyramid_tm” (underscore, the only one present
on PyPI, see #3):
...
Getting distribution for 'pyramid_tm'.
Got pyramid-tm 1.1.1.
...
And that of course is the source of the problem: hacking a local copy of
versions.cfg to add the following line:
pyramid-tm = 0.12.1
and then adjusting buildout.cfg to load that local copy fixed the problem.
So the question is: what is the real nature of the problem?
I downloaded the current pyramid_tm sources, and there is no trace of
“pyramid-tm” except in the project's URL:
$ git grep pyramid-tm
setup.py: url="http://docs.pylonsproject.org/projects/pyramid-tm/en/latest/",
but effectively when I dig inside the egg that buildout produced I can find
the following:
$ grep -r pyramid-tm *
EGG-INFO/PKG-INFO:Name: pyramid-tm
EGG-INFO/PKG-INFO:Home-page: http://docs.pylonsproject.org/projects/pyramid-tm/en/latest/
The differences since my last install-from-scratch of the application (that
worked flawlessly) are basically version 3.6 of Python and version 2.8.0 of
zc.buildout.
Can anyone shed some light on the problem? By what logic did buildout use
a different name for that particular package?
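(For reference, PEP 503 defines a name normalization under which the two
spellings compare equal, so presumably some layer normalizes the project
name while the [versions] matching does not -- a minimal illustration:)

    import re

    def normalize(name):
        # PEP 503: runs of "-", "_" and "." are equivalent; lowercase
        return re.sub(r"[-_.]+", "-", name).lower()

    assert normalize("pyramid_tm") == normalize("pyramid-tm") == "pyramid-tm"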
Thank you in advance,
ciao, lele.
#1 https://bitbucket.org/lele/solista
#2 https://bitbucket.org/lele/sol/raw/master/requirements/versions.cfg
#3 https://pypi.python.org/pypi/pyramid_tm
--
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele(a)metapensiero.it | -- Fortunato Depero, 1929.
> humpty in turn uses distlib which seems to mishandle wheel
> metadata. (For example, it chokes if there's extra distribution meta and
> makes it impossible for buildout to install python-dateutil from a wheel.)
I looked into the "mishandling". It's that the other tools don't adhere to [the current state of] PEP 426 as closely as distlib does. For example, wheel writes JSON metadata to metadata.json in the .dist-info directory, whereas PEP 426 calls for that data to be in pydist.json. The non-JSON metadata in the wheel (the METADATA file) does not strictly adhere to any of the metadata PEPs 241, 314, 345 or 426 (it has a mixture of incompatible fields).
I can change distlib to look for metadata.json, and relax the rules to be more liberal regarding which fields to accept, but adhering to the PEP isn't mishandling things, as I see it.
Work on distlib has slowed right down since around the time when PEP 426 was deferred indefinitely, and there seems to be little interest in progressing via metadata or other standardisation - we have to go by what the de facto tools (setuptools, wheel) choose to do. It's not an ideal situation, and incompatibilities can crop up, as you've seen.
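To illustrate the kind of relaxation I mean, a tolerant reader could
simply try both file names -- a sketch, not distlib's actual API:

    import json
    import os

    def read_wheel_json_metadata(dist_info_dir):
        # PEP 426 says pydist.json; the wheel tool writes metadata.json
        for name in ("pydist.json", "metadata.json"):
            path = os.path.join(dist_info_dir, name)
            if os.path.exists(path):
                with open(path) as f:
                    return json.load(f)
        return None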
Regards,
Vinay Sajip