Hi again --
[cc'd to Paul Dubois: you said you weren't following the distutils sig
anymore, but this directly concerns NumPy and I'd like to get your
input!]
here's that sample setup.py for NumPy. See below for discussion (and
questions!).
------------------------------------------------------------------------
#!/usr/bin/env python
# Setup script example for building the Numeric extension to Python.
# This does successfully compile all the .dlls. Nothing happens
# with the .py files currently.
# Move this file to the Numerical directory of the LLNL numpy
# distribution and run as:
# python numpysetup.py --verbose build_ext
#
# created 1999/08 Perry Stoll
__rcsid__ = "$Id: numpysetup.py,v 1.1 1999/09/12 20:42:48 gward Exp $"
from distutils.core import setup
setup (name = "numerical",
       version = "0.01",
       description = "Numerical Extension to Python",
       url = "http://www.python.org/sigs/matrix-sig/",
       ext_modules = [ ( '_numpy', { 'sources' : [ 'Src/_numpymodule.c',
                                                   'Src/arrayobject.c',
                                                   'Src/ufuncobject.c'
                                                 ],
                                     'include_dirs' : ['./Include'],
                                     'def_file' : 'Src/numpy.def' }
                       ),
                       ( 'multiarray', { 'sources' : ['Src/multiarraymodule.c'],
                                         'include_dirs' : ['./Include'],
                                         'def_file': 'Src/multiarray.def'
                                       }
                       ),
                       ( 'umath', { 'sources': ['Src/umathmodule.c'],
                                    'include_dirs' : ['./Include'],
                                    'def_file' : 'Src/umath.def' }
                       ),
                       ( 'fftpack', { 'sources': ['Src/fftpackmodule.c',
                                                  'Src/fftpack.c'],
                                      'include_dirs' : ['./Include'],
                                      'def_file' : 'Src/fftpack.def' }
                       ),
                       ( 'lapack_lite', { 'sources' : [ 'Src/lapack_litemodule.c',
                                                        'Src/dlapack_lite.c',
                                                        'Src/zlapack_lite.c',
                                                        'Src/blas_lite.c',
                                                        'Src/f2c_lite.c'
                                                      ],
                                          'include_dirs' : ['./Include'],
                                          'def_file' : 'Src/lapack_lite.def' }
                       ),
                       ( 'ranlib', { 'sources': ['Src/ranlibmodule.c',
                                                 'Src/ranlib.c',
                                                 'Src/com.c',
                                                 'Src/linpack.c',
                                                ],
                                     'include_dirs' : ['./Include'],
                                     'def_file' : 'Src/ranlib.def' }
                       ),
                     ]
      )
------------------------------------------------------------------------
First, what d'you think? Too clunky and verbose? Too much information
for each extension? I kind of think so, but I'm not sure how to reduce
it elegantly. Right now, the internal data structures needed to compile
a module are pretty obviously exposed: is this a good thing? Or should
there be some more compact form for setup.py that will be expanded later
into the full glory we see above?
I've already made one small step towards reducing the amount of cruft by
factoring 'include_dirs' out and supplying it directly as a parameter to
'setup()'. (But that needs code not in the CVS archive yet, so I've
left the sample setup.py the same for now.)
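With that change, each module's dictionary loses a line; the sample above
would then start out something like this (a sketch of the intended
interface -- remember, the code isn't in CVS yet):

    from distutils.core import setup

    # 'include_dirs' supplied once, at the top level, instead of being
    # repeated in every module's build info dictionary.
    setup (name = "numerical",
           version = "0.01",
           include_dirs = ['./Include'],
           ext_modules = [ ( '_numpy',
                             { 'sources' : [ 'Src/_numpymodule.c',
                                             'Src/arrayobject.c',
                                             'Src/ufuncobject.c' ],
                               'def_file' : 'Src/numpy.def' } ) ])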
The next thing I'd like to do is get that damn "def_file" out of there.
To support it in MSVCCompiler, there's already an ugly hack that
unnecessarily affects both the UnixCCompiler and CCompiler classes, and
I want to get rid of that. (I refer to passing the 'build_info'
dictionary into the compiler classes, if you're familiar with the code
-- that dictionary is part of the Distutils extension-building system,
and should not propagate into the more general compiler classes.)
But I don't want to give these weird "def file" things standing on the
order of source files, object files, libraries, etc., because they seem
to me to be a bizarre artifact of one particular compiler, rather than
something present in a wide range of C/C++ compilers.
Based on the NumPy model, it seems like there's a not-too-kludgy way to
handle this problem. Namely:
    if building extension "foo":
        if file "foo.def" found in same directory as "foo.c"
            add "/def:foo.def" to MSVC command line
this will of course require some platform-specific code in the build_ext
command class, but I figured that was coming eventually, so why put it
off? ;-)
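In Python terms, the check might look something like this (just a sketch
with made-up names, not actual Distutils code):

    import os

    def msvc_def_file_args (sources):
        # For each .c file, look for a .def file of the same name next
        # to it, and return the extra MSVC command-line arguments.
        args = []
        for src in sources:
            (base, ext) = os.path.splitext (src)
            def_file = base + ".def"
            if ext == ".c" and os.path.isfile (def_file):
                args.append ("/def:" + def_file)
        return args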
To make this hack work with NumPy, one change would be necessary: rename
Src/numpy.def to Src/_numpy.def to match Src/_numpy.c, which implements
the _numpy module. Would this be too much to ask of NumPy? (Paul?)
What about other module distributions that support MSVC++ and thus ship
with "def" files? Could they be made to accomodate this scheme?
Thanks for your feedback --
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all --
at long last, I found the time to hack in the ability to compile
extension modules to the Distutils. Mainly, this meant adding a
'build_ext' command which uses a CCompiler instance for all its dirty
work. I also had to add a few methods to CCompiler (and, of course,
UnixCCompiler) to make this work.
And I added a new module, 'spawn', which takes care of running
sub-programs more efficiently and robustly than os.system (no shell
involved). That's needed, obviously, so we can run the compiler!
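The heart of it is the usual fork-and-exec dance; roughly this (a
simplified sketch, not the exact code):

    import os

    def _posix_spawn (cmd):
        # 'cmd' is a list of strings: the program to run, then its arguments.
        pid = os.fork ()
        if pid == 0:                         # in the child:
            os.execvp (cmd[0], cmd)          #   become the program
            os._exit (127)                   #   only reached if exec failed
        (pid, status) = os.waitpid (pid, 0)  # in the parent: wait for child
        if not (os.WIFEXITED (status) and os.WEXITSTATUS (status) == 0):
            raise RuntimeError("command '%s' failed" % cmd[0])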
If you're in the mood for grubbing over raw source code, then get the
latest from CVS or download a current snapshot. See
http://www.python.org/sigs/distutils-sig/implementation.html
for a link to the code snapshot.
I'm still waiting for more subclasses of CCompiler to appear. At the
very least, we're going to need MSVCCompiler to build extensions on
Windows. Any takers? Also, someone who knows the Mac, and how to run
compilers programmatically there, will have to figure out how to write a
Mac-specific concrete CCompiler subclass.
The spawn module also needs a bit of work to be portable. I suspect
that _win32_spawn() (the intended analog to my _posix_spawn()) will be
easy to implement, if it even needs to go in a separate function at all.
Judging from the Python Library documentation for 1.5.2, the
os.spawnv() function is all we need, but it's a bit hard to figure out
just what's needed. Windows wizards, please take a look at the
'spawn()' function and see if you can make it work on Windows.
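If os.spawnv() does suffice, the whole thing might be as simple as this
(untested guesswork on my part):

    import os

    def _win32_spawn (cmd):
        # os.spawnv wants the executable path plus the full argument list
        # (including the program name as cmd[0]); P_WAIT blocks until the
        # program finishes and returns its exit code.
        rc = os.spawnv (os.P_WAIT, cmd[0], cmd)
        if rc != 0:
            raise RuntimeError("command '%s' failed with exit code %d"
                               % (cmd[0], rc))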
As for actually compiling extensions: well, if you can figure out the
build_ext command, go ahead and give it a whirl. It's a bit cryptic
right now, since there's no documentation and no example setup.py. (I
have a working example at home, but it's not available online.) If you
feel up to it, though, see if you can read the code and figure out
what's going on. I'm just hoping *I'll* be able to figure out what's
going on when I get back from the O'Reilly conference next week... ;-)
Enjoy --
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all --
at long last, I have fixed two problems that a couple of people noticed a
while ago:
  * I folded in Amos Latteier's NT patches almost verbatim -- just
    changed an `os.path.sep == "/"' to `os.name == "posix"' and added
    some comments bitching about the inadequacy of the current library
    installation model (I think this is Python's fault, but for now
    Distutils is slavishly aping the situation in Python 1.5.x)

  * I fixed the problem whereby running "setup.py install" without
    doing anything else caused a crash (because 'build' hadn't yet
    been run). Now, the 'install' command automatically runs 'build'
    before doing anything; to make this bearable, I added a 'have_run'
    dictionary to the Distribution class to keep track of which commands
    have been run. So now not only are command classes singletons,
    but their 'run' method can only be invoked once -- both restrictions
    enforced by Distribution (see the sketch below).
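In outline, the mechanism is simply this (a sketch -- the method names
are approximate):

    class Distribution:
        def __init__ (self):
            self.have_run = {}          # maps command name -> 1 once run

        def run_command (self, command):
            # Run a command at most once; 'install' can then invoke this
            # for 'build' without worrying about doing the work twice.
            if self.have_run.get (command):
                return
            cmd_obj = self.find_command_obj (command)  # one instance per command
            cmd_obj.run ()
            self.have_run[command] = 1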
The code is checked into CVS, or you can download a snapshot at
http://www.python.org/sigs/distutils-sig/distutils-19990607.tar.gz
Hope someone (Amos?) can try the new version under NT. Any takers for
Mac OS?
BTW, all parties involved in the Great "Where Do We Install Stuff?"
Debate should take a good, hard look at the 'set_final_options()' method
of the Install class in distutils/install.py; this is where all the
policy decisions about where to install files are made. Currently it
apes the Python 1.5 situation as closely as I could figure it out.
Obviously, this is subject to change -- I just don't know to *what* it
will change!
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hi all,
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed api, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are considered.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application, with links from the
unversioned package name to the versioned one. Then I just set
PYTHONPATH to this directory.
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under Windows anyway (does Windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Cheers,
Tim
--------------------------------------------------------------
Tim Docker timd(a)macquarie.com.au
Quantitative Applications Division
Macquarie Bank
--------------------------------------------------------------
qad16:qad $ ls -l lib/python/
total 110
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
total 30
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
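For what it's worth, the links themselves are easy enough to maintain
programmatically; a sketch (the function and its arguments are invented
for illustration):

    import os

    def select_version(app_lib_dir, package, versioned_name):
        # Point the unversioned name (e.g. "OpenGL") in an application's
        # library directory at a versioned install one level up
        # (e.g. "../OpenGL_1_5_1").
        link = os.path.join(app_lib_dir, package)
        if os.path.islink(link):
            os.remove(link)
        os.symlink(os.path.join(os.pardir, versioned_name), link)

    # e.g. select_version("lib/python/1.1", "OpenGL", "OpenGL_1_5_1")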
Following up on some IRC discussion with other folks:
There is precedent (Plone) for PyPI trove classifiers corresponding to
particular versions of a framework. So I'd like to get feedback on the idea
of expanding that, particularly in the case of Django.
The rationale here is that the ecosystem of Django-related packages is
quite large, but -- as I know all too well from a project I'm working on
literally at this moment -- it can be difficult to ensure that all of one's
dependencies are compatible with the version of Django one happens to be
using.
Adding trove classifier support at the level of individual versions of
Django would, I think, greatly simplify this: tools could easily analyze
which packages are compatible with an end user's chosen version, there'd be
far less manual guesswork, etc., and the rate of creation of new
classifiers would be relatively low (we tend to have one X.Y release/year
or thereabouts, and that's the level of granularity needed).
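For concreteness, I'm imagining classifiers along these lines (the exact
wording is of course up for debate):

    Framework :: Django
    Framework :: Django :: 1.8
    Framework :: Django :: 1.9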
Assuming there's consensus around the idea of doing this, what would be the
correct procedure for getting such classifiers set up and maintained?
Hello everyone,
I'm not sure this is the right place to propose new trove classifiers
for PyPI -- if it's not, what would be the right place? If this is it, then
please read below.
The MicroPython project is quickly growing and becoming more mature, and as
that happens, the number of 3rd-party libraries for it grows. Many of those
libraries get uploaded to PyPI, as you can check by searching for
"micropython". MicroPython even has its own version of "pip", called "upip",
which can be used to install those libraries.
However, there is as yet no way to mark that a library is written for that
particular flavor of Python, as there are no trove classifiers for it. I would
like to propose adding a number of classifiers to amend that situation:
For MicroPython itself:
Programming Language :: Python :: Implementation :: MicroPython
For the hardware it runs on:
Operating System :: Baremetal
Environment :: Microcontroller
Environment :: Microcontroller :: PyBoard
Environment :: Microcontroller :: ESP8266
Environment :: Microcontroller :: Micro:bit
Environment :: Microcontroller :: WiPy
Environment :: Microcontroller :: LoPy
Environment :: Microcontroller :: OpenMV
I'm not sure if the latter makes sense, but it would certainly be nice to be
able to indicate in a machine-parseable way on which platforms the code works.
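For example, a library targeting the ESP8266 port could then declare
(using the proposed names; the package itself is hypothetical):

    from setuptools import setup

    setup(
        name="micropython-example-lib",
        version="0.1",
        description="Example library for MicroPython",
        classifiers=[
            "Programming Language :: Python :: Implementation :: MicroPython",
            "Operating System :: Baremetal",
            "Environment :: Microcontroller :: ESP8266",
        ],
    )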
What do you think?
--
Radomir Dopieralski
I have a little package "huffman" where I build an sdist and a wheel (python
setup.py sdist bdist_wheel); both seem to get built and install fine. But I
can't seem to upload both to PyPI, because "File already exists":
$ twine upload dist/*
Uploading distributions to https://upload.pypi.org/legacy/
Uploading huffman-0.1.2-py2.py3-none-any.whl
Uploading huffman-0.1.2.tar.gz
HTTPError: 400 Client Error: File already exists. for url:
https://upload.pypi.org/legacy/
A subsequent call to upload *just* the tarball fails the same way. I can't
see an sdist anywhere, and uploading it via the website or twine just tells
me it's already there...somehow. And asking pip for the sdist fails, though
the wheel works:
$ pip download --no-cache-dir --only-binary :all: huffman==0.1.2
Collecting huffman==0.1.2
Downloading huffman-0.1.2-py2.py3-none-any.whl
Saved ./huffman-0.1.2-py2.py3-none-any.whl
Successfully downloaded huffman
$ pip download --no-cache-dir --no-binary :all: huffman==0.1.2
Collecting huffman==0.1.2
Could not find a version that satisfies the requirement huffman==0.1.2
(from versions: )
No matching distribution found for huffman==0.1.2
Am I missing something? I am as sure as I can be that I didn't upload it
twice; I bumped my version up one because I figured that may have been it.
"twine register" that some guides mention just gets shot down with an HTTP
"410 [...] simply upload the file"
Cheers,
Nick
> On Dec 15, 2016, at 9:35 AM, Steve Dower <steve.dower(a)python.org> wrote:
>
> The "curated package sets" on PyPI idea sounds a bit like Steam's curator lists, which I like to think of as Twitter for game reviews. You can follow a curator to see their comments on particular games, and the most popular curators have their comments appear on the actual listings too.
>
> Might be interesting to see how something like that worked for PyPI, though the initial investment is pretty high. (It doesn't solve the coherent bundle problem either, just the discovery of good libraries problem.)
>
Theoretically we could allow people to not just select packages, but also package specifiers for their “curated package set”, so instead of saying “requests”, you could say “requests~=2.12” or “requests==2.12.2”. If we really wanted to get slick we could even provide a requirements.txt file format, and have people able to install the entire set by doing something like:
$ pip install -r https://pypi.org/sets/dstufft/my-cool-set/requirements.txt
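The set itself would then just be an ordinary requirements file; e.g. the URL above might serve something like (hypothetical contents):

    # my-cool-set/requirements.txt
    requests~=2.12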
—
Donald Stufft
Hi all,
Sorry to bump this 2+ year old thread.
This recently caused some serious head scratching. Barry's excellent
analysis of what happens seems spot on to me.
Does anyone know if this issue was ever "resolved" or turned into a
bug report? I can see that the Debian packaging of the lazr.* packages
that Barry used as example still has this in their debian/rules:
override_dh_auto_install:
	dh_auto_install
	find debian/python*-lazr.* -name '*.pth' -delete
I.e. they undo the "accidental" installation of nspkg.pth files that
install_egg_info caused.
I'm doing the same in our packages right now, but it feels like a
band-aid solution.
In my case it's a bunch of company-internal packages that all install
under an "orexplore" namespace package. I'm actually distributing an
__init__.py for the namespace package and using namespace_packages=
in setup.py, so it's not PEP 420 from the get-go. It only becomes a
PEP 420 namespace package when debhelper has its way with it (stripping
the __init__.py, but problematically also installing the nspkg.pth).
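For reference, the packages are set up roughly like this (simplified, with
a made-up project name), with orexplore/__init__.py carrying the usual
declare_namespace() boilerplate:

    from setuptools import setup, find_packages

    setup(
        name="orexplore.example",          # hypothetical subpackage
        version="1.0",
        packages=find_packages(),
        namespace_packages=["orexplore"],  # pre-PEP-420 style declaration
    )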
Many thanks in advance.
Elvis
On Mar 24, 2014, at 5:48 PM, Barry Warsaw <barry at python.org> wrote:
> Apologies for cross-posting, but this intersects setuptools and the import
> system, and I wanted to be sure it reached the right audience.
>
> A colleague asked me why a seemingly innocent and common use case for
> developing local versions of system installed packages wasn't working, and I
> was quite perplexed. As I dug into the problem, more questions than answers
> came up. I finally (I think!) figured out what is happening, but not so much
> as to why, or what can/should be done about it.
>
> This person had a local checkout of a package's source, where the package was
> also installed into the system Python. He wanted to be able to set
> $PYTHONPATH so that the local package wins when he tries to import it. E.g.:
>
> % PYTHONPATH=`pwd`/src python3
>
> but this didn't work because despite the setting of PYTHONPATH, the system
> version of the package was always found first. The package in question is
> lazr.uri, although other packages with similar layouts will also suffer the
> same problem, which prevents an easy local development of a newer version of
> the package, aside from being a complete head-scratcher.
>
> The lazr.uri package is intended to be a submodule of the lazr namespace
> package. As such, the lazr/__init__.py has the old style way of declaring a
> namespace package:
>
> try:
>     import pkg_resources
>     pkg_resources.declare_namespace(__name__)
> except ImportError:
>     import pkgutil
>     __path__ = pkgutil.extend_path(__path__, __name__)
>
> and its setup.py declares a namespace package:
>
> setup(
>     name='lazr.uri',
>     version=__version__,
>     namespace_packages=['lazr'],
>     ...
>
> One of the things that the Debian "helper" program does when it builds a
> package for the archive is call `$python setup.py install_egg_info`. It's
> this command that breaks $PYTHONPATH overriding.
>
> install_egg_info looks at the lazr.uri.egg-info/namespace_packages.txt file,
> in which it finds the string 'lazr', and it proceeds to write a
> lazr-uri-1.0.3-py3.4-nspkg.pth file. This causes other strange and unexpected
> things to happen:
>
> % python3
> Python 3.4.0 (default, Mar 22 2014, 22:51:25)
> [GCC 4.8.2] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sys
> >>> sys.modules['lazr']
> <module 'lazr'>
> >>> sys.modules['lazr'].__path__
> ['/usr/lib/python3/dist-packages/lazr']
>
> It's completely weird that sys.modules would contain a key for 'lazr' when
> that package was never explicitly imported. Even stranger, because a fake
> module object is stuffed into sys.modules via the .pth file, tracing imports
> with -v gives you no clue as to what's happening. And while
> sys.modules['lazr'] has an __path__, it has no other attributes.
>
> I really don't understand what the purpose of the nspkg.pth file is,
> especially for Python 3 namespace packages.
>
> Here's what the nspkg.pth file contains:
>
> import sys,types,os; p = os.path.join(sys._getframe(1).f_locals['sitedir'], *('lazr',)); ie = os.path.exists(os.path.join(p,'__init__.py')); m = not ie and sys.modules.setdefault('lazr',types.ModuleType('lazr')); mp = (m or []) and m.__dict__.setdefault('__path__',[]); (p not in mp) and mp.append(p)
>
> The __path__ value is important here because even though you've never
> explicitly imported 'lazr', when you *do* explicitly import 'lazr.uri', the
> existing lazr module object's __path__ takes over, and thus the system
> lazr.uri package is found even though both lazr/ and lazr/uri/ should have
> been found earlier on sys.path (yes, sys.path looks exactly as expected).
>
> So the presence of the nspkg.pth file breaks $PYTHONPATH overriding. That
> seems bad. ;)
>
> If you delete the nspkg.pth file, then things work as expected, but even this
> is a little misleading!
>
> I think the Debian helper is running install_egg_info as a way to determine
> what namespace packages are defined, so that it can actually *remove* the
> parent's __init__.py file and use PEP 420 style namespace packages. In fact,
> in the Debian python3-lazr.uri binary package, you find no system
> lazr/__init__.py file. This is why removing the nspkg.pth file works.
>
> So I thought, why not conditionally define setup(..., namespace_packages) only
> for Python 2? This doesn't work because the Debian helper will see that no
> namespace packages are defined, and thus it will leave the original
> lazr/__init__.py file in place. This then breaks $PYTHONPATH overriding too,
> because the __path__ extension done by the pre-PEP 420 code only *appends* the
> local development path. IOW, lazr.__path__ ends up a 2-element list with the
> system import path first and the local import path second, and in this case
> too the local import fails.
>
> It seems like what you want for Python 3 (and we're talking >= 3.2 here) is
> for there to be neither an nspkg.pth file nor the lazr/__init__.py file, and
> let PEP 420 do its thing. In fact if you set things up this way, $PYTHONPATH
> overriding works exactly as expected.
>
> Because I don't know why install_egg_info is installing the nspkg.pth file, I
> don't know which component needs to be changed:
>
> * Change the setuptools install_egg_info command to not install an nspkg.pth
>   file even for packages that declare namespace_packages, at least under
>   Python 3. This behavior seems pretty nasty all by itself because it
>   magically and untraceably installs stripped-down module objects in
>   sys.modules when Python first scans the import path.
>
> * Change the Debian helper to remove the nspkg.pth file, or to not call
>   install_egg_info, *and* continue to remove <nspkg>/__init__.py in Python 3
>   so as to take advantage of PEP 420. It's nice to know that PEP 420
>   actually represents something sane. :)
>
> For added bonus, we have this additional oddity:
>
> % PYTHONPATH=`pwd`/src python3
> Python 3.4.0 (default, Mar 22 2014, 22:51:25)
> [GCC 4.8.2] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sys
> >>> sys.modules['lazr']
> <module 'lazr'>
> >>> sys.modules['lazr'].__path__
> ['/usr/lib/python3/dist-packages/lazr']
> >>> import lazr.uri
> >>> lazr.uri.__file__
> '/usr/lib/python3/dist-packages/lazr/uri/__init__.py'
> >>> sys.modules['lazr']
> <module 'lazr' from '/home/barry/projects/ubuntu/lazruri/trusty/src/lazr/__init__.py'>
> >>> sys.modules['lazr'].__path__
> ['/home/barry/projects/ubuntu/lazruri/trusty/src/lazr', '/usr/lib/python3/dist-packages/lazr']
>
>
> Notice how importing lazr.uri *replaces* sys.modules['lazr'] with the local
> development one, even though it still imports lazr.uri from the system path.
> I'm not exactly sure how this happens, but I've traced that to
> _LoaderBasics.exec_module()'s call of _call_with_frames_removed(), which
> exec's lazr.uri's code object into that module's __dict__. Nothing in
> lazr/uri/__init__.py should be doing that, afaict from both visual inspection
> of the code and disassembling the compiled code object.
>
> Hopefully I've explained the situation correctly and lucidly. Below I'll
> describe how to set up a reproducible environment on a Debian machine.
> Thoughts and comments are welcome!
>
> Cheers,
> -Barry
>
> % sudo apt-get install python3-lazr.uri
> % cd tmp
> % bzr branch lp:lazr.uri trunk
> % cd trunk
> % PYTHONPATH=`pwd`/src python3
> (Then try things at the Python prompt from above.)