Over on the enthought-dev mailing list we're having a bit of a
discussion on what the best way to distribute documentation and examples
is for projects that we distribute binary eggs for. The general
consensus is that it would be very nice indeed if there was a way to
generate a tarball, or platform install, of just documentation and
examples so that people wouldn't need to download a full, presumably
significantly larger, source tarball. Another option would be that
eggs included the documentation and examples and that, during the
installation of the egg, those docs and examples were relocated to a
common location (outside of the zip) to make access by users more
convenient. This latter idea is similar to how Ruby Gems deal with docs.
I don't claim to be a distutils or setuptools guru, so it shouldn't be
too surprising that I can't seem to find anything about a setuptools or
distutils command to do either of these. Am I missing something?
If not, does it seem like something that might be worthy of putting into
setuptools?
-- Dave
Robert Kern wrote:
Stanley A. Klein wrote:
>> Robert -
>>
>> Thanks for illuminating the issue.
>>
>> The problem I had was as follows. Fedora (also RedHat) uses SE-Linux,
>> which needs to know all the files expected to be in sensitive
>> directories such as the Python site-packages. This includes the pyc and
>> pyo files ordinarily generated by Python as the .py files are executed.
>>
>> It turns out that to do a bdist_rpm for Fedora, it is necessary to
>> create a setup.cfg file containing the lines:
>>
>> [install]
>> optimize = 1
>>
>> or to add these lines to the existing setup.cfg file.
>>
>> If that is not done, the result in Fedora is an unpackaged files error.
>> This is due to the fact that if distutils/setuptools doesn't cause the
>> pyc and pyo files to be created, Fedora will create them but they won't
>> be properly handled in the spec file created by distutils/setuptools.
>>
>> In trying to do bdist_rpm with kiva, I got the unpackaged files error.
>> This implies that numpy distutils did not properly handle the
>> optimize=1 in the setup.cfg (when I did "python setup.py bdist_rpm").
>> That's when I went to the workaround that resulted in this thread.
>>
>> I hope this clarifies the problem.
> Not quite. I don't know what "the unpackaged files error" looks like.
> Can you try Phillip's suggestion using --root and --record and show us
> the results? Did you run "python setup.py build" before "python setup.py
> bdist_rpm"? We've often seen problems with the dependency-handling
> between distutils commands.
I tried your second suggestion and it didn't work. Running bdist_rpm
starts the whole process from scratch.
Regarding Phillip's suggestion, when running bdist_rpm I don't have direct
access to --root and --record. However, in trying to track it down, I
think I may have found something that is contributing to the problem.
When running bdist_rpm, the install command options appear to be set in a
script located in the %install part of the rpm spec file. Most
likely, the install is called by the rpm program and is controlled by the
spec file. The install script in the kiva spec file is
"python setup.py install --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES".
The install scripts in the other spec files (that worked) are
"python setup.py install --single-version-externally-managed
--root=$RPM_BUILD_ROOT --record=INSTALLED_FILES".
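One way to experiment with forcing the setuptools-style install line into
the spec file - not something tried in the thread, and the helper script
name here is invented - is bdist_rpm's install_script option, assuming
that option behaves the same under numpy.distutils:

    # setup.cfg
    [bdist_rpm]
    install_script = rpm-install.sh

    # rpm-install.sh (hypothetical helper used as the %install body)
    python setup.py install --single-version-externally-managed \
        --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES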
I looked at the numpy distutils. In many cases they check to see if
setuptools is installed, and if so they import the relevant setuptools
modules (otherwise they import the distutils modules). In particular, the
install command does that. However, in the numpy distutils the bdist_rpm
command does not; it only uses the distutils bdist_rpm, which produces the
spec file install script shown above. It seems to me that the setuptools
install is being called via the numpy distutils without the
--single-version-externally-managed option it would expect if setuptools'
own bdist_rpm were being used to build the spec file.
I don't know whether that alone is enough to make the install leave the
pyc and pyo files out of INSTALLED_FILES despite the setup.cfg option, but
it is clearly a glitch that could cause unexpected behavior.
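If the problem really is that numpy.distutils falls back to the plain
distutils bdist_rpm, one possible experiment (a sketch not taken from the
thread; it assumes setuptools ships its own bdist_rpm command class) would
be to force setuptools' class from the setup script:

    # sketch only - the name/version values are placeholders
    from setuptools import setup
    from setuptools.command.bdist_rpm import bdist_rpm as st_bdist_rpm

    setup(
        name='enthought.kiva',
        version='0.0',
        cmdclass={'bdist_rpm': st_bdist_rpm},
        # ...the rest of the existing setup arguments...
    )

The generated spec file's %install line should then come from setuptools
rather than plain distutils, which is exactly the difference observed
above.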
The issue with the .so files arose because of my effort to work around the
issue of the pyc and pyo files. I think the proper choice of
bdist_rpm to import may be closer to the cause of the original problem.
I tried to do something to fix the numpy distutils bdist_rpm.py (by trying
to follow what was done in install.py) but it didn't work and I got an
error message I didn't understand.
Stan Klein
Oops, I forgot to copy the subject line.
Phillip J. Eby wrote:
> At 03:15 PM 7/30/2007 -0400, Stanley A. Klein wrote:
>>> I don't need to build the .so files. They are already built. That had
>>> to be done using the build-in-place and the numpy distutils for
>>> reasons I don't fully understand but are related to the use of numpy.
>
>> Have you tried building them with setuptools, using the numpy
>> distutils 'build_ext' command, using:
>>
>> setup(
>>     cmdclass = dict(build_ext = numpy_build_ext_class_here),
>>     ext_modules = list_of_Extension_objects,
>>     ...
>> )
>>
>> Unless there is a radical difference between numpy distutils and the
>> regular distutils, you should be able to do this. Just find numpy's
>> "build_ext" class, and define the appropriate Extension() objects (for
>> the things to build) in your setup script. Setuptools will then
>> delegate the building to numpy, but handle the installing itself.
>
>> Again, this is assuming that numpy's distutils extensions don't do
>> anything unfriendly like completely redefine how extension objects work
>> or assume that their commands will be only mixed with other numpy
>> commands. (Setuptools doesn't make such assumptions, and tries to
>> leave the normal distutils stuff alone as much as possible.)
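Filled out as a runnable sketch, Phillip's suggestion would look roughly
like the following (the module path for numpy's build_ext and the
extension shown are assumptions for illustration, not taken from kiva's
actual setup script):

    from setuptools import setup, Extension
    from numpy.distutils.command.build_ext import build_ext as numpy_build_ext

    setup(
        name='example',
        version='0.1',
        packages=['example'],
        # let numpy.distutils do the compiling, setuptools the installing
        cmdclass={'build_ext': numpy_build_ext},
        ext_modules=[Extension('example._fast', sources=['src/_fast.c'])],
    )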
> I think we're getting into confusing territory by trying to get
> workarounds for workarounds. Let me try to take us a step back and focus
> on the initial problem, which is that bdist_rpm is not working with
> enthought.kiva. The existing setup script already does build extensions
> just fine; they're just not being picked up by bdist_rpm. A suggestion
> from a coworker of mine prompted Stanley to look at using a script that
> we have for building enthought.kiva inplace (there are a few more
> options that are needed beyond "python setup.py develop"); however, it
> wasn't really a suggestion to use that as a basis for building an RPM.
>
> numpy.distutils extends distutils in three ways which are important for
> enthought.kiva:
>
>  * automatically adds the location of the numpy headers to the
>    include_dirs of Extensions. (easily replaced)
>
>  * adds a build_src command that allows users to give Python functions
>    in the sources list of an Extension. These functions will be called
>    to actually generate the real source files. (hard to replace)
>
>  * allows subpackages to have their own build information which is
>    assembled by the top-level setup.py script. This is mostly legacy
>    from when the enthought package was monolithic and doesn't strictly
>    need to continue. I won't go into details since I don't think it's
>    part of the problem. (straightforward, but time-consuming to replace)
>
> numpy.distutils tries hard to not step on setuptools' toes. We actually
> check if setuptools is in sys.modules and use its command classes
> instead of distutils' as the base classes for our commands. However,
> it's possible that neglect of our bdist_rpm implementation has caused
> the implementations to diverge and some toe-stepping has taken place.
>
> The main problem is that bdist_rpm is not working on enthought.kiva.
> Most likely, this is the fault of numpy.distutils. However, this is a
> bug that needs to be caught and fixed. Working around it by doing an
> --inplace build and then trying to include the extension modules as
> package_data is not likely to work and is not a solution.
>
> I'm not usually a Redhat guy, so I don't have much experience with
> bdist_rpm; however, numpy.distutils has had problems with bdist_rpm in
> the past. I'm trying to get an environment working on a Redhat machine,
> and will try to build an RPM for enthought.kiva and try to see the
> problem first-hand. I've looked over Stanley's emails on the subject,
> and don't see enough information for me to really pin down the problem.
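As an aside on the build_src point above: a source-generating function in
an Extension's sources list looks roughly like the sketch below. This is
from memory of numpy.distutils conventions (the generator is called with
the Extension object and the build directory and returns the generated
file paths); the names and the generated file are invented:

    import os

    def generate_config_c(ext, build_dir):
        # called by numpy.distutils' build_src at build time
        target = os.path.join(build_dir, 'example_config.c')
        if not os.path.exists(target):
            open(target, 'w').write('/* generated at build time */\n')
        return target

    # ...and in setup.py:
    #   ext_modules=[Extension('example._ext',
    #                          sources=['src/_ext.c', generate_config_c])]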
Robert -
Thanks for illuminating the issue.
The problem I had was as follows. Fedora (also RedHat) uses SE-Linux,
which needs to know all the files expected to be in sensitive directories
such as the Python site-packages. This includes the pyc and pyo files
ordinarily generated by Python as the .py files are executed.
It turns out that to do a bdist_rpm for Fedora, it is necessary to create
a setup.cfg file containing the lines:
[install]
optimize = 1
or to add these lines to the existing setup.cfg file.
If that is not done, the result in Fedora is an unpackaged files error.
This is due to the fact that if distutils/setuptools doesn't cause the pyc
and pyo files to be created, Fedora will create them but they won't be
properly handled in the spec file created by distutils/setuptools.
In trying to do bdist_rpm with kiva, I got the unpackaged files error.
This implies that numpy distutils did not properly handle the optimize=1
in the setup.cfg (when I did "python setup.py bdist_rpm"). That's when I
went to the workaround that resulted in this thread.
I hope this clarifies the problem.
Thanks.
Stan Klein
--
Stanley A. Klein, D.Sc.
Managing Principal
Open Secure Energy Control Systems, LLC
8070 Georgia Avenue
Silver Spring, MD 20910
301-565-4025
I'm trying to package an rpm for enthought kiva. The regular setup.py
uses the numpy distutils because of some cpp functions that have to be
compiled and are somehow tied to numpy.
Someone recently did a "build in place" program that uses the existing
setup.py (that I renamed setup.original.py) and builds the .so files in
the regular source directory hierarchy. I did that and then tried to run
setuptools (python setup.py bdist_rpm) using a straightforward setup.py.
It included all the python files but missed the *.so files. I can run
kiva examples if I manually put the *.so files into the proper place in
site-packages, so I know they are needed.
I tried including

    packages = find_packages(),
    package_data = {'': ['*.so']},

in the setup.py, but it still missed the *.so files.
What am I doing wrong?
Stan Klein
Steve Holden wrote:
> Yaakov (Cygwin Ports) wrote:
>> Steve Holden wrote:
>>> I have tried to install it, but each time I do I see the first compile
>>> command start to run and the process hangs. If I CTRL/C the setup.py
>>> and restart it, it begins the next compile (implying that the previous
>>> one succeeded), which again hangs. I see python.exe still running for
>>> each install run I have interrupted, which I have to kill manually.
>>
>> Try rebaseall.
>>
> Unfortunately I did that before I posted. Sorry, should have mentioned it.
>
[Note for distutils-sig readers: I am trying to track down a build issue
with version 1.1.6 of the PIL under Cygwin - nobody else reports any
successful or failed experiences building this package].
Here's the output from a failed setup run with a couple of debug prints
inserted which should report how sub-process termination occurred - it
hangs after this output.
$ python setup.py build
running build
running build_py
running build_ext
building '_imaging' extension
SPAWN: ['gcc', '-fno-strict-aliasing', '-DNDEBUG', '-g', '-O3', '-Wall',
'-Wstrict-prototypes', '-DHAVE_LIBZ', '-IlibImaging', '-I/usr/include',
'-I/usr/include/python2.5', '-c', 'libImaging/Chops.c', '-o',
'build/temp.cygwin-1.5.24-i686-2.5/libImaging/Chops.o'] PATH? 1 V: 0 D:0
gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes
-DHAVE_LIBZ -IlibImaging -I/usr/include -I/usr/include/python2.5 -c
libImaging/Chops.c -o build/temp.cygwin-1.5.24-i686-2.5/libImaging/Chops.o
Are we done yet? Waiting on pid 3280
As a further follow up, I extracted the _spawn_all function and ran it
under command line control.
Everything seems to work fine with other subtasks, so I am wondering
whether this is a failure specific to gcc, which would seem kind of
unlikely. So I ran the same compile using my test function standing
alone, and see
sholden@bigboy ~/Imaging-1.1.6
$ python ~/Projects/Python/spawntest.py gcc -fno-strict-aliasing
-DNDEBUG -g -O3 -Wall -Wstrict-prototypes -DHAVE_LIBZ -IlibImaging
-I/usr/include -I/usr/include/python2.5 -c libImaging/Chops.c -o
build/temp.cygwin-1.5.24-i686-2.5/libImaging/Chops.o
Are we done yet? Waiting on pid 3244
Got pid, status 3244 0
Got WIFEXITED 0
So it appears unlikely to be gcc-specific, leaving me wondering what
exactly is the difference between the build environment and my tests.
regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
> > http://en.wikipedia.org/wiki/X86-64 notes that 'x64' is a common
> > name, so how does 'win-x64' and 'win-ia64' sound as a compromise?
> > I'm happy to let any other informal "votes" make a final decision
> > though...
>
> As one who is forever switching back and forth among operating
> systems (Windows, sometimes with cygwin, Linux, Mac OS X,
> Solaris), I
> would be happy to see the standard names that I'm used to, i.e. the
> output of config.guess [1], which I think is "x86_64" for the
> architecture and "win32" for the operating system.
I agree in principle - if config.guess has standardized, we should adopt it.
> Actually config.guess doesn't run on Windows unless with cygwin or
> mingw, in which case it outputs "cygwin" or "mingw" for the
> operating
> system. But it definitely outputs "x86_64" on other operating
> systems on x86-64 architectures.
My Vista x64 box has a (32bit - can't locate a 64bit) cygwin installed.
This is what I get:
sh-3.2$ /usr/share/automake-1.10/config.guess
i686-pc-cygwin
sh-3.2$ uname -a
CYGWIN_NT-6.0-WOW64 vista-64 1.5.24(0.156/4/2) 2007-01-31 10:57 i686 Cygwin
My XP x86 box gives the exact same result for config.guess (i686), so I'm a
little confused by this. Does anyone know what I am missing? But either
way, I have to concede that my preference for x86 is somewhat arbitrary, so
x86_64 appears to win the vote (although polls remain open until checkin
time <wink>)
I've created a patch at http://python.org/sf/1761786 with x86_64 - I'd
welcome any feedback etc. Note the patch also changes bdist_msi to use
get_platform() for the final .msi created.
Cheers,
Mark
On Thu, July 26, 2007 12:46 pm, Phillip J. Eby wrote:
At 03:02 AM 7/26/2007 -0500, Dave Peterson wrote:
>>Over on the enthought-dev mailing list we're having a bit of a
>>discussion on what the best way to distribute documentation and examples
>>is for projects that we distribute binary eggs for. The general
>>consensus is that it would be very nice indeed if there was a way to
>>generate a tarball, or platform install, of just documentation and
>>examples so that people wouldn't need to download a full, presumably
>>significantly larger, source tarball. Another option would be that
>>eggs included the documentation and examples and that, during the
>>installation of the egg, those docs and examples were relocated to a
>>common location (outside of the zip) to make access by users more
>>convenient. This latter idea is similar to how Ruby Gems deal with docs.
>>
>>I don't claim to be a distutils or setuptools guru, so it shouldn't be
>>too surprising that I can't seem to find anything about a setuptools or
>>distutils command to do either of these. Am I missing something?
>There are a few different ways you could do this. The easiest would
>be to put a docs subdirectory either inside one of your packages or
>inside your .egg-info directory, then use the pkg_resources resource
>or metadata APIs to list and extract them. One advantage to using
>something like .egg-info/docs would be that this could perhaps be
>recognized by some sort of "standard" tools in the future.
>>If not, does it seem like something that might be worthy of putting into
>>setuptools?
>It might be worth establishing convention(s) for, yes. Tools could
>then be developed around them, including perhaps changes to easy_install.
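To make the .egg-info/docs idea above concrete, a minimal sketch using the
pkg_resources metadata APIs (the 'docs' directory, project name, and
target path are illustrative assumptions, not an established convention):

    import os
    import pkg_resources

    dist = pkg_resources.get_distribution('SomeProject')
    if dist.has_metadata('docs') and dist.metadata_isdir('docs'):
        target = os.path.expanduser('~/SomeProject-docs')
        if not os.path.isdir(target):
            os.makedirs(target)
        # assumes a flat docs/ directory of text files
        for name in dist.metadata_listdir('docs'):
            # pull each doc file out of the (possibly zipped) egg-info
            data = dist.get_metadata('docs/' + name)
            open(os.path.join(target, name), 'w').write(data)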
This relates to a question I asked this list earlier this month (but
didn't get a response). For Linux systems, the Linux Standard Base
references the Filesystem Hierarchy Standard (which applies to Unix
systems generally as well). The FHS specifies that documentation files
(other than specially formatted items like man pages) go into
/usr/share/doc/package_name_and_version
These sometimes include examples, demos, and similar files. For example,
the docs on my FC5 system for inkscape go in
/usr/share/doc/inkscape-0.45.1. In that case the doc files are a typical
minimal set:
/usr/share/doc/inkscape-0.45.1/AUTHORS
/usr/share/doc/inkscape-0.45.1/COPYING
/usr/share/doc/inkscape-0.45.1/ChangeLog
/usr/share/doc/inkscape-0.45.1/NEWS
/usr/share/doc/inkscape-0.45.1/README
and the examples, tutorials, clipart, and many miscellaneous files are in
/usr/share/inkscape. The actual executables are in /usr/bin.
In some cases the documentation is created as a separate package. For
example, Python does this for its HTML-based docs; on my FC5 system, the
Python HTML docs are in /usr/share/doc/python-docs-2.4.3/. Similar
considerations apply to configuration files, which are supposed to go
into /etc. There are a number of other rules, and they are generally
observed by systems that use rpm and deb packaging, such as Fedora and
Ubuntu.
I couldn't figure out how to make this happen when using bdist_rpm, which
is why I asked the earlier question. It seems to me that the only way
using current Python packaging features would be to put the docs somewhere
in the Python site-packages hierarchy and do a post-install scripted move
to where they belong under the LSB/FHS rules. It would be preferable to
get them to go where they belong without the need for post-install
scripting.
Stan Klein
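For what it's worth, plain distutils does have one mechanism aimed at
FHS-style locations: data_files entries with absolute paths are installed
exactly where given. Whether that cooperates with bdist_rpm and binary
eggs in this situation is precisely the open question, so treat the
following as a sketch only (package name and file list invented):

    from setuptools import setup

    setup(
        name='mypackage',
        version='1.0',
        packages=['mypackage'],
        data_files=[
            # absolute target paths land directly in the filesystem
            ('/usr/share/doc/mypackage-1.0', ['README', 'ChangeLog']),
            ('/usr/share/mypackage/examples', ['examples/demo.py']),
        ],
    )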
At 10:51 PM 7/25/2007 +0200, Arve Knudsen wrote:
>I take it there is no interest in fixing this problem? Or is it
>simply too much hassle??
>
>Arve
>
>On 7/10/07, Arve Knudsen <arve.knudsen(a)gmail.com> wrote:
>Hello
>
>As far as I can tell setuptools does not always regenerate the
>source manifest when MANIFEST.in changes, contrary to what the
>documentation says. That is, if I add a directive to exclude files
>with the .txt extension to MANIFEST.in (after having previously
>including them), and run the sdist command, such files are still
>included. I have to remove the .egg-info for things to work as
>expected. This must clearly be a bug?
>
>Regards,
>Arve Knudsen
Your email got lost in the flurry of PyPI-related emails. I am
looking at the code and don't see how this condition could be
produced, at least not the way you seem to be describing it, because
the template is processed after the old filelist is read in. So if
there are exclusion commands in the template, these should *always*
be applied to the resulting SOURCES.txt.
Could you give me the exact old MANIFEST.in contents, the new
MANIFEST.in contents, and the egg-info/SOURCES.txt?
Better yet, can you give me a small setup.py and steps that I can use
to reproduce the problem?
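For anyone who wants to try to reproduce this, a minimal recipe along the
lines Arve describes (the project name and file layout are invented):

    # setup.py
    from setuptools import setup, find_packages
    setup(name='manifesttest', version='0.1', packages=find_packages())

    # MANIFEST.in, first run:
    #   include *.txt
    # run "python setup.py sdist", then change MANIFEST.in to:
    #   global-exclude *.txt
    # and run "python setup.py sdist" again *without* deleting the
    # .egg-info directory; compare SOURCES.txt and the two tarballs.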
At 11:30 AM 7/26/2007 -0400, Noah Gift wrote:
>Have you envisioned a workflow that interacts with RPM's or other
>package management systems? I know my company uses RPM's for
>example to package up our Python Packages, and many shops use RPM
>and/or debian packaging systems.
Running bdist_rpm on a setuptools-based package produces a usable
RPM, but you have to manually translate any dependencies and supply
them as bdist_rpm options, either on the command line or in the
project's setup.cfg.
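A hedged example of what that translation can look like in setup.cfg (the
RPM package names are invented; mapping Python requirements onto the
target distribution's RPM names has to be done by hand):

    [bdist_rpm]
    # names must match whatever the target distro calls these packages
    requires = python-numpy python-setuptools
    build_requires = python-devel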