[Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

David Cournapeau david at ar.media.kyoto-u.ac.jp
Wed Apr 1 01:10:09 EDT 2009


Christopher Barker wrote:
> It does, but we don't need a binary installer for a python that doesn't 
> have a binary installer.
>   

Yes, not now - but I would prefer to avoid having to change the process
again when the time comes. It may not look like it, but getting a build
process that works well on all platforms, including Windows, took me
several days. And I am in no hurry to go through this again :)
> Hmmm -- I don't know virtualenv enough to know what the virtualenv knows 
> about how it was created...
>
> However, I'm not sure you need to do what you're saying here. I imagine 
> this workflow:
>
> set up a virtualenv for, say numpy x.y.rc-z
>
> play around with it, get everything to build, etc. with plain old 
> setup.py build, setup.py install, etc.
>
> Once you are happy, run:
>
> /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg
>   

This means building the same thing twice. For numpy, that is not such a
big deal, but for scipy it is. If/when we have a good, reliable build
farm for Mac OS X, this point becomes moot.

> By the way, if you run bdist_mpkg from a version installed into your 
> virtualenv, you will get an installer that will install into your 
> virtualenv, with the path hard coded, so really useless.
>   


That's exactly the problem with the current binary :)
>
> ah -- maybe that's the issue then -- darn. Are the docs included in the 
> .mpkg? Do they need to be built for that?
>   

The docs are included in the .dmg, and yes, the docs need to be built
from the same installation (or, more exactly, from the same source).

>
> yes, I am.
>   

I have not tackled the uninstall part, but I already wrote this to
"write in stone" my POV on the whole Python packaging situation:

http://cournape.wordpress.com/2009/04/01/python-packaging-a-few-observations-cabal-for-a-solution/

>
> True. In that case we could put the dylib somewhere obscure:
>
> /usr/local/lib/scipy1.6/lib/
>   

Hm, that's strange - why /usr/local/lib? It is outside the scipy
installation.

> or even:
>
> /Library/Frameworks/Python.framework/Versions/2.5/lib/
>   

That's potentially dangerous: since this directory is likely to be in
LIBDIR, libgfortran would be picked up from there (or from
/usr/local/lib) whenever the user builds numpy/scipy themselves after
installing the binary. If that copy is incompatible with the user's
gfortran, it will lead to weird issues that are hard to debug.
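
To see where LIBDIR points, a quick check (the output below is what a
framework build typically reports):

$ python -c "from distutils import sysconfig; print(sysconfig.get_config_var('LIBDIR'))"
/Library/Frameworks/Python.framework/Versions/2.5/lib

so a libgfortran dropped in there would be visible to any later
numpy/scipy build.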

This problem bit me on 64-bit Windows recently: we did something
similar (creating a libpython*.a and putting it in C:\python*\libs), but
the installed library was not 64-bit compatible. I assumed that library
had been built by python itself, and so I wasted several hours looking
elsewhere for what was in fact a problem caused by numpy.distutils.
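
A quick sanity check would have caught it - the path below is
hypothetical, and mingw binutils are assumed:

$ objdump -f C:/Python26/libs/libpython26.a | grep "file format"   # path is made up

pe-i386 members in a 64-bit installation would have given the game away
immediately.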

If we install something like libgfortran, it should be installed
privately - but dynamically linking against private libraries is hard,
because it is very platform-dependent (in particular on Windows, I have
yet to see a sane solution - everyone just copies the private .dll
alongside the binaries, AFAICS).
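
On Mac OS X, the least-bad trick I know of is @loader_path - only a
sketch, the file names below are made up:

$ otool -L scipy/linalg/flapack.so     # see which libgfortran is referenced
$ cp /usr/local/lib/libgfortran.3.dylib scipy/linalg/
$ install_name_tool -change /usr/local/lib/libgfortran.3.dylib \
      @loader_path/libgfortran.3.dylib scipy/linalg/flapack.so

but every extension which links against libgfortran needs the same
treatment, and there is no equivalent on Windows.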

Now, if you bring me a solution to this problem, I would be *really* glad.

> Actually, and I betray my ignorance here, but IIUC:
>
>   - There are a bunch of different scipy extensions that use libgfortran
>   - Many of them are built more-or-less separately
>   - So each of them would get their own copy of the static libgfortran
>   

AFAIK, statically linking against a library does not mean a whole copy
of it is put into the binary. I guess different binary formats handle it
differently, but for example, on Linux:

gfortran hello.f                     -> a.out is ~8 KB
gfortran hello.f -static-libgfortran -> a.out is ~130 KB
libgfortran.a itself                 -> ~1.3 MB

Of course, this depends on which functions you need to pull in from
libgfortran - but I guess we do not pull in much, because we mainly use
intrinsics (gfortran math functions, etc.), which should be very small.
I don't think we use the Fortran I/O runtime much - actually, we should
explicitly avoid it, since it causes trouble: the C and Fortran runtimes
would 'fight' each other, with unspeakable consequences.
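
This is easy to check on the hello example above: something like

$ gfortran hello.f -static-libgfortran -o hello
$ nm hello | grep -i gfortran

only shows the symbols coming from the few object files actually pulled
out of libgfortran.a, not the whole archive.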

And thinking about it: Mac OS X rather encourages big binaries ("fat
binaries"), so I am not sure it is a big concern.

> This is making me think solving the dynamic linking problem makes sense.
>   

It makes sense for a whole lot of reasons, but it is hard. The problem
is much bigger on Windows (where almost *everything* is statically
linked); I tried to tackle it there, to ship a single numpy installer
with three ATLAS builds loaded dynamically at runtime, and I did not
find a workable solution.

>
> Also, would it break anything if the libgfortran installed were properly 
> versioned:
>
>   libgfortran.a.b.c
>
> Isn't that the point of versioned libs?
>   

Versioned libraries only make sense for shared libraries, I think. On
Linux, the static library is not even publicly available (it is in
/usr/lib/gcc/4.3.3). I actually wonder whether the Mac OS X gfortran
binaries got this wrong.
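
For reference, the compiler can tell you where that private copy lives -
the exact path depends on the distribution and target triplet:

$ gfortran -print-file-name=libgfortran.a
/usr/lib/gcc/i486-linux-gnu/4.3.3/libgfortran.a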

cheers,

David


