[Numpy-discussion] Recent umath changes

David Cournapeau david at ar.media.kyoto-u.ac.jp
Wed Dec 17 00:56:43 EST 2008


Charles R Harris wrote:
>
>
> On Tue, Dec 16, 2008 at 8:59 PM, David Cournapeau
> <david at ar.media.kyoto-u.ac.jp <mailto:david at ar.media.kyoto-u.ac.jp>>
> wrote:
>
>     Hi,
>
>        There have been some changes recently in the umath code which
>     break the windows 64 compilation - and I don't understand their
>     rationale either. I have myself spent a good deal of time making
>     sure this works on many platforms/toolchains, by fixing the
>     distutils config command and making sure that platform
>     specificities are contained in a very localized part of the code.
>     It may not be very well documented (see below), but may I ask that
>     next time someone wants to change this file, they ask for review
>     before putting changes directly in the trunk?
>
>     thanks,
>
>     David
>
>
>     How to deal with platform oddities:
>     -----------------------------------
>
>     Basically, the code to replace missing C99 math funcs is, for an
>     hypothetical double foo(double) function:
>
>     #ifndef HAVE_FOO
>     #undef foo
>     static double npy_foo(double a)
>     {
>     // define a npy_foo function with the same requirements as C99 foo
>     }
>
>     #define foo npy_foo
>     #else
>     double foo(double);
>     #endif
>
>     I think this code is wrong on several accounts:
>      - we should not undef foo: if foo is available at that point, it
>     is a bug in the configuration, and should not be dealt with in the
>     code. Some cases may be complicated (IEEE754-related functions
>     which are sometimes macros, sometimes real functions, etc...), but
>     those should be dealt with only in very narrow cases.
>      - we should not declare the functions ourselves: function
>     declarations are not portable, and vary among OSes/toolchains. Some
>     toolchains use intrinsics, some use non-standard inline mechanisms,
>     etc..., which can crash the resulting binary because there is a
>     discrepancy between our code's calling convention and the library's
>     convention. The reported problem with the VS compiler on amd64 is
>     caused by exactly this.
>
>     Unless there is a strong rationale otherwise, I would like us to
>     follow what "autoconfed" projects do. They have long experience
>     dealing with platform idiosyncrasies, and the above method is not
>     the one they follow. They follow the simple:
>
>
> Yes, the rationale was to fix compilation on windows 64 with msvc and
> on etch on SPARC, both of which were working after the changes.

It does not work at the moment on windows, at least :) But more
fundamentally, I don't see why you declared those functions: can you
explain what your intention was, because I don't understand the
rationale.

> You are, of course, free to break these builds again. However, I
> designated space at the top of the file for compiler/distro-specific
> defines; I think you should use them, as there is a reason other folks do.

The problem is twofold:
 - by declaring functions everywhere in the code, you are effectively
spreading toolchain-specific oddities over the whole source file. This
is not good, IMHO: those oddities should be detected at the
configuration stage and dealt with in the source code using that
information. That's how every autoconf project does it. If a function is
actually a macro, this should be detected at configuration time.
 - declarations are toolchain-specific; it is actually worse than that,
since they even depend on the compiler flags (this is at least the case
with the MS compilers). So there is no way to guarantee that your
declaration matches the math runtime's (the reported compiler crash is
caused by exactly this). The kind of configuration-based scheme I have
in mind is sketched below.
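
To make this concrete, here is a minimal sketch of that scheme
(HAVE_EXPM1 and the npy_ names are only illustrative, and the fallback
is deliberately naive): the configuration step records whether the
runtime provides the function, and the source only ever uses our own
name, so the runtime's symbol is never re-declared:

#include <math.h>

/* HAVE_EXPM1 would normally come from a header generated at
 * configuration time (distutils config command / autoconf), not be
 * hard-coded here. */
/* #define HAVE_EXPM1 1 */

#ifndef HAVE_EXPM1
/* Replacement defined under our own name: we never declare expm1
 * ourselves, so we cannot clash with an intrinsic or inline definition
 * in the runtime headers. */
static double npy_expm1(double x)
{
    return exp(x) - 1.0;   /* naive fallback, good enough for a sketch */
}
#else
/* The runtime provides it: alias our name to it and rely solely on the
 * declaration already made by <math.h>. */
#define npy_expm1 expm1
#endif

/* The rest of the code only ever spells it npy_expm1. */
double npy_expm1_example(double x)
{
    return npy_expm1(x);
}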

> The macro undef could be moved, but I preferred to generate an error
> if there was a conflict with the standard C function prototypes.
>
> We can't use inline code for these functions as they are passed to the
> generic loops as function pointers.

Yes, I believe this is another problem with declaring the functions: if
we use, say, cosl, and cosl is an inline function in the runtime, then
by re-declaring it we are telling the compiler that it is not inline
anymore. The compiler then no longer knows that the address of cosl
cannot be taken, unless it detects the mismatch between the runtime
declaration and ours and treats it as an error (I am not sure whether
this is always an error with MS compilers; it may only be a warning on
some versions, and it is certainly not handled gracefully every time,
since the linker crashes in some cases).
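
For illustration, a minimal sketch of the wrapper alternative (again
with illustrative npy_ names, and assuming the runtime does provide
cosl): the generic loops need a real address with a known calling
convention, and a small wrapper under our own name gives them one
without re-declaring the runtime's possibly-inline cosl:

#include <math.h>

/* The generic loops take the math function by address, so whatever we
 * hand them must be an ordinary out-of-line function. */
typedef long double (*unary_longdouble)(long double);

/* Wrapper under our own name: the call to cosl below may be expanded
 * inline or as an intrinsic by the compiler, but npy_cosl itself always
 * has a well-defined address and calling convention. */
static long double npy_cosl(long double x)
{
    return cosl(x);
}

static void apply_unary(unary_longdouble f, const long double *in,
                        long double *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        out[i] = f(in[i]);
    }
}

/* Usage sketch: always pass npy_cosl, never &cosl directly. */
void cos_all(const long double *in, long double *out, int n)
{
    apply_unary(npy_cosl, in, out, n);
}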

David



