There have been some changes recently in the umath code which break
the Windows 64-bit build - and I don't understand their rationale
either. I have myself spent a good deal of time making sure this code
works on many platforms/toolchains, by fixing the config distutils
command and by keeping platform specificities contained in a very
localized part of the code. It may not be very well documented (see
below), but may I ask that next time someone wants to change this
file, they ask for review before putting it directly in the trunk?
How to deal with platform oddities:
Basically, the current code to replace a missing C99 math function,
for a hypothetical double foo(double) function, looks like:

#undef foo
static double foo(double a)
{
    ...
}
I think this code is wrong on several accounts:
- we should not undef foo if foo is available: if foo is available at
that point, it is a bug in the configuration, and should not be dealt
with in the code. Some cases may be complicated (IEEE754-related names
which are sometimes macros, sometimes functions, etc.), but those
should be handled as very narrow special cases.
- we should not declare our own version of the function: function
declarations are not portable, and vary among OSes/toolchains. Some
toolchains use intrinsics, some use non-standard inline mechanisms,
etc., which can crash the resulting binary when there is a discrepancy
between the calling convention our code expects and the one the
library uses. The reported problem with the VS compiler on amd64 is
caused by exactly this.
Unless there is a strong rationale otherwise, I would like us to
follow what "autoconfed" projects do. They have long experience
dealing with platform idiosyncrasies, and the above method is not the
one they follow. They follow the simple:

#ifdef HAVE_FOO
#define npy_foo foo
#else
static double npy_foo(double a)
{
// define a npy_foo function with the same requirements as C99 foo
}
#endif
And they deal with platform oddities in the *configuration* code
instead of directly in the source. That really makes my life easier
when dealing with Windows compilers, which are already painful enough
as it is.