[SciPy-dev] Bessel functions from Boost

Charles R Harris charlesr.harris at gmail.com
Thu Feb 12 21:58:25 EST 2009


On Thu, Feb 12, 2009 at 6:31 PM, David Cournapeau <
david at ar.media.kyoto-u.ac.jp> wrote:

> Pauli Virtanen wrote:
> > Wed, 11 Feb 2009 03:31:30 +0900, David Cournapeau wrote:
> >
> > [clip]
> >
> >> I started a branch, special_refactor. I added all the converted Boost
> >> data sets (the .ipp files converted to .csv), plus the small Python
> >> script I used to generate them. I started implementing the
> >> corresponding tests - but this takes some time, because all this
> >> template stuff is awkward to follow. The main task is to find out
> >> which function is called by which test, and with which parameters -
> >> someone more familiar with Boost could do this much faster, I guess.
> >>
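
The conversion itself is mostly mechanical. A rough, untested sketch of
such a script - assuming the usual Boost.Math .ipp layout with one row
of SC_(...) literals per line, which is not necessarily what the script
in the branch does:

    import re
    import sys

    # Boost.Math test data (.ipp) stores each test case as a row of
    # SC_(...)-wrapped literals.  Pull the numbers out of every line
    # that contains such entries and write them as one CSV row.
    NUM_RE = re.compile(r'SC_\(\s*([^)]+?)\s*\)')

    def ipp_to_csv(ipp_path, csv_path):
        with open(ipp_path) as src, open(csv_path, 'w') as dst:
            for line in src:
                values = NUM_RE.findall(line)
                if values:  # skip declarations, braces, comments
                    dst.write(','.join(values) + '\n')

    if __name__ == '__main__':
        ipp_to_csv(sys.argv[1], sys.argv[2])
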
> >
> > I added a couple more functions to the tests:
> >
> > They correctly point out that in 0.7.0:
> >
> > + The problems in Cephes's Iv (large argument), Yv (large order)
> >   and Kn (large order)
> >
> > + Numpy's complex-valued `arcsinh` and `arctanh` can have large
> >   relative errors (~1e-5) for small arguments (< eps)!
> >
> >   Loss of precision in the naive implementation, I'll bet.
> >
> > but they fail to spot the other known issues. On the positive side,
> > the `arcsinh` issue is the only new one that came up.
> >
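
For what it's worth, the cancellation is easy to demonstrate already in
the real case: the naive formula asinh(x) = log(x + sqrt(1 + x**2)) adds
the small x to 1 before taking the log, so an absolute rounding error of
order eps becomes a relative error of order eps/x in the result. A quick
illustration (not the actual numpy code), comparing against math.asinh,
which stays accurate for small arguments:

    import math

    def naive_asinh(x):
        # textbook formula; inaccurate for small |x| because the sum
        # x + sqrt(1 + x*x) is about 1 + x, and the leading 1 swallows
        # most of x's significant digits before log() is applied
        return math.log(x + math.sqrt(1.0 + x * x))

    for x in (1e-3, 1e-6, 1e-9, 1e-12):
        exact = math.asinh(x)
        rel_err = abs(naive_asinh(x) - exact) / abs(exact)
        print('x = %-7g   relative error %.1e' % (x, rel_err))
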
> > One problem with these tests is that the data files are *huge*:
> > they currently total ~7 MB. Even compressed, or saved as .npy files,
> > they would add ~2 MB to the SciPy source tarball. So I'm not sure
> > what to do with this...
> >
>
> That's why I started a branch - I did not know how it would end up. I
> don't see an obvious answer to the problem: those are tests for ~100
> functions, so this means ~20 KB of compressed data per function on
> average. Each test point is at least two values (x and f(x)), which
> means around ~500 test points per function. Put that way, it does not
> sound so big anymore. Maybe we could have an option to split the data
> sets out of the main tarball?
>
> I kept the data in .csv because I thought it would be nice to test at
> least the double and float precisions, and the gain from a binary
> format would not be that large (plain text is also easier to use for
> tests outside the Python machinery).
>

Maybe it would be best to split the tests out into a separate project and
not distribute them with SciPy. They could be turned into a generic,
Python-based test suite that could be used to test any implementation of a
given special function.
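
Something along these lines, say - a rough sketch of the idea, with a
made-up data layout (one CSV file per function, arguments in the leading
columns, expected value in the last one):

    import csv

    def max_relative_error(func, csv_path):
        """Evaluate func on every row of a reference-data CSV file and
        return the worst relative error against the expected values."""
        worst = 0.0
        with open(csv_path) as f:
            for row in csv.reader(f):
                args = [float(v) for v in row[:-1]]
                expected = float(row[-1])
                got = func(*args)
                err = abs(got - expected) / max(abs(expected), 1e-300)
                worst = max(worst, err)
        return worst

    # e.g., check any iv(v, x) implementation against Boost's data
    # (file name made up here):
    #
    #   from scipy import special
    #   print(max_relative_error(special.iv, 'bessel_iv_data.csv'))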

Chuck

