[Numpy-discussion] fpower ufunc

Charles R Harris charlesr.harris at gmail.com
Thu Oct 20 23:38:33 EDT 2016


On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith <njs at pobox.com> wrote:

> On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> > Hi All,
> >
> > I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> > adding more tests and documentation, I'd like to settle a few other
> > things. The first is the name; two names have been proposed, and we
> > should settle on one:
> >
> > fpower (short)
> > float_power (obvious)
>
> +0.6 for float_power
>
> > The second thing is the minimum precision. In the preliminary version I
> > have used float32, but perhaps it makes more sense for the intended use
> > to make the minimum precision float64 instead.
>
> Can you elaborate on what you're thinking? I guess this is because
> float32 has limited range compared to float64, so is more likely to
> see overflow? float32 still goes up to 10**38 which is < int64_max**2,
> FWIW. Or maybe there's some subtlety with the int->float casting here?
>

Logical, (u)int8, (u)int16, and float16 inputs get converted to float32,
which is probably sufficient to avoid overflow and such. My thought was that
float32 is something of a "specialized" type these days, while float64 is the
standard floating-point precision for everyday computation.

Chuck
