[Numpy-discussion] fpower ufunc
Charles R Harris
charlesr.harris at gmail.com
Thu Oct 20 23:38:33 EDT 2016
On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith <njs at pobox.com> wrote:
> On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> > Hi All,
> > I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> > adding more tests and documentation, I'd like to settle a few other
> > questions. The first is the name; two names have been proposed, and we
> > should settle on one:
> > fpower (short)
> > float_power (obvious)
> +0.6 for float_power
> > The second thing is the minimum precision. In the preliminary version I
> > used float32, but perhaps it makes more sense for the intended use to make
> > the minimum precision float64 instead.
> Can you elaborate on what you're thinking? I guess this is because
> float32 has limited range compared to float64, so is more likely to
> see overflow? float32 still goes up to 10**38 which is < int64_max**2,
> FWIW. Or maybe there's some subtlety with the int->float casting here?
logical, (u)int8, (u)int16, and float16 get converted to float32, which is
probably sufficient to avoid overflow and such. My thought was that float32
is something of a "specialized" type these days, while float64 is the
standard floating point precision for everyday computation.
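To make the motivation concrete, here is a minimal sketch of the behavior being discussed, using `np.float_power` as it eventually shipped in NumPy (the exact casting rules in the preliminary PR may have differed):

```python
import numpy as np

# Integer ** integer with np.power stays in integer dtype, so large
# results can silently wrap around.
a = np.array([10], dtype=np.int64)
overflowed = np.power(a, 20)          # wraps: not equal to 10**20

# np.float_power promotes integer inputs to floating point, so the
# same computation stays finite and approximately correct.
exact_enough = np.float_power(a, 20)  # approximately 1e20

# The minimum-precision question: even small input types come back as
# float64 (the resolution the thread converged on), never float32.
small = np.float_power(np.int16(2), np.int16(3))
print(overflowed, exact_enough, small.dtype)
```

This illustrates why float64 as the floor is attractive: the result dtype is then the "standard" precision regardless of how narrow the inputs are.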