numpy numbers converted wrong
Robert Kern
robert.kern at gmail.com
Thu Oct 26 18:02:48 EDT 2006
robert wrote:
> in Gnuplot (Gnuplot.utils) the input array will be converted to a Numeric float array as shown below. When I insert a numpy array into Gnuplot like this, numbers such as 7.44 are cast to 7.0.
> Why is this, and what should I do? Is this a bug in numpy or in Numeric?
>
>
> [Dbg]>>> m  # numpy array
> array([[  9.78109200e+08,   7.44000000e+00],
>        [  9.78454800e+08,   7.44000000e+00],
>        [  9.78541200e+08,   8.19000000e+00],
>        ...,
>        [  1.16162280e+09,   8.14600000e+01],
>        [  1.16170920e+09,   8.10500000e+01],
>        [  1.16179560e+09,   8.16800000e+01]])
> [Dbg]>>> Numeric.asarray(m, Numeric.Float32)[:10]
> array([[  9.78109184e+008,   7.00000000e+000],
>        [  9.78454784e+008,   7.00000000e+000],
>        [  9.78541184e+008,   8.00000000e+000],
>        [  9.78627584e+008,   8.00000000e+000],
>        [  9.78713984e+008,   8.00000000e+000],
>        [  9.78973184e+008,   8.00000000e+000],
>        [  9.79059584e+008,   8.00000000e+000],
>        [  9.79145984e+008,   8.00000000e+000],
>        [  9.79232384e+008,   9.00000000e+000],
>        [  9.79318784e+008,   8.00000000e+000]],'f')
> [Dbg]>>> Numeric.asarray(m, Numeric.Float)[:10]
> array([[  9.78109200e+008,   7.00000000e+000],
>        [  9.78454800e+008,   7.00000000e+000],
The problem is with the version of Numeric you are using. I can replicate this
problem with Numeric 24.0 but not with 24.2.
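For what it's worth, the cast itself is not lossy in numpy: converting the array to single precision inside numpy keeps the fractional part, which confirms the truncation happened in the old Numeric conversion path rather than in the data. A minimal sketch (with made-up values mirroring the dump above):

```python
import numpy as np

# Hypothetical data shaped like the array in the post:
# column 0 is a timestamp, column 1 is a measurement.
m = np.array([[9.78109200e8, 7.44],
              [9.78454800e8, 7.44]])

# Casting within numpy preserves the fractional part.
m32 = m.astype(np.float32)
print(m32[0, 1])  # ~7.44, not 7.0
```

So upgrading Numeric (or doing the dtype conversion on the numpy side before handing the array off) avoids the truncation.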
> and why and what is:
>
> [Dbg]>>> m[0,1]
> 7.44
> [Dbg]>>> type(_)
> <type 'numpy.float64'>
> [Dbg]>>>
It is a scalar object. numpy supports more number types than Python does so the
scalar results of indexing operations need representations beyond the standard
int, float, complex types. These scalar objects also support the array
interface, so it's easier to write generic code that may operate on arrays or
scalars. Their existence also resolves the long-standing problem of maintaining
the precision of arrays even when performing operations with scalars. In
Numeric, adding the scalar 2.0 to a single precision array would return a
double-precision array. Worse, if a and b are single precision arrays, (a+b[0])
would give a double-precision result because b[0] would have to be represented
as a standard Python float.
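A small sketch of that precision-preserving behavior, using arbitrary example values:

```python
import numpy as np

a = np.array([1.5, 2.5], dtype=np.float32)
b = np.array([0.5, 0.5], dtype=np.float32)

# Adding a Python float scalar does not upcast the array to double.
print((a + 2.0).dtype)   # float32

# Indexing returns a numpy scalar that remembers its precision,
# so a + b[0] also stays single precision.
print(type(b[0]))        # <class 'numpy.float32'>
print((a + b[0]).dtype)  # float32
```

Under Numeric, both expressions would have come back as double precision.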
The _Guide to NumPy_ has a discussion of these in Chapter 2, part of the sample
chapters:
http://numpy.scipy.org/numpybooksample.pdf
> does this also slow down python math computations?
If you do a whole lot of computations with scalar values coming out of arrays,
then yes, somewhat. You can forestall that by casting to Python floats or ints
if that is causing problems for you.
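For example, pulling the value out as a plain Python float before a scalar-heavy loop (values here are illustrative):

```python
import numpy as np

m = np.array([[9.78109200e8, 7.44]])

x = m[0, 1]         # numpy.float64 scalar object
y = float(m[0, 1])  # plain Python float, cheaper in tight scalar loops
print(type(x))      # <class 'numpy.float64'>
print(type(y))      # <class 'float'>
```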
> should one better stay away from numpy in current stage of numpy development?
> I remember, with numarray there were no such problems.
Not really, no.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco