On Sun, Jan 25, 2009 at 4:35 AM, Matthew Brett <matthew.brett@gmail.com> wrote:

Hi,

When converting arrays from float to int, I notice that NaN, Inf, and -Inf all get the minimum integer value:

>>> flts = np.array([np.nan, np.inf, -np.inf])
>>> flts.astype(np.int16)
array([-32768, -32768, -32768], dtype=int16)

However, setting NaNs into integer arrays gives a value of 0:

>>> ints = np.array([1])
>>> ints.dtype
dtype('int32')
>>> ints[0] = np.nan
>>> ints
array([0])

whereas Inf or -Inf raise an error (as Josef pointed out recently):

>>> ints[0] = np.inf
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
OverflowError: cannot convert float infinity to long

>>> ints[0] = -np.inf
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
OverflowError: cannot convert float infinity to long

Matlab seems more consistent and sensible here:

>> flts = [NaN Inf -Inf];
>> int32(flts)

ans =

           0  2147483647 -2147483648

>> ints = int32([1 1 1]);
>> ints(:) = [NaN Inf -Inf]

ints =

           0  2147483647 -2147483648

Is there a route to change towards the Matlab behavior? Or at least to make numpy's behavior self-consistent?

Best,

Matthew

As we discussed in another thread, I think the silent conversion of nan to zero should not be done, since it is too great a source of errors. Users should be forced to set nans to a valid number explicitly.

Josef
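For what it's worth, a minimal sketch of the explicit handling suggested above, which also reproduces the Matlab-style saturating conversion from the examples earlier in the thread (the choice of 0 as the NaN replacement mirrors Matlab; the use of np.where and np.clip is just one way to do it):

```python
import numpy as np

flts = np.array([np.nan, np.inf, -np.inf])

# Replace NaNs with an explicit, user-chosen value (here 0, as Matlab does)
safe = np.where(np.isnan(flts), 0, flts)

# Saturate infinities to the target dtype's range before casting
info = np.iinfo(np.int16)
safe = np.clip(safe, info.min, info.max)

ints = safe.astype(np.int16)
# ints is now [0, 32767, -32768], matching the Matlab semantics
```

This keeps the cast well-defined on every platform, rather than relying on whatever the C float-to-int conversion happens to do for NaN and Inf.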