[Numpy-discussion] Release blockers for 1.4.0 ?

David Cournapeau david at ar.media.kyoto-u.ac.jp
Mon Dec 7 23:22:48 EST 2009


josef.pktd at gmail.com wrote:
> On Mon, Dec 7, 2009 at 1:24 PM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
>   
>> On Mon, Dec 7, 2009 at 11:16 AM, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>>     
>>> On Mon, Dec 7, 2009 at 10:31 AM, David Cournapeau <cournape at gmail.com>
>>> wrote:
>>>       
>>>> On Tue, Dec 8, 2009 at 1:48 AM, Charles R Harris
>>>> <charlesr.harris at gmail.com> wrote:
>>>>         
>>>>> On Mon, Dec 7, 2009 at 8:24 AM, David Cournapeau <cournape at gmail.com>
>>>>> wrote:
>>>>>           
>>>>>> Hi,
>>>>>>
>>>>>> There are a few issues which have been found on numpy 1.4.0, which
>>>>>> worry
>>>>>> me:
>>>>>>
>>>>>> # 1317: segfaults for integer division overflow
>>>>>> # 1318: all FPU exceptions ignored by default
>>>>>>
>>>>>> #1318 worries me the most: I think it is a pretty serious regression,
>>>>>> since things like this go unnoticed:
>>>>>>
>>>>>> x = np.array([1, 2, 3, 4]) / 0 # x is an array of 0, no warning
>>>>>> printed
>>>>>>
>>>>>>             
>>>>> Hasn't that always been the case? Unless we have a way to raise
>>>>> exceptions
>>>>> from ufuncs I don't know what else we can do.
>>>>>           
>>>> No, it is a consequence of errors being set to ignored in numpy.ma:
>>>>
>>>>
>>>> http://projects.scipy.org/gitweb?p=numpy;a=blob;f=numpy/ma/core.py;h=f28a5738efa6fb6c4cbf0b3479243b0d7286ae32;hb=master#l107
>>>>
>>>> So the fix is easy - but then it shows many (> 500) invalid values,
>>>> etc... related to wrong fpu handling (most of them are limited to the
>>>> new polynomial code, though).
>>>>
>>>>         
>>> Umm, no. Just four, and easily fixed as I explicitly relied on the
>>> behaviour. After the fix and seterr(all='raise'):
>>>
>>>       
>> To be specific, it was a true divide and I relied on nan being returned. I
>> expect many of the remaining failures are of the same sort.
>>     
>
> if seterr raise also raises when the calculations are done with
> floating point, then it's not really useful.

I think it is out of the question to set the default to raise - at least
that's not what I suggest. The default up to 1.0.4 was to warn, and it
was unintentionally set to ignore starting at 1.1.0.
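To make the regression concrete, here is a sketch (using the modern np.seterr API; behaviour of older releases is as David describes, which I have not verified against 1.0.4 itself) of how restoring the 'warn' default surfaces the silent division:

```python
import numpy as np
import warnings

# Save the current error-handling state so it can be restored afterwards.
old_settings = np.seterr(all='warn')  # the pre-1.1.0 default David refers to

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    np.array([1.0, 2.0, 3.0, 4.0]) / 0.0  # division by zero
# Under 'warn', the bad operation emits a RuntimeWarning instead of
# passing silently as it does under 'ignore'.
warned = any(issubclass(w.category, RuntimeWarning) for w in caught)

np.seterr(**old_settings)  # restore whatever was in effect before
```

With the default flipped to 'ignore', the same expression produces no warning at all, which is exactly the regression being discussed.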

> I think this is more a problem with the silent casting of nan and inf
> to 0 for integers (which I have disliked for a long time), not a
> problem with floating point operations.
>   

I think it depends on the use-cases: I can see why in stats it may be
useful to set them to ignore, but for linear algebra, for example, nan
is almost always a bug in the code somewhere.

Note also that the default can be overridden temporarily - we should
actually have a context manager so that it becomes easy to use safely,
if Python >= 2.6 is an option.
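NumPy's np.errstate is exactly this kind of context manager (I am using today's API to illustrate the idea; whether it was available in the 1.4.0 timeframe is not something this thread establishes): the override applies only inside the with block and the previous settings are restored on exit, even if an exception is raised.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# Temporarily escalate divide-by-zero to an exception, as suggested for
# linear-algebra code where nan almost always indicates a bug.
try:
    with np.errstate(divide='raise'):
        a / 0.0
    raised = False
except FloatingPointError:
    raised = True

# Outside the block the previous behaviour is back; here we explicitly
# ignore, as a stats-oriented caller might, and get inf silently.
with np.errstate(divide='ignore'):
    b = a / 0.0
```

This addresses the use-case split above: each caller scopes the policy it wants without disturbing the process-wide default.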

cheers,

David
