Should numpy.sqrt(-1) return 1j rather than nan?

Tim Hochberg tim.hochberg at ieee.org
Thu Oct 12 10:25:46 EDT 2006


Mark Bakker wrote:
> My vote is for consistency in numpy.
> But it is unclear what consistency is.
>
> What is truly confusing for newbie Python users (and a source for 
> error even after 5 years of Python programming) is that
>
> >>> 2/3
> 0
I recommend that you slap "from __future__ import division" into site.py 
or the top of your program:

    from __future__ import division

    import numpy
    print 3/2            # true division now: 1.5, not 0
    a = numpy.arange(3)
    print a / (a+5)      # element-wise true division
    print a // a+5       # // binds tighter than +: parses as (a // a) + 5

which prints:

    1.5
    [ 0.          0.16666667  0.28571429]
    [5 6 6]


>
> In that respect, I would think
>
> >>> numpy.sqrt(2)
>
> should give 1, but it gives 1.4142135623730951
Is there any practical reason to return 1? If not, isn't this argument 
sort of silly?

> So numpy does typechecking anyway (it gets an integer and makes it a 
> float).
>
> If that is the consistent behavior, then by all means
>
> >>> sqrt(-1)
>
> should return 1j.
Well, it could also return -1j or, as someone mentioned, one of six (I 
think) different quaternion values. It all depends on what the 
domain/range of the problem is.

> Wouldn't that be the consistent thing to do????

No, there's a difference. In order to do the former, all that is 
required is that sqrt switch on the *type* of its argument: sqrt 
returns a float for integer and floating-point args and a complex for 
complex args. In order to do the latter, sqrt needs to switch on the 
*value* of its argument, which is an entirely different beast both in 
theory and in practice. In particular:

    sqrt(some_big_array)

would have to scan all the values in the array to determine whether any 
were negative before deciding whether to return a float array or a 
complex one. It also means that the memory usage is unpredictable -- the 
returned array doubles in size if any values are negative.
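
For what it's worth, numpy already carries a value-switching variant in 
numpy.lib.scimath (treat that module path as an assumption if your 
build differs). A minimal sketch contrasting the two behaviors:

    import numpy
    from numpy.lib import scimath

    a = numpy.array([1.0, 4.0, -9.0])

    # Type-based switch: float in, float out; the negative entry
    # comes back as nan (or trips the error mode, if tightened).
    print numpy.sqrt(a)        # [  1.   2.  nan]

    # Value-based switch: scimath.sqrt inspects the values and
    # promotes the whole result to complex if any are negative.
    print scimath.sqrt(a)      # [ 1.+0.j  2.+0.j  0.+3.j]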

In addition to the significant slowdown that value scanning introduces, 
the former approach (keeping the input and output domains the same) is 
somewhat more robust against error, particularly if one tightens up the 
error mode. I would guess that people are working in the complex plane 
at most half the time; when they aren't, a negative square root signals 
a problem, and promoting to complex is at best a nuisance and may 
result in a painful-to-track-down error. When they are, it's easy 
enough for me to toss in an astype in situations where I'm mixing 
domains and need complex outputs.
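
A minimal sketch of that astype escape hatch (the array values are just 
for illustration):

    import numpy

    a = numpy.array([1, 2, 3, -1])

    # Promote to complex up front; sqrt then switches on the
    # (complex) type and never has to inspect the values.
    print numpy.sqrt(a.astype(complex))
    # [ 1.+0.j  1.41421356+0.j  1.73205081+0.j  0.+1.j]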

Ideally numpy and scipy would have chosen different names for forgiving 
and strict powers and square roots (square_root and power versus sqrt 
and pow, for example), but it's probably too late to do anything about 
that now.

Since it sounds like Travis is going to tighten up the default error 
mode, I think that this is a non-issue. No one's going to run into NaNs 
unexpectedly, and the error when doing sqrt([1,2,3,-1]) should be 
confusing at most once.
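
To make that concrete, here is a sketch of one tightened error mode 
using numpy.seterr (whatever default Travis actually settles on is 
still an open question):

    import numpy

    # Turn invalid floating-point operations into exceptions
    # instead of quiet nans.
    numpy.seterr(invalid='raise')

    try:
        print numpy.sqrt(numpy.array([1.0, 2.0, 3.0, -1.0]))
    except FloatingPointError, e:
        print 'sqrt of a negative float raises:', e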

-tim


