[Numpy-discussion] The risks of empty()

Tim Hochberg tim.hochberg at ieee.org
Tue Jan 2 19:08:05 EST 2007

Bock, Oliver BGI SYD wrote:
> Some of my arrays are not fully populated.  (I separately record which
> entries are valid.)  I want to use numpy.empty() to speed up the
> creation of these arrays, but I'm worried about what will happen if I
> apply operations to the entire contents of these arrays.  E.g.
> a + b
> I care about the results where valid entries align, but not otherwise.
> Given that numpy.empty() creates an ndarray using whatever junk it finds
> on the heap, it seems to me that there is the possibility that this
> could include bit patterns that are not valid floating point
> representations, which might raise floating point exceptions if used in
> operations like the one above (if they are "signalling" NaNs).  Will
> this be a problem, or will the results of operations on invalid floating
> point numbers yield NaN?
This depends on what the error state is set to. You can set it to ignore 
floating point errors, in which case this will almost certainly work.
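A minimal sketch of what that looks like (the array contents and sizes here are illustrative):

```python
import numpy as np

# Uninitialized arrays: contents are whatever junk was on the heap.
a = np.empty(5)
b = np.empty(5)

# Temporarily ignore all floating point error conditions, then restore
# the previous error state afterwards.
old = np.seterr(all="ignore")
try:
    c = a + b          # no floating point error is raised
finally:
    np.seterr(**old)   # restore the prior settings

print(c.shape)
```

numpy also provides the `errstate` context manager, which scopes the same change to a `with` block and restores the old state automatically.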

However, why take the chance? Why not just build your arrays on top of 
zeros instead of empty? Most of the ways I can think of for filling in 
a sparse array are slow enough to overwhelm the extra overhead of zeros 
versus empty.
> Or to put it another way: do I need to ensure that array data is
> initialised before using it?
I think that this should work if you set the error state correctly (for 
example, seterr(all="ignore")). However, I don't like shutting down the 
error checking unless absolutely necessary, and overall it just seems 
better to initialize the arrays.
