Hi,
We noticed that comparing arrays of different shapes with allclose
no longer works in numpy 1.6.2.
Is this a feature or a bug? :)
See the output in both 1.6.1 and 1.6.2 at the end of this mail.
Best regards,
Martin
1.6.1::
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '1.6.1'
In [3]: a = np.array([1, 2, 3])
In [4]: b = np.array([1, 2, 3, 4])
In [5]: np.allclose(a, b)
Out[5]: False
1.6.2::
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '1.6.2'
In [3]: a = np.array([1, 2, 3])
In [4]: b = np.array([1, 2, 3, 4])
In [5]: np.allclose(a, b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/maarten/pytroll/local/lib/python2.7/site-packages/numpy-1.6.2-py2.7-linux-x86_64.egg/numpy/core/numeric.py", line 1936, in allclose
    return all(less_equal(abs(x-y), atol + rtol * abs(y)))
ValueError: operands could not be broadcast together with shapes (3) (4)
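For anyone who wants the 1.6.1-style answer in the meantime, a small wrapper can pre-check broadcastability (a sketch; "allclose_compat" is a made-up name, not a NumPy function):

```python
import numpy as np

# Restore the 1.6.1-style behavior: arrays whose shapes cannot
# broadcast simply compare as not-close instead of raising.
def allclose_compat(a, b, rtol=1e-5, atol=1e-8):
    a, b = np.asarray(a), np.asarray(b)
    try:
        np.broadcast(a, b)   # raises ValueError on incompatible shapes
    except ValueError:
        return False
    return np.allclose(a, b, rtol=rtol, atol=atol)

print(allclose_compat([1, 2, 3], [1, 2, 3, 4]))  # False
print(allclose_compat([1, 2, 3], [1, 2, 3]))     # True
```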

Trying to install numpy 1.6.2 on Mac OS X 10.7.4 from this .dmg
9323135 numpy-1.6.2-py2.7-python.org-macosx10.3.dmg
gives
"numpy 1.6.2 can't be installed on this disk.
numpy requires python.org Python 2.7 to install."
But Python 2.7.3 *is* installed from python.org
18761950 python-2.7.3-macosx10.6.dmg
and /usr/bin/python is linked as described in
http://wolfpaulus.com/journal/mac/installing_python_osx
python -c 'import sys; print sys.version'
2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
How does the code in the .dmg check for python.org Python?
Thanks,
cheers
-- denis

I understand the historical reason for the "Truth value of an array" error,
avoiding the pitfalls of:
>>> a = np.arange(10)
>>> if a==0: # this is ambiguous, use any() or all()
I also understand the issues with logical and/or:
>>> if (a<10 and a > 5): # this will break due to "and" vs "&"
However, the main point in this thread from 3 years ago
<http://thread.gmane.org/gmane.comp.python.numeric.general/32653> is
very valid. If I write code that uses lists and then convert it to an
array for efficiency or more powerful computation, I have my own pitfall
trying to do:
>>> if a: # why doesn't this just check for size?
My Proposal
------------------
It seems to me that an elegant solution to this dilemma is to separate
the behavior of ndarrays of dtype bool from all other ndarrays. Keep the
current behavior for bool ndarrays, but let __nonzero__ for all other
ndarrays be based on size.
>>> if a==0: # still raises Truth error because it's of dtype bool
>>> if (a<10 and a>5): # still raises Truth error because it's of dtype bool
>>> if a: # works fine because dtype is int64
This solution avoids all the primary pitfalls of ambiguity where someone
would need any() or all() because they're working with bools at the element
level. But for cases where a function may return data or None, I really
like to use the normal Python truth test for that instead of:
>>> if a is not None and len(a) > 0: # that was a chore to find out
The only problem I see with this solution is with the case of the
single-element array.
>>> s = np.array([[0]])
>>> if s: # works today, returns False
With my proposal,
>>> if s: # still works, but returns True because array is not empty
It's a wart to be sure, but it would make ndarrays much easier to work with
when converting from standard Python containers. Maybe we need something
like this (probably not possible):
>>> from numpy.__future__ import truthiness
I've especially found this Truth error a challenge when converting from
dictionaries of lists to pandas DataFrames. It raises the same error,
tracing back to this ambiguity in ndarrays. If it's too big a change to
make for ndarrays, maybe the same proposal could be implemented in pandas.
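The proposed rules can be sketched in a few lines as an ndarray subclass ("TruthyArray" is a hypothetical name, purely for illustration): bool arrays keep today's ambiguity error, while every other dtype is truthy iff the array is non-empty, like a Python container.

```python
import numpy as np

class TruthyArray(np.ndarray):
    # bool dtype: keep the current ambiguity error
    # any other dtype: truthy iff non-empty, like list/dict/str
    def __bool__(self):
        if self.dtype == np.bool_:
            raise ValueError("The truth value of a boolean array is "
                             "ambiguous. Use a.any() or a.all()")
        return self.size > 0
    __nonzero__ = __bool__  # Python 2 spelling

a = np.arange(10).view(TruthyArray)
print(bool(a))        # True: non-empty, dtype int
print(bool(a[:0]))    # False: empty
# (a == 0) has dtype bool, so bool(a == 0) still raises ValueError
```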
Jim

I have a file with thousands of lines like this:
Signal was returned in 204 microseconds
Signal was returned in 184 microseconds
Signal was returned in 199 microseconds
Signal was returned in 4274 microseconds
Signal was returned in 202 microseconds
Signal was returned in 189 microseconds
I try to read it like this:
data = np.loadtxt('dummy.data', dtype={'names':('label','times','musec'), 'fmts':('|S23','i8','|S13')})
It fails, I think, because it expects a string format and a field for each of the words 'Signal', 'was', 'returned', etc.
Can I make it treat the whole string before the number as one field? All I really care about are the numbers anyway.
Any advice appreciated.
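One possible approach (a sketch, assuming every line has the fixed five-word prefix shown above): let loadtxt split on whitespace as usual, and select only the numeric column with usecols.

```python
import io
import numpy as np

# Stand-in for the file, in the format described above.
sample = io.StringIO(
    "Signal was returned in 204 microseconds\n"
    "Signal was returned in 4274 microseconds\n")

# loadtxt splits on whitespace, so each word is a column; taking only
# column 4 (0-based) skips the fixed words and keeps just the numbers.
times = np.loadtxt(sample, usecols=(4,), dtype=int)
print(times)  # [ 204 4274]
```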

I have tried to install the 1.6.2 win32 superpack on my Windows 7 Pro
(64-bit) system, which has ActiveState ActivePython 2.7.2.5 (64-bit)
installed. However, I get an error that Python 2.7 is required and can't
be found in the Registry.
I only need numpy as a prerequisite for another package, and numpy is
the only prerequisite that won't install.
Others are specific to Python 2.7 and they install; some are Win64 and
some are Win32.
Is there a work around for this?
I have no facilities available to build numpy.
Regards,
Jim

I discovered this because scipy.optimize.fmin_powell appears to squeeze 1d
argmin results to 0d, unlike the other optimizers, but that's a different
story. I would expect the 0d array to behave like the 1d array, not the 2d
one, as it does below. Thoughts? Maybe it's too big a pain to change this
behavior if it's indeed not desired, but I found it unexpected.
In [255]: np.version.full_version  # same on 1.5.1
Out[255]: '1.8.0.dev-8e0a542'
In [262]: arr = np.random.random((25,1))
In [263]: np.dot(arr, np.array([1.])).shape
Out[263]: (25,)
In [264]: np.dot(arr, np.array([[1.]])).shape
Out[264]: (25, 1)
In [265]: np.dot(arr, np.array(1.)).shape
Out[265]: (25, 1)
In [271]: np.dot(arr.squeeze(), np.array(1.)).shape
Out[271]: (25,)
Huh? 0d arrays broadcast with dot?
In [279]: arr = np.random.random((25,2))
In [280]: np.dot(arr.squeeze(), np.array(2.)).shape
Out[280]: (25, 2)
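For what it's worth, dot with a 0d array appears to act like plain scalar multiplication, which would explain the preserved shape (a quick check, not an explanation of the design):

```python
import numpy as np

x = np.random.random((25, 2))
d = np.dot(x, np.array(2.))      # 0d second argument
print(d.shape)                   # (25, 2): shape is preserved
print(np.allclose(d, x * 2.))    # True: same result as scalar multiply
```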
Skipper

Hi,
I am trying to update values in a single field of a numpy record array
based on a condition defined in another array. I found that the result
depends on the order in which I apply the boolean index and the field name.
For example:
cond = np.zeros(5, dtype=np.bool)
cond[2:] = True
X = np.rec.fromarrays([np.arange(5)], names='a')
X[cond]['a'] = -1
print X
returns: [(0,) (1,) (2,) (3,) (4,)] (the values were not updated)
X['a'][cond] = -1
print X
returns: [(0,) (1,) (-1,) (-1,) (-1,)] (it worked this time).
I find this behaviour very confusing. Is it expected? Would it be
possible to emit a warning message in the case of "faulty" assignments?
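For reference, the asymmetry above can be reproduced and (I believe) explained in a few lines: boolean/fancy indexing returns a copy, while field access returns a view, so only the second ordering writes through to X.

```python
import numpy as np

cond = np.zeros(5, dtype=bool)
cond[2:] = True
X = np.rec.fromarrays([np.arange(5)], names='a')

X[cond]['a'] = -1      # boolean indexing returns a copy: write is lost
print(X['a'])          # [0 1 2 3 4]

X['a'][cond] = -1      # field access returns a view: write sticks
print(X['a'])          # [ 0  1 -1 -1 -1]
```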
Bartosz

Dear all,
if I have two ndarrays arr1 and arr2 (with the same shape), is there any
difference between
arr = arr1 + arr2
and
arr = np.add(arr1, arr2)?
And if I have more than 2 arrays (arr1, arr2, arr3, arr4, arr5), I
cannot use np.add directly anymore, as it only takes 2 arguments.
What's the best practice to add these arrays? Should I do
arr = arr1 + arr2 + arr3 + arr4 + arr5
or
arr = np.sum(np.array([arr1, arr2, arr3, arr4, arr5]), axis=0)?
I only noticed recently that there are functions like np.add,
np.divide, np.subtract... before, I was always writing arr1/arr2
directly rather than np.divide(arr1, arr2).
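A small sketch comparing the options (functools.reduce chains np.add pairwise; np.sum over a stacked array does the same in a single call):

```python
import numpy as np
from functools import reduce

arrs = [np.arange(3) for _ in range(5)]  # five same-shape arrays

s1 = arrs[0] + arrs[1] + arrs[2] + arrs[3] + arrs[4]   # pairwise "+"
s2 = reduce(np.add, arrs)                              # pairwise np.add
s3 = np.sum(np.array(arrs), axis=0)                    # one stacked sum
print(s1, s2, s3)  # each is [ 0  5 10]
```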
best regards,
Chao
--
***********************************************************************************
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16
************************************************************************************

Hi folks,
years ago, John Hunter and I bought the py4science.{com, org, info}
domains thinking they might be useful. We never did anything with
them, and with his passing I realized I'm not really in the mood to
keep renewing them without a clear goal in mind.
Does anybody here want to do anything with these? They expire on
December 3, 2012. I could just let them lapse, but I figured I'd give a
heads-up: if anybody has a concrete use for them, I'd rather transfer
them than let them go to a domain squatter.
Basically, if you want them, they're yours as of 12/3/12.
Cheers,
f

Hi everyone,
I'm currently having issues installing Numpy 1.6.2 with Python
3.1 and 3.2 using pip in Travis builds - see for example:
https://travis-ci.org/astropy/astropy/jobs/3379866
The build aborts with a cryptic message:
ValueError: underlying buffer has been detached
Has anyone seen this kind of issue before?
Thanks for any help,
Cheers,
Tom