On Thu, May 20, 2010 at 12:13 PM, Benjamin Root <ben.root@ou.edu> wrote:
On Thu, May 20, 2010 at 10:30 AM, Ryan May <rmay31@gmail.com> wrote:
On Thu, May 20, 2010 at 9:44 AM, Benjamin Root <ben.root@ou.edu> wrote:
I gave two counterexamples of why.
The examples you gave aren't counterexamples. See below...
On Wed, May 19, 2010 at 7:06 PM, Darren Dale <dsdale24@gmail.com> wrote:
On Wed, May 19, 2010 at 4:19 PM, <josef.pktd@gmail.com> wrote:
On Wed, May 19, 2010 at 4:08 PM, Darren Dale <dsdale24@gmail.com> wrote:
I have a question about creation of numpy arrays from a list of objects, which bears on the Quantities project and also on masked arrays:
>>> import quantities as pq
>>> import numpy as np
>>> a, b = 2*pq.m, 1*pq.s
>>> np.array([a, b])
array([ 2.,  1.])
Why doesn't that create an object array? Similarly:
Consider the use case of a person creating a 1-D numpy array:

>>> np.array([12.0, 1.0])
array([ 12.,   1.])

How is Python supposed to tell the difference between np.array([a, b]) and np.array([12.0, 1.0])?
It can't, and there are plenty of times when one wants to explicitly initialize a small numpy array with a few discrete variables.
What do you mean it can't? 12.0 and 1.0 are floats, a and b are not. While, yes, they can be coerced to floats, this is a *lossy* transformation--it strips away information contained in the class, and IMHO should not be the default behavior. If I want the objects, I can force it:
In [7]: np.array([a, b], dtype=np.object)
Out[7]: array([2.0 m, 1.0 s], dtype=object)
This works fine, but feels ugly since I have to explicitly tell numpy not to do something. It feels to me like it's violating the principle of "in the face of ambiguity, resist the temptation to guess."
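For what it's worth, the lossiness Ryan describes is easy to demonstrate even without quantities installed. Here is a minimal sketch using a hypothetical stand-in class (Tagged is my own illustration, not part of numpy or quantities):

import numpy as np

class Tagged(float):
    # A float subclass carrying a unit label, standing in for a quantity.
    def __new__(cls, value, unit):
        obj = super(Tagged, cls).__new__(cls, value)
        obj.unit = unit
        return obj
    def __repr__(self):
        return "%g %s" % (float(self), self.unit)

a, b = Tagged(2.0, "m"), Tagged(1.0, "s")

coerced = np.array([a, b])             # default behavior: coerced to float64
print(coerced)                         # [ 2.  1.]  (units silently lost)
print(coerced.dtype)                   # float64

kept = np.array([a, b], dtype=object)  # explicit object dtype
print(kept[0].unit)                    # 'm'  (class information preserved)
print(type(kept[0]))                   # <class '__main__.Tagged'>

The information loss is one-way: once the default coercion has run, nothing in the float64 array remembers that a was metres and b was seconds.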
I have thought about this further, and I think I am starting to see your point (from both of you). Here are my thoughts:
As I understand it, numpy.array() (rather, the array_like handling) essentially builds the dimensions of the array by first checking whether the object is iterable, and then whether its contents are also iterable, recursing until it reaches a non-iterable.

Therefore, the question becomes: why is numpy.array() implicitly coercing the non-iterable type into a numeric one? Is there some reason that I am not seeing for the implicit coercion?
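A rough pure-Python sketch of that shape-discovery step (my own illustration; the real logic lives in numpy's C source) might look like:

def discover_shape(obj):
    # Walk nested sequences the way np.array does when inferring shape.
    # Anything non-sequence terminates the recursion and becomes a scalar element.
    if isinstance(obj, (list, tuple)):
        inner = discover_shape(obj[0]) if obj else ()
        return (len(obj),) + inner
    return ()

print(discover_shape([12.0, 1.0]))       # (2,)
print(discover_shape([[1, 2], [3, 4]]))  # (2, 2)
print(discover_shape(5.0))               # (), i.e. a 0-d scalar

Only after the shape is settled does numpy pick a dtype for the scalars it found at the leaves, and that is the step where the coercion happens.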
I think it's because the dtype is numeric (float); otherwise it wouldn't operate on numbers, and the other numerical functions might not work (just a guess):
>>> a = np.array(['2.0', '1.0'], dtype=object)
>>> a
array([2.0, 1.0], dtype=object)
>>> np.sqrt(a)
Traceback (most recent call last):
  File "<pyshell#31>", line 1, in <module>
    np.sqrt(a)
AttributeError: sqrt
>>> np.array([a, a])
array([[2.0, 1.0],
       [2.0, 1.0]], dtype=object)
>>> 2*a
array([2.02.0, 1.01.0], dtype=object)
>>> b = np.array(['2.0', '1.0'])
>>> np.sqrt(b)
NotImplemented
>>> np.array([b, b])
array([['2.0', '1.0'],
       ['2.0', '1.0']],
      dtype='|S3')
>>> 2*b
Traceback (most recent call last):
  File "<pyshell#41>", line 1, in <module>
    2*b
TypeError: unsupported operand type(s) for *: 'int' and 'numpy.ndarray'
Josef
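Josef's AttributeError is a useful clue here: when a ufunc like np.sqrt meets an object array, it falls back to calling a method of the same name on each element, and Python strings have no sqrt method. A minimal sketch with an element type that does supply one (Boxed is a hypothetical class of my own, just for illustration):

import numpy as np

class Boxed(object):
    # A wrapper whose sqrt method the object-dtype ufunc loop can call.
    def __init__(self, value):
        self.value = value
    def sqrt(self):
        return Boxed(self.value ** 0.5)
    def __repr__(self):
        return "Boxed(%g)" % self.value

c = np.array([Boxed(4.0), Boxed(9.0)], dtype=object)
print(np.sqrt(c))  # [Boxed(2) Boxed(3)], sqrt() was called element-wise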
At first glance, I did not see a problem with this behavior, and I have come to expect it (hence my original reply). But now, I am not quite so sure.
Ryan
--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma