[Numpy-discussion] Regression: in-place operations (possibly intentional)
efiring at hawaii.edu
Fri Sep 21 19:51:12 EDT 2012
On 2012/09/21 12:20 PM, Nathaniel Smith wrote:
> On Fri, Sep 21, 2012 at 10:04 PM, Chris Barker <chris.barker at noaa.gov> wrote:
>> On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith <njs at pobox.com> wrote:
>>> You're right of course. What I meant is that
>>> a += b
>>> should produce the same result as
>>> a[...] = a + b
>>> If we change the casting rule for the first one but not the second, though,
>>> then these will produce different results if a is integer and b is float:
>> I certainly agree that we would want that, however, numpy still needs
>> to deal with Python semantics, which means that while (at the numpy
>> level) we can control what "a[...] =" means, and we can control what
>> "a + b" produces, we can't change what "a + b" means depending on the
>> context of the left hand side.
>> That means we need to do the casting at the assignment stage, which I
>> guess is your point -- so:
>> a_int += a_float
>> should do the addition with the "regular" casting rules, then cast to
>> an int after doing that.
>> Not sure about the implementation details.
> Yes, that seems to be what happens.
> In : a = np.arange(3)
> In : a *= 1.5
> In : a
> Out: array([0, 1, 3])
> But still, the question is, can and should we tighten up the
> assignment casting rules to same_kind or similar?
An example of where tighter casting seems undesirable is the case of
functions that return integer values with floating point dtype, such as
rint(). It seems natural to do something like
In : ind = np.empty((3,), dtype=int)
In : rint(np.arange(3, dtype=float) / 3, out=ind)
Out: array([0, 0, 1])
where one is generating integer indices based on some manipulation of
floating point numbers. This works in 1.6 but fails in 1.7.
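If the stricter rule does become the default, a workaround along these
lines should still be possible (a sketch; I'm assuming rint(), being a
ufunc, accepts the casting keyword in 1.7, and the astype() version is
just the obvious alternative):

import numpy as np

x = np.arange(3, dtype=float) / 3
ind = np.empty((3,), dtype=int)

np.rint(x, out=ind, casting='unsafe')   # explicitly request the float->int downcast
# or, avoiding the integer out= array entirely:
ind = np.rint(x).astype(int)            # array([0, 0, 1])

Either form is more verbose than the 1.6 behavior, which is part of why
the tighter casting seems undesirable for this use case.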