On Wed, Dec 9, 2009 at 2:04 AM, Pauli Virtanen <pav@iki.fi> wrote:
On Tue, 2009-12-08 at 22:26 -0800, Dr. Phillip M. Feldman wrote:
Darren Dale wrote:
On Sat, Mar 7, 2009 at 5:18 AM, Robert Kern <robert.kern@gmail.com> wrote:
On Sat, Mar 7, 2009 at 04:10, Stéfan van der Walt <stefan@sun.ac.za> wrote:
2009/3/7 Robert Kern <robert.kern@gmail.com>:
In [5]: z = zeros(3, int)
In [6]: z[1] = 1.5
In [7]: z
Out[7]: array([0, 1, 0])
Blind moment, sorry. So, what is your take -- should this kind of thing pass silently?
Downcasting data is a necessary operation sometimes. We explicitly made a choice a long time ago to allow this.
I'd think that downcasting is different from dropping the imaginary part. Also, I somewhat doubt that there is a large body of correct code relying on the implicit behavior. This kind of assertion should of course be checked experimentally -- make the complex downcast an error, and check a few prominent software packages.
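For illustration, the behaviour under discussion looks roughly like this (as it stood in the NumPy versions current in this thread; exact behaviour and output formatting vary by version):

In [1]: z = zeros(3, float)
In [2]: z[1] = 1.5 + 2.0j
In [3]: z
Out[3]: array([ 0. ,  1.5,  0. ])

The 2.0j is discarded without any warning or error.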
An alternative to an exception would be to make complex numbers with nonzero imaginary parts cast to *nan*. This would, however, likely lead to errors that are difficult to track down.
Another alternative would be to raise an error only if the imaginary part is non-zero. This would require some additional checking in places where none is usually done.
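Concretely, that check would look roughly like the following in user code; this is a hypothetical helper sketching both alternatives, not anything that exists in NumPy itself:

import numpy as np

def set_real(arr, index, value, to_nan=False):
    # Hypothetical helper: assign into a real-valued array without
    # silently discarding a nonzero imaginary part.
    value = np.asarray(value)
    if np.iscomplexobj(value) and np.any(value.imag != 0):
        if to_nan:
            # the "cast to nan" alternative
            value = np.where(value.imag != 0, np.nan, value.real)
        else:
            # the "raise an error" alternative
            raise TypeError("nonzero imaginary part would be discarded")
    arr[index] = np.real(value)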
At least I tend to use .real or real() to take the real part explicitly. In interactive use it is occasionally convenient to have the real part taken "automatically", but sometimes this leads to problems inside Matplotlib.
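For example, the explicit spelling (output formatting varies by NumPy version):

In [1]: w = array([1.0 + 0.0j, 2.0 + 0.5j])
In [2]: w.real
Out[2]: array([ 1.,  2.])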
Nevertheless, I can't really regard dropping the imaginary part as a significant issue. I've sometimes bumped into problems because of it, though, and it would have been nice to catch them earlier. (As an example, scipy.interpolate.interp1d some time ago silently dropped the imaginary part -- not nice.)
It looks like a lot of folks have written or run into buggy code at one point or another because of this behaviour, and finding the problem is a hassle because it doesn't stand out. I think that makes it a candidate for a warning simply because we want to help people write correct code, and that is a significant issue. So +1 for raising a warning in this case. I feel the same about silently casting floats to integers, although that doesn't feel quite as strange because one at least expects the result to be close to the original.

I think the boundaries are

    unsigned discrete <- signed discrete <- real line <- complex plane

The different kinds all have different domains, and crossing the boundaries between kinds should only happen when it is the clear intent of the programmer. Because Python types don't match up 1-1 with NumPy types this can be tricky to enforce, but I think it is worthwhile to keep in mind.

Chuck
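For what it's worth, those boundaries are roughly what np.can_cast reports under its default "safe" casting rule; an illustrative session assuming a reasonably recent NumPy:

In [1]: import numpy as np
In [2]: np.can_cast(np.int32, np.float64), np.can_cast(np.float64, np.int32)
Out[2]: (True, False)
In [3]: np.can_cast(np.float64, np.complex128), np.can_cast(np.complex128, np.float64)
Out[3]: (True, False)
In [4]: np.can_cast(np.int8, np.uint8)
Out[4]: False

Only casts that move "up" the chain are considered safe; moving down is exactly the silent narrowing being discussed.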