Darren Dale wrote:
On Sat, Mar 7, 2009 at 5:18 AM, Robert Kern <robert.kern@gmail.com> wrote:
On Sat, Mar 7, 2009 at 04:10, Stéfan van der Walt <stefan@sun.ac.za> wrote:
2009/3/7 Robert Kern <robert.kern@gmail.com>:
In [5]: z = zeros(3, int)
In [6]: z[1] = 1.5
In [7]: z
Out[7]: array([0, 1, 0])
Blind moment, sorry. So, what is your take -- should this kind of thing pass silently?
Downcasting data is a necessary operation sometimes. We explicitly made a choice a long time ago to allow this.
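To make that choice concrete: the downcast happens silently on assignment, but it can also be requested explicitly with `astype`, which makes the truncation visible at the call site. A minimal sketch (variable names here are illustrative, not from the thread):

```python
import numpy as np

# Implicit downcast on assignment: the fractional part is dropped silently.
z = np.zeros(3, int)
z[1] = 1.5              # stored as 1, no warning or error

# Explicit downcast: astype states the intent in the code itself.
f = np.array([0.0, 1.5, 2.7])
i = f.astype(int)       # truncates toward zero: [0, 1, 2]
```

Both paths discard information the same way; the difference is only whether the reader of the code can see that a cast is taking place.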
In that case, do you know why this raises an exception:
np.int64(10+20j)
Darren
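One data point worth noting: plain Python's builtin `int()` draws the same line between float and complex, truncating the former silently while refusing the latter with a `TypeError`, so the `np.int64(10+20j)` exception mirrors the builtin rather than contradicting it. A small pure-Python illustration:

```python
# float -> int truncates silently, much like assigning 1.5
# into an integer NumPy array.
assert int(1.5) == 1

# complex -> int is refused outright, much like np.int64(10+20j).
raised = False
try:
    int(10 + 20j)
except TypeError:
    raised = True
assert raised
```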
I think that you have a good point, Darren, and that Robert is oversimplifying the situation. NumPy and Python are somewhat out of step. The NumPy approach is stricter and more likely to catch errors than Python. Python tends to be somewhat laissez-faire about numerical errors and the correctness of results. Unfortunately, NumPy seems to be a sort of step-child of Python, tolerated, but not fully accepted. There are a number of people who continue to use Matlab, despite all of its deficiencies, because it can at least be counted on to produce correct answers most of the time. Dr. Phillip M. Feldman -- View this message in context: http://old.nabble.com/Assigning-complex-values-to-a-real-array-tp22383353p26... Sent from the Numpy-discussion mailing list archive at Nabble.com.