
Hi,

On Fri, Jan 18, 2013 at 7:58 PM, Chris Barker - NOAA Federal <chris.barker@noaa.gov> wrote:
On Fri, Jan 18, 2013 at 4:39 AM, Olivier Delalleau <shish@keba.be> wrote:
On Friday, January 18, 2013, Chris Barker - NOAA Federal wrote:
If you look again at the examples in this thread exhibiting surprising / unexpected behavior, you'll notice most of them involve integers. The tricky thing about integers is that downcasting can dramatically change your result. With floats, not so much: you get approximation errors (usually what you want) and the occasional nan / inf creeping in (usually noticeable).
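For concreteness, a minimal illustration of that difference (explicit astype calls, so the behavior is the same under any casting rules):

    import numpy as np

    # Downcasting a float mostly costs precision, a little at a time:
    x = np.float64(1.000000123456789)
    print(x.astype(np.float32))   # ~1.0000001 -- small approximation error

    # Downcasting an int can wrap around and silently corrupt the value:
    y = np.int16(300)
    print(y.astype(np.int8))      # 44 -- nowhere near 300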
Fair enough.
However, my core argument is that people use non-standard (usually smaller) dtypes for a reason, and it should therefore be hard to accidentally up-cast.
This is in contrast with the argument that accidental down-casting can produce incorrect results, and thus that it should be hard to accidentally down-cast -- the same argument applies whether the incorrect results are drastic or not....
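For example, a toy sketch of what an accidental upcast costs (array + array promotion here, which behaves the same across NumPy versions):

    import numpy as np

    a = np.ones(10**6, dtype=np.float32)   # small dtype chosen on purpose
    b = np.ones(1, dtype=np.float64)
    c = a + b                              # array + array rules: result upcast to float64
    print(c.dtype, c.nbytes)               # float64 8000000 -- twice the memory of a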
It's really a question of which of these we think should be prioritized.
After thinking about it for a while, it seems to me that Olivier's suggestion is a good one. The rule becomes the following: array + scalar casting is the same as array + array casting, except that array + scalar casting does not upcast the floating point precision of the array.

Am I right (Chris, Perry?) that this deals with almost all your cases? Meaning that it is upcasting of floats that is the main problem, not upcasting of (u)ints?

This rule seems to me not very far from the current 1.6 behavior; it upcasts more, but the dtype is now predictable. It's easy to explain. It avoids the obvious errors that the 1.6 rules were trying to avoid. It doesn't seem too far a stretch to make a distinction between rules about range (ints) and rules about precision (float, complex).

What do y'all think?

Best,

Matthew
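PS - for illustration only, here is a rough sketch of the proposed rule as a standalone helper. proposed_dtype is a hypothetical name, not a NumPy API, and the float -> complex case is left to ordinary promotion since the proposal doesn't spell it out:

    import numpy as np

    def proposed_dtype(array_dtype, scalar):
        # Hypothetical sketch of the proposed rule: promote exactly as
        # array + array would, except a scalar never widens the floating
        # point precision of the array.
        array_dtype = np.dtype(array_dtype)
        scalar_dtype = np.array(scalar).dtype          # the scalar's natural dtype
        promoted = np.promote_types(array_dtype, scalar_dtype)
        if array_dtype.kind in 'fc' and promoted.kind == array_dtype.kind:
            # Promotion would only widen precision within the same kind,
            # so keep the array's dtype, per the proposed exception.
            return array_dtype
        return promoted

    print(proposed_dtype(np.float32, 3.0))   # float32 -- scalar does not upcast precision
    print(proposed_dtype(np.int8, 1000))     # int64 (on most platforms) -- ints promote as array + array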