[Numpy-discussion] type conversion question

Chris Barker - NOAA Federal chris.barker at noaa.gov
Fri Apr 19 11:12:58 EDT 2013

On Thu, Apr 18, 2013 at 10:04 PM, K.-Michael Aye <kmichael.aye at gmail.com> wrote:
> On 2013-04-19 01:02:59 +0000, Benjamin Root said:
>> So why is there an error in the 2nd case, but no error in the first
>> case? Is there a logic to it?
>> When you change a dtype like that in the first one, you aren't really
>> upcasting anything.  You are changing how numpy interprets the
>> underlying bits.  Because you went from a 32-bit element size to a
>> 64-bit element size, you are actually seeing the double-precision
>> representation of 2 of your original data points together.

I was wondering what would happen if there were not the right number
of points available...

In [225]: a = np.array((2.0, 3.0), dtype=np.float32)

In [226]: a.dtype=np.float64

In [227]: a
Out[227]: array([ 32.00000763])
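(For anyone curious where 32.00000763 comes from: it's the eight bytes of the two float32s read back as one float64. A quick sketch using only the standard library -- the exact value assumes a little-endian platform:)

```python
import struct

# Pack two float32 values back-to-back (little-endian), then
# reinterpret the same 8 bytes as a single float64 -- the same
# thing the dtype reassignment does to the array's buffer.
raw = struct.pack("<2f", 2.0, 3.0)
(combined,) = struct.unpack("<d", raw)
print(combined)  # ~32.00000763 on little-endian machines
```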

OK, but:

In [228]: a = np.array((2.0,), dtype=np.float32)

In [229]: a.dtype=np.float64
ValueError                                Traceback (most recent call last)
<ipython-input-229-0d494747aee1> in <module>()
----> 1 a.dtype=np.float64

ValueError: new type not compatible with array.

So numpy is smart enough not to let you do it -- good thing.
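(Side note: the same reinterpretation can also be spelled with view(), which doesn't mutate the original array and raises the same ValueError when the byte count doesn't divide evenly into the new element size:)

```python
import numpy as np

a = np.array((2.0, 3.0), dtype=np.float32)
b = a.view(np.float64)   # same byte reinterpretation, no copy
print(b)                 # one float64 built from two float32s

short = np.array((2.0,), dtype=np.float32)
try:
    short.view(np.float64)   # 4 bytes can't form an 8-byte element
except ValueError as e:
    print("refused:", e)
```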

Final note -- changing the dtype in place like that is a very powerful
and useful tool, but not one you're likely to need often -- it's really
for things like working with odd binary data and the like.
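(As an illustration of that "odd binary data" use case -- the record layout here is made up, but it's the typical pattern: a blob of bytes with a known fixed structure, reinterpreted through a structured dtype without copying:)

```python
import numpy as np

# Hypothetical record layout: a 4-byte little-endian int id
# followed by a 4-byte float value, repeated.
rec = np.dtype([("id", "<i4"), ("value", "<f4")])

# Simulate a binary blob as it might come from a file or socket.
raw = np.array([(1, 2.5), (2, 7.25)], dtype=rec).tobytes()

# Reinterpret the raw bytes through the structured dtype, no copy.
parsed = np.frombuffer(raw, dtype=rec)
print(parsed["id"])     # [1 2]
print(parsed["value"])  # [2.5 7.25]
```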



Christopher Barker, Ph.D.

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
