NumPy optimizes the Python process by letting you explicitly define the element
type of an array, just like C++.
Python lets you work with automatic type conversion... but that slows down the
process, like having extra code check the type of every array element.
I suggest you consult the NumPy reference rather than the Python reference when
using NumPy.
Sincerely Yours,
pujo
On 5/23/06, Ivan Vilata i Balaguer
Pujo Aji wrote:
Use 'f' to tell numpy that the array's element type is float: b = numpy.array([1,2,3,4],'f')
An alternative is to put a dot after each number: b = numpy.array([1., 2., 3., 4.])
Hopefully this solves your problem.
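A minimal runnable sketch of both suggestions (assuming a current numpy, where the 'f' type code means 32-bit float and float literals are inferred as 64-bit):

```python
import numpy as np

# Option 1: the 'f' type code makes the elements 32-bit floats.
b1 = np.array([1, 2, 3, 4], 'f')

# Option 2: float literals make numpy infer a float dtype.
b2 = np.array([1., 2., 3., 4.])

print(b1.dtype, b2.dtype)  # the two options give different widths

# With a float dtype, a negative exponent yields real reciprocals.
print(b1 ** -1)
```

Note that the two spellings are not identical: 'f' gives float32 while the dotted literals give float64, so precision differs slightly.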
You're right, but according to the Python reference docs, an integer base with a negative integer exponent should still return a floating point result, without needing to convert the base to floating point beforehand.
I wonder if the numpy/numarray behaviour follows some implicit policy that operating on integers with integers should always return integers, for return-type predictability, or something like that. Could someone please shed some light on this? Thanks!
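For comparison, plain Python does behave as the reference docs say, while numpy keeps integer-in/integer-out for ``**`` on integer arrays. A small sketch (note that recent numpy releases refuse negative integer powers on integer arrays outright rather than silently returning truncated integers, which may differ from the behaviour discussed in this 2006 thread):

```python
import numpy as np

# Plain Python follows the reference docs: integer operands with a
# negative exponent give a float result.
assert 2 ** -1 == 0.5

b = np.array([1, 2, 3, 4])  # integer dtype

# NumPy keeps the integer-in/integer-out policy for **; recent
# releases raise ValueError for a negative integer exponent on an
# integer array instead of returning integers.
try:
    b ** -1
except ValueError as e:
    print('refused:', e)

# Casting the base to float first gives the expected reciprocals.
print(b.astype(float) ** -1)
```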
Pujo Aji wrote:
On 5/23/06, *Ivan Vilata i Balaguer*
<mailto:ivilata@carabos.com> wrote:
[...]
According to http://docs.python.org/ref/power.html:
    For int and long int operands, the result has the same type as the
    operands (after coercion) unless the second argument is negative; in
    that case, all arguments are converted to float and a float result
    is delivered.
Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` (i.e. a floating point result)? Is this behaviour intentional? (I googled for previous messages on the topic but didn't find any.)
Ivan Vilata i Balaguer
http://www.carabos.com/
Cárabos Coop. V.  "Enjoy Data"