[Numpy-discussion] setting decimal accuracy in array operations (scikits.timeseries)
marcotuckner at public-files.de
Wed Mar 3 17:23:59 EST 2010
Thanks to all who answered.
This is really helpful!
>> If you are still seeing actual calculation differences, we will
>> need to see a complete, self-contained example that demonstrates
>> the difference.
> To add a bit more detail -- unless you are explicitly specifying
> single precision floats (dtype=float32), then both numpy and excel
> are using doubles -- so that's not the source of the differences.
> Even if you are using single precision in numpy, It's pretty rare for
> that to make a significant difference. Something else is going on.
> I suspect a different algorithm, you can tell timeseries.convert how
> you want it to interpolate -- who knows what excel is doing.
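To put a rough number on the single-vs-double point above, here is a minimal sketch (plain NumPy, not tied to the original timeseries data) of how large a float32-vs-float64 gap typically is for a value with a repeating expansion such as 110/9:

```python
import numpy as np

# Illustrative value only (not the poster's data): 110/9 = 12.2222...
x64 = np.float64(110) / np.float64(9)
x32 = np.float32(110) / np.float32(9)

print(x64)  # good to ~16 significant decimal digits
print(x32)  # good to ~7 significant decimal digits
print(abs(float(x64) - float(x32)))  # tiny, but nonzero
```

The gap is on the order of one part in 10^7, far below anything a reasonable aggregation should care about, which supports the point that something other than precision is going on.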
I checked the values row by row comparing Excel against the Python results.
The values of both programs match perfectly at the data points where
no periodic sequence occurs: where the aggregated value is a
terminating decimal (e.g. 12.04), the results were the same.
At the points where the result was a periodic sequence (e.g.
12.222222...), the described difference could be observed.
I will try to create a self-contained example tomorrow.
Thanks a lot and kind regards,