[Numpy-discussion] Numpy float precision vs Python list float issue

josef.pktd at gmail.com
Mon Apr 20 13:34:56 EDT 2009


On Mon, Apr 20, 2009 at 12:04 PM, David Cournapeau <cournape at gmail.com> wrote:

> On Tue, Apr 21, 2009 at 12:49 AM, Rob Clewley <rob.clewley at gmail.com>
> wrote:
> > On Mon, Apr 20, 2009 at 10:48 AM, David Cournapeau
> > <david at ar.media.kyoto-u.ac.jp> wrote:
> >> Rob Clewley wrote:
> >>> David,
> >>>
> >>> I'm confused about your reply. I don't think Ruben was only asking
> >>> why you'd ever get non-zero error after the forward and inverse
> >>> transform, but rather why his implementation using lists gives zero
> >>> error while the array version gives errors on the order of 1e-15.
> >>>
> >>
> >> That's more likely just an accident. Forward + inverse = id is the
> >> surprising thing, actually. In any numerical package, if you do
> >> ifft(fft(a)), you will not recover a exactly for any non-trivial size.
> >> For example, with floating point numbers, the order in which you do
> >> operations matters, so:
> > <SNIP ARITHMETIC>
> >> will give you different values for d and c, even though, on paper,
> >> they are exactly the same. For those reasons, it is virtually
> >> impossible to get exactly the same values from two different
> >> implementations of the same algorithm. As long as the difference is
> >> small (and a reconstruction error in the 1e-15 range most likely
> >> qualifies), it should not matter.
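
(A quick interjection to illustrate the order-of-operations point -- this
is my own toy example, not the snipped arithmetic:

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001 (in Python 3)
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False

so two mathematically identical expressions need not give bit-identical
results.)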
> >
> > I understand the numerical mathematics behind this very well but my
> > point is that his two algorithms appear to be identical (same
> > operations, same order), he simply uses lists in one and arrays in the
> > other. It's not like he used vectorization or other array-related
> > operations - he uses for loops in both cases. Of course I agree that
> > 1e-15 error should be acceptable, but that's not the point. I think
> > there is legitimate curiosity in wondering why there is any difference
> > between using the two data types in exactly the same algorithm.
>
> Yes, it is legitimate and healthy to worry about the difference - but
> the surprising thing really is the list behavior, when you are used to
> numerical computation :) And I maintain that the algorithms are not
> the same in the two cases. For one, the data themselves are not the
> same in the two versions: you can see right away that m and ml differ,
> e.g.
>
> print ml - morig
>
> shows that the internal representation is not exactly the same.
>
>
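(David's ifft(fft(a)) point is easy to check directly; a minimal sketch,
with the input chosen by me:

import numpy as np

a = np.random.rand(8)
b = np.fft.ifft(np.fft.fft(a)).real   # round trip back to the reals
print(np.abs(b - a).max())            # typically ~1e-16, usually not 0.0

so even numpy's own fft/ifft pair does not reproduce the input exactly.)
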
I think you are copying your result into your original list: morigl = ml[:]
only makes a shallow copy of the outer list, so the inner row lists are
still shared between ml and morigl.

Instead of
morigl = ml[:]

use:

from copy import deepcopy
morigl = deepcopy(ml)
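
As a minimal sketch of the difference (my toy data, not your matrix):

from copy import deepcopy

ml = [[1.0, 2.0], [3.0, 4.0]]
shallow = ml[:]          # new outer list, but the same inner row objects
deep = deepcopy(ml)      # new outer list and new inner rows

ml[0][0] = 99.0          # in-place modification, as your transform does
print(shallow[0][0])     # 99.0 -- the shallow copy sees the change
print(deep[0][0])        # 1.0  -- the deep copy keeps the original value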


this is morigl after running your script (with the shallow copy): the
"original" data has been overwritten with the reconstructed values, which
is why the comparison reports zero error

>>> morigl
[[1.0000000000000002, 5.0, 6.0, 1.9999999999999998, 9.0, 7.0, 1.0, 4.0],
[3.0, 6.0, 2.0, 6.9999999999999982, 4.0, 8.0, 5.0, 9.0], [8.0, 2.0, 9.0,
0.99999999999999989, 3.0000000000000004, 4.0000000000000009, 1.0,
4.9999999999999991], [3.9999999999999964, 1.0, 5.0, 7.9999999999999991, 3.0,
1.0000000000000036, 4.0, 6.9999999999999991], [5.9999999999999982, 2.0,
0.99999999999999989, 2.9999999999999996, 8.0, 2.0, 4.0, 3.0],
[7.9999999999999973, 5.0, 9.0, 5.0, 4.0, 2.0000000000000009,
1.0000000000000018, 5.0], [4.0, 8.0, 5.0, 9.0, 5.9999999999999991, 3.0, 2.0,
7.0000000000000009], [5.0, 6.0, 5.0, 1.0, 7.9999999999999964, 2.0, 9.0,
3.0000000000000036]]

Josef