Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> You aren't using bc correctly.
> Oops. I am not a user and was just following your post without reading the manual. I hope the session below fixes pi; and (I think) it makes the same point I tried to make before: a continuity argument renders the general claim you made suspect. (Of course, it's looking like a pretty narrow range of possible benefit as well.)
>> Yes, you probably can construct cases where the % (2*pi) step will ultimately yield an answer closer to what you want. You cannot expect that step to give *reliable* improvements.
>> If you know that you are epsilon from n*2*π (the real number, not the floating-point one), you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not tell you this. (100*pi + epsilon) is not the same thing as (100*π + epsilon).
> Yes, I understand all this. Of course, it is not quite an answer to the question: can '%(2*pi)' offer an advantage in the right circumstances?
Not in any that aren't contrived. Not in any situations where you don't already have enough knowledge to do a better calculation (e.g. calculating sin(epsilon) rather than sin(2*n*pi + epsilon)).
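To put a number on that: the error in the naive form grows with n, because each extra period multiplies the gap between the double pi and the real π. A minimal sketch (plain Python, standard library only; the choices of n and epsilon here are arbitrary):

import math

eps = 1e-5
for n in (1, 10**3, 10**6):
    # 2*n*math.pi only approximates 2*n*(real pi); the gap grows with n.
    naive = math.sin(2 * n * math.pi + eps)
    print(n, naive, abs(naive - math.sin(eps)))

If you know the argument has the form 2*n*pi + epsilon, sin(epsilon) uses that knowledge directly; no after-the-fact reduction of the already-rounded sum can recover it.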
> And the original question was again different: can we learn from such calculations that *some* method might offer an improvement?
No, those calculations make no such revelation. Good implementations of sin() already reduce the argument into a small range around 0 just to make the calculation feasible, and they do so much more accurately than an explicit % (2*pi) step, but they can only work with the information actually given to the function. sin() cannot know that, for example, you generated the inputs by multiplying the double-precision approximation of π by an integer. You can look at the implementation in fdlibm: http://www.netlib.org/fdlibm/s_sin.c
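To see what a careful reduction buys you over a bare % (2*pi), here is a small Python 3 sketch. The exact_reduce() helper is purely illustrative, not fdlibm's algorithm: it uses exact rational arithmetic with a hand-typed 50-digit π as a stand-in for the extra-precision reduction that a good libm performs internally.

import math
from fractions import Fraction

# pi to 50 decimal digits, typed in by hand; plenty for this illustration.
PI = Fraction("3.14159265358979323846264338327950288419716939937511")

def exact_reduce(x):
    # Subtract the nearest multiple of 2*(real pi) from the exact rational
    # value of the double x, then round the remainder back to a double.
    n = round(Fraction(x) / (2 * PI))
    return float(Fraction(x) - 2 * PI * n)

x = 1e10
print(math.sin(x))                          # libm's own internal reduction
print(math.sin(exact_reduce(x)))            # reference: exact reduction, then sin
print(math.sin(math.fmod(x, 2 * math.pi)))  # naive % (2*pi) with the double pi

On a libm with accurate argument reduction, the first two lines should agree to near full precision, while the fmod() line drifts: the double 2*pi is off by about one part in 10^16, and that error gets multiplied by the roughly 1.6e9 periods being removed.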
> bc 1.05
> Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
> This is free software with ABSOLUTELY NO WARRANTY.
> For details type `warranty'.
> scale = 50
> pi = 4*a(1)
> epsilon = 0.00001
> s(100*pi + epsilon)
> .00000999999999983333333333416666666666468253967996
>
> or 9.999999999833333e-006
>
> compared to:
>
> >>> epsilon = 0.00001
> >>> sin(100*pi+epsilon)
> 9.999999976550551e-006
> >>> sin((100*pi+epsilon)%(2*pi))
> 9.9999999887966145e-006
As Sasha noted, that is an artifact of bc's use of decimal rather than binary, and of Python's conversion of the literal "0.00001" into binary.

[scipy]$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
pi = 4*a(1)
epsilon = 1./2.^16
s(100*pi + epsilon)
.00001525878906190788105354014301687863346141309981
s(epsilon)
.00001525878906190788105354014301687863346141310239

[scipy]$ python
Python 2.4.1 (#2, Mar 31 2005, 00:05:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> epsilon = 1./2.**16
>>> sin(100*pi + epsilon)
1.5258789063872268e-05
>>> sin((100*pi + epsilon) % (2*pi))
1.5258789076118735e-05
>>> sin(epsilon)
1.5258789061907882e-05
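The switch from 0.00001 to 1/2^16 is what makes this comparison fair: unlike 0.00001, 1/2^16 is exactly representable in binary, so bc (working in decimal) and numpy (working in binary) are evaluating sin at exactly the same point. You can check representability with a quick sketch using Python's fractions module:

from fractions import Fraction

# The exact rational value that each float literal actually stores:
print(Fraction(0.00001))     # a messy fraction: 1e-5 has no finite binary form
print(Fraction(1. / 2**16))  # exactly 1/65536

With the binary-exact epsilon, the remaining discrepancies come from the sin() implementations and the % (2*pi) step themselves; note that the unreduced sin(100*pi + epsilon) lands closer to bc's answer than the % (2*pi) version does.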
I do recommend reading up more on floating-point arithmetic. A good paper is Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco