Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> What continuity? This is floating-point arithmetic.
>
> Sure, but a continuity argument suggests (in the absence of specific
> floating-point reasons to doubt it) that a better approximation at one
> point will mean better approximations nearby.  E.g.,
>
> >>> epsilon = 0.00001
> >>> sin(100*pi + epsilon)
>
> Compare to the bc result of 9.9999999998333333e-006:
>
> bc 1.05
> Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
> This is free software with ABSOLUTELY NO WARRANTY.
> For details type `warranty'.
> scale = 50
> epsilon = 0.00001
> s(100*pi + epsilon)
> .00000999999999983333333333416666666666468253968254
You aren't using bc correctly:

bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
100*pi
0

bc silently treats an unassigned variable like pi as 0, so your s(100*pi + epsilon) above was really computing s(epsilon).
If you know that you are epsilon away from n*2*π (the real number, not the floating-point one), then you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not give it to you. The floating-point quantity (100*pi + epsilon) is not the same thing as the real number (100*π + epsilon).
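To make that concrete, here is a small sketch in double precision (using the stdlib math module rather than numpy, though the arithmetic is identical) of how the representation error in the floating-point 100*pi contaminates the argument before sin() ever sees it:

```python
import math

epsilon = 1e-5

# The double 100*math.pi differs from the real number 100*pi by roughly
# 1e-14 (about 100 times the representation error of math.pi itself).
# That discrepancy is folded into the argument before sin() is called.
contaminated = math.sin(100 * math.pi + epsilon)

# If you already know you are epsilon away from the real 100*pi, just
# compute sin(epsilon); the argument then carries no reduction error.
direct = math.sin(epsilon)

# Prints the small discrepancy between the two results.
print(contaminated - direct)
```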
FWIW, for the calculation that you did in bc, numpy.sin() gives the same results (up to the last digit):
>>> from numpy import *
>>> sin(0.00001)
You wanted to know whether there is something exploitable to improve the accuracy of numpy.sin(). In general, there is not. However, if you know the difference between your value and an integer multiple of the real number 2*π, then you can do your floating-point calculation on that difference. You will not, in general, get that difference by computing % (2*pi).
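As an illustration of that last point (a double-precision sketch with the stdlib math module, not numpy):

```python
import math

epsilon = 1e-5
x = 100 * math.pi + epsilon

# % reduces x modulo the floating-point number 2*math.pi, not modulo the
# real period 2*pi, and x itself already carries the rounding error of
# 100*math.pi.  The reduced argument is therefore not epsilon, but
# epsilon plus roundoff on the order of 1e-14.
reduced = x % (2 * math.pi)

print(reduced - epsilon)                  # roundoff residue of the reduction
print(math.sin(reduced), math.sin(epsilon))
```

The residue is tiny in absolute terms, but it is enormous compared to the spacing of doubles near 1e-5, which is why the reduction buys no extra accuracy.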