Fwd: Trigonometry in degrees

I agree that a big Python library is closer to the standard Python lib than Matlab. However, helping the transition from Matlab is a great concern in the Python scientific community, because Matlab is used in a lot of engineering classes at university. That's a tough call, hmmm. I'll look at the implementation of scipy.special.sindg and friends to see if/how they have optimisations for exact values.

On Sun, 10 Jun 2018 at 16:44, Stephan Houben <stephanh42@gmail.com> wrote:

2018-06-09 8:18 GMT+02:00 Robert Vanden Eynde <robertvandeneynde@hotmail.com>:

For the naming convention, scipy using sindg (therefore neither sind nor sindeg) will make the sind choice less obvious. However, if Matlab and Julia choose sind, that's a good path to go; Matlab is pretty popular, as others pointed out, with universities giving "free" licences and stuff. In that regard, with scipy wanting to "be a replacement to Matlab in Python and open source", it's interesting that they chose sindg and not the Matlab name sind.

I would suggest that compatibility with a major Python library such as SciPy is more important than compatibility with other programming languages. I would go even further and argue that scipy.special.sindg and its friends cosdg and tandg can serve as the reference implementation for this proposal.

Stephan

For the "d" as suffix, that would mean "d" as "double", like in OpenGL. Well, let's remember that in Python there's only one floating type; it's a double, and it's called float... So Python programmers will not think "sind means it uses a Python float and not a Python float32, the way C99 sinf would". Python programmers would be like "sin takes a float in radians, sind takes a float in degrees or an int, because an int can be converted to a float when there's no overflow".

On Sat, 9 Jun 2018 at 04:09, Wes Turner <wes.turner@gmail.com> wrote:

# Python, NumPy, SymPy, mpmath, sage trigonometric functions
https://en.wikipedia.org/wiki/Trigonometric_functions

## Python math module
https://docs.python.org/3/library/math.html#trigonometric-functions
- degrees(radians): Float degrees
- radians(degrees): Float radians

## NumPy
https://docs.scipy.org/doc/numpy/reference/routines.math.html#trigonometric-...
- degrees(radians): List[float] degrees
- rad2deg(radians): List[float] degrees
- radians(degrees): List[float] radians
- deg2rad(degrees): List[float] radians

https://docs.scipy.org/doc/numpy/reference/generated/numpy.sin.html

## SymPy
http://docs.sympy.org/latest/modules/functions/elementary.html#sympy-functio...
http://docs.sympy.org/latest/modules/functions/elementary.html#trionometric-...
- sympy.mpmath.degrees(radians): Float degrees
- sympy.mpmath.radians(degrees): Float radians
- https://stackoverflow.com/questions/31072815/cosd-and-sind-with-sympy
  - cosd, sind
- https://stackoverflow.com/questions/31072815/cosd-and-sind-with-sympy#commen...

> "Let x, theta, phi, etc. be Symbols representing quantities in radians. Keep a list of these symbols: angles = [x, theta, phi]. Then, at the very end, use y.subs([(angle, angle*pi/180) for angle in angles]) to change the meaning of the symbols to degrees"

## mpmath
http://mpmath.org/doc/current/functions/trigonometric.html
- sympy.mpmath.degrees(radians): Float degrees
- sympy.mpmath.radians(degrees): Float radians

## Sage
https://doc.sagemath.org/html/en/reference/functions/sage/functions/trig.htm...
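For a quick feel of the difference being discussed, here is a small comparison sketch (it assumes SciPy is installed; the printed values depend on the platform libm and SciPy build, so none are claimed here):

import math
from scipy import special  # scipy.special.sindg takes degrees directly

for deg in (30.0, 90.0, 180.0, 1e9):
    # the radians route vs. scipy's degree-based sindg
    print(deg, math.sin(math.radians(deg)), special.sindg(deg))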
On Friday, June 8, 2018, Robert Vanden Eynde <robertvandeneynde@hotmail.com> wrote:

- Thanks for pointing out a language (Julia) that already has a naming convention. Interestingly, they don't have an atan2d function. Choosing the same convention as another language is a big plus.

- Adding trig functions using floats between 0 and 1 is nice; currently one needs to do sin(tau * t), which is not so bad (from math import tau; tau sounds like turn).

- Julia has sinpi for sin(pi*x); one could have sintau(x) for sin(tau*x), or sinturn(x). Grads are in the idea of turns but with more problems; as you guys said, grads are used by no one, but turns are more useful. sin(tau * t) For The Win.

- Even though people mentioned 1/6 not being exact, so that advantage over radians isn't that obvious?

from math import sin, tau
from fractions import Fraction
sin(Fraction(1, 6) * tau)
sindeg(Fraction(1, 6) * 360)

These already work today, by the way.

- As you guys pointed out, using radians implies knowing a little bit about floating point arithmetic and its limitations. Integers are simpler and less error prone. Of course it's useful to know about floats, but in many cases it's not necessary to learn about them right away; young students just want their player in the game to move in a straight line when angle = 90.

- sin(pi/2) == 1 but cos(pi/2) != 0 and sin(3*pi/2) != -1, so sin(pi/2) is kind of an exception.

On Fri, 8 Jun 2018 at 09:11, Steven D'Aprano <steve@pearwood.info> wrote:

On Fri, Jun 08, 2018 at 03:55:34PM +1000, Chris Angelico wrote:
Ha ha, nice pun, but no, the hyperbolic trig functions never take arguments in degrees. Or radians, for that matter. They take "hyperbolic angles", which some electrical engineering textbooks refer to as "hyperbolic radians", but all the maths textbooks I've seen don't call them anything other than a real number. (Or sometimes a complex number.) But for what it's worth, there is a correspondence of a sort between hyperbolic angles and circular angles. The circular angle going from 0 to 45° corresponds to the hyperbolic angle going from 0 to infinity.

https://en.wikipedia.org/wiki/Hyperbolic_angle
https://en.wikipedia.org/wiki/Hyperbolic_function
[1] Apocryphally, alas.
Don't ruin a good story with facts ;-)

-- Steve

On Sun, Jun 10, 2018 at 11:26 AM, Robert Vanden Eynde < robertvandeneynde@hotmail.com> wrote:
not really -- if you are moving from matlab to python, you are going to be using numpy and scipy -- we really don't need to spell similar functionality differently than scipy does. In regard to the "special values", and exact results -- a good math lib should return results that are "exact" in all but maybe the last digit stored. So you could check inputs and outputs with, e.g. math.isclose() to give people the "exact" results. -- and keep it all in floating point.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception
Chris.Barker@noaa.gov
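As a concrete illustration of the isclose() idea (note that abs_tol is needed when comparing against zero, since the default relative tolerance can never match 0.0):

import math

y = math.sin(math.pi)                        # 1.2246467991473532e-16, not 0.0
print(math.isclose(y, 0.0))                  # False: rel_tol alone never matches zero
print(math.isclose(y, 0.0, abs_tol=1e-12))   # True for this choice of abs_tol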

On Sun, Jun 10, 2018 at 10:01:09PM -0700, Chris Barker via Python-ideas wrote:
I wish Uncle Timmy or Mark Dickinson were around to give a definite answer, but in their absence I'll have a go. I'm reasonably sure that's wrong. The problem with trig functions is that they suffer from "the table maker's dilemma", so it is very hard to guarantee a correctly rounded result without going to ludicrous extremes:

http://perso.ens-lyon.fr/jean-michel.muller/Intro-to-TMD.htm

So I think that there's no guarantee given for transcendental functions like sine, cosine etc. But even if there were, using isclose() is the wrong solution. Suppose sin(x) returns some number y, such that isclose(y, 0.0), say. You have no way of knowing whether y is an inaccurate result that ought to be zero, or whether the answer should be non-zero and y is correct. You cannot assume that "y is close to zero, therefore it ought to be zero". It's not just zero; the same applies for any value. That's just moving rounding errors from one input to a slightly different input.

# current situation
sine of x returns y, but the mathematical exact result is exactly z

# suggested "fix"
sine of x ± a tiny bit returns exactly z, but ought to return y

Guessing what sin or cos "ought to" return based on either the inexact input or inexact output is not a good approach. Remember, because π is irrational, we cannot actually call sin or cos on any rational multiple of π. We can only operate on multiples of pi, which is *close to* but not the same as π. That's why it is okay that tan(pi/2) returns a huge number instead of infinity or NAN. That's because the input is ever so slightly smaller than π/2, and a huge number is exactly the behaviour you want when x is ever so slightly smaller than π/2.

-- Steve

This would basically be the reason for a PiMultiple class - you can special-case it. You'd know sin(PiMultiple(0.5)) == 0. You'd know cos(PiMultiple(0.5)) == -1 and tan(PiMultiple(0.5)) == nan. This could let you represent as many angles as possible as multiples of pi, and as long as you're in multiples of pi, you're exact. PiMultiple(Fraction(1, 6)) would be exact and could give the right sin() and cos() behaviour. And because it'd be a numeric type, you could still use it with all other numeric types and add/multiply etc. it. When you add and subtract it with another numeric type, it'd lose its special status, but even with multiplications and divisions you can preserve its specialness. And if it gets weird values, you can always fall back on converting it to a float, therefore never giving worse results. It also gives a reason -against- degrees: if you have PiMultiple or TauMultiple, it's rather easy to give common angles, and students can properly learn radians for angles, as they should (because, let's be honest, they're objectively better measures of angles than degrees, or even *shiver* grads). We SHOULD make it easy to code the exact and right way, and I think a PiMultiple class could help that a lot. That said, it does need a better name.
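For concreteness, a minimal sketch of the idea (the class name, the lookup table, and the module-level sin() wrapper are all illustrative, not a worked-out design):

from fractions import Fraction
import math

class PiMultiple:
    """Sketch: an angle stored as a rational multiple of pi."""
    def __init__(self, coeff):
        self.coeff = Fraction(coeff)
    def __float__(self):
        return float(self.coeff) * math.pi

# The rational multiples of pi whose sine is itself rational.
_EXACT_SIN = {
    Fraction(0): 0.0, Fraction(1, 6): 0.5, Fraction(1, 2): 1.0,
    Fraction(5, 6): 0.5, Fraction(1): 0.0, Fraction(7, 6): -0.5,
    Fraction(3, 2): -1.0, Fraction(11, 6): -0.5,
}

def sin(x):
    if isinstance(x, PiMultiple):
        key = x.coeff % 2               # sine has period 2*pi
        if key in _EXACT_SIN:
            return _EXACT_SIN[key]
        return math.sin(float(x))       # fall back to float, never worse
    return math.sin(x)

print(sin(PiMultiple(Fraction(1, 6))))  # 0.5, exactly
print(sin(PiMultiple(1)))               # 0.0, instead of 1.2246...e-16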

On 11 Jun 2018, at 09:18, Jacco van Dorp <j.van.dorp@deonet.nl> wrote:
What is the real world advantage of such a class? So far I’ve only seen examples where the current behavior is said to be confusing for students. In most cases where I have used math.sin, the angle wasn’t a constant and wasn’t an exact multiple of pi. Ronald

2018-06-11 10:00 GMT+02:00 Ronald Oussoren <ronaldoussoren@mac.com>:
I'm assuming the current math.pi would be converted to PiMultiple(1). When learning, it's rather easy to write and read the following:
from math import sin, pi, asin, cos
It helps clarity and understanding when you're coming to Python from a math background. In universities, half of your values when working with angles are written as some multiple of pi (or some weird fraction involving it). Also, for the common angles we'd gain accuracy (and perhaps we can avoid the infinite series for certain computations; that would be a win). Also, you're countering the "confusing to students" part with your own production-environment experience; those aren't alike. And if this was done, your float-based production code wouldn't get slower or care any other way. What you implement and test with PiMultiple would work perfectly fine with any random float as well, just without the extra advantages.

On Mon, Jun 11, 2018 at 12:08:07PM +0200, Jacco van Dorp wrote:
asin(1)
0.5π        # Currently: 1.5707963267948966
I think that if you expect the stdlib math library to change to symbolic maths for trig functions, you are going to be extremely disappointed. Aside from everything else, this is such a massive backward-compatibility break that it probably wouldn't even have been allowed in Python 3.0, let alone in 3.8.
It helps clarity and understanding when you're coming to python from a math background.
What about the other 95% of Python programmers who don't have a maths background and don't give two hoots about the mathematical elegance of being able to differentiate sin(x) without a multiplicative factor? -- Steve

On Mon, Jun 11, 2018 at 1:00 AM, Ronald Oussoren <ronaldoussoren@mac.com> wrote:
What is the real world advantage of such a class? So far I’ve only seen examples where the current behavior is said to be confusing for students.
EXACTLY!

In [49]: math.sin(math.pi)
Out[49]: 1.2246467991473532e-16

If the difference between 1.2246467991473532e-16 and zero is important to you, you've got bigger issues to deal with, and you'd better have a decent grasp of floating point computation's limitations. This is not that different than using Decimal because it is confusing or aesthetically unpleasing to get something other than 1 (for example) when you add 0.1 up ten times:

In [25]: x = 0.0

In [26]: for i in range(10): x += 0.1

In [27]: x
Out[27]: 0.9999999999999999

But:

In [28]: 1.0 - x
Out[28]: 1.1102230246251565e-16

i.e. x is within one decimal unit in the last place stored by a float of 1.0. Which is to say -- there is no practical difference within the abilities of floating point, and Decimal, while it would present this particular result exactly, isn't any more "accurate" in general (unless you use more precision, which is a result of variable precision, not decimal arithmetic per se).

So -- if there is a nifty way to specify that I want, say, the sin of "exactly pi", then the code could special-case that and return exactly zero. But what if you happen to pass in a value just a tiny bit larger than pi? Then you have a potential discontinuity at pi, because you'd still have to use the regular FP computation for any non-exact multiple of pi. All this means that you will get a very similar result by rounding your outputs to a digit less than full FP precision:

In [46]: math.sin(math.pi)
Out[46]: 1.2246467991473532e-16

In [47]: round(math.sin(math.pi), 15)
Out[47]: 0.0

But anyway: there *may* be a nice use-case for a more "friendly" trig package, particularly in education, but I don't think it would ever belong in the stdlib (though maybe I'd change my mind if it saw really wide use). However, simply adding a few names like sindeg, cosdeg, etc., to save folks from having to type:

math.sin(math.radians(something))

is a fine idea -- and it may make it a bit more clear, when people go looking for the "Sine" function, that they don't want to use degrees with the regular one...

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception
Chris.Barker@noaa.gov

Would sind and cosd make Euler's formula work correctly?

sind(x) + i * sind(x) == math.e ** (i * x)

I suspect that adding these functions is kind of like those cartoons where the boat is springing leaks and the character tries to plug them with their fingers. Floating point is a leaky abstraction. Perhaps you'd prefer an enhancement to the fractions module that provides real (not float) math?

As mentioned, with complex numbers the radians make more sense, and of course cmath.sind(x) + 1j * cmath.sind(x) != cmath.exp(1j * x). However, adding degrees versions for cmath (import cmath) is still useful: cmath.rectd, cmath.phased, cmath.polard, etc.

2018-06-11 19:24 GMT+02:00 Michael Selik <mike@selik.org>:

Would sind and cosd make Euler's formula work correctly?

sind(x) + i * sind(x) == math.e ** (i * x)

I suspect that adding these functions is kind of like those cartoons where the boat is springing leaks and the character tries to plug them with their fingers. Floating point is a leaky abstraction. Perhaps you'd prefer an enhancement to the fractions module that provides real (not float) math?

Whoops, it turns out Euler's formula does work! I expected imprecision, but at least one test matched.

x = 42
cos(x) + 1j * sin(x) == e ** (1j * x)

I suppose that's because it's radians.

On Mon, Jun 11, 2018, 10:24 AM Michael Selik <mike@selik.org> wrote:

2018-06-11 19:33 GMT+02:00 Michael Selik <mike@selik.org>:
I think you will find it holds for any x (except inf, -inf and nan). The boat is less leaky than you think; IEEE floating-point arithmetic goes out of its way to produce exact answers whenever possible. (To great consternation of hardware designers who felt that requiring 1.0*x == x was too expensive.)
I suppose that's because it's radians.
Well, the formula obviously only holds in exact arithmetic if cos and sin are the versions taking radians. Stephan

On 2018-06-11 14:04, Stephan Houben wrote:
In fact, 1.0*x == x is almost all that this test exercises. If I'm looking in the right place, this is the C implementation of a ** b, omitting a few special cases:

vabs = hypot(a.real, a.imag);
len = pow(vabs, b.real);
at = atan2(a.imag, a.real);
phase = at*b.real;
if (b.imag != 0.0) {
    len /= exp(at*b.imag);
    phase += b.imag*log(vabs);
}
r.real = len*cos(phase);
r.imag = len*sin(phase);

This means that (e ** ...) is essentially implemented in terms of the formula above. Indeed, in the special case of e ** (1j * x), we have a.real = e, a.imag = 0.0, b.real = 0.0, and b.imag = x, so concretely the code simplifies to this:

vabs = e
len = 1.0
at = 0.0
phase = 0.0
if (b.imag != 0.0) {
    len = 1.0;
    phase = x;       // requires log(e) == 1.0 and x * 1.0 == x
}
r.real = cos(phase); // requires 1.0 * x == x
r.imag = sin(phase);

Thus, it shouldn't be too surprising that the formula holds :)

Clément.
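Given that, a quick check along the lines of Michael's test (it should print True wherever log(e) == 1.0 and 1.0*x == x hold, per the analysis above; CPython with IEEE-754 doubles assumed):

import math

x = 42.0
lhs = math.cos(x) + 1j * math.sin(x)
rhs = math.e ** (1j * x)
print(lhs == rhs)   # complex pow reduces to cos(x) + 1j*sin(x) here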

On Mon, Jun 11, 2018 at 10:24:42AM -0700, Michael Selik wrote:
Would sind and cosd make Euler's formula work correctly?
sind(x) + i * sind(x) == math.e ** (i * x)
No, using degrees makes Euler's identity *not* work correctly, unless you add in a conversion factor from degrees to radians:

https://math.stackexchange.com/questions/1368049/eulers-identity-in-degrees

Euler's identity works fine in radians:

py> from cmath import exp
py> exp(1j*math.pi)
(-1+1.2246063538223773e-16j)

which is close enough to -1 given the usual rounding issues with floats. (Remember, math.pi is not π, but a number close to it. There is no way to represent the irrational number π in less than an infinite amount of memory without symbolic maths.)

[...]
Perhaps you'd prefer an enhancement to the fractions module that provides real (not float) math?
I should think not. Niven's Theorem tells us that for rational angles between 0° and 90° (that is, angles which can be represented as fractions), there are only THREE for which sine (and cosine) are themselves rational:

https://en.wikipedia.org/wiki/Niven's_theorem

Every value of sin(x) except for those three angles is an irrational number, which means they cannot be represented exactly as fractions or in a finite number of decimal places. What that means is that if we tried to implement real (not float) trigonometric functions on fractions, we'd need symbolic maths capable of returning ever-more complicated expressions involving surds. For example, the exact value of sin(15/2°) involves a triple nested square root:

1/2 sqrt(2 - sqrt(2 + sqrt(3)))

and that's one of the relatively pretty ones. sin(3°) is:

-1/2 (-1)^(29/60) ((-1)^(1/60) - 1) (1 + (-1)^(1/60))

http://www.wolframalpha.com/input/?i=exact+value+of+sine%2815%2F2+degrees%29
http://www.wolframalpha.com/input/?i=exact+value+of+sine%283+degrees%29

This proposal was supposed to *simplify* the trig functions for non-mathematicians, not make them mind-bogglingly complicated.

-- Steve
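To see what the symbolic route actually looks like, a short SymPy sketch (assumes SymPy is installed; whether a given angle auto-evaluates depends on SymPy's built-in table of special angles):

import sympy

print(sympy.sin(sympy.pi / 6))       # 1/2 -- one of Niven's rational cases
print(sympy.sin(5 * sympy.pi / 12))  # sqrt(2)/4 + sqrt(6)/4 (75 degrees)
print(sympy.sin(sympy.pi / 24))      # stays unevaluated: sin(pi/24)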

Steven D'Aprano wrote:
I don't think anyone is going to complain about sin(3°) not being exact, whatever units are being used. This discussion is only about the rational values. I wonder whether another solution would be to provide a set of "newbie math" functions that round their results.
Yes, I know, this just pushes the surprises somewhere else, but so does every solution. -- Greg

On Tue, Jun 12, 2018 at 10:57:18AM +1200, Greg Ewing wrote:
Precisely. They are the values I'm talking about. If you're serious about supporting only the rational values, then the implementation is trivial and we can support both degrees and radians easily.

def sin(angle):  # radians
    if angle == 0:
        return 0
    raise ValueError("sorry, no rational sine for that angle")

def sind(angle):  # degrees
    angle %= 360
    rational_values = {
        0: 0, 30: 0.5, 90: 1, 150: 0.5,
        180: 0, 210: -0.5, 270: -1, 330: -0.5,
        }
    if angle in rational_values:
        return rational_values[angle]
    raise ValueError("sorry, no rational sine for that angle")

Exceedingly simple, and exceedingly useless. Which was my point: supporting only the rational values is pointless, because there are only a handful of them. We either have to support full-blown symbolic results, or we have rational APPROXIMATIONS to the true value. I'm responding to a proposal that explicitly suggested using fractions to do "real (not float) math", which I read as "no approximations". I imagine that Michael thought that by using fractions, we can calculate exact rational results for sine etc. without floating point rounding errors. No we cannot, except for the values above. If "not float" is serious, then with the exception of the above values, *none* of the values will be rational, and we either can't return a value (except for the above) or we have to use symbolic maths. Using fractions is not a magic panacea that lets you calculate exact answers just by swapping floats to fractions.
No, that's really not true of every solution. The initial proposal is fine: a separate set of trig functions that take their arguments in degrees would have no unexpected surprises (only the expected ones). With a decent implementation, i.e. not this one:

# don't do this
def sind(angle):
    return math.sin(math.radians(angle))

and equivalent for cos, we ought to be able to get correctly rounded values for nearly all the "interesting" angles of the circle, without those pesky rounding issues caused by π not being exactly representable as a float. There will still be rounding errors, because we're dealing with numbers like sqrt(2) etc., but with IEEE-754 maths, they'll be correctly rounded.

-- Steve

[Steven D'Aprano]
...
The initial proposal is fine: a separate set of trig functions that take
But that's good enough for almost all real purposes. Indeed, except for argument reduction, it's essentially how scipy's sindg and cosdg _are_ implemented:

https://github.com/scipy/scipy/blob/master/scipy/special/cephes/sindg.c

The rounding errors are still there, but you can do things with some extra bits of precision to absorb them. For example, install `mpmath` and use this:

def sindg(d):
    import math, mpmath
    d = math.fmod(d, 360.0)
    if abs(d) == 180.0:
        return 0.0
    with mpmath.extraprec(12):
        return float(mpmath.sin(mpmath.radians(d)))

Then, e.g.,
Notes:

1. Python's float "%" is unsuitable for argument reduction; e.g.,
>>> -1e-14 % 360.0
360.0
`math.fmod` is suitable, because it's exact:
>>> math.fmod(-1e-14, 360.0)
-1e-14
2. Using a dozen extra bits of precision makes it very likely you'll get the correctly rounded 53-bit result; it will almost certainly (barring bugs in `mpmath`) always be good to less than 1 ULP.

3. Except for ±180. No matter how many bits of float precision (including the number of bits used to approximate pi) are used, converting that to radians can never yield the mathematical `pi`; and sin(pi+x) is approximately equal to -x for tiny |x|; e.g., here with a thousand bits:
So ±180 is special-cased. For cosdg, ±{90, 270} would need to be special-cased for the same reason.

On Tue, Jun 12, 2018 at 4:40 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
https://docs.python.org/3/reference/expressions.html#binary-arithmetic-opera...
https://docs.python.org/3/reference/expressions.html#id17
https://docs.python.org/3/reference/expressions.html#id18

(the latter two being footnotes from the section in the first link)

With real numbers, divmod (and thus the // and % operators) would always return values such that, for div, mod = divmod(x, y):

1) div*y + mod == x
2) sign(mod) == sign(y)
3) 0 <= abs(mod) < abs(y)

But with floats, you can't guarantee all three of these. The divmod function focuses on the first, guaranteeing the fundamental arithmetic equality, but to do so, it sometimes has to bend the third one and return mod == y. There are times when it's better to sacrifice one than the other, and there are other times when it's the other way around. We get the two options.

ChrisA
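Concretely, with the example from earlier in the thread:

import math

x, y = -1e-14, 360.0
div, mod = divmod(x, y)
print(div, mod)         # -1.0 360.0: rule 3 is bent (mod == y) and, with
                        # rounding, rule 1 only holds approximately
print(div * y + mod)    # 0.0, not -1e-14
print(math.fmod(x, y))  # -1e-14: exact, but sign follows x rather than y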

Hi all, I wrote a possible implementation of sindg:

https://gist.github.com/stephanh42/336d54a53b31104b97e46156c7deacdd

This code first reduces the angle to the [0, 90] interval. After doing so, it can be observed that the simple implementation math.sin(math.radians(angle)) produces exact results for 0 and 90, and a result already rounded to nearest for 60. For 30 and 45, this simple implementation is one ulp too low. So I special-case those to return the correct/correctly-rounded value instead. Note that this does not affect monotonicity around those values. So I am still unsure if this belongs in the stdlib, but if so, this is how it could be done.

Stephan

2018-06-12 8:50 GMT+02:00 Chris Angelico <rosuav@gmail.com>:

On Tue, Jun 12, 2018, 00:03 Stephan Houben <stephanh42@gmail.com> wrote:
You observed this on your system, but math.sin uses the platform libm, which might do different things on other people's systems.
Again, monotonicity is preserved on your system, but it might not be on others. It's not clear that this matters, but then it's not clear that any of this matters... -n

What was wrong with my initial implementation with a lookup table? :D

def sind(x):
    if x % 90 == 0:
        return (0, 1, 0, -1)[int(x // 90) % 4]
    else:
        return sin(radians(x))

If you want to support multiples of 30, you can do % 30 and // 30.

On Wed, 13 Jun 2018 at 09:51, Stephan Houben <stephanh42@gmail.com> wrote:

2018-06-13 12:00 GMT+02:00 Robert Vanden Eynde <robertve92@gmail.com>:
I kinda missed it, but now you ask:

1. It's better to reduce the angle while still in degrees, since one of the advantages of degrees is that the reduction can be done exactly. Converting very large angles first to radians and then taking the sine can introduce a large error.
2. I used fmod instead of % on advice in this thread.
3. I also wanted to special-case 30, 45, and 60.
If you want to support multiples of 30, you can do % 30 and // 30.
Sure, but I also wanted to special-case 45. Stephan

2018-06-13 12:08 GMT+02:00 Robert Vanden Eynde <robertve92@gmail.com>:
Then if you also want 45, you could do % 15? :D
Sure, but how the lookup is done in the Python reference code is ultimately not so important, since it will need to be rewritten in C if it is to be included in the math package (math is C-only). And then we'll probably end up with a bunch of if-checks against the common values. Stephan

My first comment is that special-casing values like this can lead to some very undesirable properties when you use the function for numerical analysis. Suddenly your sind is no longer continuous (sind(x) is no longer the limit of sind(x+d) as d goes to 0). As I stated in my initial comment on this, if you are going to create a sind function with the idea that you want 'nice' angles to return 'exact' results, then what you need to do is have the degree-based trig routines do the angle reduction in degrees, and only when you have a small enough angle, either use the radians version on the small angle or directly include an expansion in degrees. Angle reduction would be based on the identities sin(x+y) = sin(x)*cos(y) + cos(x)*sin(y) and cos(x+y) = cos(x)*cos(y) - sin(x)*sin(y). If you want to find sin(z) for an arbitrary value z, you can reduce it to an x+y where x is some multiple of, say, 15 degrees and y is in the range -7.5 to 7.5 degrees. You can have stored exact values of sin/cos of the 15-degree increments (you only really need them between 0 and 90) and then compute the sin and cos of the y value.

On 6/13/18 6:07 AM, Stephan Houben wrote:
-- Richard Damon
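A rough, runnable sketch of the reduction scheme Richard describes (the names and the 15-degree step are illustrative; a careful implementation would store correctly rounded table constants instead of computing them via the radians route, as done here for brevity):

import math

_STEP = 15
# sin/cos at multiples of 15 degrees; computed via radians only for brevity.
_TAB = {k: (math.sin(math.radians(k)), math.cos(math.radians(k)))
        for k in range(0, 360, _STEP)}

def sind_reduced(d):
    # Reduce exactly in degrees, then apply
    # sin(x+y) = sin(x)*cos(y) + cos(x)*sin(y) with |y| <= 7.5 degrees.
    d = math.fmod(d, 360.0)
    if d < 0.0:
        d += 360.0
    k = round(d / _STEP) * _STEP    # nearest 15-degree anchor
    y = math.radians(d - k)         # small residual angle
    sin_k, cos_k = _TAB[k % 360]
    return sin_k * math.cos(y) + cos_k * math.sin(y)

print(sind_reduced(30.0))           # close to 0.5
print(sind_reduced(123.4))          # compare math.sin(math.radians(123.4))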

On Wed, 13 Jun 2018 at 13:12, Richard Damon <Richard@damon-family.org> wrote:
The deviations introduced by the special casing are on the order of one ulp. At that level of detail the sin wasn't continuous to begin with.
Yes that is what my code does. It reduces degrees to [0,90].
This is not how sine functions are calculated. They are calculated by reducing the angle to some interval, then evaluating a polynomial which approximates the true sine within that interval.

Stephan

Stephan Houben wrote:
Yes that is what my code does. It reduces degrees to [0,90].
Only for the special angles, though. Richard's point is that you need to do that for *all* angles to avoid discontinuities with large angles.
That's what he suggested, with the interval being -7.5 to 7.5 degrees. -- Greg

On 6/13/18 7:21 AM, Stephan Houben wrote:
I would say the change isn't one ulp; changing a non-zero number to zero is not one ulp (unless maybe you are on the verge of underflow). It may be one ulp of 'full scale', but we aren't near the full-scale point. It might be the right answer for a less-than-one-ulp change in the INPUT, but if we thought that way we wouldn't have minded the non-zero result in the first place. The fundamental motivation is that for 'nice angles' we want the 'nice result' when possible, but the issue is that most of the 'nice angles' in radians are not representable exactly, so it isn't surprising that we don't get the nice results out. One property that we like to preserve in functional calculation is that the following pseudo-code

xp = x + delta
derivative = (f(xp) - f(x)) / (xp - x)

(or variations where you subtract delta, or work at x+delta and x-delta) should approximate well the derivative of the function (which for sin in radians should be cos), and that this improves as delta gets very small, until we hit the rounding error in the computation of f(x). (Note, I don't divide by delta, but by xp - x, to remove the round-off error in computing xp, which isn't the fault of the function f.) Changing a point because it is the closest to the nice number will cause this calculation to spike due to the single-point perturbation. Yes, this calculation may start to 'blow up' when f(xp) - f(x) is very small compared to f(x) and we start to measure the round-off error in the computation of the function, but near a zero of the function (which, if we are root finding, is common) we can do quite well.
And that is what my method did (as others have said). Virtually all methods of computing sin and cos use the angle-addition formula (even quadrant reduction is using it, for the special case of using one of the angles where sin/cos take values in {-1, 0, 1}). The methods that least use it, that I know of, reduce the angle to a quadrant (or octant) and then select one of a number of expansions good for a limited range, or use table interpolation (but even that sort of uses it, with the approximations sin(x) ~ x and cos(x) ~ 1 for very small x).

-- Richard Damon
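A runnable version of the derivative check Richard describes (a sketch; the test function and step size are only for illustration):

import math

def approx_deriv(f, x, delta=1e-6):
    xp = x + delta
    # Divide by xp - x rather than delta, to remove the rounding error
    # made when forming xp (per Richard's note above).
    return (f(xp) - f(x)) / (xp - x)

print(approx_deriv(math.sin, 1.0))  # ~0.5403, approaching...
print(math.cos(1.0))                # 0.5403023058681398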

[Richard Damon]
Either way, it's necessary to get the effect of working in greater than output precision, if it's desired that the best possible result be returned for cases well beyond just the handful of "nice integer inputs" people happen to be focused on today. So I'll say again that the easiest way to do that is to use `mpmath` to get extra precision directly. The following does that for sindg, cosdg, and tandg.

- There are no special cases. Although tandg(90 + i*180) dies with ZeroDivisionError inside mpmath, and that could/should be fiddled to return an infinity instead.

- Apart from that, all functions appear to give the best possible double-precision result for all representable-as-a-double integer degree inputs (sindg(30), cosdg(-100000), doesn't matter).

- And for all representable inputs of the form `integer + j/32` for j in range(32).

- But not for all of the form `integer + j/64` for j in range(1, 64, 2). A few of those suffer greater than 1/2 ULP error. Setting EXTRAPREC to 16 is enough to repair those - but why bother? ;-)

- Consider the largest representable double less than 90:
The code below gives the best possible tangent:
>>> tandg(x)
4031832051015932.0
Native precision is waaaaay off:
>>> math.tan(math.radians(x))
3530114321217157.5
It's not really the extra precision that saves the code below, but allowing argument reduction to reduce to the range [-pi/4, pi/4] radians, followed by exploiting trigonometric identities. In this case, exploiting that tan(pi/2 + z) = -1/tan(z). Then even native precision is good enough:
>>> -1 / math.tan(math.radians(x - 90))
4031832051015932.0
Here's the code:

import mpmath
from math import fmod

# Return (n, x) such that:
# 1. d degrees is equivalent to x + n*(pi/2) radians.
# 2. x is an mpmath float in [-pi/4, pi/4].
# 3. n is an integer in range(4).
# There is one potential rounding error, when mpmath.radians() is
# used to convert a number of degrees between -45 and 45. This is
# done using the current mpmath precision.
def treduce(d):
    d = fmod(d, 360.0)
    n = round(d / 90.0)
    assert -4 <= n <= 4
    d -= n * 90.0
    assert -45.0 <= d <= 45.0
    return n & 3, mpmath.radians(d)

EXTRAPREC = 14

def sindg(d):
    with mpmath.extraprec(EXTRAPREC):
        n, x = treduce(d)
        if n & 1:
            x = mpmath.cos(x)
        else:
            x = mpmath.sin(x)
        if n >= 2:
            x = -x
        return float(x)

def cosdg(d):
    with mpmath.extraprec(EXTRAPREC):
        n, x = treduce(d)
        if n & 1:
            x = mpmath.sin(x)
        else:
            x = mpmath.cos(x)
        if 1 <= n <= 2:
            x = -x
        return float(x)

def tandg(d):
    with mpmath.extraprec(EXTRAPREC):
        n, x = treduce(d)
        x = mpmath.tan(x)
        if n & 1:
            x = -1.0 / x
        return float(x)
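A couple of sanity checks on the "nice" inputs discussed above (this assumes mpmath is installed; the expected outputs follow from Tim's notes about best-possible double results):

print(sindg(30.0))   # 0.5
print(cosdg(60.0))   # 0.5
print(tandg(45.0))   # 1.0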

I should note that numeric code "that works" is often much subtler than it appears at first glance. So, for educational purposes, I'll point out some of what _wasn't_ said about this crucial function: [Tim]
None of it is obvious. That `fmod` is exact is guaranteed by the relevant standards, but most people who write a libm don't get it right at first. There is no "cheap" way to implement it correctly. It requires getting the effect of doing exact integer division on, potentially, multi-thousand-bit integers. Assuming x > y > 0, a correct implementation of fmod(x, y) ends up in a loop that goes around a number of times roughly equal to log2(x/y), simulating one-bit-at-a-time long division. For example, here's glibc's implementation:

https://github.com/bminor/glibc/blob/master/sysdeps/ieee754/dbl-64/e_fmod.c

Don't expect that to be easy to follow either ;-)

Then how do we know that `d -= n * 90.0` is exact? That's not obvious either. It follows from the "Sterbenz lemma", one version of which: if x and y are non-zero floats of the same sign within a factor of 2 of each other,

1/2 <= x/y <= 2 (mathematically)

then x-y is exactly representable as a float too. This is true regardless of the floating-point base, or of the rounding mode in use. In IEEE-754, it doesn't even need weasel words to exempt underflowing cases. That lemma needs to be applied by cases, for each of the possible values of (the integer) `n`. It gets closest to failing for |n| = 1. For example, if d is a tiny bit larger than 45, n is 1, and then d/90 is (mathematically) very close to 1/2. Which is another thing that needs to be shown: "if d is a tiny bit larger than 45, n is 1". Why? It's certainly true if we were using infinite precision, but we're not. The smallest representable double > 45 is 45 + 2**-47:
Then d/90.0 (the argument to round()) is, with infinite precision,

(45 + 2**-47)/90 = 0.5 + 2**-47/90

1 ULP with respect to 0.5 is 2**-53, so that in turn is equal to

0.5 + 2**-53/(90/64) = 0.5 + (64/90)*2**-53 = 0.5 + 0.71111111111... * 2**-53

Because the tail (0.711...) is greater than 0.5 ULP, it rounds up under nearest-even rounding, to

0.5 + 2**-53
and so Python's round() rounds it up to 1:
>>> round(_)
1
Note that it would _not_ be true if truncating "rounding" were done, so round-nearest is a hidden assumption in the code. Similar analysis needs to be done at values near the boundaries around all possible values of `n`. That `assert -45.0 <= d <= 45.0` can't fall then follows from all of that. In all, a proof that the code is correct is much longer than the code itself. That's typical. Alas, it's also typical that math library sources rarely point out the subtleties.
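The key numeric step in that analysis is easy to check interactively:

>>> x = 45 + 2**-47    # the smallest representable double > 45
>>> x / 90.0
0.5000000000000001
>>> round(x / 90.0)
1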

On Thu, Jun 14, 2018 at 01:44:34PM -0500, Tim Peters wrote:
Thanks Tim! Reading your digressions on the minutiae of floating point maths is certainly an education. It makes algebra and real-valued mathematics seem easy in comparison. I still haven't got over Mark Dickinson's demonstration a few years back that under Decimal floating point, but not binary, it is possible for the ordinary arithmetic average (x+y)/2 to be outside of the range [x, y]:

py> from decimal import getcontext, Decimal
py> getcontext().prec = 3
py> x = Decimal('0.516')
py> y = Decimal('0.518')
py> (x + y) / 2
Decimal('0.515')

-- Steve

[Steven D'Aprano <steve@pearwood.info>]
Thanks Tim!
You're welcome ;-)
Hard to say, really. The problem with floating point is that it's so God-awful lumpy - special cases all over the place. Signaling and quiet NaNs; signed infinities; signed zeroes; normal finites all with the same number of bits, but where the gap between numbers changes abruptly at power-of-2 boundaries; subnormals where the gap remains the same across power-of-2 boundaries, but the number of _bits_ changes abruptly; all "the rules" break down when you get too close to overflow or underflow; four rounding modes to worry about; and a whole pile of technically defined exceptional conditions and related traps & flags.

Ignoring all that, though, it's pretty easy ;-) 754 was dead serious about requiring that results act as if a single rounding is done to the infinitely precise result, and that actually allows great simplification in reasoning. The trend these days appears to be using automated theorem-proving systems to keep track of the mountain of interacting special cases. Those have advanced enough that we may even be on the edge of getting provably-correctly-rounded transcendental functions with reasonable speed. Although it's not clear people will be able to understand the proofs ;-)

I still haven't got over Mark Dickinson's demonstration a few years
Ya, decimal fp doesn't really solve anything except the shallow surprise that decimal fractions generally aren't exactly representable as binary fractions. Which is worth a whole lot for casual users, but doesn't address any of the deep problems (to the contrary, it makes those a bit worse). I like to illustrate the above with 1-digit decimal fp, because it makes it more apparent at once that - unlike as in binary fp - multiplication and division by 2 may _not_ be exact in decimal fp. We can't even average a number "with itself" reliably:
But related things _can_ happen in binary fp too! You have to be near the edge of representable non-zero finites though:

>>> x = y = 1e308
>>> (x + y) / 2
inf

Oops. So rewrite it:

>>> x/2 + y/2
1e+308
Better! But then:
Oops. A math library has to deal with everything "correctly". Believe it or not, this paper "How do you compute the midpoint of an interval?" https://hal.archives-ouvertes.fr/hal-00576641v1/document is solely concerned with computing the average of two IEEE doubles, yet runs to 29(!) pages. Almost everything you try fails for _some_ goofy cases. I personally write it as (x+y)/2 anyway ;-)

On Sat, Jun 16, 2018 at 10:57 PM, Tim Peters <tim.peters@gmail.com> wrote: Ya, decimal fp doesn't really solve anything except the shallow surprise
It's my suspicion that the story is the same with "degree-based" trig :-) Which is why, if you want "nice-looking" results, it seems one could simply accept a decimal digit or so less precision, and use the "regular" FP trig functions, rounding to 14 or so decimal digits. Though if someone really wants to implement trig in native degrees -- more power to 'em. However -- if this is really such a good idea -- wouldn't someone have made a C lib that does it? Or has someone? Anyone looked?

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception
Chris.Barker@noaa.gov

On Mon, Jun 18, 2018, 19:25 Chris Barker via Python-ideas < python-ideas@python.org> wrote:
quite a few in fact, including cos(n*pi) https://www.boost.org/doc/libs/1_52_0/boost/units/cmath.hpp

On 6/18/18 19:23, Chris Barker via Python-ideas wrote:
Certainly! scipy.special uses the functions implemented in the Cephes C library. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

On Mon, Jun 18, 2018 at 07:23:50PM -0700, Chris Barker wrote:
No, there's nothing magical about C. You can do it in pure Python. It isn't as fast of course, but it works well enough. When I get a Round Tuit, I'll pop the code up on PyPI.

-- Steve

[Tim]
[Greg Ewing]
So why doesn't float % use math.fmod?
[Chris Angelico]
positive integer y), and _given that_ sometimes results are fiddled to keep #1 approximately true, and strict inequality in #3 may be sacrificed. For `fmod`, sign(mod) == sign(x) instead.
All mod functions, m(x, y), strive to return a result that's mathematically exactly equal to x - n*y (for some mathematical integer `n` that may not even be representable in the programming language). `fmod()` is exact in that sense, but Python's floating "%" may not be, and no float scheme such that sign(m(x, y)) == sign(y) can be (see the original example at the top: the only mathematical integer `n` such that the mathematical -1e-14 - n*360.0 is exactly representable as a double is n == 0). The most useful mod function for floats _as floats_ would actually satisfy

abs(m(x, y)) <= abs(y) / 2

That can be done exactly too - but then the sign of the result has approximately nothing to do with the signs of the arguments.
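As an aside, IEEE 754's remainder operation is exactly that balanced mod; Python exposes it as math.remainder (added in 3.7):

import math

print(math.remainder(359.0, 360.0))   # -1.0: the nearest multiple of 360 is 360 itself
print(math.remainder(-1e-14, 360.0))  # -1e-14, exact
print(math.fmod(359.0, 360.0))        # 359.0, for comparison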

Sorry for top-posting, but these aren't really opinions for the debate, just information. I haven't seen them mentioned, and none grouped nicely under someone's reply.

Number representation: IEEE-754 doubles cannot represent pi correctly at any bit depth (I mean, obviously, but more seriously).

input pi  = 3.14159265358979323846264338327
output pi = 3.14159265358979311599796346854

The cutoff is the value everyone uses, which is what is in math.pi among other places: 3.141592653589793. Any conversion using this value of pi is already wrong, and the input to the function is already wrong. The function doesn't matter. It's always underestimating if using pi. I'm clarifying this because it's separate from any discussion of functions. More accurate or different functions cannot use a better input pi.

Calculator: http://www.binaryconvert.com/convert_double.html
input pi source: https://www.piday.org/million/

Libraries: Boost implements a cos_pi function; its algorithm will probably be useful to look at. glibc implements cos/sin as a lookup table, which is most likely where any other implementation will end up, as it is a common endpoint. I've seen this on different architectures and libraries. (sincostab.h if you want to google glibc's table.) Maybe there is a hilarious index-lookup off-by-1 there, I didn't look. Tables are the basis for the calculation in many libraries and architectures, so I wanted to point that out to any function designers. A new implementation may come down to calculating the right index locations. For a degree-based implementation, the same algorithm can be used with a different table input that is calibrated to degrees. Likewise for a pi-based implementation, the same but for the pi scale-factor input. Nothing new needs to be designed other than a lookup table calibrated for the "base" of a degree, radian, or pi. It will have the same output-precision issues calculating index values, but at least the input will be cleaner.

This is not me saying what should be done, just giving information that may hopefully be useful.

Small note about Python's math: the Python math library does not implement algorithms; it exposes the C functions. You can see C/C++ has this exact issue performing the calculation there. As that is "the spec," the spec is defined with the error. The math library is technically correct given its stated purpose and result. Technically correct, the best kind of correct. https://www.youtube.com/watch?v=hou0lU8WMgo

On Mon, Jun 11, 2018 at 10:53 PM Tim Peters <tim.peters@gmail.com> wrote:

On Mon, Jun 11, 2018 at 10:24 AM, Michael Selik <mike@selik.org> wrote:
Would sind and cosd make Euler's formula work correctly?
Not trying to pick on you, but this question shows a key misunderstanding: there is nothing inherently more accurate in using degrees rather than radians for trigonometry. It's nice that handy values like "one quarter of a circle" can be exactly represented, but that's really only an aesthetic thing. And every computer math lib I've ever seen uses floating point radians for trig functions, so unless you're really going to implement trig in degrees from scratch, you are going to go to floating point radians (and floating point pi) anyway. Oh, and radians are the more "natural" units (in fact unitless) for math, and the only way that things like the Euler identity work. Which is why computational math libs use them. So there are two orthogonal ideas on the table here:

1) Have trig functions that take degrees, for convenience, for when folks are working in degrees already.

2) Have trig functions that produce exact values (i.e. what is "expected") for the special cases.

It seems the OP is interested in a package that combines both of these -- which is a fine idea as a third-party lib.

Perhaps you'd prefer an enhancement to the fractions module that provides
real (not float) math?

Isn't that exactly what the fractions module does? Or are you suggesting that it be extended with trig functions?

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception
Chris.Barker@noaa.gov

On Mon, Jun 11, 2018, 1:18 PM Chris Barker <chris.barker@noaa.gov> wrote:
That's actually what I was trying to say. Shouldn't have tried to be round-about. Not only did the point get muddled, but I wrote something false as well! Perhaps you'd prefer an enhancement to the fractions module that provides
The latter. However, to Steven's point about irrationals, perhaps this should be an entirely separate module designed to handle various irrationalities accurately. ... Like SymPy.

On Mon, Jun 11, 2018 at 01:18:10PM -0700, Chris Barker via Python-ideas wrote:
Actually there is: using radians, the only "nice" angle that can be represented exactly is 0. With degrees, we can represent a whole lot of "nice" angles exactly. "Nice" is of course subjective, but most of us would recognise that 36° represented as exactly 36.0 is nice, but being *approximately* represented as 0.6283185307179586 is not. Using radians, we have two sources of rounding error:

- π cannot be represented exactly as a float, so we have to use a number pi which is ever-so-slightly off;
- plus the usual round-off error in the algorithm;

while using degrees, we only have the second one (since 180° *can* be represented exactly, as the float 180.0). And with the degrees implementation, we should be able to use correctly rounded roots for many of our "nice" angles.
Well that's the whole point of the discussion.
Oh, and radians are the more "natural" units (in fact unitless) for math,
Degrees are unit-less too. 180° = π radians. That's just a scaling factor difference. Unless you're doing symbolic maths, differentiating or integrating trig functions, or certain geometric formulae which are "neater" in radians than in degrees, there's no real advantage to radians. Degrees are simply a much more practical unit of angle for practical work.
and the only way that things like the Euler identity work.
Its not the *only* way. https://math.stackexchange.com/questions/1368049/eulers-identity-in-degrees
Which is why computational math libs use them.
Actually, more maths libraries than you might guess offer trig functions in degrees, or scaled by pi, e.g: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_lib.html Julia and Matlab provide sind etc, and although I cannot find a reference right now, I seem to recall the latest revision to the IEEE 754 standard suggesting them as optional functions. -- Steve

On Mon, Jun 11, 2018 at 09:18:22AM +0200, Jacco van Dorp wrote:
Not when I went to school it wasn't. sin(π/2) = sin(90°) = 1. Perhaps you meant cos? In any case, the sin function doesn't work that way. Unless we add a special dunder method to defer to, it cannot be expected to magically know how to deal with these "PiMultiple" objects, except by converting them to floats. You don't suddenly get accurate results by waving a magic wand over the float 0.5 and saying "You're a multiple of pi". You still have to code a separate algorithm for this, and that's hard work. (Why do you think the decimal module doesn't support trig functions?) Is every function that takes float arguments now supposed to recognise PiMultiple objects and treat them specially? How do they integrate into the numeric tower and interact with ordinary floats? But let's say you do this:

def sin(arg):
    if isinstance(arg, PiMultiple):
        # does this mean it needs to be a builtin class?
        call internal sinpi(arg) function
    else:
        call regular sin(arg) function

This isn't Java, and not everything needs to be a class. If we go to the trouble of writing separate sinpi() etc. implementations, why hide one of them behind a class (and not even in a proper object-oriented interface) when we can just call the functions directly?

sinpi(1.5)
sin(PiMultiple(1.5))

I know which I'd rather use.
It also gives a reason -against- degrees. if you have PiMultiple or TauMultiple, it's rather easy to give common angles,
What about the uncommon angles? The whole point of this proposal is to make it easy to give angles in degrees without the need to convert to radians, introducing rounding errors over and above those introduced by the trig function itself.
No, they are not objectively better measures of angles. With radians, you have to divide a right angle into an irrational number of radians, one which cannot be expressed in a finite number of decimal places. In a very real sense, it is impossible to measure exactly 1 radian. I know that in practical terms this makes no difference, we can get close enough, but physical measurements are limited to rational numbers. A measurement system based on irrational numbers, especially one as difficult as π, is not objectively better. It's not just because of tradition that nobody uses radians in civil engineering, astronomy, architecture, etc. Radians shine when we're doing pure maths and some branches of physics, but they're a PITA to use in most practical circumstances. E.g., the tip of your little finger at arm's length is close enough to 1°, or 0.017 radian. Who wants to measure angles in multiples of 0.017?

https://www.timeanddate.com/astronomy/measuring-the-sky-by-hand.html

Using radians, these heuristics stink.

-- Steve

On Sun, Jun 10, 2018 at 10:48 PM, Steven D'Aprano <steve@pearwood.info> wrote:
hmm -- I'm no numerical analyst, but I could have sworn I learned (from Kahan himself) that the trig functions could (and were, at least in the HP calculators :-) ) be computed to within one digit of accuracy. He even proved how many digits of pi you'd have to store to do that (though I can't say I understood the proof) -- I think you needed all those digits of pi because the trig functions are defined on the range 0 -- pi/2, and any larger value needs to be mapped to that domain -- if someone asks for sin(1e100), you need to know pretty exactly what x % pi/4 is.

The problem with trig functions is that they suffer from "the
so -- if that's the case, I still think we know that while the last digit may not be the best rounded value to the real one, the second to last digit is correct. And if not, then there's nothing we could do other than implement the math lib :-) But even if they were, using isclose() is the wrong solution. Suppose
no, but you can say "y is as close to zero as I care about". We are already restricted to not knowing the distinction within less than an eps -- so making the "effective eps" a bit larger would result in more esthetically pleasing results.

It's not just zero, the same applies for any value. That's just moving
I don't think that's what it would be -- rather, it would be returning a bit less precision in exchange for more esthetically pleasing results :-) Note that there is no way I would advocate using this for the stdlib trig functions -- only for a purpose library. I'm also suggesting that it would result in equally good results to what is being proposed: using integer degrees, or a pi or tau based units. If you used integer degrees, then you'd have exactly, say pi (180 degrees), but a precision of only pi/180 -- much less than the 15 digits or so you'd get if you rounded the regular floating point results. And if you used tau based units, you'd be back to the same thing -- 0.5 tau could be exact, but what would you do for a bit bigger or smaller than that? use FP :-)
I suppose so, but is there a guarantee that the FP representation of π/2 is a tiny bit less than the exact value, rather than a tiny bit more? Which would result in VERY different answers:

In [10]: math.tan(math.pi / 2.0)
Out[10]: 1.633123935319537e+16

In [11]: math.tan(math.pi / 2.0 + 2e-16)
Out[11]: -6218431163823738.0

(though equally "correct"). Also -- 1.6e+16 is actually pretty darn small compared to FP range. So a library that wants to produce "expected" results may want to do something with that -- something like:

In [119]: def pretty_tan(x):
     ...:     tol = 3e-16
     ...:     diff = x % (pi / 2)
     ...:     if abs(diff) < tol:
     ...:         return float("-Inf")
     ...:     elif (pi / 2) - diff < tol:
     ...:         return float("Inf")
     ...:     return math.tan(x)

In [120]: x = pi / 2 - 5e-16

In [121]: for i in range(10):
     ...:     val = x + i * 1e-16
     ...:     print val, pretty_tan(val)
     ...:
1.57079632679 1.9789379661e+15
1.57079632679 1.9789379661e+15
1.57079632679 inf
1.57079632679 inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -2.61194216074e+15
1.57079632679 -2.61194216074e+15

You'd want to tweak that tolerance value to be as small as possible, and do something to make it more symmetric, but you get the idea. The goal is that if you have an input that is about as close as you can get to pi/2, you get inf or -inf as a result. This does mean you are tossing away a tiny bit of precision -- you could get a "correct" value for those values really close to pi/2, but it would be prettier...

-CHB
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception
Chris.Barker@noaa.gov

On Mon, Jun 11, 2018 at 02:12:06PM -0700, Chris Barker wrote:
no, but you can say "y is as close to zero as I care about"
Of course you can. But we (the std lib) should not make that decision for everybody. For some people 0.001 is "close enough to zero". For others, 1e-16 is not. We're not in the position to decide for everyone. -- Steve

On Sun, Jun 10, 2018 at 11:26 AM, Robert Vanden Eynde < robertvandeneynde@hotmail.com> wrote:
not really -- if you are moving from matlab to python, you are going to be using numpy and scipy -- we really don't need to spell similar functionality differently than scipy does. In regard to the "special values", and exact results -- a good math lib should return results that are "exact" in all but maybe the last digit stored. So you could check inputs and outputs with, e.g. math.isclose() to give people the "exact" results. -- and keep it all in floating point. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Sun, Jun 10, 2018 at 10:01:09PM -0700, Chris Barker via Python-ideas wrote:
I wish Uncle Timmy or Mark Dickinson were around to give a definite answer, but in their absence I'll have a go. I'm reasonably sure that's wrong. The problem with trig functions is that they suffer from "the table maker's dilemma", so it is very hard to guarantee a correctly rounded result without going to ludicrous extremes: http://perso.ens-lyon.fr/jean-michel.muller/Intro-to-TMD.htm So I think that there's no guarantee given for transcendental functions like sine, cosine etc. But even if they were correctly rounded, using isclose() is the wrong solution. Suppose sin(x) returns some number y such that isclose(y, 0.0), say. You have no way of knowing whether y is an inaccurate result that ought to be zero, or whether the answer should be non-zero and y is correct. You cannot assume that "y is close to zero, therefore it ought to be zero". It's not just zero; the same applies for any value. That's just moving rounding errors from one input to a slightly different input.

    # current situation
    sine of x returns y, but the mathematically exact result is exactly z

    # suggested "fix"
    sine of x ± a tiny bit returns exactly z, but ought to return y

Guessing what sin or cos "ought to" return based on either the inexact input or inexact output is not a good approach. Remember, because π is irrational, we cannot actually call sin or cos on any rational multiple of π. We can only operate on multiples of pi, which is *close to* but not the same as π. That's why it is okay that tan(pi/2) returns a huge number instead of infinity or NAN: the input is ever so slightly smaller than π/2, and a huge number is exactly the behaviour you want when x is ever so slightly smaller than π/2. -- Steve

This would basically be the reason for a PiMultiple class - you can special-case it. You'd know sin(PiMultiple(0.5)) == 0. You'd know cos(PiMultiple(0.5)) == -1 and tan(PiMultiple(0.5)) == nan. This could let you fold as many angles as possible into multiples of pi, and as long as you're in multiples of pi, you're exact. PiMultiple(Fraction(1, 6)) would be exact and could give the right sin() and cos() behaviour. And because it'd be a numeric type, you could still use it with all other numeric types and add/multiply etc. it. When you add and subtract it with another numeric type, it'd lose the special status, but even with multiplications and divisions you can preserve its specialness. And if it gets weird values, you can always fall back on converting it to a float, therefore never giving worse results. It also gives a reason -against- degrees. If you have PiMultiple or TauMultiple, it's rather easy to give common angles, and students can learn to properly use radians for angles as they should (because, let's be honest, they're objectively better measures of angles than degrees, or even *shiver* grads). We SHOULD make it easy to code the exact and right way, and I think a PiMultiple class could help that a lot. That said, it does need a better name.
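A rough sketch of what such a class might look like (illustrative only: the PiMultiple name comes from the email above, but every method, the table of exact values, and the dispatch in sin() are this sketch's assumptions):

    import math
    from fractions import Fraction

    class PiMultiple:
        # Hypothetical sketch: an angle stored exactly as a rational multiple of pi.
        def __init__(self, k):
            self.k = Fraction(k)                  # the angle is k * pi radians

        def __float__(self):
            return float(self.k) * math.pi        # fall back to an ordinary float

        def __mul__(self, other):
            if isinstance(other, (int, Fraction)):
                return PiMultiple(self.k * other)  # stays exact
            return float(self) * other             # loses the special status

    # Sines that are exact for k*pi with k in [0, 1); by Niven's theorem
    # these (with their reflections) are the only rational sine values.
    _EXACT_SIN = {Fraction(0): 0.0, Fraction(1, 6): 0.5,
                  Fraction(1, 2): 1.0, Fraction(5, 6): 0.5}

    def sin(x):
        if isinstance(x, PiMultiple):
            k = x.k % 2                            # use sin's period of 2*pi
            sign, k = (-1.0, k - 1) if k >= 1 else (1.0, k)
            if k in _EXACT_SIN:
                return sign * _EXACT_SIN[k]
            return math.sin(float(x))              # fall back to float
        return math.sin(x)

    print(sin(PiMultiple(Fraction(1, 6))))   # 0.5 exactly
    print(sin(PiMultiple(Fraction(7, 6))))   # -0.5 exactly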

On 11 Jun 2018, at 09:18, Jacco van Dorp <j.van.dorp@deonet.nl> wrote:
What is the real world advantage of such a class? So far I’ve only seen examples where the current behavior is said to be confusing for students. In most cases where I have used math.sin the angle wasn’t a constant and wasn’t an exact multiple of pi. Ronald

2018-06-11 10:00 GMT+02:00 Ronald Oussoren <ronaldoussoren@mac.com>:
I'm assuming the current math.pi would be converted to PiMultiple(1). When learning, it's rather easy to write and read the following:
from math import sin, pi, asin, cos
It helps clarity and understanding when you're coming to python from a math background. In universities, half of your values when working with angles are written as some multiple of pi (or some weird fraction involving it). Also, for the common angles, we'll gain accuracy (and perhaps we can avoid the infinite series for certain computations; that would be a win). Also, you're countering the "confusing to students" part with your own production environment experience. Those aren't alike. And if this was done, your float-based production code wouldn't get slower or care any other way. What you implement and test with PiMultiple would work perfectly fine with any random float as well, just without the extra advantages.

On Mon, Jun 11, 2018 at 12:08:07PM +0200, Jacco van Dorp wrote:
    asin(1)  ->  "0.5π"    # Currently: 1.5707963267948966
I think that if you expect the stdlib math library to change to symbolic maths for trig functions, you are going to be extremely disappointed. Aside from everything else, this is such a massive backward-compatibility break that it probably wouldn't even have been allowed in Python 3.0, let alone in 3.8.
It helps clarity and understanding when you're coming to python from a math background.
What about the other 95% of Python programmers who don't have a maths background and don't give two hoots about the mathematical elegance of being able to differentiate sin(x) without a multiplicative factor? -- Steve

On Mon, Jun 11, 2018 at 1:00 AM, Ronald Oussoren <ronaldoussoren@mac.com> wrote:
What is the real world advantage of such a class? So far I’ve only seen examples where the current behavior is said to be confusing for students.
EXACTLY!

In [49]: math.sin(math.pi)
Out[49]: 1.2246467991473532e-16

If the difference between 1.2246467991473532e-16 and zero is important to you, you've got bigger issues to deal with, and you'd better have a decent grasp of floating point computation's limitations. This is not that different than using Decimal because it is confusing or aesthetically unpleasing to get something other than 1 (for example) when you add 0.1 up ten times:

In [25]: x = 0.0
In [26]: for i in range(10): x += 0.1
In [27]: x
Out[27]: 0.9999999999999999

But:

In [28]: 1.0 - x
Out[28]: 1.1102230246251565e-16

i.e. x is within one decimal unit in the last place stored by a float of 1.0. Which is to say -- there is no practical difference within the abilities of floating point, and Decimal, while it would present this particular result exactly, isn't any more "accurate" in general (unless you use more precision, which is a result of variable precision, not decimal arithmetic per se). So -- if there is a nifty way to specify that I want, say, the sin of "exactly pi", then the code could special-case that and return exactly zero. But what if you happen to pass in a value just a tiny bit larger than pi? Then you have a potential discontinuity at pi, because you'd still have to use the regular FP computation for any non-exact multiple of pi. All this means that you will get a very similar result by rounding your outputs to a digit less than full FP precision:

In [46]: math.sin(math.pi)
Out[46]: 1.2246467991473532e-16
In [47]: round(math.sin(math.pi), 15)
Out[47]: 0.0

But anyway: there *may* be a nice use-case for a more "friendly" trig package, particularly in education, but I don't think it would ever belong in the std lib (though maybe I'd change my mind if it saw really wide use). However, simply adding a few names like sindeg, cosdeg, etc., to save folks from having to type math.sin(math.radians(something)) is a fine idea -- and it may make it a bit more clear, when people go looking for the "Sine" function, that they don't want to use degrees with the regular one... -CHB
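A minimal sketch of that rounding idea (the wrapper name sindeg and the 15-digit cutoff are illustrative assumptions):

    import math

    def sindeg(angle, digits=15):
        # Take degrees, compute in radians, then round away the last digit
        # or so of noise to recover the "expected" special values.
        return round(math.sin(math.radians(angle)), digits)

    print(sindeg(30.0))    # 0.5  (unrounded: 0.49999999999999994)
    print(sindeg(180.0))   # 0.0  (unrounded: 1.2246467991473532e-16)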

Would sind and cosd make Euler's formula work correctly? cosd(x) + i * sind(x) == math.e ** (i * x) I suspect that adding these functions is kind of like those cartoons where the boat is springing leaks and the character tries to plug them with their fingers. Floating point is a leaky abstraction. Perhaps you'd prefer an enhancement to the fractions module that provides real (not float) math?

As mentioned, with complex numbers the radians make more sense, and of course cmath.cosd(x) + 1j * cmath.sind(x) != cmath.exp(1j * x). However, adding degree versions to cmath is still useful: cmath.rectd, cmath.phased, cmath.polard, etc. 2018-06-11 19:24 GMT+02:00 Michael Selik <mike@selik.org>:

Whoops, it turns out Euler's formula does work! I expected imprecision, but at least one test matched.

    x = 42
    cos(x) + 1j * sin(x) == e ** (1j * x)

I suppose that's because it's radians. On Mon, Jun 11, 2018, 10:24 AM Michael Selik <mike@selik.org> wrote:

2018-06-11 19:33 GMT+02:00 Michael Selik <mike@selik.org>:
I think you will find it holds for any x (except inf, -inf and nan). The boat is less leaky than you think; IEEE floating-point arithmetic goes out of its way to produce exact answers whenever possible. (To great consternation of hardware designers who felt that requiring 1.0*x == x was too expensive.)
I suppose that's because it's radians.
Well, the formula obviously only holds in exact arithmetic if cos and sin are the versions taking radians. Stephan
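For what it's worth, a quick way to probe Stephan's claim (an editor's sketch; as noted later in the thread, the result depends on the platform libm, so equality is observed rather than guaranteed):

    import math
    import random

    # Count cases where cos(x) + i*sin(x) differs bit-for-bit from e**(ix).
    mismatches = 0
    for _ in range(100000):
        x = random.uniform(-1000.0, 1000.0)
        if complex(math.cos(x), math.sin(x)) != math.e ** (1j * x):
            mismatches += 1
    print(mismatches)   # 0 on the systems discussed here; not guaranteed everywhere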

On 2018-06-11 14:04, Stephan Houben wrote:
In fact, 1.0*x == x is almost all that this test exercises. If I'm looking in the right place, this is the C implementation of a ** b, omitting a few special cases:

    vabs = hypot(a.real, a.imag);
    len = pow(vabs, b.real);
    at = atan2(a.imag, a.real);
    phase = at * b.real;
    if (b.imag != 0.0) {
        len /= exp(at * b.imag);
        phase += b.imag * log(vabs);
    }
    r.real = len * cos(phase);
    r.imag = len * sin(phase);

This means that (e ** ...) is essentially implemented in terms of the formula above. Indeed, in the special case of e ** (1j * x), we have a.real = e, a.imag = 0.0, b.real = 0.0, and b.imag = x, so concretely the code simplifies to this:

    vabs = e;
    len = 1.0;
    at = 0.0;
    phase = 0.0;
    if (b.imag != 0.0) {
        len = 1.0;
        phase = x;          // requires log(e) == 1.0 and x * 1.0 == x
    }
    r.real = cos(phase);    // requires 1.0 * x == x
    r.imag = sin(phase);

Thus, it shouldn't be too surprising that the formula holds :) Clément.

Michael Selik wrote:
Whoops, it turns out Euler's formula does work! I expected imprecision, but at least one test matched.
That might be because the implementation of e ** x where x is complex is using Euler's formula... -- Greg

On Mon, Jun 11, 2018 at 10:24:42AM -0700, Michael Selik wrote:
Would sind and cosd make Euler's formula work correctly?
cosd(x) + i * sind(x) == math.e ** (i * x)
No, using degrees makes Euler's identity *not* work correctly, unless you add in a conversion factor from degrees to radians: https://math.stackexchange.com/questions/1368049/eulers-identity-in-degrees Euler's Identity works fine in radians:

py> import math
py> from cmath import exp
py> exp(1j*math.pi)
(-1+1.2246063538223773e-16j)

which is close enough to -1 given the usual rounding issues with floats. (Remember, math.pi is not π, but a number close to it. There is no way to represent the irrational number π in less than an infinite amount of memory without symbolic maths.) [...]
Perhaps you'd prefer an enhancement to the fractions module that provides real (not float) math?
I should think not. Niven's Theorem tells us that for rational angles between 0° and 90° (that is, angles which can be represented as fractions), there are only THREE for which sine (and cosine) are themselves rational: https://en.wikipedia.org/wiki/Niven's_theorem Every value of sin(x) except for those three angles is an irrational number, which means they cannot be represented exactly as fractions or in a finite number of decimal places. What that means is that if we tried to implement real (not float) trigonometric functions on fractions, we'd need symbolic maths capable of returning ever-more complicated expressions involving surds. For example, the exact value of sin(15/2°) involves a triple nested square root: 1/2 sqrt(2 - sqrt(2 + sqrt(3))) and that's one of the relatively pretty ones. sin(3°) is: -1/2 (-1)^(29/60) ((-1)^(1/60) - 1) (1 + (-1)^(1/60)) http://www.wolframalpha.com/input/?i=exact+value+of+sine%2815%2F2+degrees%29 http://www.wolframalpha.com/input/?i=exact+value+of+sine%283+degrees%29 This proposal was supposed to *simplify* the trig functions for non-mathematicians, not make them mind-bogglingly complicated. -- Steve
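Those three rational values (plus their reflections by symmetry) are easy to confirm with sympy, assuming it is installed (an editor's illustration):

    from sympy import Rational, pi, sin

    # sine is rational only at these rational multiples of 180 degrees
    for deg in (0, 30, 90, 150, 180):
        print(deg, sin(pi * Rational(deg, 180)))
    # 0 0
    # 30 1/2
    # 90 1
    # 150 1/2
    # 180 0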

Steven D'Aprano wrote:
I don't think anyone is going to complain about sin(3°) not being exact, whatever units are being used. This discussion is only about the rational values. I wonder whether another solution would be to provide a set of "newbie math" functions that round their results.
Yes, I know, this just pushes the surprises somewhere else, but so does every solution. -- Greg

On Tue, Jun 12, 2018 at 10:57:18AM +1200, Greg Ewing wrote:
Precisely. They are the values I'm talking about. If you're serious about supporting only the rational values, then the implementation is trivial and we can support both degrees and radians easily.

    def sin(angle):  # radians
        if angle == 0:
            return 0
        raise ValueError("sorry, no rational sine for that angle")

    def sind(angle):  # degrees
        angle %= 360
        rational_values = {
            0: 0, 30: 0.5, 90: 1, 150: 0.5,
            180: 0, 210: -0.5, 270: -1, 330: -0.5,
        }
        if angle in rational_values:
            return rational_values[angle]
        raise ValueError("sorry, no rational sine for that angle")

Exceedingly simple, and exceedingly useless. Which was my point: supporting only the rational values is pointless, because there are only a handful of them. We either have to support full-blown symbolic results, or we have rational APPROXIMATIONS to the true value. I'm responding to a proposal that explicitly suggested using fractions to do "real (not float) math", which I read as "no approximations". I imagine that Michael thought that by using fractions, we can calculate exact rational results for sine etc. without floating point rounding errors. No we cannot, except for the values above. If "not float" is serious, then with the exception of the above values, *none* of the values will be rational, and we either can't return a value (except for the above) or we have to use symbolic maths. Using fractions is not a magic panacea that lets you calculate exact answers just by swapping floats to fractions.
No, that's really not true of every solution. The initial proposal is fine: a separate set of trig functions that take their arguments in degrees would have no unexpected surprises (only the expected ones). With a decent implementation, i.e. not this one:

    # don't do this
    def sind(angle):
        return math.sin(math.radians(angle))

and equivalent for cos, we ought to be able to get correctly rounded values for nearly all the "interesting" angles of the circle, without those pesky rounding issues caused by π not being exactly representable as a float. There will still be rounding errors, because we're dealing with numbers like sqrt(2) etc., but with IEEE-754 maths, they'll be correctly rounded. -- Steve

[Steven D'Aprano]
...
The initial proposal is fine: a separate set of trig functions that take their arguments in degrees would have no unexpected surprises (only the expected ones).
But that's good enough for almost all real purposes. Indeed, except for argument reduction, it's essentially how scipy's sindg and cosdg _are_ implemented: https://github.com/scipy/scipy/blob/master/scipy/special/cephes/sindg.c To get correctly rounded results for the "nice" angles, it's enough to do things with some extra bits of precision to absorb the intermediate rounding errors. For example, install `mpmath` and use this:

    def sindg(d):
        import math, mpmath
        d = math.fmod(d, 360.0)
        if abs(d) == 180.0:
            return 0.0
        with mpmath.extraprec(12):
            return float(mpmath.sin(mpmath.radians(d)))

Then, e.g., sindg(30.0) returns 0.5 exactly, while the naive math.sin(math.radians(30.0)) returns 0.49999999999999994.
Notes: 1. Python's float "%" is unsuitable for argument reduction; e.g.,
>>> -1e-14 % 360.0
360.0
`math.fmod` is suitable, because it's exact:
>>> math.fmod(-1e-14, 360.0)
-1e-14
2. Using a dozen extra bits of precision makes it very likely you'll get the correctly rounded 53-bit result; it will almost certainly (barring bugs in `mpmath`) always be good to less than 1 ULP. 3. Except for +-180. No matter how many bits of float precision (including the number of bits used to approximate pi) are used, converting that to radians can never yield the mathematical `pi`; and sin(pi+x) is approximately equal to -x for tiny |x|, no matter how many bits you use - even a thousand.
So +-180 is special-cased. For cosdg, +-{90, 270} would need to be special-cased for the same reason.

On Tue, Jun 12, 2018 at 4:40 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
https://docs.python.org/3/reference/expressions.html#binary-arithmetic-opera... https://docs.python.org/3/reference/expressions.html#id17 https://docs.python.org/3/reference/expressions.html#id18 (the latter two being footnotes from the section in the first link)

With real numbers, divmod (and thus the // and % operators) would always return values such that, for div, mod = divmod(x, y):

1) div*y + mod == x
2) sign(mod) == sign(y)
3) 0 <= abs(mod) < abs(y)

But with floats, you can't guarantee all three of these. The divmod function focuses on the first, guaranteeing the fundamental arithmetic equality, but to do so, it sometimes has to bend the third one and return mod == y. There are times when it's better to sacrifice one than the other, and there are other times when it's the other way around. We get the two options. ChrisA
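A concrete case (editor's illustration) where property #3 bends and #1 only holds approximately:

    div, mod = divmod(-1e-14, 360.0)
    print(div, mod)            # -1.0 360.0  -- mod == y, bending property #3
    print(div * 360.0 + mod)   # 0.0, only approximately equal to x == -1e-14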

Hi all, I wrote a possible implementation of sindg: https://gist.github.com/stephanh42/336d54a53b31104b97e46156c7deacdd This code first reduces the angle to the [0,90] interval. After doing so, it can be observed that the simple implementation math.sin(math.radians(angle)) produces exact results for 0 and 90, and a result already rounded to nearest for 60. For 30 and 45, this simple implementation is one ulp too low. So I special-case those to return the correct/correctly-rounded value instead. Note that this does not affect monotonicity around those values. So I am still unsure if this belongs in the stdlib, but if so, this is how it could be done. Stephan 2018-06-12 8:50 GMT+02:00 Chris Angelico <rosuav@gmail.com>:

On Tue, Jun 12, 2018, 00:03 Stephan Houben <stephanh42@gmail.com> wrote:
You observed this on your system, but math.sin uses the platform libm, which might do different things on other people's systems.
Again, monotonicity is preserved on your system, but it might not be on others. It's not clear that this matters, but then it's not clear that any of this matters... -n

What was wrong with my initial implementation with a lookup table? :D

    from math import sin, radians

    def sind(x):
        if x % 90 == 0:
            return (0, 1, 0, -1)[int(x // 90) % 4]
        else:
            return sin(radians(x))

If you want to support multiples of 30, you can do % 30 and // 30. On Wed, Jun 13, 2018 at 09:51, Stephan Houben <stephanh42@gmail.com> wrote:

2018-06-13 12:00 GMT+02:00 Robert Vanden Eynde <robertve92@gmail.com>:
I kinda missed it, but now you ask:

1. It's better to reduce the angle while still in degrees, since one of the advantages of degrees is that the reduction can be done exactly. Converting very large angles first to radians and then taking the sine can introduce a large error.
2. I used fmod instead of % on advice in this thread.
3. I also wanted to special-case 30, 45, and 60.
If you want to support multiples of 30, you can do % 30 and // 30.
Sure, but I also wanted to special-case 45. Stephan

2018-06-13 12:08 GMT+02:00 Robert Vanden Eynde <robertve92@gmail.com>:
Then if you also want 45, you could do % 15? :D
Sure, but how the lookup is done in the Python reference code is ultimately not so important, since it will need to be rewritten in C if it is to be included in the math package (math is C-only). And then we'll probably end up with a bunch of if-checks against the common values. Stephan

My first comment is that special-casing values like this can lead to some very undesirable properties when you use the function for numerical analysis. Suddenly your sind is no longer continuous (sind(x) is no longer the limit of sind(x+d) as d goes to 0). As I stated in my initial comment on this, if you are going to create a sind function with the idea that you want 'nice' angles to return 'exact' results, then what you need to do is have the degree-based trig routines do the angle reduction in degrees, and only when you have a small enough angle, either use the radians version on the small angle or directly include an expansion in degrees. Angle reduction would be based on the identity that sin(x+y) = sin(x) * cos(y) + cos(x) * sin(y) and cos(x+y) = cos(x) * cos(y) - sin(x) * sin(y). If you want to find sin(z) for an arbitrary value z, you can reduce it to an x+y where x is some multiple of, say, 15 degrees and y is in the range -7.5 to 7.5 degrees. You can have stored exact values of sin/cos of the 15-degree increments (you only really need them between 0 and 90) and then compute the sin and cos of the y value. On 6/13/18 6:07 AM, Stephan Houben wrote:
-- Richard Damon
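A minimal sketch of that reduction scheme (the function name is made up, and the table here is built with the library sin purely for illustration; a real implementation would store correctly rounded constants, possibly in extended precision):

    import math

    # sin/cos at 15-degree increments, stored once (24 entries cover 0..345)
    _SIN15 = [math.sin(math.radians(15 * k)) for k in range(24)]
    _COS15 = [math.cos(math.radians(15 * k)) for k in range(24)]

    def sind_reduced(d):
        d = math.fmod(d, 360.0)          # exact reduction, per the fmod notes above
        k = round(d / 15.0)              # nearest multiple of 15 degrees
        y = math.radians(d - 15.0 * k)   # residual angle, within +-7.5 degrees
        k %= 24
        # sin(x + y) = sin(x)*cos(y) + cos(x)*sin(y)
        return _SIN15[k] * math.cos(y) + _COS15[k] * math.sin(y)

    print(sind_reduced(90.0))   # 1.0
    print(sind_reduced(30.0))   # 0.49999999999999994 with this toy table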

On Wed, Jun 13, 2018 at 13:12, Richard Damon <Richard@damon-family.org> wrote:
The deviations introduced by the special casing are on the order of one ulp. At that level of detail the sin wasn't continuous to begin with.
Yes that is what my code does. It reduces degrees to [0,90].
This is not how sine functions are calculated. They are calculated by reducing the angle to some interval, then evaluating a polynomial which approximates the true sine within that interval. Stephan

Stephan Houben wrote:
Yes that is what my code does. It reduces degrees to [0,90].
Only for the special angles, though. Richard's point is that you need to do that for *all* angles to avoid discontinuities with large angles.
That's what he suggested, with the interval being -7.5 to 7.5 degrees. -- Greg

On 6/13/18 7:21 AM, Stephan Houben wrote:
I would say the change isn't one ulp; changing a non-zero number to zero is not one ulp (unless maybe you are on the verge of underflow). It may be one ulp of 'full scale', but we aren't near the full scale point. It might be the right answer for a less than one ulp change in the INPUT, but if we thought that way we wouldn't have minded the non-zero result in the first place. The fundamental motivation is that for 'nice angles' we want the 'nice result' when possible, but the issue is that most of the 'nice angles' in radians are not representable exactly, so it isn't surprising that we don't get the nice results out. One property that we like to preserve in functional calculation is that the following pseudo code

    xp = x + delta
    derivative = (f(xp) - f(x)) / (xp - x)

(or variations where you subtract delta, or work at x+delta and x-delta) should approximate well the derivative of the function (which for sin in radians should be cos), and that this improves as delta gets very small, until we hit the rounding error in the computation of f(x). (Note, I don't divide by delta, but by xp-x, to remove the round-off error in computing xp, which isn't the fault of the function f.) Changing a point because it is the closest to the nice number will cause this calculation to spike due to the single-point perturbation. Yes, this calculation may start to 'blow up' when f(xp) - f(x) is very small compared to f(x) and we start to measure the round-off error in the computation of the function, but near a zero of the function (which is common if we are root finding) we can do quite well.
And that is what my method did (as others have said). Virtually all methods of computing sin and cos use the angle-addition formula (even quadrant reduction is using it, for the special case of using one of the angles where sin/cos take values in {-1, 0, 1}). The methods I know of that least use it reduce the angle to a quadrant (or octant) and then select one of a number of expansions good for a limited range, or use table interpolation (but even that sort of uses it, with the approximation sin(x) ~ x and cos(x) ~ 1 for very small x). -- Richard Damon
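A small demonstration of the flat-spot problem Richard describes (an editor's sketch; sin0 and its 1e-15 threshold are illustrative, and math.nextafter needs Python 3.9+):

    import math

    def sin0(x):
        # naive special-casing: snap results near zero to exactly 0.0
        y = math.sin(x)
        return 0.0 if abs(y) < 1e-15 else y

    x = math.pi
    xp = math.nextafter(x, 4.0)   # the very next representable double
    print((math.sin(xp) - math.sin(x)) / (xp - x))   # ~ -1.0, matching cos(pi)
    print((sin0(xp) - sin0(x)) / (xp - x))           # 0.0: the snapped region is flat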

[Richard Damon]
Either way, it's necessary to get the effect of working in greater than output precision, if it's desired that the best possible result be returned for cases well beyond just the handful of "nice integer inputs" people happen to be focused on today. So I'll say again that the easiest way to do that is to use `mpmath` to get extra precision directly. The following does that for sindg, cosdg, and tandg.

- There are no special cases. Although tandg(90 + i*180) dies with ZeroDivisionError inside mpmath, that could/should be fiddled to return an infinity instead.
- Apart from that, all functions appear to give the best possible double-precision result for all representable-as-a-double integer degree inputs (sindg(30), cosdg(-100000), doesn't matter).
- And for all representable inputs of the form `integer + j/32` for j in range(32).
- But not for all of the form `integer + j/64` for j in range(1, 64, 2). A few of those suffer greater than 1/2 ULP error. Setting EXTRAPREC to 16 is enough to repair those - but why bother? ;-)
- Consider the largest representable double less than 90, that is, x = 90.0 - 2.0**-46. The code below gives the best possible tangent:

    >>> tandg(x)
    4031832051015932.0

  Native precision is waaaaay off:

    >>> math.tan(math.radians(x))
    3530114321217157.5

  It's not really the extra precision that saves the code below, but allowing argument reduction to reduce to the range [-pi/4, pi/4] radians, followed by exploiting trigonometric identities. In this case, exploiting that tan(pi/2 + z) = -1/tan(z). Then even native precision is good enough:

    >>> -1 / math.tan(math.radians(x - 90))
    4031832051015932.0
Here's the code:

    import mpmath
    from math import fmod

    # Return (n, x) such that:
    # 1. d degrees is equivalent to x + n*(pi/2) radians.
    # 2. x is an mpmath float in [-pi/4, pi/4].
    # 3. n is an integer in range(4).
    # There is one potential rounding error, when mpmath.radians() is
    # used to convert a number of degrees between -45 and 45.  This is
    # done using the current mpmath precision.
    def treduce(d):
        d = fmod(d, 360.0)
        n = round(d / 90.0)
        assert -4 <= n <= 4
        d -= n * 90.0
        assert -45.0 <= d <= 45.0
        return n & 3, mpmath.radians(d)

    EXTRAPREC = 14

    def sindg(d):
        with mpmath.extraprec(EXTRAPREC):
            n, x = treduce(d)
            if n & 1:
                x = mpmath.cos(x)
            else:
                x = mpmath.sin(x)
            if n >= 2:
                x = -x
            return float(x)

    def cosdg(d):
        with mpmath.extraprec(EXTRAPREC):
            n, x = treduce(d)
            if n & 1:
                x = mpmath.sin(x)
            else:
                x = mpmath.cos(x)
            if 1 <= n <= 2:
                x = -x
            return float(x)

    def tandg(d):
        with mpmath.extraprec(EXTRAPREC):
            n, x = treduce(d)
            x = mpmath.tan(x)
            if n & 1:
                x = -1.0 / x
            return float(x)
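Assuming mpmath is installed and the definitions above are in scope, a quick sanity check of the expected behaviour on "nice" inputs (values follow from the notes above):

    print(sindg(30.0))    # 0.5
    print(cosdg(180.0))   # -1.0
    print(tandg(45.0))    # 1.0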

I should note that numeric code "that works" is often much subtler than it appears at first glance. So, for educational purposes, I'll point out some of what _wasn't_ said about this crucial function: [Tim]
[...] obvious. That `fmod` is exact is guaranteed by the relevant standards, but most people who write a libm don't get it right at first. There is no "cheap" way to implement it correctly. It requires getting the effect of doing exact integer division on, potentially, multi-thousand bit integers. Assuming x > y > 0, a correct implementation of fmod(x, y) ends up in a loop that goes around a number of times roughly equal to log2(x/y), simulating one-bit-at-a-time long division. For example, here's glibc's implementation: https://github.com/bminor/glibc/blob/master/sysdeps/ieee754/dbl-64/e_fmod.c Don't expect that to be easy to follow either ;-) Then how do we know that `d -= n * 90.0` is exact? That's not obvious either. It follows from the "Sterbenz lemma", one version of which: if x and y are non-zero floats of the same sign within a factor of 2 of each other (1/2 <= x/y <= 2, mathematically), then x-y is exactly representable as a float too. This is true regardless of the floating-point base, or of rounding mode in use. In IEEE-754, it doesn't even need weasel words to exempt underflowing cases. That lemma needs to be applied by cases, for each of the possible values of (the integer) `n`. It gets closest to failing for |n| = 1. For example, if d is a tiny bit larger than 45, n is 1, and then d/90 is (mathematically) very close to 1/2. Which is another thing that needs to be shown: "if d is a tiny bit larger than 45, n is 1". Why? It's certainly true if we were using infinite precision, but we're not. The smallest representable double > 45 is 45 + 2**-47.
Then d/90.0 (the argument to round()) is, with infinite precision,

    (45 + 2**-47)/90 = 0.5 + 2**-47/90

1 ULP with respect to 0.5 is 2**-53, so that is in turn equal to

    0.5 + 2**-53/(90/64) = 0.5 + (64/90)*2**-53 = 0.5 + 0.71111111111... * 2**-53

Because the tail (0.711...) is greater than 0.5 ULP, it rounds up under nearest-even rounding, to

    0.5 + 2**-53
and so Python's round() rounds it up to 1:
>>> round(0.5 + 2**-53)
1
Note that it would _not_ be true if truncating "rounding" were done, so round-nearest is a hidden assumption in the code. Similar analysis needs to be done at values near the boundaries around all possible values of `n`. That `assert -45.0 <= d <= 45.0` can't fail then follows from all of that. In all, a proof that the code is correct is much longer than the code itself. That's typical. Alas, it's also typical that math library sources rarely point out the subtleties.
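The claims in that analysis are easy to check directly (an editor's illustration):

    d = 45 + 2**-47                   # the smallest double > 45
    print(d / 90.0 == 0.5 + 2**-53)   # True: the quotient rounds up, as argued
    print(round(d / 90.0))            # 1, so the assert in treduce holds here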

On Thu, Jun 14, 2018 at 01:44:34PM -0500, Tim Peters wrote:
Thanks Tim! Reading your digressions on the minutiae of floating point maths is certainly an education. It makes algebra and real-valued mathematics seem easy in comparison. I still haven't got over Mark Dickinson's demonstration a few years back that under Decimal floating point, but not binary, it is possible for the ordinary arithmetic average (x+y)/2 to be outside of the range [x, y]:

py> from decimal import getcontext, Decimal
py> getcontext().prec = 3
py> x = Decimal('0.516')
py> y = Decimal('0.518')
py> (x + y) / 2
Decimal('0.515')

-- Steve

[Steven D'Aprano <steve@pearwood.info>]
Thanks Tim!
You're welcome ;-)
Hard to say, really. The problem with floating point is that it's so God-awful lumpy - special cases all over the place. Signaling and quiet NaNs; signed infinities; signed zeroes; normal finites all with the same number of bits, but where the gap between numbers changes abruptly at power-of-2 boundaries; subnormals where the gap remains the same across power-of-2 boundaries, but the number of _bits_ changes abruptly; all "the rules" break down when you get too close to overflow or underflow; four rounding modes to worry about; and a whole pile of technically defined exceptional conditions and related traps & flags. Ignoring all that, though, it's pretty easy ;-) 754 was dead serious about requiring results act as if a single rounding is done to the infinitely precise result, and that actually allows great simplification in reasoning. The trend these days appears to be using automated theorem-proving systems to keep track of the mountain of interacting special cases. Those have advanced enough that we may even be on the edge of getting provably-correctly-rounded transcendental functions with reasonable speed. Although it's not clear people will be able to understand the proofs ;-) I still haven't got over Mark Dickinson's demonstration a few years back [...]
Ya, decimal fp doesn't really solve anything except the shallow surprise that decimal fractions generally aren't exactly representable as binary fractions. Which is worth a whole lot for casual users, but doesn't address any of the deep problems (to the contrary, it makes those a bit worse). I like to illustrate the above with 1-digit decimal fp, because it makes it more apparent at once that - unlike in binary fp - multiplication and division by 2 may _not_ be exact in decimal fp. We can't even average a number "with itself" reliably.
But related things _can_ happen in binary fp too! You have to be near the edge of representable non-zero finites though, for example with x = y = 1e308:

>>> x = y = 1e308
>>> (x + y) / 2
inf

Oops. So rewrite it:

>>> x/2 + y/2
1e+308
Better! But then try it at the bottom of the range, say with the smallest positive double:

>>> x = y = 5e-324
>>> x/2 + y/2
0.0

Oops. A math library has to deal with everything "correctly". Believe it or not, this paper "How do you compute the midpoint of an interval?" https://hal.archives-ouvertes.fr/hal-00576641v1/document is solely concerned with computing the average of two IEEE doubles, yet runs to 29(!) pages. Almost everything you try fails for _some_ goofy cases. I personally write it as (x+y)/2 anyway ;-)

On Sat, Jun 16, 2018 at 10:57 PM, Tim Peters <tim.peters@gmail.com> wrote: Ya, decimal fp doesn't really solve anything except the shallow surprise [...]
It's my suspicion that the story is the same with "degree-based" trig :-) Which is why, if you want "nice-looking" results, it seems one could simply accept a decimal digit or so less precision, and use the "regular" FP trig functions, rounding to 14 or so decimal digits. Though if someone really wants to implement trig in native degrees -- more power to 'em. However -- if this is really such a good idea -- wouldn't someone have made a C lib that does it? Or has someone? Anyone looked? -CHB

On Mon, Jun 18, 2018, 19:25 Chris Barker via Python-ideas < python-ideas@python.org> wrote:
quite a few in fact, including cos(n*pi) https://www.boost.org/doc/libs/1_52_0/boost/units/cmath.hpp

On 6/18/18 19:23, Chris Barker via Python-ideas wrote:
Certainly! scipy.special uses the functions implemented in the Cephes C library. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

On Mon, Jun 18, 2018 at 07:23:50PM -0700, Chris Barker wrote:
No, there's nothing magical about C. You can do it in pure Python. It isn't as fast of course, but it works well enough. When I get a Round Tuit, I'll pop the code up on PyPI. -- Steve

[Tim]
[Greg Ewing]
So why doesn't float % use math.fmod?
[Chris Angelico]
[...] positive integer y), and _given that_, sometimes results are fiddled to keep #1 approximately true, and strict inequality in #3 may be sacrificed. For `fmod`, sign(mod) == sign(x) instead.
All mod functions, m(x, y), strive to return a result that's mathematically exactly equal to x-n*y (for some mathematical integer `n` that may not even be representable in the programming language). `fmod()` is exact in that sense, but Python's floating "%" may not be, and no float scheme such that sign(m(x, y)) == sign(y) can be (see the original example at the top: the only mathematical integer `n` such that the mathematical -1e-14 - n*360.0 is exactly representable as a double is n == 0). The most useful mod function for floats _as floats_ would actually satisfy abs(m(x, y)) <= abs(y) / 2. That can be done exactly too - but then the sign of the result has approximately nothing to do with the signs of the arguments.

Sorry for top posting, but these aren't really opinions for the debate, just information. I haven't seen them mentioned, and none grouped nicely under someone's reply.

Number representation: IEEE-754 doubles cannot represent pi correctly at any bit depth (I mean, obviously, but more seriously):

    input pi  = 3.14159265358979323846264338327
    output pi = 3.14159265358979311599796346854

The cutoff is the value everyone uses, which is what is in math.pi among other places: 3.141592653589793. Any conversion using this value of pi is already wrong, and the input to the function is already wrong. The function doesn't matter. It's always underestimating if using pi. I'm clarifying this because it's separate from any discussion of functions. More accurate or different functions cannot use a better input pi. Calculator: http://www.binaryconvert.com/convert_double.html input pi source: https://www.piday.org/million/

Libraries: Boost implements a cos_pi function; its algorithm will probably be useful to look at. glibc implements cos/sin as a look-up table, which is most likely where any other implementation will end up, as it is a common endpoint. I've seen this on different architectures and libraries. (sincostab.h if you want to google glibc's table.) Maybe there is a hilarious index lookup off-by-1 there, I didn't look. Tables are the basis for the calculation in many libraries and architectures, so I wanted to point that out to any function designers. A new implementation may come down to calculating the right index locations. For a degree-based implementation, the same algorithm can be used with a different table input that is calibrated to degrees. Likewise for a pi-based implementation: the same, but for the pi scale factor input. Nothing needs to be designed other than a new lookup table calibrated for the "base" of a degree, radian, or pi. It will have the same output precision issues calculating index values, but at least the input will be cleaner. This is not me saying what should be done, just giving information that may hopefully be useful.

Small note about python's math: The python math library does not implement algorithms, it exposes the C functions. You can see C/C++ has this exact issue performing the calculation there. As that is "the spec," the spec is defined with the error. The math library is technically correct given its stated purpose and result. Technically correct, the best kind of correct. https://www.youtube.com/watch?v=hou0lU8WMgo On Mon, Jun 11, 2018 at 10:53 PM Tim Peters <tim.peters@gmail.com> wrote:
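To make the lookup-table idea in the message above concrete, here is a minimal sketch of a degree-calibrated table with linear interpolation (the function name, step size, and table construction are all illustrative assumptions; real libm tables are far more sophisticated):

    import math

    # A sine table calibrated in degrees, at 1/4-degree steps. Built here
    # with the library sin for illustration only; a real table would hold
    # correctly rounded constants.
    STEP = 0.25
    TABLE = [math.sin(math.radians(i * STEP)) for i in range(int(360 / STEP) + 1)]

    def sind_table(d):
        d = d % 360.0                     # exact-enough reduction for this sketch
        i, frac = divmod(d / STEP, 1.0)   # index into the table plus remainder
        i = int(i)
        return TABLE[i] + (TABLE[i + 1] - TABLE[i]) * frac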

On Mon, Jun 11, 2018 at 10:24 AM, Michael Selik <mike@selik.org> wrote:
Would sind and cosd make Euler's formula work correctly?
Not trying to pick on you, but this question shows a key misunderstanding: there is nothing inherently more accurate in using degrees rather than radians for trigonometry. It's nice that handy values like "one quarter of a circle" can be exactly represented, but that's really only an aesthetic thing. And every computer math lib I've ever seen uses floating point radians for trig functions, so unless you're really going to implement degree-based trig from scratch, you are going to go to floating point radians (and floating point pi) anyway. Oh, and radians are the more "natural" units (in fact unitless) for math, and the only way that things like the Euler identity work. Which is why computational math libs use them. So there are two orthogonal ideas on the table here: 1) Have trig functions that take degrees, for convenience, for when folks are working in degrees already. 2) Have trig functions that produce exact values (i.e. what is "expected") for the special cases. It seems the OP is interested in a package that combines both of these -- which is a fine idea as a third party lib. Perhaps you'd prefer an enhancement to the fractions module that provides
real (not float) math?
Isn't that exactly what the fractions module does? Or are you suggesting that it be extended with trig functions? -CHB

On Mon, Jun 11, 2018, 1:18 PM Chris Barker <chris.barker@noaa.gov> wrote:
That's actually what I was trying to say. Shouldn't have tried to be round-about. Not only did the point get muddled, but I wrote something false as well! Perhaps you'd prefer an enhancement to the fractions module that provides
The latter. However, to Steven's point about irrationals, perhaps this should be an entirely separate module designed to handle various irrationalities accurately. ... Like SymPy.

On Mon, Jun 11, 2018 at 01:18:10PM -0700, Chris Barker via Python-ideas wrote:
Actually there is: using radians, the only "nice" angle that can be represented exactly is 0. With degrees, we can represent a whole lot of "nice" angles exactly. "Nice", is of course subjective, but most of us would recognise that 36° represented as exactly 36.0 is nice but being *approximately* represented as 0.6283185307179586 is not. Using radians, we have two sources of rounding error: - π cannot be represented exactly as a float, so we have to use a number pi which is ever-so-slightly off; - plus the usual round-off error in the algorithm; while using degrees, we only have the second one (since 180° *can* be represented exactly, as the float 180.0). And with the degrees implementation, we should be able to use correctly rounded roots for many of our "nice" angles.
Well that's the whole point of the discussion.
Oh, and radians are the more "natural" units (in fact unitless) for math,
Degrees are unit-less too. 180° = π radians. That's just a scaling factor difference. Unless you're doing symbolic maths, differentiating or integrating trig functions, or certain geometric formulae which are "neater" in radians than in degrees, there's no real advantage to radians. Degrees are simply a much more practical unit of angle for practical work.
and the only way that things like the Euler identity work.
Its not the *only* way. https://math.stackexchange.com/questions/1368049/eulers-identity-in-degrees
Which is why computational math libs use them.
Actually, more maths libraries than you might guess offer trig functions in degrees, or scaled by pi, e.g: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_lib.html Julia and Matlab provide sind etc, and although I cannot find a reference right now, I seem to recall the latest revision to the IEEE 754 standard suggesting them as optional functions. -- Steve

On Mon, Jun 11, 2018 at 09:18:22AM +0200, Jacco van Dorp wrote:
Not when I went to school it wasn't. sin(π/2) = sin(90°) = 1. Perhaps you meant cos? In any case, the sin function doesn't work that way. Unless we add a special dunder method to defer to, it cannot be expected to magically know how to deal with these "PiMultiple" objects, except by converting them to floats. You don't suddenly get accurate results by waving a magic wand over the float 0.5 and saying "You're a multiple of pi". You still have to code a separate algorithm for this, and that's hard work. (Why do you think the decimal module doesn't support trig functions?) Is every function that takes float arguments now supposed to recognise PiMultiple objects and treat them specially? How do they integrate in the numeric tower and interact with ordinary floats? But let's say you do this:

    def sin(arg):
        if isinstance(arg, PiMultiple):
            # does this mean it needs to be a builtin class?
            call internal sinpi(arg) function
        else:
            call regular sin(arg) function

This isn't Java, and not everything needs to be a class. If we go to the trouble of writing separate sinpi() etc. implementations, why hide one of them behind a class (and not even in a proper object-oriented interface) when we can just call the functions directly?

    sinpi(1.5)
    sin(PiMultiple(1.5))

I know which I'd rather use.
It also gives a reason -against- degrees. if you have PiMultiple or TauMultiple, it's rather easy to give common angles,
What about the uncommon angles? The whole point of this proposal is to make it easy to give angles in degrees without the need to convert to radians, introducing rounding errors over and above those introduced by the trig function itself.
No they are not objectively better measures of angles. With radians, you have to divide a right angle into an irrational number of radians, one which cannot be expressed in a finite number of decimal places. In a very real sense, it is impossible to measure exactly 1 radian. I know that in practical terms this makes no difference, we can get close enough, but physical measurements are limited to rational numbers. A measurement system based on irrational numbers, especially one as difficult as π, is not objectively better. It's not just because of tradition that nobody uses radians in civil engineering, astronomy, architecture, etc. Radians shine when we're doing pure maths and some branches of physics, but they're a PITA to use in most practical circumstances. E.g., the tip of your little finger at arm's length is close enough to 1° or 0.017 radian. Who wants to measure angles in multiples of 0.017? https://www.timeanddate.com/astronomy/measuring-the-sky-by-hand.html Using radians, these heuristics stink. -- Steve

On Sun, Jun 10, 2018 at 10:48 PM, Steven D'Aprano <steve@pearwood.info> wrote:
hmm -- I'm no numerical analyst, but I could have sworn I learned (from Kahan himself) that the trig functions could (and were, at least in the HP calculators :-) ) be computed to one digit of accuracy. He even proved how many digits of pi you'd have to store to do that (though I can't say I understood the proof) -- I think you needed all those digits of pi because the trig functions are defined on the range 0 -- pi/2, and any larger value needs to be mapped to that domain -- if someone asks for sin(1e100), you need to know pretty exactly what x % pi/4 is. The problem with trig functions is that they suffer from "the table maker's dilemma" [...]
so -- if that's the case, I still think we know that while the last digit may not be the best rounded value to the real one, the second to last digit is correct. And if not, then there's nothing we could do other than implement the math lib :-) But even if they were, using isclose() is the wrong solution. Suppose
no, but you can say "y is as close to zero as I care about" -- we are already restricted to not knowing the distinction within less than an eps -- so making the "effective eps" a bit larger would result in more esthetically pleasing results. It's not just zero, the same applies for any value. That's just moving rounding errors from one input to a slightly different input.
I don't think that's what it would be -- rather, it would be returning a bit less precision in exchange for more esthetically pleasing results :-) Note that there is no way I would advocate using this for the stdlib trig functions -- only for a special-purpose library. I'm also suggesting that it would result in equally good results to what is being proposed: using integer degrees, or pi- or tau-based units. If you used integer degrees, then you'd have exactly, say, pi (180 degrees), but a precision of only pi/180 -- much less than the 15 digits or so you'd get if you rounded the regular floating point results. And if you used tau-based units, you'd be back to the same thing -- 0.5 tau could be exact, but what would you do for a bit bigger or smaller than that? use FP :-)
I suppose so, but is there a guarantee that the FP representation of π/2 is a tiny bit less than the exact value, rather than a tiny bit more? which would result in VERY different answers:

In [10]: math.tan(math.pi / 2.0)
Out[10]: 1.633123935319537e+16

In [11]: math.tan(math.pi / 2.0 + 2e-16)
Out[11]: -6218431163823738.0

(though equally "correct") Also -- 1.6e+16 is actually pretty darn small compared to the FP range. So a library that wants to produce "expected" results may want to do something with that -- something like:

In [119]: def pretty_tan(x):
     ...:     tol = 3e-16
     ...:     diff = x % (pi / 2)
     ...:     if abs(diff) < tol:
     ...:         return float("-Inf")
     ...:     elif (pi / 2) - diff < tol:
     ...:         return float("Inf")
     ...:     return math.tan(x)

In [120]: x = pi / 2 - 5e-16

In [121]: for i in range(10):
     ...:     val = x + i * 1e-16
     ...:     print val, pretty_tan(val)
     ...:
1.57079632679 1.9789379661e+15
1.57079632679 1.9789379661e+15
1.57079632679 inf
1.57079632679 inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -inf
1.57079632679 -2.61194216074e+15
1.57079632679 -2.61194216074e+15

You'd want to tweak that tolerance value to be as small as possible, and do something to make it more symmetric, but you get the idea. The goal is that if you have an input that is about as close as you can get to pi/2, you get inf or -inf as a result. This does mean you are tossing away a tiny bit of precision -- you could get a "correct" value for those values really close to pi/2, but it would be prettier... -CHB

On Mon, Jun 11, 2018 at 02:12:06PM -0700, Chris Barker wrote:
no, but you can say "y is as close to zero as I care about"
Of course you can. But we (the std lib) should not make that decision for everybody. For some people 0.001 is "close enough to zero". For others, 1e-16 is not. We're not in the position to decide for everyone. -- Steve
participants (17)

- Chris Angelico
- Chris Barker
- Chris Barker - NOAA Federal
- Clément Pit-Claudel
- Greg Ewing
- Jacco van Dorp
- Matt Arcidy
- Michael Selik
- Nathaniel Smith
- Richard Damon
- Robert Kern
- Robert Vanden Eynde
- Robert Vanden Eynde
- Ronald Oussoren
- Stephan Houben
- Steven D'Aprano
- Tim Peters