# e vs exp()?

Tim Peters tim.one at comcast.net
Sat Sep 21 01:25:59 EDT 2002

```
[Tim]
>> ... The implementation of a decent x**y function is one of the
>> most difficult tasks in a platform's math library....
>> I've written such a beast without benefit of extended hardware
>> precision; I wouldn't want to do it again <wink>.

[Terry Reedy]
> What was your basic approach?  Power series?  Rational function?
> Continued fraction? ???

exp(log(x)*y) -- really <wink>.  It was a combination of table-driven and
economized power series approaches, like the Tang algorithm I mentioned last
time, carefully faking just enough extra precision to guarantee < 1 ulp
worst-case error in the end.

> http://functions.wolfram.com/ElementaryFunctions/Power/
> gives 343 formulas (many only relevant to complex domain) but doesn't
> seem to indicate which Mathematica actually uses (trade secret I
> presume).

Those kinds of references won't do you any good at this level:
good-to-the-last-bit library functions are a whole different game.  Check
out Peter Tang's celebrated paper on the implementation of exp (sorry, it's
not free):

http://portal.acm.org/citation.cfm?doid=63522.214389

That's still pretty much the state of the art for native-precision
transcendentals, and, with enough pain (in x**y's case, transcendent pain),
they can all be approached in a similar way.

Dynamic-precision calculations are yet another entirely different kind of
game.  David Bailey's MPFUN is freely available, and so is Dave Gillespie's
Emacs calc mode; you can get as much precision out of those as you want.
Jurjen N.E. Bos's real.py for Python is a different game again: an
implementation of the constructive reals, where the result of an
arbitrarily complex chain of calculations (not just a single operation!)
can be obtained to any desired accuracy.

As you climb up this chain of wilder ambitions, runtime increases
accordingly, of course.  For best speed, exp(x) can be approximated by 1
<wink>.

```
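
The "table-driven plus short power series" idea Tim describes can be sketched in a few lines. This is only an illustrative toy, not Tang's actual algorithm: a real implementation uses carefully computed minimax polynomials, exact table entries with correction terms, and the extra-precision tricks Tim mentions to get below 1 ulp. The table size (32) and the degree-4 Taylor polynomial here are arbitrary choices for the sketch.

```python
import math

N = 32
LOG2_OVER_N = math.log(2) / N
# The "table-driven" part: precomputed values of 2**(j/N)
TABLE = [2.0 ** (j / N) for j in range(N)]

def exp_sketch(x):
    # Range reduction: write x = m*(ln 2)/N + r with |r| <= ln2/(2N)
    m = round(x / LOG2_OVER_N)
    r = x - m * LOG2_OVER_N
    # Split m so that 2**(m/N) = 2**k * 2**(j/N) with 0 <= j < N
    k, j = divmod(m, N)
    # The "economized power series" part: a short polynomial for exp(r)
    # on the tiny reduced interval (plain degree-4 Taylor here)
    p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0/6.0 + r * (1.0/24.0))))
    return math.ldexp(TABLE[j] * p, k)
```

Because |r| never exceeds ln(2)/64, even this crude polynomial lands within about 1e-12 relative error of math.exp; the hard part Tim alludes to is closing the gap from there to a guaranteed sub-ulp bound.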
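
The "exp(log(x)*y) -- really <wink>" quip also hints at why pow is so hard: the naive composition magnifies the rounding error in log(x) by a factor of y before exponentiating. A small demonstration (the particular x and y are arbitrary, and the exact ulp count is platform-dependent):

```python
import math

def naive_pow(x, y):
    # The naive composition: no extra precision anywhere
    return math.exp(y * math.log(x))

x, y = 1.1, 1000.0
good = x ** y            # the library's pow, typically near correctly rounded
naive = naive_pow(x, y)

# log(x) is off by up to half an ulp; multiplying by y turns that into
# an absolute error of ~y*eps in the exponent, i.e. a relative error of
# roughly y*eps in the result -- potentially many ulps, even though the
# two answers still agree to a dozen significant digits.
rel_err = abs(naive - good) / good
ulps_off = abs(naive - good) / math.ulp(good)
```

This is exactly the gap that the "faking just enough extra precision" work has to close: the result agrees to about 12 digits, which is respectable, but nowhere near the < 1 ulp worst case a good math library promises.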
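
For a taste of the dynamic-precision game, Python's standard decimal module (not one of the packages Tim names, but the same flavor of tool) lets you evaluate exp to any requested number of digits by just summing the Taylor series at a wider working precision:

```python
from decimal import Decimal, getcontext

def exp_decimal(x, digits):
    # Compute exp(x) to `digits` significant digits by brute-force
    # Taylor series, using 10 guard digits for the intermediate sums.
    # (For large |x| you'd want range reduction first; this sketch
    # assumes modest arguments.)
    getcontext().prec = digits + 10
    x = Decimal(x)
    tol = Decimal(10) ** -(digits + 5)
    term = total = Decimal(1)
    n = 0
    while abs(term) > tol:
        n += 1
        term = term * x / n
        total += term
    getcontext().prec = digits
    return +total  # unary plus rounds to the current precision
```

As Tim notes, the cost scales with ambition: asking for 1000 digits means 1000-digit arithmetic on every term, which is why the constructive-reals and arbitrary-precision packages are a different game from a fixed-precision libm.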