Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
- expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs
- absdiff(a,b) = abs(a - b) -- Useful for forming norms
- absmax(a,b) = max(abs(a), abs(b))
- absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities?
I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think of an easy way to pass a variable p that is compatible with the way ufuncs work without going to three variables, something that might not work with the reduce functions. Along these lines I also think it is time to review generalized ufuncs to see what we can do with them. Thoughts?
Chuck
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
absdiff(a,b) = abs(a - b) -- Useful for forming norms
absmax(a,b) = max(abs(a), abs(b))
absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities?
These I find less exciting, since they can be written easily in terms of existing ufuncs. (The expadd can't without an if statement unless you want range errors.) There is some small gain in having fewer temporaries, but as it stands now each ufunc lives in a byzantine nest of code and is hard to scrutinize.
I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think of an easy way to pass a variable p that is compatible with the way ufuncs work without going to three variables, something that might not work with the reduce functions. Along these lines I also think it is time to review generalized ufuncs to see what we can do with them. Thoughts?
It's worth checking what reduce does on ternary ufuncs. It's not clear what it *should* do.
Anne
On Tue, Nov 4, 2008 at 10:37 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
Yes, but the common case seems to be log(sum_i(exp(a_i))), which would be inefficient if implemented with addexp.reduce.
absdiff(a,b) = abs(a - b) -- Useful for forming norms
absmax(a,b) = max(abs(a), abs(b))
absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities?
These I find less exciting, since they can be written easily in terms of existing ufuncs. (The expadd can't without an if statement unless you want range errors.) There is some small gain in having fewer temporaries, but as it stands now each ufunc lives in a byzantine nest of code and is hard to scrutinize.
The inner loops are very simple, take a look at the current umathmodule. It's all the cruft up top that needs to be clarified. But I was mostly thinking of temporaries and the speed up for small arrays of not having as much call overhead.
I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think of an easy way to pass a variable p that is compatible with the way ufuncs work without going to three variables, something that might not work with the reduce functions. Along these lines I also think it is time to review generalized ufuncs to see what we can do with them. Thoughts?
How about an abspower(a,p) = abs(a)**p ? That might be more useful.
It's worth checking what reduce does on ternary ufuncs. It's not clear what it *should* do.
I suppose a three variable recursion would be the most natural extension, but probably not that useful.
Chuck
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
+1
But shouldn't it be called 'logadd', for adding values which are stored as logs?
http://www.lri.fr/~pierres/donn%E9es/save/these/torch/docs/manual/logAdd.htm...
I would also really enjoy a logdot function, to be used when working with arrays whose elements are log values.
On Tue, Nov 4, 2008 at 11:05 PM, T J tjhnson@gmail.com wrote:
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
+1
But shouldn't it be called 'logadd', for adding values which are stored as logs?
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b ?
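The shifted formulation Chuck proposes can be sketched in a few lines of NumPy (an illustration only, not the committed ufunc; it uses log1p, as suggested later in the thread):

```python
import numpy as np

def logadd(a, b):
    """log(exp(a) + exp(b)) computed without overflow: shift by the
    larger argument so the remaining exponential stays in range."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    big = np.maximum(a, b)
    small = np.minimum(a, b)
    # big + log(1 + exp(small - big)); the exponent is always <= 0
    return big + np.log1p(np.exp(small - big))
```

Because the exponent passed to exp is never positive, the function behaves sensibly even for arguments far outside the range of ordinary floats, e.g. logadd(1000, 1000) returns 1000 + log(2) rather than overflowing.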
Chuck
On 05/11/2008, Charles R Harris charlesr.harris@gmail.com wrote:
On Tue, Nov 4, 2008 at 11:05 PM, T J tjhnson@gmail.com wrote:
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
+1
But shouldn't it be called 'logadd', for adding values which are stored as logs?
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b ?
That's the usual way to do it, yes. I'd use log1p(exp(b-a)) for a little extra accuracy, though it probably doesn't matter. And yes, using logadd.reduce() is not the most efficient way to get a logsum(); no reason it can't be a separate function. As T J says, a logdot() would come in handy too. A python implementation is a decent first pass, but logdot() in particular would benefit from a C implementation.
Anne
On Tue, Nov 4, 2008 at 11:41 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
On 05/11/2008, Charles R Harris charlesr.harris@gmail.com wrote:
On Tue, Nov 4, 2008 at 11:05 PM, T J tjhnson@gmail.com wrote:
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald aarchiba@physics.mcgill.ca wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
+1
But shouldn't it be called 'logadd', for adding values which are stored as logs?
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b ?
That's the usual way to do it, yes. I'd use log1p(exp(b-a)) for a little extra accuracy, though it probably doesn't matter. And yes, using logadd.reduce() is not the most efficient way to get a logsum();
But probably the best bet here. So, should I add this function? T J's link also mentioned a logsub, which might be more problematic because taking logs of negatives isn't going to work... Although that shouldn't happen if the probability logic is right and roundoff error is small.
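For reference, the logsub mentioned in the link could use the same shift trick; a hedged sketch (the name logsubexp and the behavior when a < b are my assumptions, not anything committed):

```python
import numpy as np

def logsubexp(a, b):
    """log(exp(a) - exp(b)) for a >= b, computed as
    a + log1p(-exp(b - a)).  For a < b the true difference is
    negative, so the log is undefined and nan comes out, which is
    the failure mode Chuck anticipates above."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):
        return a + np.log1p(-np.exp(b - a))
```

When a == b the result is -inf, i.e. log(0), which is at least consistent.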
no reason it can't be a separate function. As T J says, a logdot() would come in handy too. A python implementation is a decent first pass, but logdot() in particular would benefit from a C implementation.
Are these likely to be big arrays? It shouldn't be too hard to make a logdot once a logadd function is out there.
Chuck
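A Python-level logdot along the lines T J and Anne suggest might look like the following (a sketch, not the eventual implementation; it shifts each row of the first factor and each column of the second by their maxima before the ordinary dot):

```python
import numpy as np

def logdot(loga, logb):
    """log of the matrix product exp(loga) @ exp(logb), computed
    without leaving log space.  Row maxima of loga and column
    maxima of logb are factored out so the exponentials stay small."""
    amax = loga.max(axis=1, keepdims=True)   # shape (m, 1)
    bmax = logb.max(axis=0, keepdims=True)   # shape (1, n)
    # exponents here are all <= 0, so no overflow
    c = np.dot(np.exp(loga - amax), np.exp(logb - bmax))
    return np.log(c) + amax + bmax
```

This is exact up to roundoff: each entry is log(sum_k exp(loga[i,k] + logb[k,j])) with the factor exp(amax[i] + bmax[j]) pulled outside the sum.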
Charles R Harris wrote:
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b ?
Yes, that's the idea. AFAIK, that's generally known as the logsumexp algorithm, at least in the machine learning community. I opened a task ticket on it, but I have not done any work on it:
http://projects.scipy.org/scipy/numpy/ticket/765
cheers,
David
Charles R Harris wrote:
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b ?
On 11/5/2008 1:48 AM David Cournapeau apparently wrote:
Yes, that's the idea. AFAIK, that's generally known as the logsumexp algorithm, at least in the machine learning community. I opened a task ticket on it, but I have not done any work on it:
Of possible relevance (BSD license): http://code.google.com/p/pyspkrec/source/browse/pyspkrec/gmm.py?r=109 (Search on logsumexp.)
Alan Isaac
Anne Archibald wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
I could probably use this also. What about log (exp(a)+exp(b)+exp(c)...)?
On Wed, Nov 5, 2008 at 12:01 PM, Neal Becker ndbecker2@gmail.com wrote:
Anne Archibald wrote:
2008/11/5 Charles R Harris charlesr.harris@gmail.com:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
I could probably use this also. What about log (exp(a)+exp(b)+exp(c)...)?
I added the ufunc logsumexp. The extended add should be done with recursive adds to preserve precision, so:
In [3]: logsumexp.reduce(ones(10))
Out[3]: 3.3025850929940459
In [5]: logsumexp.reduce(eye(3), axis=0)
Out[5]: array([ 1.55144471,  1.55144471,  1.55144471])
It looks like this is a good way to compute L_p norms for large p, i.e., exp(logsumexp.reduce(log(abs(x))*p)/p). Adding a logabs ufunc would be helpful here.
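The L_p recipe above can be written out as a short sketch (using a plain-Python logsumexp rather than the proposed ufunc; the function name is illustrative):

```python
import numpy as np

def lp_norm_via_logs(x, p):
    """L_p norm computed as exp(logsumexp(p * log|x|) / p).

    For large p, |x_i|**p overflows long before the norm itself
    does; working in log space sidesteps that."""
    logs = p * np.log(np.abs(x))
    m = logs.max()
    # logsumexp with the max factored out, then undo the power
    return np.exp((m + np.log(np.sum(np.exp(logs - m)))) / p)
```

As p grows the result approaches max(abs(x)), the L_infinity norm, as expected.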
Hmm.... I wonder if the base function should be renamed logaddexp, then logsumexp would apply to the reduce method. Thoughts?
Chuck
On Wed, Nov 5, 2008 at 12:00 PM, Charles R Harris charlesr.harris@gmail.com wrote:
Hmm.... I wonder if the base function should be renamed logaddexp, then logsumexp would apply to the reduce method. Thoughts?
As David mentioned, logsumexp is probably the traditional name, but as the earlier link shows, it also goes by logadd. Given the distinction between add (a ufunc) and sum (something done over an axis) within numpy, it seems that logadd or logaddexp is probably a more fitting name. So long as it is documented, I doubt it matters much though...
2008/11/5 T J tjhnson@gmail.com:
numpy, it seems that logadd or logaddexp is probably a more fitting name. So long as it is documented, I doubt it matters much though...
Please don't call it logadd. `logaddexp` or `logsumexp` are both fine, but the `exp` part is essential in emphasising that you are not calculating a+b using logs.
Cheers Stéfan
On Wed, Nov 5, 2008 at 2:41 PM, Stéfan van der Walt stefan@sun.ac.za wrote:
2008/11/5 T J tjhnson@gmail.com:
numpy, it seems that logadd or logaddexp is probably a more fitting name. So long as it is documented, I doubt it matters much though...
Please don't call it logadd. `logaddexp` or `logsumexp` are both fine, but the `exp` part is essential in emphasising that you are not calculating a+b using logs.
I'm inclined to go with logaddexp and add logsumexp as an alias for logaddexp.reduce. But I'll wait until tomorrow to see if there are more comments.
Chuck
On Wed, Nov 5, 2008 at 3:09 PM, Charles R Harris charlesr.harris@gmail.com wrote:
On Wed, Nov 5, 2008 at 2:41 PM, Stéfan van der Walt stefan@sun.ac.za wrote:
2008/11/5 T J tjhnson@gmail.com:
numpy, it seems that logadd or logaddexp is probably a more fitting name. So long as it is documented, I doubt it matters much though...
Please don't call it logadd. `logaddexp` or `logsumexp` are both fine, but the `exp` part is essential in emphasising that you are not calculating a+b using logs.
I'm inclined to go with logaddexp and add logsumexp as an alias for logaddexp.reduce. But I'll wait until tomorrow to see if there are more comments.
Some timings of the ufunc vs an implementation with currently available functions. I've done the ufunc as logaddexp and defined the corresponding implementations in terms of current functions as logadd and logsum just for quick convenience. Results:
In [15]: def logsum(x) :
   ....:     off = x.max(axis=0)
   ....:     return off + log(sum(exp(x - off), axis=0))
   ....:
In [57]: def logadd(x,y) :
   ....:     max1 = maximum(x,y)
   ....:     min1 = minimum(x,y)
   ....:     return max1 + log1p(exp(min1 - max1))
   ....:
In [61]: a = np.random.random(size=(1000,1000))
In [62]: b = np.random.random(size=(1000,1000))
In [63]: time x = logadd(a,b)
CPU times: user 0.15 s, sys: 0.02 s, total: 0.17 s
Wall time: 0.17 s

In [65]: time x = logaddexp(a,b)
CPU times: user 0.12 s, sys: 0.00 s, total: 0.13 s
Wall time: 0.13 s

In [67]: time x = logsum(a)
CPU times: user 0.10 s, sys: 0.01 s, total: 0.11 s
Wall time: 0.11 s

In [69]: time x = logaddexp.reduce(a, axis=0)
CPU times: user 0.14 s, sys: 0.00 s, total: 0.14 s
Wall time: 0.14 s
It looks like a ufunc implementation is just a bit faster for adding two arrays, but for summing along an axis logsum is a bit faster. This isn't unexpected because repeated calls to logaddexp aren't the most efficient way to sum. For smaller arrays, say 10x10, the ufunc wins in both cases by significant margins (like 2x) because of function call overhead. What sort of numbers do folks typically use?
Chuck
On Wed, Nov 5, 2008 at 2:09 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I'm inclined to go with logaddexp and add logsumexp as an alias for logaddexp.reduce. But I'll wait until tomorrow to see if there are more comments.
When working in other bases, it seems like it would be good to avoid having to convert to base e and then back to base 2 with each function call (for example). Is there any desire to add similar functions for other standard bases?
logaddexp2 logaddexp10 logdotexp2 logdotexp10
On Thu, Nov 6, 2008 at 1:23 PM, T J tjhnson@gmail.com wrote:
On Wed, Nov 5, 2008 at 2:09 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I'm inclined to go with logaddexp and add logsumexp as an alias for logaddexp.reduce. But I'll wait until tomorrow to see if there are more comments.
When working in other bases, it seems like it would be good to avoid having to convert to base e and then back to base 2 with each function call (for example). Is there any desire to add similar functions for other standard bases?
I suppose that depends on who you ask ;) What is your particular interest in these other bases and why would they be better than working in base e and converting at the end? The only one I could see really having a fast implementation is log2. In fact, I think the standard log starts in log2 by pulling in the floating point exponent and then using some sort of rational approximation of log2 over the range [1,2) on the mantissa.
Chuck
On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris charlesr.harris@gmail.com wrote:
What is your particular interest in these other bases and why would they be better than working in base e and converting at the end?
The interest is in information theory, where quantities are (standardly) represented in bits. So log2 quantities are often stored by the user and then passed into functions or classes. The main reason I'd like to shy away from conversions is that I also make use of generators/iterators and having next() convert to bits before each yield is not ideal (as these things are often slow enough and will be called many times).
The only one I could see really having a fast implementation is log2.
No disagreement here :)
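The base-2 version requested here follows from the same shift trick, since log2(2**a + 2**b) = max(a,b) + log1p(2**(min-max))/log(2). A standalone sketch of such a logaddexp2 (the name mirrors the proposed logaddexp; this is an illustration, not the committed implementation):

```python
import numpy as np

def logaddexp2(a, b):
    """log2(2**a + 2**b) without overflow: shift by the larger
    argument, then convert log1p back to base 2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    big = np.maximum(a, b)
    small = np.minimum(a, b)
    # exp2 of a non-positive exponent, so no overflow
    return big + np.log1p(np.exp2(small - big)) / np.log(2.0)
```

Adding two equal bit-valued quantities bumps the result by exactly one bit: logaddexp2(3, 3) gives 4, since 2**3 + 2**3 = 2**4.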
On Thu, Nov 6, 2008 at 2:17 PM, T J tjhnson@gmail.com wrote:
The interest is in information theory, where quantities are (standardly) represented in bits.
I think this is also true in the machine learning community.
On Thu, Nov 6, 2008 at 3:17 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris charlesr.harris@gmail.com wrote:
What is your particular interest in these other bases and why would they be better than working in base e and converting at the end?
The interest is in information theory, where quantities are (standardly) represented in bits. So log2 quantities are often stored by the user and then passed into functions or classes. The main reason I'd like to shy away from conversions is that I also make use of generators/iterators and having next() convert to bits before each yield is not ideal (as these things are often slow enough and will be called many times).
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
Chuck
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed.
On Thu, Nov 6, 2008 at 3:01 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed.
Since no one commented further on this, can we go ahead and add logaddexp2? Once in svn, we can always deal with 'location' later---I just don't want it to get forgotten.
On Sun, Nov 9, 2008 at 11:29 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 3:01 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed.
Since no one commented further on this, can we go ahead and add logaddexp2? Once in svn, we can always deal with 'location' later---I just don't want it to get forgotten.
The functions exp2 and log2 are part of the C99 standard, so I'll add those two along with log21p, exp21m, and logaddexp2. The names log21p and exp21m look a bit creepy so I'm open to suggestions.
Chuck
Charles R Harris wrote:
On Sun, Nov 9, 2008 at 11:29 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 3:01 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed.
Since no one commented further on this, can we go ahead and add logaddexp2? Once in svn, we can always deal with 'location' later---I just don't want it to get forgotten.
The functions exp2 and log2 are part of the C99 standard, so I'll add those two along with log21p, exp21m, and logaddexp2. The names log21p and exp21m look a bit creepy so I'm open to suggestions.
I think the C99 standard is a good place to draw the line.
We can put other ufuncs in scipy.special
-Travis
On Mon, Nov 10, 2008 at 1:17 PM, Travis E. Oliphant oliphant@enthought.com wrote:
Charles R Harris wrote:
On Sun, Nov 9, 2008 at 11:29 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 3:01 PM, T J tjhnson@gmail.com wrote:
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in.
I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed.
Since no one commented further on this, can we go ahead and add logaddexp2? Once in svn, we can always deal with 'location' later---I just don't want it to get forgotten.
The functions exp2 and log2 are part of the C99 standard, so I'll add those two along with log21p, exp21m, and logaddexp2. The names log21p and exp21m look a bit creepy so I'm open to suggestions.
I think the C99 standard is a good place to draw the line.
We can put other ufuncs in scipy.special
I added log2 and exp2. I still need to do the complex versions. I think logaddexp2 should go in also to complement these. Note that MPL also defines log2 and their version has slightly different properties, i.e., it returns integer values for integer powers of two.
Chuck
On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I added log2 and exp2. I still need to do the complex versions. I think logaddexp2 should go in also to complement these.
Same here, especially since logaddexp is present. Or was the idea that both logaddexp and logaddexp2 should be moved to scipy.special?
Note that MPL also defines log2 and their version has slightly different properties, i.e., it returns integer values for integer powers of two.
I'm just curious now. Can someone comment on the difference in the implementation just committed versus that in cephes?
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/special/cephes/exp...
The difference won't matter to me as far as usage goes, but I was curious.
On Mon, Nov 10, 2008 at 5:15 PM, T J tjhnson@gmail.com wrote:
On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris charlesr.harris@gmail.com wrote:
I added log2 and exp2. I still need to do the complex versions. I think logaddexp2 should go in also to complement these.
Same here, especially since logaddexp is present. Or was the idea that both logaddexp and logaddexp2 should be moved to scipy.special?
Note that MPL also defines log2 and their version has slightly different properties, i.e., it returns integer values for integer powers of two.
I'm just curious now. Can someone comment on the difference in the implementation just committed versus that in cephes?
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/special/cephes/exp... The difference won't matter to me as far as usage goes, but I was curious.
The version committed uses the distro exp2 if it is available, otherwise it uses exp(log(2)*x). The committed version is also defined for floats and long doubles, while the cephes version is double only. That said, the cephes version uses a rational approximation and ldexp, so is probably faster than exp(log(2)*x). The rational approximation is available for the other precisions (Nash?), so we could use that if it was desirable. I think we could also do better for log2 using frexp if needed. Probably the same with logaddexp2. But that is for later polishing.
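The two fallbacks mentioned here can be sketched in a few lines (illustrative only; the function names are mine and the committed C versions differ in detail):

```python
import math

def exp2_fallback(x):
    """exp2 for platforms without C99 exp2: exp(log(2) * x)."""
    return math.exp(math.log(2.0) * x)

def log2_via_frexp(x):
    """log2 using frexp to peel off the binary exponent:
    x = m * 2**e with 0.5 <= m < 1, so log2(x) = e + log(m)/log(2).
    Only the mantissa's log needs approximating, over a fixed range."""
    m, e = math.frexp(x)
    return e + math.log(m) / math.log(2.0)
```

The frexp route is the same idea the message attributes to the standard log implementation: reduce to a small fixed interval first, then approximate.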
Chuck