5 Nov 2008, 3:13 p.m.
Charles R Harris wrote:
Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like
logadd(a,b) = a + log(1 + exp(b - a))
where a > b?
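The trick above can be sketched in plain Python (the function name `logadd` and the argument-swapping convention are just this message's notation, not an existing NumPy API). Factoring out the larger argument means `exp()` only ever sees a non-positive value, so the result never overflows even when `exp(a)` itself would:

```python
import math

def logadd(a, b):
    """Return log(exp(a) + exp(b)) without overflowing.

    Uses the identity log(exp(a) + exp(b)) = a + log(1 + exp(b - a))
    with a >= b, so exp() is only evaluated at a non-positive argument.
    """
    if a < b:
        a, b = b, a  # ensure a >= b
    # math.log1p(x) computes log(1 + x) accurately for small x
    return a + math.log1p(math.exp(b - a))

# exp(1000) overflows a double, but logadd(1000, 1000) is fine:
# it returns 1000 + log(2).
```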
On 11/5/2008 1:48 AM David Cournapeau apparently wrote:
Yes, that's the idea. AFAIK, that's generally known as the logsumexp algorithm, at least in the machine learning community. I opened a task ticket on it, but I have not done any work on it:
Of possible relevance (BSD license): http://code.google.com/p/pyspkrec/source/browse/pyspkrec/gmm.py?r=109 (Search on logsumexp.)
Alan Isaac
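The logsumexp algorithm mentioned above generalizes the two-argument trick to a whole sequence: shift every term by the maximum before exponentiating, so no individual `exp()` can overflow. A minimal sketch (the name `logsumexp` matches common usage, e.g. the later `scipy.special.logsumexp`, but this is an illustrative implementation, not the ticket's code):

```python
import math

def logsumexp(xs):
    """Return log(sum(exp(x) for x in xs)) without overflowing.

    Shifts all terms by the maximum, so every exp() argument is <= 0:
    log(sum exp(x_i)) = m + log(sum exp(x_i - m)),  m = max(x_i).
    """
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Summing four probabilities of exp(1000) each: naive exp() overflows,
# but logsumexp returns 1000 + log(4).
```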