max / min / smallest float value on Python 2.5
steve at REMOVE-THIS-cybersource.com.au
Sun Feb 7 05:31:36 CET 2010
On Sun, 07 Feb 2010 03:02:05 +0000, duncan smith wrote:
> The precise issue is that I'm supplying a default value of
> 2.2250738585072014e-308 for a parameter (finishing temperature for a
> simulated annealing algorithm) in an application. I develop on
> Ubuntu64, but (I am told) it's too small a value when run on a Win32
> server. I assume it's being interpreted as zero and raising an
> exception. Thanks.
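For what it's worth, that default is exactly the smallest positive *normalised* IEEE-754 double. On Python 2.6+ you can read it straight from sys.float_info; on 2.5 you can probe the floor portably by halving until you underflow (this sketch finds the smallest subnormal, which is even smaller than the normalised minimum):

```python
import sys

# Python 2.6+ exposes the smallest positive normalised double directly;
# on IEEE-754 platforms this is 2.2250738585072014e-308.
print(sys.float_info.min)

# Portable probe (works on 2.5 too): halve until the value underflows
# to zero. The last nonzero value is the smallest subnormal double,
# typically 5e-324.
tiny = 1.0
while tiny / 2.0 > 0.0:
    tiny = tiny / 2.0
print(tiny)
```

If the Win32 build (or whatever marshals the parameter there) only handles normalised doubles, anything below sys.float_info.min can collapse to zero, which would explain the exception you're seeing.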
I'm trying to think of what sort of experiment would be able to measure
temperatures accurate to less than 3e-308 Kelvin, and my brain boiled.
Surely 1e-100 would be close enough to zero as to make no practical
difference? Or even 1e-30? Whatever you're simulating surely isn't going
to require 300+ decimal places of accuracy.
I must admit I'm not really familiar with simulated annealing, so I could
be completely out of line, but my copy of "Numerical Recipes ..." by
Press et al has an example, and they take the temperature down to about
1e-6 before halting. Even a billion times lower than that is only 1e-15.
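To put some rough numbers on it (hypothetical geometric cooling with rate 0.95 per step, which is not from your code, just an illustration): stopping at 1e-6 takes a few hundred steps, while chasing a finishing temperature near 1e-308 costs more than ten thousand extra iterations that accomplish nothing physically meaningful:

```python
def steps_to_reach(t_start, t_stop, alpha=0.95):
    # Count cooling steps for a geometric schedule T <- alpha * T
    # until the temperature drops to or below t_stop.
    t, steps = t_start, 0
    while t > t_stop:
        t *= alpha
        steps += 1
    return steps

print(steps_to_reach(1.0, 1e-6))    # a few hundred steps
print(steps_to_reach(1.0, 1e-308))  # well over ten thousand steps
```

So a default like 1e-6 or 1e-12 would be both portable and, for any plausible annealing run, indistinguishable in effect from 2.2250738585072014e-308.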