[Python-Dev] Caching float(0.0)

Delaney, Timothy (Tim) tdelaney at avaya.com
Tue Oct 3 01:47:03 CEST 2006


skip at pobox.com wrote:

>     Steve> By these statistics I think the answer to the original
>     Steve> question is clearly "no" in the general case.
> 
> As someone else (Guido?) pointed out, the literal case isn't all that
> interesting.  I modified floatobject.c to track a few interesting
> floating point values:
> 
>     static unsigned int nfloats[5] = {
>             0, /* -1.0 */
>             0, /*  0.0 */
>             0, /* +1.0 */
>             0, /* everything else */
>             0, /* whole numbers from -10.0 ... 10.0 */
>     };
> 
>     PyObject *
>     PyFloat_FromDouble(double fval)
>     {
>             register PyFloatObject *op;
>             if (free_list == NULL) {
>                     if ((free_list = fill_free_list()) == NULL)
>                             return NULL;
>             }
> 
>             if (fval == 0.0) nfloats[1]++;
>             else if (fval == 1.0) nfloats[2]++;
>             else if (fval == -1.0) nfloats[0]++;
>             else nfloats[3]++;
> 
>             if (fval >= -10.0 && fval <= 10.0 && (int)fval == fval) {
>                     nfloats[4]++;
>             }

This doesn't actually give us a very useful indication of the potential
memory savings. What I think would be more useful is tracking the
maximum simultaneous count of each value, i.e. what the maximum
refcount would have been if the instances were shared.
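
Something along these lines (untested, and the helper names are mine,
not anything in floatobject.c) is what I have in mind: increment a
per-bucket live count where Skip increments nfloats[], decrement it in
float_dealloc, and record the high-water mark. The bucket layout
matches his nfloats[] array above, including the overlapping
whole-number bucket:

    static unsigned int live[5];    /* current live counts per bucket */
    static unsigned int peak[5];    /* maximum simultaneous counts    */

    static int
    float_bucket(double fval)
    {
            if (fval == 0.0)  return 1;
            if (fval == 1.0)  return 2;
            if (fval == -1.0) return 0;
            return 3;
    }

    static void
    bump(int i)
    {
            if (++live[i] > peak[i])
                    peak[i] = live[i];
    }

    /* call from PyFloat_FromDouble, once per float handed out */
    static void
    count_alloc(double fval)
    {
            bump(float_bucket(fval));
            if (fval >= -10.0 && fval <= 10.0 && (int)fval == fval)
                    bump(4);
    }

    /* call from float_dealloc, before the object is freed */
    static void
    count_dealloc(double fval)
    {
            live[float_bucket(fval)]--;
            if (fval >= -10.0 && fval <= 10.0 && (int)fval == fval)
                    live[4]--;
    }

At exit, peak[] puts an upper bound on how many allocations of each
value sharing could actually have avoided at any one time, which is
the number that matters for memory savings.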

Tim Delaney

