Reading a non-standard floating point format

Tim Peters tim_one at
Sun Apr 27 01:41:43 CEST 2003

> ...
> The closest thing I have to a definition from the AlphaBasic programming
> manual: The reason for this is that floating point numbers occupy six
> bytes of storage. Of the 48 bits in use for each 6-byte variable, the
> high order bit is the sign of the mantissa. The next 8 bits represent
> the signed exponent in excess-128 notation, giving a range of
> approximately 2.9*10^-39 through 1.7*10^38. The remaining 39 bits
> contain the mantissa, which is normalized with an implied high-order bit
> of one. This gives an effective 40-bit mantissa which results in an
> accuracy of 11 significant digits.
> I had a go this afternoon and managed to read the sign and something
> approximating the exponent, though I will have to test the edge
> conditions carefully :)

That's helpful, but not enough to specify all the details you need to know.
Things it leaves unanswered:

+ Is an exponent field of all zeroes special?  Note that going strictly
  on what the quote said, the float 0.0 isn't representable!  (There's
  always an implied 1 bit in the mantissa, according to the quote.)

+ Is an exponent field of all ones special?

+ What's the base to which the exponent is to be raised?  2 and 16
  have both been used, even in hardware.

+ Where is the radix point in the mantissa?  Choices include "at the
  right end", between the implied 1 bit and the explicit bits (this
  is the most common choice when a 1 bit is implied), or to the left
  of the implied 1 bit.

+ Is the storage format big-endian or little-endian?

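With those questions in mind, here's one way the decoding could go -- a sketch only.  It assumes big-endian byte order, a base-2 exponent, the radix point to the left of the implied 1 bit, and a biased exponent field of 0 reserved for 0.0; every one of those is a guess until example byte strings settle it:

```python
def decode_alphabasic(raw):
    """Decode a 6-byte AlphaBasic float under assumed conventions:
    big-endian bytes, base-2 exponent with bias 128, mantissa in
    [0.5, 1.0), and biased exponent 0 meaning exactly 0.0."""
    if len(raw) != 6:
        raise ValueError("expected exactly 6 bytes")
    bits = int.from_bytes(raw, "big")      # all 48 bits as one integer
    sign = -1.0 if bits >> 47 else 1.0     # high-order bit: mantissa sign
    biased_exp = (bits >> 39) & 0xFF       # next 8 bits: excess-128 exponent
    frac = bits & ((1 << 39) - 1)          # remaining 39 explicit mantissa bits
    if biased_exp == 0:
        return 0.0                         # assumed special case for zero
    # Implied 1 bit ahead of the 39 explicit bits gives a 40-bit mantissa,
    # scaled here to lie in [0.5, 1.0).
    mantissa = (frac | (1 << 39)) / (1 << 40)
    return sign * mantissa * 2.0 ** (biased_exp - 128)
```

Under those assumptions, 1.0 comes out as the byte string 40 80 00 00 00 00 (sign 0, biased exponent 129, explicit mantissa bits all 0).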
Note that 2.0**-128 ~= 2.9e-39 and 2.0**127 ~= 1.7e38.  Those are the bounds
in the quote.  From that we can deduce that the exponent base is 2.

However, if the exponent bias is really 128 (as the quote said), this is
pretty baffling, because an unbiased exponent of -128 is represented by a
biased exponent of -128 + 128 (the excess) == 0, and a biased exponent of 0
is almost always special-cased in schemes with an implied mantissa bit,
reserved to represent 0.0, and possibly also subnormals.  If this scheme
represented subnormals too, the lower bound would be smaller than 2.9e-39.
OTOH, if this scheme uses a biased exponent field of 0 to represent 0 and
the radix point is viewed as "between", the lower bound would be larger than
2.9e-39.

It's possible that the mantissa radix point is to be viewed as being to the
left of the implied 1 bit, and then we could get a lower bound of 2.9e-39
via a mantissa of all zeroes and a biased exponent of 1.  (Then the value is
0.5 * 2**(1-128) == 2**-128.)
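The arithmetic behind that deduction is quick to check:

```python
# The quoted bounds line up with powers of 2, which is what pins the
# exponent base down to 2 rather than 16.
lower = 2.0 ** -128   # smallest value under the "radix point to the left" reading
upper = 2.0 ** 127    # largest scale an excess-128 exponent field can express

print(f"{lower:.3e}")   # 2.939e-39, matching the quoted 2.9*10^-39
print(f"{upper:.3e}")   # 1.701e+38, matching the quoted 1.7*10^38
```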

The good news <wink> is that these kinds of docs are often wrong, and
usually in the lower and/or upper bounds they claim.

There's no way to guess without examples, though.  If you can, show us the
48-bit byte strings for these numbers:


That should be enough to fill in the missing pieces.
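If the values are sitting in a file of packed 6-byte records (a hypothetical layout -- the real storage may differ), something like this would produce the byte strings to post:

```python
def hexdump_records(data, record_size=6):
    """Split raw bytes into fixed-size records and render each as hex,
    so the bit patterns of known values can be compared by eye."""
    return [data[i:i + record_size].hex(" ")
            for i in range(0, len(data), record_size)]

# Made-up sample bytes; the real ones would come from the AlphaBasic data file.
sample = b"\x40\x80\x00\x00\x00\x00" + b"\x00" * 6
for line in hexdump_records(sample):
    print(line)
```

(The `sep` argument to bytes.hex needs Python 3.8 or later.)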
