Floating-point implementations
Is anyone aware of any implementations that use other than 64-bit floating-point? I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
regards Steve
On Tue, Dec 9, 2008 at 5:15 PM, Steve Holden <steve@holdenweb.com> wrote:
Is anyone aware of any implementations that use other than 64-bit floating-point? I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
I don't know of any. There are certainly places in the codebase that assume 56 bits are enough. (I seem to recall it's something like 56 bits for IBM, 53 bits for IEEE 754, 48 for Cray, and 52 or 56 for VAX.)
Many systems have a "long double" type, which usually seems to be either 80-bit (with a 64-bit mantissa) or 128-bit. The latter is sometimes implemented as a pair of doubles, effectively giving a 106-bit mantissa, and sometimes as an IEEE extended precision type; I don't know how many bits the mantissa would have in that case, but surely not more than 117.
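For what it's worth, the width the local C ABI gives "long double" can be probed from Python with the standard ctypes module (a sketch; the numbers are entirely platform-dependent):

```python
# Probe the C "long double" the platform ABI provides. Platform-dependent:
# on x86-64 Linux/macOS it is typically the 80-bit extended type padded to
# 16 bytes; with MSVC on Windows, long double is an alias for the 64-bit double.
import ctypes

print("double:     ", ctypes.sizeof(ctypes.c_double), "bytes")
print("long double:", ctypes.sizeof(ctypes.c_longdouble), "bytes")
```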
I asked a related question a while ago:
http://mail.python.org/pipermail/python-dev/2008-February/076680.html
Mark
On Tue, Dec 9, 2008 at 5:24 PM, Mark Dickinson <dickinsm@gmail.com> wrote:
I don't know of any. There are certainly places in the codebase that assume 56 bits are enough. (I seem to recall it's something like 56 bits for IBM, 53 bits for IEEE 754, 48 for Cray, and 52 or 56 for VAX.)
Quick correction, after actually bothering to look things up rather than relying on my poor memory: VAX doubles have either *53* (not 52) or 56-bit mantissas. More precisely, the VAX G_floating format has a 53-bit mantissa (52 bits stored directly, one implicit 'hidden' bit), while the (now rare) D_floating format has a 56-bit mantissa (again, including the implicit 'hidden' bit).
Mark
On Tue, Dec 9, 2008 at 5:15 PM, Steve Holden <steve@holdenweb.com> wrote:
precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
I may have misinterpreted your question. Are you asking simply about what the hardware provides, or about what the C compiler and library support? Or something else entirely?
It looks like IEEE-conforming 128-bit floats would have a 113-bit mantissa (including the implicit leading '1' bit).
Mark
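The 113 falls straight out of the IEEE 754 binary128 bit layout, which a couple of lines of arithmetic confirm:

```python
# IEEE 754 binary128 field widths: 1 sign bit, 15 exponent bits,
# 112 explicitly stored fraction bits.
sign_bits, exponent_bits, stored_fraction_bits = 1, 15, 112
assert sign_bits + exponent_bits + stored_fraction_bits == 128

# The implicit leading '1' bit brings the effective mantissa to 113 bits.
mantissa_bits = stored_fraction_bits + 1
print(mantissa_bits)  # 113
```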
Mark Dickinson wrote:
On Tue, Dec 9, 2008 at 5:15 PM, Steve Holden <steve@holdenweb.com> wrote:
precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
I may have misinterpreted your question. Are you asking simply about what the hardware provides, or about what the C compiler and library support? Or something else entirely?
It looks like IEEE-conforming 128-bit floats would have a 113-bit mantissa (including the implicit leading '1' bit).
I was actually asking about Python implementations, and read your original answer as meaning "no, there aren't any". I had assumed, correctly or otherwise, that the C library would have to offer well-integrated support to enable its use in Python. In fact I had assumed it would need to be pretty much a drop-in replacement, but it sounds as though there are some hard-coded assumptions about float size that would not allow that.
regards Steve
On Tue, 09 Dec 2008 12:15:53 -0500, Steve Holden wrote:
Is anyone aware of any implementations that use other than 64-bit floating-point? I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
regards Steve
Why don't we create a DecimalFloat datatype which is a variable-width floating-point number? Decimal is a variable-precision fixed-point number, while the plain ol' float would be system-dependent floating point.
Lie Ryan wrote:
On Tue, 09 Dec 2008 12:15:53 -0500, Steve Holden wrote:
Is anyone aware of any implementations that use other than 64-bit floating-point? I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa. Do modern 64-bit systems implement anything wider than the normal double?
regards Steve
Why don't we create a DecimalFloat datatype which is a variable-width floating-point number? Decimal is a variable-precision fixed-point number, while the plain ol' float would be system-dependent floating point.
Because it's a large amount of work? For a limited return ... the implementation is bound to be hugely slow compared with hardware floating point, and as Martin already pointed out gmpy provides higher-precision arithmetic where required, and the Decimal module provides arbitrary-range fixed-point arithmetic.
regards Steve
On Tue, Dec 9, 2008 at 9:48 PM, Lie Ryan <lie.1296@gmail.com> wrote:
Why don't we create a DecimalFloat datatype which is a variable-width floating-point number? Decimal is a variable-precision fixed-point number, while the plain ol' float would be system-dependent floating point.
Decimal is *already* floating-point. Its handling of exponents and significant zeros means that it can do a pretty good job of imitating fixed-point as well, but it's still at root a floating-point type.
Mark
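Mark's point is easy to demonstrate: a Decimal context's precision counts significant digits rather than digits after the decimal point, which is exactly floating-point behaviour:

```python
from decimal import Decimal, getcontext

# Precision is a count of significant digits (floating-point behaviour),
# not a fixed number of places after the decimal point.
getcontext().prec = 6

print(Decimal(1) / Decimal(7))     # 0.142857  (6 significant digits)
print(Decimal(1000) / Decimal(7))  # 142.857   (still 6 significant digits)
```

If Decimal were truly fixed-point, the second result would carry the same number of fractional digits as the first; instead the decimal point floats while the significant-digit count stays put.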
Is anyone aware of any implementations that use other than 64-bit floating-point?
As I understand you are asking about Python implementations: sure, the gmpy package supports arbitrary-precision floating point.
I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa.
Nitpickingly: it's usual that the mantissa is 53-bit.
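On any IEEE 754 platform (which is virtually everywhere CPython runs today) this is visible directly via sys.float_info, available since Python 2.6:

```python
import sys

# mant_dig is the mantissa width of the C double behind Python's float;
# on IEEE 754 hardware this is 53 (52 stored bits plus the implicit bit).
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.dig)       # 15 decimal digits always representable
```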
Do modern 64-bit systems implement anything wider than the normal double?
As Mark said: sure. x86 systems have supported 80-bit "extended" precision for ages. Some architectures have architectural support for 128-bit floats (e.g. Itanium, SPARC v9); it's not clear to me whether they actually implement the long double operations in hardware, or whether they trap and get software-emulated.
Regards, Martin
participants (4)
- "Martin v. Löwis"
- Lie Ryan
- Mark Dickinson
- Steve Holden