Support for hexadecimal floating-point literals (like 0xC.68p+2) is included in the just-released C++17 standard. It seems this is becoming mainstream.
In Python, float.hex() returns a hexadecimal string representation. Is it time to add broader support for hexadecimal floating-point literals? Accept them in the float constructor and in the Python parser? And maybe add support for hexadecimal formatting ('%x' and '{:x}')?
Dear all
This is my very first email to python-ideas, and I strongly support this idea. float.hex() does the job for float-to-hexadecimal conversion, and float.fromhex() does the opposite. But full support for hexadecimal floating-point literals would be great (it bypasses the decimal-to-floating-point conversion), as explained here: http://www.exploringbinary.com/hexadecimal-floating-point-constants/
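For readers less familiar with these two methods, the existing round trip is already exact, since no decimal rounding is involved at any point. A quick sketch:

```python
# float.hex() and float.fromhex() round-trip exactly: the hex string
# encodes the stored binary value, with no decimal rounding involved.
x = 0.1
s = x.hex()
print(s)  # '0x1.999999999999ap-4'
assert float.fromhex(s) == x
```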
Support for hexadecimal formatting was introduced in C99 with the '%a' conversion specifier for string formatting (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf pages 57-58 for literals, or http://en.cppreference.com/w/cpp/language/floating_literal), and it would be great if Python could support it too.
Thanks
Thibault
Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Dr Thibault HILAIRE
Université Pierre et Marie Curie (Associate Professor) Computing Science Lab (LIP6) Engineering school Polytech Paris UPMC
4 place Jussieu 75005 PARIS, France
tel: +33 (0)1.44.27.87.73 email: thibault.hilaire@lip6.fr web: http://www.docmatic.fr
2017-09-07 23:57 GMT-07:00 Serhiy Storchaka storchaka@gmail.com:
Support for hexadecimal floating-point literals (like 0xC.68p+2) is included in the just-released C++17 standard. It seems this is becoming mainstream.
Floating-point literals using base 2 (or base 2^n, like hexadecimal with 2^4) are the only way to get exact values in a portable way. So yeah, we need it. We already have float.hex() since Python 2.6.
In Python, float.hex() returns a hexadecimal string representation. Is it time to add broader support for hexadecimal floating-point literals? Accept them in the float constructor and in the Python parser? And maybe add support for hexadecimal formatting ('%x' and '{:x}')?
I dislike "%x" % float, since "%x" is a very old format from C printf and I expect it to only work for integers. For example, bytes.hex() exists (since Python 3.5) but b'%x' % b'hello' doesn't work.
Since format() is a "new" way to format strings, and each type is free to implement its own formatters, I kind of like the idea of supporting float.hex() here.
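For illustration, today's behavior: '{:x}' rejects floats outright, while float.hex() is the existing spelling of the hexadecimal form:

```python
# format() currently rejects the 'x' code for floats...
try:
    "{:x}".format(1.5)
except ValueError as e:
    print(e)

# ...while float.hex() already produces the hexadecimal form.
print((1.5).hex())  # '0x1.8000000000000p+0'
```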
Do we need a short PEP, since it changes the Python grammar? It may be nice to describe the exact grammar for float literals.
Victor
On Fri, Sep 8, 2017 at 12:05 PM, Victor Stinner victor.stinner@gmail.com wrote:
Do we need a short PEP, since it changes the Python grammar? It may be nice to describe the exact grammar for float literals.
Yes, this needs a PEP.
-- --Guido van Rossum (python.org/~guido)
Instead of modifying the Python grammar, the alternative is to enhance float(str) to support it:
k = float("0x1.2492492492492p-3") # 1/7
Victor
On 2017-09-12, Victor Stinner wrote:
Instead of modifying the Python grammar, the alternative is to enhance float(str) to support it:
k = float("0x1.2492492492492p-3") # 1/7
Making it a different function from float() would avoid backwards compatibility issues, i.e. float() suddenly no longer raising errors on some inputs.
E.g.
from math import hexfloat
k = hexfloat("0x1.2492492492492p-3")
I still think a literal syntax has merits. The above cannot be optimized by the compiler as it doesn't know what hexfloat() refers to. That in turn destroys constant folding peephole stuff that uses the literal.
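Neil's constant-folding point can be checked with compile(): an expression built from literals is folded into the code object's constants, while a call through a name like hexfloat(...) (or float.fromhex(...)) cannot be, because the compiler can't know what the name will refer to at runtime. A sketch:

```python
# 1/7 is built from literals, so the compiler folds it to a stored constant.
folded = compile("k = 1/7", "<demo>", "exec")
print(1/7 in folded.co_consts)    # True

# A call can't be folded: the compiler doesn't know what the name refers to,
# so only the argument string ends up in the constants.
unfolded = compile("k = float.fromhex('0x1.2492492492492p-3')", "<demo>", "exec")
print(1/7 in unfolded.co_consts)  # False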
On Mon, Sep 11, 2017 at 06:26:16PM -0600, Neil Schemenauer wrote:
On 2017-09-12, Victor Stinner wrote:
Instead of modifying the Python grammar, the alternative is to enhance float(str) to support it:
k = float("0x1.2492492492492p-3") # 1/7
Making it a different function from float() would avoid backwards compatibility issues, i.e. float() suddenly no longer raising errors on some inputs.
I don't think many people will care about backwards compatibility of errors. Intentionally calling float() in order to get an exception is not very common (apart from test suites). It's easier to use raise if you want a ValueError.
The only counter-example I can think of is beginner programmers who write something like:
num = float(input("Enter a number:"))
and are surprised when the "invalid" response "0x1.Fp2" is accepted. But then they've already got the same so-called problem with int accepting "invalid" strings like "0xDEADBEEF". So I stress that this is a problem in theory, not in practice.
E.g.
from math import hexfloat
k = hexfloat("0x1.2492492492492p-3")
I don't think that's necessary. float() is sufficient.
I still think a literal syntax has merits. The above cannot be optimized by the compiler as it doesn't know what hexfloat() refers to. That in turn destroys constant folding peephole stuff that uses the literal.
Indeed. If there are use-cases for hexadecimal floats, then we should support both a literal 0x1.fp2 form and the float constructor.
-- Steve
On Tue, Sep 12, 2017 at 9:20 PM, Steven D'Aprano steve@pearwood.info wrote:
On Mon, Sep 11, 2017 at 06:26:16PM -0600, Neil Schemenauer wrote:
On 2017-09-12, Victor Stinner wrote:
Instead of modifying the Python grammar, the alternative is to enhance float(str) to support it:
k = float("0x1.2492492492492p-3") # 1/7
Making it a different function from float() would avoid backwards compatibility issues, i.e. float() suddenly no longer raising errors on some inputs.
I don't think many people will care about backwards compatibility of errors. Intentionally calling float() in order to get an exception is not very common (apart from test suites). It's easier to use raise if you want a ValueError.
The only counter-example I can think of is beginner programmers who write something like:
num = float(input("Enter a number:"))
and are surprised when the "invalid" response "0x1.Fp2" is accepted. But then they've already got the same so-called problem with int accepting "invalid" strings like "0xDEADBEEF". So I stress that this is a problem in theory, not in practice.
Your specific example doesn't work as int() won't accept that by default - you have to explicitly say "base=0" to make that acceptable. But we have other examples where what used to be an error is now acceptable:
Python 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> int("1_234_567")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '1_234_567'

Python 3.7.0a0 (heads/master:cb76029b47, Aug 30 2017, 23:43:41) [GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> int("1_234_567")
1234567
Maybe hex floats should be acceptable only with float(str, base=0)?
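That idea could be prototyped in pure Python today. In the sketch below, float_base0 is a hypothetical helper name (float() itself has no base parameter), mirroring how int(s, base=0) dispatches on the literal prefix:

```python
def float_base0(s):
    # Hypothetical helper: dispatch on the '0x' prefix the way
    # int(s, base=0) does, falling back to decimal parsing.
    t = s.strip().lower()
    if t.startswith(('0x', '+0x', '-0x')):
        return float.fromhex(t)
    return float(t)

print(float_base0('0x1.8p1'))  # 3.0
print(float_base0('12.5'))     # 12.5
```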
ChrisA
On Tue, Sep 12, 2017 at 2:30 PM, Chris Angelico rosuav@gmail.com wrote:
Your specific example doesn't work as int() won't accept that by default - you have to explicitly say "base=0" to make that acceptable. But we have other examples where what used to be an error is now acceptable:
I'm surprised that it's not "base=None".
––Koos
2017-09-12 3:48 GMT+02:00 Steven D'Aprano steve@pearwood.info:
k = float("0x1.2492492492492p-3") # 1/7
Why wouldn't you just write 1/7?
1/7 is irrational, so it's not easy to get the "exact value" for a 64-bit IEEE 754 double float.
I chose it because it's easy to write. Maybe math.pi is a better example :-)
>>> math.pi.hex()
'0x1.921fb54442d18p+1'
Victor
On Tue, Sep 12, 2017 at 09:23:04AM +0200, Victor Stinner wrote:
2017-09-12 3:48 GMT+02:00 Steven D'Aprano steve@pearwood.info:
k = float("0x1.2492492492492p-3") # 1/7
Why wouldn't you just write 1/7?
1/7 is irrational, so it's not easy to get the "exact value" for a 64-bit IEEE 754 double float.
1/7 is not irrational. It is the ratio of 1 over 7, by definition it is a rational number. Are you thinking of square root of 7?
1/7 gives the exact 64-bit IEEE 754 float closest to the true rational number 1/7. And with the peephole optimizer in recent versions of Python, you don't even pay a runtime cost.
py> (1/7).hex()
'0x1.2492492492492p-3'
I do like the idea of having float hex literals, and supporting them in float itself (although we do already have float.fromhex) but I must admit I'm struggling for a use-case.
But perhaps "C allows it now, we should too" is a good enough reason.
I chose it because it's easy to write. Maybe math.pi is a better example :-)
>>> math.pi.hex()
'0x1.921fb54442d18p+1'
3.141592653589793 is four fewer characters to type, just as accurate, and far more recognisable.
-- Steve
Hi everybody
I chose it because it's easy to write. Maybe math.pi is a better example :-)
>>> math.pi.hex()
'0x1.921fb54442d18p+1'
3.141592653589793 is four fewer characters to type, just as accurate, and far more recognizable.
Of course, for a lot of numbers, the decimal representation is simpler, and just as accurate as the radix-2 hexadecimal representation. But, because the two representations use radix 10 and radix 2 respectively, the radix-2 one may be much easier to use exactly. In the "Handbook of Floating-Point Arithmetic" (J.-M. Muller et al., Birkhäuser, page 40), the authors claim that the largest exact decimal representation of a double-precision floating-point number requires 767 digits! So it is not always just a few characters to type to be just as accurate! For example (this is the largest exact decimal representation of a single-precision 32-bit float):
1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38 and 0x1.fffffc0000000p-127 are exactly the same number (one in decimal representation, the other in radix-2 hexadecimal)!
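For the curious, Python's decimal module can reproduce such exact expansions today, because Decimal(float) converts the stored binary value exactly, with no rounding. A quick sketch:

```python
from decimal import Decimal

# Decimal(float) is exact: it recovers the full decimal expansion
# of the stored binary value, not a rounded approximation.
x = float.fromhex('0x1.fffffc0000000p-127')
exact = Decimal(x)
print(str(exact)[:40])  # leading digits of the exact expansion
print(len(str(exact)))  # on the order of 112 significant digits plus exponent
```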
So, we could support one or several of the following possibilities:
a) support hexadecimal floating-point literals, as released in C++17 (I don't know if other languages already support this)
>>> x = 0x1.2492492492492p-3
b) extend the float constructor to be able to build a float from a hexadecimal string
>>> x = float('0x1.2492492492492p-3')
(I don't know if we should add a "base=None" argument or not)
c) extend string formatting with '%a' (as in C since C99) and '{:a}'
>>> s = '%a' % (x,)
Serhiy proposes to use '%x' and '{:x}', but I prefer to be consistent with C.
From my point of view (my needs are maybe not very representative, as a computer scientist working in computer arithmetic), full support for the radix-2 representation is required: it is sometimes easier/quicker to exchange data between different programs in plain text, and radix-2 hexadecimal is the best way to do it, because it is exact. Option a) would also help me generate Python code (from other Python or C code) without building the float at runtime with fromhex(): my numbers would be literals, not strings converted to floats. Option c) would help me print my data the same way as in C, with the same formatting character. And option b) would simply be there for consistency with the new hexadecimal literals.
Finally, I am now considering writing a PEP from Serhiy Storchaka's idea, but if someone else wants to start it, I can help/contribute.
Thanks
Thibault
On Wed, Sep 13, 2017 at 04:36:49PM +0200, Thibault Hilaire wrote:
Of course, for a lot of numbers, the decimal representation is simpler, and just as accurate as the radix-2 hexadecimal representation. But, because the two representations use radix 10 and radix 2 respectively, the radix-2 one may be much easier to use exactly.
Hex is radix 16, not radix 2 (binary).
In the "Handbook of Floating-Point Arithmetic" (J.-M. Muller et al., Birkhäuser, page 40), the authors claim that the largest exact decimal representation of a double-precision floating-point number requires 767 digits! So it is not always just a few characters to type to be just as accurate! For example (this is the largest exact decimal representation of a single-precision 32-bit float):
1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38 and 0x1.fffffc0000000p-127 are exactly the same number (one in decimal representation, the other in radix-2 hexadecimal)!
That may be so, but that doesn't mean you have to type all 100+ digits in order to reproduce the float exactly. Just 1.1754942106924411e-38 is sufficient:
py> 1.1754942106924411e-38 == float.fromhex('0x1.fffffc0000000p-127')
True
You may be conflating two different questions:
(1) How many decimal digits are needed to exactly convert the float to decimal? That can be over 100 for a C single, and over 700 for a double.
(2) How many decimal digits are needed to uniquely represent the float? Nine digits (plus an exponent) is enough to represent all possible C singles; 17 digits is enough to represent all doubles (Python floats).
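That second point is exactly what repr() implements: since Python 3.1 it emits the shortest decimal string that round-trips to the same float, never more than 17 significant digits for a 64-bit double. A sketch:

```python
# repr() picks the shortest decimal string that converts back to the
# same float (at most 17 significant digits for a 64-bit double).
x = float.fromhex('0x1.fffffc0000000p-127')
s = repr(x)
print(s)
assert float(s) == x  # the short decimal form round-trips exactly
```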
I'm not actually opposed to hex float literals. I think they're cool. But we ought to have a reason more than just "they're cool" for supporting them, and I'm having trouble thinking of any apart from "C supports them, so should we". But maybe that's enough.
-- Steve
All this talk about accurate representation left aside, please consider what a newbie would think when s/he sees:
x = 0x1.fffffc0000000p-127
There's really no need to make Python scripts cryptic. It's enough to have a helper function that knows how to read such representations and we already have that.
-- Marc-Andre Lemburg eGenix.com
Professional Python Services directly from the Experts (#1, Sep 14 2017)
Python Projects, Coaching and Consulting ... http://www.egenix.com/ Python Database Interfaces ... http://products.egenix.com/ Plone/Zope Database Interfaces ... http://zope.egenix.com/
::: We implement business ideas - efficiently in both time and costs :::
eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/
Hi everyone
Of course, for a lot of numbers, the decimal representation is simpler, and just as accurate as the radix-2 hexadecimal representation. But, because the two representations use radix 10 and radix 2 respectively, the radix-2 one may be much easier to use exactly.
Hex is radix 16, not radix 2 (binary).
Of course, hex is radix-16! I was talking about radix-2 because all the exactness problems come from converting between binary and decimal, and hex can be seen as an (exact) compact way to express binary. That's what we want: build literal floats in the same exact way they are stored internally, and export them exactly in a compact way.
In the "Handbook of Floating-Point Arithmetic" (J.-M. Muller et al., Birkhäuser, page 40), the authors claim that the largest exact decimal representation of a double-precision floating-point number requires 767 digits! So it is not always just a few characters to type to be just as accurate! For example (this is the largest exact decimal representation of a single-precision 32-bit float):
1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38 and 0x1.fffffc0000000p-127 are exactly the same number (one in decimal representation, the other in radix-2 hexadecimal)!
That may be so, but that doesn't mean you have to type all 100+ digits in order to reproduce the float exactly. Just 1.1754942106924411e-38 is sufficient:
py> 1.1754942106924411e-38 == float.fromhex('0x1.fffffc0000000p-127')
True
You may be conflating two different questions:
(1) How many decimal digits are needed to exactly convert the float to decimal? That can be over 100 for a C single, and over 700 for a double.
(2) How many decimal digits are needed to uniquely represent the float? Nine digits (plus an exponent) is enough to represent all possible C singles; 17 digits is enough to represent all doubles (Python floats).
You're absolutely right: 1.1754942106924411e-38 is enough to reproduce the float exactly, BUT as a decimal number it is still different from 0x1.fffffc0000000p-127 (i.e. from its 112-digit decimal representation), because 1.1754942106924411e-38 is rounded at compile time to 0x1.fffffc0000000p-127 (exactly 1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38 in decimal).
So 17 digits are enough to reach each double after compile-time quantization. But "explicit is better than implicit", as someone said ;-), so in some situations I prefer to explicitly express the floating-point number I want (like 0x1.fffffc0000000p-127), rather than hoping that the quantization of my decimal number (1.1754942106924411e-38) will produce the right floating-point value (0x1.fffffc0000000p-127).
And that's one of the reasons why the hexadecimal floating-point representation exists.
I'm not actually opposed to hex float literals. I think they're cool. But we ought to have a reason more than just "they're cool" for supporting them, and I'm having trouble thinking of any apart from "C supports them, so should we". But maybe that's enough.
To sum up:
I hope this can be seen as a sufficient reason to support hexadecimal floating literals.
Thibault
And that's one of the reasons why the hexadecimal floating-point representation exist:
I suspect no one here thinks the float hex representation is unimportant...
To sum up:
Right. But the existing functionality addresses all of the points you make: it is already there. Making a new literal would buy only a slight improvement in writability and performance.
Is that worth much in a dynamic language like python?
-CHB
On 21 September 2017 at 10:44, Chris Barker - NOAA Federal
chris.barker@noaa.gov wrote: [Thibault]
To sum up:
Right. But it addresses all of the points you make. The functionality is there. Making a new literal will buy a slight improvement in writability and performance.
Is that worth much in a dynamic language like python?
I think so, as consider this question: how do you write a script that accepts a user-supplied string (e.g. from a CSV file) and treats it as hex floating point if it has the 0x prefix, and decimal floating point otherwise?
You can't just blindly apply float.fromhex(), as that will also treat unprefixed strings as hexadecimal:
>>> float.fromhex("0x10")
16.0
>>> float.fromhex("10")
16.0
So you need to do the try/except dance with ValueError instead:
try:
    float_data = float(text)
except ValueError:
    float_data = float.fromhex(text)
At which point you may wonder why you can't just write "float_data = float(text, base=0)" the way you can for integers:
>>> int("10", base=0)
10
>>> int("0x10", base=0)
16
And if the float() builtin were to gain a "base" parameter, then it's only a short step from there to allow at least the "0x" prefix on literals, and potentially even "0b" and "0o" as well.
So I'm personally +0 on the idea - it would improve interface consistency between integers and floating point values, and make it easier to write correctness tests for IEEE754 floating point hardware and algorithms in Python (where your input & output test vectors are going to use binary or hex representations, not decimal).
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Thu, Sep 21, 2017 at 11:13:44AM +1000, Nick Coghlan wrote:
I think so, as consider this question: how do you write a script that accepts a user-supplied string (e.g. from a CSV file) and treats it as hex floating point if it has the 0x prefix, and decimal floating point otherwise?
float.fromhex(s) if s.startswith('0x') else float(s)
[...]
And if the float() builtin were to gain a "base" parameter, then it's only a short step from there to allow at least the "0x" prefix on literals, and potentially even "0b" and "0o" as well.
So I'm personally +0 on the idea
I agree with your arguments. I just wish I could think of a good reason to make it +1 instead of a luke-warm +0.
-- Steve
On 21 September 2017 at 02:53, Steven D'Aprano steve@pearwood.info wrote:
On Thu, Sep 21, 2017 at 11:13:44AM +1000, Nick Coghlan wrote:
I think so, as consider this question: how do you write a script that accepts a user-supplied string (e.g. from a CSV file) and treats it as hex floating point if it has the 0x prefix, and decimal floating point otherwise?
float.fromhex(s) if s.startswith('0x') else float(s)
[...]
And if the float() builtin were to gain a "base" parameter, then it's only a short step from there to allow at least the "0x" prefix on literals, and potentially even "0b" and "0o" as well.
So I'm personally +0 on the idea
I agree with your arguments. I just wish I could think of a good reason to make it +1 instead of a luke-warm +0.
I'm also +0.
I think +0 is pretty much the correct response - it's OK with me, but someone who actually needs or wants the feature will need to implement it.
It's also worth remembering that there will be implementations other than CPython that will need changes, too - Jython, PyPy, possibly Cython, and many editors and IDEs. So setting the bar at "someone who wants this will have to step up and provide a patch" seems reasonable to me.
Paul
On Thu, Sep 21, 2017 at 1:57 AM, Paul Moore p.f.moore@gmail.com wrote:
...
It's also worth remembering that there will be implementations other than CPython that will need changes, too - Jython, PyPy, possibly Cython, and many editors and IDEs. So setting the bar at "someone who wants this will have to step up and provide a patch" seems reasonable to me.
It would be more or less trivial for Jython to add such support, given that Java has such support natively, and we already leverage this support in our current implementation of 2.7. See https://docs.oracle.com/javase/7/docs/api/java/lang/Double.html#valueOf(java...) if curious. We just need to add the correct floating point constant that is parsed to Java's constant pool, as used by Java bytecode, and it's done.
I'm much more concerned about finishing the rest of 3.x.
2017-09-21 3:53 GMT+02:00 Steven D'Aprano steve@pearwood.info:
float.fromhex(s) if s.startswith('0x') else float(s)
My vote is now -1 on extending the Python syntax to add hexadecimal floating literals.
While I was first in favor of extending the Python syntax, I changed my mind. Float constants written in hexadecimal is a (very?) rare use case, and there is already float.fromhex() available.
A new syntax is something more to learn when you learn Python. Is it worth it? I don't think so. Very few people need to write hexadecimal constants in their code.
For hardcore developers loving bytes, struct.unpack() is also available for your pleasure.
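For reference, the struct route already gives full bit-level control over a float. A sketch:

```python
import struct

# Round-trip a double through its raw IEEE 754 bit pattern:
# pack as a little-endian double, unpack the same bytes as a 64-bit int.
bits, = struct.unpack('<Q', struct.pack('<d', 1/7))
print(hex(bits))  # 0x3fc2492492492492
restored, = struct.unpack('<d', struct.pack('<Q', bits))
assert restored == 1/7
```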
Moreover, there is also, slowly, a trend toward computing floating-point numbers in decimal, since it's easier for humans to understand and debug ;-). We already have a fast "decimal" module in Python 3.
Victor
On Thu, Sep 21, 2017 at 8:23 AM, Victor Stinner victor.stinner@gmail.com wrote:
While I was first in favor of extending the Python syntax, I changed my mind. Float constants written in hexadecimal is a (very?) rare use case, and there is already float.fromhex() available.
A new syntax is something more to learn when you learn Python. Is it worth it? I don't think so. Very few people need to write hexadecimal constants in their code.
It is inconsistent that you can write hexadecimal integers but not floating point numbers. Consistency in syntax is fewer things to learn, not more. That said, I agree it's a rare use case, so it probably doesn't matter much either way.
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
What is the bit length of floats on your specific Python build? What happens if you specify more or less precision than is actually available? Where is the underflow to subnormal numbers? What is the bit representation of inf and -inf? NaN? -0 vs +0?
There are people who know this and need to know this. But float.fromhex() is already available to them. A literal is an attractive nuisance for people who almost-but-not-quite understand IEEE 754, i.e. those people who are named neither Tim Peters nor Mike Cowlishaw.
Tablet autocorrect: bit representation of inf and -inf.
On Thu, Sep 21, 2017 at 01:09:11PM -0700, David Mertz wrote:
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
What is the bit length of floats on your specific Python compile?
Are there actually any Python implementations or builds which have floats not equal to 64 bits? If not, perhaps it is time to make 64 bit floats a language guarantee.
What happens if you specify more or less precision than is actually available?
I expect the answer will be "exactly the same as what already happens right now". Why wouldn't it be?
py> float.fromhex('0x1.81cd5c28f5c290000000089p+13')
12345.67
py> float('12345.6700000000000089')
12345.67
Where is the underflow to subnormal numbers?
Same place it is right now.
What is the bit representation of inf and -inf? NaN? -0 vs +0?
Same as it is now.
There are people who know this and need to know this. But float.fromhex() is already available to them. A literal is an attractive nuisance for people who almost-but-not-quite understand IEEE 754, i.e. those people who are named neither Tim Peters nor Mike Cowlishaw.
Using a different sized float is going to affect any representation of floats, whether it is in decimal or in hex. If your objections are valid for hex literals, then they're equally valid (if not more so!) for decimal literals, and we're left with the conclusion that nobody except Tim Peters and Mike Cowlishaw can enter floats into source code, or convert them from strings.
And I think that's silly. Obviously many people can and do successfully use floats all the time, without worrying whether or not the code is absolutely, 100% consistent across all platforms, including that weird build on Acme YouNicks with 57 bit floats.
People who care about weird builds can use sys.float_info to find out what they need to know, and adjust accordingly. Those who don't will continue to do what they're already doing: assume floats are 64-bit C doubles, and live in a state of blissful ignorance about alternatives until somebody reports a bug, which they'll close as "won't fix".
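A quick sketch of that introspection:

```python
import sys

# sys.float_info reports the actual float implementation at runtime
# instead of requiring you to assume one.
print(sys.float_info.mant_dig)  # 53 significand bits on IEEE 754 double builds
print(sys.float_info.dig)       # 15 decimal digits always faithfully representable
print(sys.float_info.max_exp)
```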
-- Steve
On Thu, Sep 21, 2017 at 7:32 PM, Steven D'Aprano steve@pearwood.info wrote:
On Thu, Sep 21, 2017 at 01:09:11PM -0700, David Mertz wrote:
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
What is the bit length of floats on your specific Python compile?
Are there actually any Python implementations or builds which have floats not equal to 64 bits? If not, perhaps it is time to make 64 bit floats a language guarantee.
Jython passes the hexadecimal float tests in Lib/test/test_float.py, since Java uses 64-bit IEEE 754 double representation for the storage type of its double primitive type. (One can further constrain with strictfp, for intermediate representation, not certain how widely used that would be. I have never seen it.) In turn, Jython uses such doubles for its PyFloat implementation.
I wonder if CPython is the only implementation that could potentially supports other representations, such as found on System/360 (or the successor z/OS architecture). And I vaguely recall VAX VMS had an alternative floating point, but is that still around???
I think you are missing the point I was assuming at. Having a binary/hex float literal would tempt users to think "I know EXACTLY what number I'm spelling this way"... where most users definitely don't in edge cases.
Spelling it float.fromhex(s) makes it more obvious "this is an expert operation I may not understand the intricacies of."
On Sep 21, 2017 6:32 PM, "Steven D'Aprano" steve@pearwood.info wrote:
On Thu, Sep 21, 2017 at 01:09:11PM -0700, David Mertz wrote:
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
What is the bit length of floats on your specific Python compile?
Are there actually any Python implementations or builds which have floats not equal to 64 bits? If not, perhaps it is time to make 64 bit floats a language guarantee.
What happens if you specify more or less precision than actually available.
I expect the answer will be "exactly the same as what already happens right now". Why wouldn't it be?
py> float.fromhex('0x1.81cd5c28f5c290000000089p+13')
12345.67
py> float('12345.6700000000000089')
12345.67
Where is the underflow to subnormal numbers?
Same place it is right now.
What is the bit representation of information? Nan? -0 vs +0?
Same as it is now.
There are people who know this and need to know this. But float.fromhex() is already available to them. A literal is an attractive nuisance for people who almost-but-not-quite understand IEEE-854. I.e. those people named neither Tim Peters nor Mike Cowlishaw.
Using a different sized float is going to affect any representation of floats, whether it is in decimal or in hex. If your objections are valid for hex literals, then they're equally valid (if not more so!) for decimal literals, and we're left with the conclusion that nobody except Tim Peters and Mike Cowlishaw can enter floats into source code, or convert them from strings.
And I think that's silly. Obviously many people can and do successfully use floats all the time, without worrying whether or not the code is absolutely, 100% consistent across all platforms, including that weird build on Acme YouNicks with 57 bit floats.
People who care about weird builds can use sys.float_info to find out what they need to know, and adjust accordingly. Those who don't will continue to do what they're already doing: assume floats are 64-bit C doubles, and live in a state of blissful ignorance about alternatives until somebody reports a bug, which they'll close as "won't fix".
-- Steve
Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
On Thu, Sep 21, 2017 at 7:57 PM, David Mertz mertz@gnosis.cx wrote:
I think you are missing the point I was aiming at. Having a binary/hex float literal would tempt users to think "I know EXACTLY what number I'm spelling this way"... where most users definitely don't in edge cases.
That problem has never stopped us from using decimals. :-)
Spelling it float.fromhex(s) makes it more obvious "this is an expert operation I may not understand the intricacies of."
I don't see why that would be more obvious than if it were built into the language -- just because something is a function doesn't mean it's an expert operation.
-- --Guido van Rossum (python.org/~guido)
On 22/09/17 03:57, David Mertz wrote:
I think you are missing the point I was aiming at. Having a binary/hex float literal would tempt users to think "I know EXACTLY what number I'm spelling this way"... where most users definitely don't in edge cases.
Quite. What makes me -0 on this idea is that a lot of the initial enthusiasm on this list came from people saying exactly that.
-- Rhodri James - Kynesim Ltd
On 22/09/2017 02:32, Steven D'Aprano wrote:
Are there actually any Python implementations or builds which have floats not equal to 64 bits? If not, perhaps it is time to make 64 bit floats a language guarantee.
This will be unfortunate when Intel bring out a processor with 256-bit floats (or by "64 bit" do you mean "at least 64 bit"?). Hm, is there an analog of Moore's law that says the number of floating-point bits doubles every X years? :-)
Unrelated thought: Users might be unsure if the exponent in a hexadecimal float is in decimal or in hex.
Rob Cliffe
Unrelated thought: Users might be unsure if the exponent in a hexadecimal float is in decimal or in hex.
I was playing around with float.fromhex() for this thread, and the first number I tried to spell used a hex exponent because that seemed like "the obvious thing"... I figured it out quickly enough, but the actual spelling feels less obvious to me.
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.
[David Mertz mertz@gnosis.cx]
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
But not really more than writing a decimal float literal in "scientific notation". People who use floats are used to the latter. Besides using "p" instead of "e" to mark the exponent, the only differences are that the mantissa is expressed in hex instead of in decimal, and the implicit base to which the exponent is applied is 2 instead of 10.
Either way it's a notation for a rational number, which may or may not be exactly representable in the native HW float format.
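To make the parallel concrete, here is a quick check of the thread's 0xC.68p+2 example using the existing float.fromhex() (no new literal syntax assumed):

```python
# Mantissa 0xC.68 = 12 + 6/16 + 8/256 = 12.40625 (exactly representable),
# and p+2 scales by 2**2, just as e+2 would scale by 10**2 in decimal.
x = float.fromhex("0xC.68p+2")
print(x)  # 49.625
assert x == 12.40625 * 4
```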
What is the bit length of floats on your specific Python compile? What happens if you specify more or less precision than actually available. Where is the underflow to subnormal numbers?
All the same answers apply as when using decimal "scientific notation". When the denoted rational isn't exactly representable, then a maze of rounding, overflow, and/or underflow rules apply. The base the literal is expressed in doesn't really have anything to do with those.
What is the bit representation of [infinity]? Nan?
Hex float literals in general have no way to spell those: it's just a way to spell a subset of (mathematical) rational numbers. Python, however, does support special cases for those:
>>> float.fromhex("inf")
inf
>>> float.fromhex("nan")
nan
-0 vs +0?
The obvious first attempts work fine for those ;-)
There are people who know this and need to know this. But float.fromhex() is already available to them. A literal is an attractive nuisance for people who almost-but-not-quite understand IEEE-854.
As notations for rationals, nobody needs to understand 854 at all to use these things, so long as they stick to exactly representable numbers. Whether a specific literal _is_ exactly representable, and what happens if it's not, does require understanding a whole lot - but that's also true of decimal float literals.
I.e. those people named neither Tim Peters nor Mike Cowlishaw.
Or Mark Dickinson ;-)
All that said, I'm -0 on the idea. I doubt I (or Mark, or Mike) would use it, because the need is rare and float.fromhex() is already sufficient. Indeed, fromhex() is less annoying, because it does support special cases for infinities and NaNs, and doesn't _require_ a "0x" prefix.
When I teach, I usually present this to students:
>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
This is a really easy way to say "floating point numbers are approximations where you often encounter rounding errors." The fact that the "edge cases" are actually pretty central and commonplace in decimal approximations makes this a very easy lesson to teach.
After that, I might discuss using deltas, or that better yet is math.isclose() or numpy.isclose(). Sometimes I'll get into absolute tolerance versus relative tolerance passingly at this point.
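A compact version of that lesson, sketched with math.isclose():

```python
import math

a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
assert a != b              # associativity fails under binary rounding
assert math.isclose(a, b)  # but the two results are within relative tolerance
```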
Simply because the edge cases for working with e.g. '0xC.68p+2' in a hypothetical future Python are less obvious and less simple to demonstrate, I feel like learners will be tempted to think that using this base-2/16 representation spares them all their approximation issues and the need to use isclose() or friends.
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.
On Thu, Sep 21, 2017 at 8:30 PM, David Mertz mertz@gnosis.cx wrote:
Simply because the edge cases for working with e.g. '0xC.68p+2' in a hypothetical future Python are less obvious and less simple to demonstrate, I feel like learners will be tempted to think that using this base-2/16 representation saves them all their approximation issues and their need still to use isclose() or friends.
Show them 1/49*49, and explain why for i < 49, (1/i)*i equals 1 (lucky rounding).
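That is, sketched as a runnable check (the claim being that 49 is the first denominator for which the round-trip misses 1):

```python
# (1/i)*i rounds back to exactly 1.0 for every i below 49 -- "lucky rounding" --
# but not for i == 49.
assert all((1 / i) * i == 1 for i in range(1, 49))
assert (1 / 49) * 49 != 1
print((1 / 49) * 49)  # 0.9999999999999999
```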
-- --Guido van Rossum (python.org/~guido)
On 22 September 2017 at 13:38, Guido van Rossum guido@python.org wrote:
On Thu, Sep 21, 2017 at 8:30 PM, David Mertz mertz@gnosis.cx wrote: >
Simply because the edge cases for working with e.g. '0xC.68p+2' in a hypothetical future Python are less obvious and less simple to demonstrate, I feel like learners will be tempted to think that using this base-2/16 representation saves them all their approximation issues and their need still to use isclose() or friends.
Show them 1/49*49, and explain why for i < 49, (1/i)*i equals 1 (lucky rounding).
If anything, I'd expect the hex notation to make the binary vs decimal representational differences easier to teach, since instructors would be able to directly show things like:
>>> 0.5 == 0x0.8 == 0o0.4 == 0b0.1 # Negative power of two!
True
>>> (0.1 + 0.2) == 0.3 # Not negative powers of two
False
>>> 0.3 == 0x1.3333333333333p-2
True
>>> (0.1 + 0.2) == 0x1.3333333333334p-2
True
While it's possible to provide a demonstration along those lines today, it means writing the last two lines as:
>>> 0.3.hex() == "0x1.3333333333333p-2"
True
>>> (0.1 + 0.2).hex() == "0x1.3333333333334p-2"
True
(Which invites the question "Why does 'hex(3)' work, but I have to write '0.3.hex()' instead"?)
To illustrate that hex floating point literals don't magically solve all your binary floating point rounding issues, an instructor could also demonstrate:
>>> one_tenth = 0x1.0 / 0xA.0
>>> two_tenths = 0x2.0 / 0xA.0
>>> three_tenths = 0x3.0 / 0xA.0
>>> three_tenths == one_tenth + two_tenths
False
Again, a demonstration along those lines is already possible, but it involves using integers in the rational expressions, rather than floats.
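For example, a version of Nick's demonstration that runs today, substituting float.fromhex() calls for the hypothetical hex float literals:

```python
# Same demonstration, but with float.fromhex() standing in for hex literals
one_tenth = float.fromhex("0x1.0") / float.fromhex("0xA.0")
two_tenths = float.fromhex("0x2.0") / float.fromhex("0xA.0")
three_tenths = float.fromhex("0x3.0") / float.fromhex("0xA.0")
# Hex spelling of the inputs doesn't rescue the inexact decimal results:
assert three_tenths != one_tenth + two_tenths
```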
Given syntactic support, it would also be reasonable for the hex()/oct()/bin() builtins to be expanded to handle printing floating point numbers in those formats, and for floats to gain support for the corresponding print formatting codes.
So overall, I'm still +0, on the grounds of improving int/float API consistency.
While I'm sympathetic to the concerns about potentially changing the way the binary/decimal representation distinction is taught for floating point values, I don't think having better support for more native representations of binary floats is likely to make that harder than it already is.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Thu, Sep 21, 2017 at 9:20 PM, Nick Coghlan ncoghlan@gmail.com wrote:
>>> one_tenth = 0x1.0 / 0xA.0
>>> two_tenths = 0x2.0 / 0xA.0
>>> three_tenths = 0x3.0 / 0xA.0
>>> three_tenths == one_tenth + two_tenths
False
OMG Regardless of whether we introduce this feature, .hex() is the way to show what's going on here:
>>> 0.1.hex()
'0x1.999999999999ap-4'
>>> 0.2.hex()
'0x1.999999999999ap-3'
>>> 0.3.hex()
'0x1.3333333333333p-2'
>>> (0.1 + 0.2).hex()
'0x1.3333333333334p-2'
This shows so clearly that there's 1 bit difference!
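That one-bit gap can even be quantified with math.ulp() (available since Python 3.9); a small sketch:

```python
import math

a = float.fromhex("0x1.3333333333333p-2")  # 0.3
b = float.fromhex("0x1.3333333333334p-2")  # 0.1 + 0.2
assert a == 0.3 and b == 0.1 + 0.2
# The difference is exactly one unit in the last place at 0.3's binade:
assert b - a == math.ulp(0.3) == 2.0 ** -54
```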
-- --Guido van Rossum (python.org/~guido)
On Fri, Sep 22, 2017 at 8:37 AM, Guido van Rossum guido@python.org wrote:
On Thu, Sep 21, 2017 at 9:20 PM, Nick Coghlan ncoghlan@gmail.com wrote:
>>> one_tenth = 0x1.0 / 0xA.0
>>> two_tenths = 0x2.0 / 0xA.0
>>> three_tenths = 0x3.0 / 0xA.0
>>> three_tenths == one_tenth + two_tenths
False
OMG Regardless of whether we introduce this feature, .hex() is the way to show what's going on here:
>>> 0.1.hex()
'0x1.999999999999ap-4'
>>> 0.2.hex()
'0x1.999999999999ap-3'
>>> 0.3.hex()
'0x1.3333333333333p-2'
>>> (0.1 + 0.2).hex()
'0x1.3333333333334p-2'
This shows so clearly that there's 1 bit difference!
Thanks! I really should add this example to the math.isclose() docs....
.hex is mentioned in:
https://docs.python.org/3/tutorial/floatingpoint.html
but I don't see it used in a nice clear example like this.
-CHB
-- --Guido van Rossum (python.org/~guido)
--
Christopher Barker, Ph.D. Oceanographer
Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker@noaa.gov
On Thu, 21 Sep 2017 22:14:27 -0500 Tim Peters tim.peters@gmail.com wrote:
[David Mertz mertz@gnosis.cx]
-1
Writing a floating point literal requires A LOT more knowledge than writing a hex integer.
But not really more than writing a decimal float literal in "scientific notation". People who use floats are used to the latter. Besides using "p" instead of "e" to mark the exponent, the only differences are that the mantissa is expressed in hex instead of in decimal, and the implicit base to which the exponent is applied is 2 instead of 10.
The main difference is familiarity. "scientific" notation should be well-known and understood even by high school kids. Who knows about hexadecimal notation for floats, apart from floating-point experts?
So for someone reading code, the scientific notation poses no problem as they understand it intuitively (even if they may not grasp the difficulties of the underlying conversion to binary FP), while for hexadecimal float notation they have to go out of their way to learn about it, parse the number slowly and try to make out what its value is.
Regards
Antoine.
[Antoine Pitrou solipsis@pitrou.net]
... The main difference is familiarity. "scientific" notation should be well-known and understood even by high school kids. Who knows about hexadecimal notation for floats, apart from floating-point experts?
Here's an example: you <0x0.2p0 wink>. For people who understand both hex and (decimal) scientific notation, learning what hex float notation means is easy.
So for someone reading code, the scientific notation poses no problem as they understand it intuitively (even if they may not grasp the difficulties of the underlying conversion to binary FP), while for hexadecimal float notation need they have to go out of their way to learn about it, parse the number slowly and try to make out what its value is.
I've seen plenty of people on StackOverflow who (a) don't understand hex notation for integers; and/or (b) don't understand scientific notation for floats. Nothing is self-evident about either; they both have to be learned at first. Same for hex float notation. Of course it's true that many (not all) people do know about hex integers and/or decimal scientific notation from prior (to Python) experience.
My objection is that we already have a way to use hex float notation, and the _need_ for it is rare. If someone uninitiated sees a rare:
x = 0x1.aaap-4
they're going to ask on StackOverflow what the heck it's supposed to mean. But if they see a rare:
x = float.fromhex("0x1.aaap-4")
they can Google for "python fromhex" and find the docs themselves at once. The odd method name makes it highly "discoverable", and I think that's a feature for rare gimmicks with a small, specialized audience.
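And the fromhex()/hex() pair round-trips exactly, which covers the main practical need for the specialized audience; a quick check:

```python
# hex round trips are exact by construction, including extreme values
for x in (0.1, 1e300, 2 ** -1074, -0.0):
    s = x.hex()
    assert float.fromhex(s) == x
```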
Le 22/09/2017 à 19:15, Tim Peters a écrit :
I've seen plenty of people on StackOverflow who (a) don't understand hex notation for integers; and/or (b) don't understand scientific notation for floats. Nothing is self-evident about either; they both have to be learned at first.
Sure. But, unless I'm mistaken, most people learn about the scientific notation as teenagers (that was certainly my case at least, and that was not from my parents AFAIR). Which of them learn about hexadecimal float notation at the same time?
But if they see a rare:
x = float.fromhex("0x1.aaap-4")
they can Google for "python fromhex" and find the docs themselves at once. The odd method name makes it highly "discoverable", and I think that's a feature for rare gimmicks with a small, specialized audience.
Basically agreed. Moreover, "float.fromhex" spells it out, while the literal syntax does not even make it obvious it's a floating-point number at all.
Regards
Antoine.
On 23 September 2017 at 03:15, Tim Peters tim.peters@gmail.com wrote:
But if they see a rare:
x = float.fromhex("0x1.aaap-4")
they can Google for "python fromhex" and find the docs themselves at once. The odd method name makes it highly "discoverable", and I think that's a feature for rare gimmicks with a small, specialized audience.
Given how often I've used "It's hard to search for magic syntax" as a design argument myself, I'm surprised I missed the fact it also applies in this case.
Now that you've brought it up though, I think it's a very good point, and it's enough to switch me from +0 to -1.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Lucas Wiman wrote:
It is inconsistent that you can write hexadecimal integers but not floating point numbers. Consistency in syntax is /fewer/ things to learn, not more.
You still need to learn the details of the hex syntax for floats, though. It's not obvious e.g. that you need to use "p" for the exponent.
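A short illustration of why "p" (and not "e") has to mark the exponent: "e" is itself a hex digit.

```python
assert float.fromhex("0x1p3") == 8.0    # p3 means * 2**3
assert float.fromhex("0x1e3") == 483.0  # 'e' is just a hex digit here (0x1e3 == 483)
```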
-- Greg
On 21.09.17 18:23, Victor Stinner wrote:
My vote is now -1 on extending the Python syntax to add hexadecimal floating literals.
While I was first in favor of extending the Python syntax, I changed my mind. Float constants written in hexadecimal is a (very?) rare use case, and there is already float.fromhex() available.
A new syntax is something more to learn when you learn Python. Is it worth it? I don't think so. Very few people need to write hexadecimal constants in their code.
Initially I was between -0 and +0. The cost of implementing this feature is not zero, but it looked harmless (while almost useless). But after reading the discussion (in particular the comments of proponents) I'm closer to -1.
This feature can be useful for very few people. And they already have float.fromhex(). Taking into account the nature of Python, the arguments for literals are weaker than in the case of statically compiled languages. For the rest of users it rather adds confusion and misunderstanding. And don't forget about the non-zero cost. You will be impressed by the number of places just in the CPython core and stdlib that should be updated for supporting a new type of literals.
Yeah, I agree, +0. It won't confuse anyone who doesn't care about it and those who need it will benefit.
On Wed, Sep 20, 2017 at 6:13 PM, Nick Coghlan ncoghlan@gmail.com wrote:
On 21 September 2017 at 10:44, Chris Barker - NOAA Federal
chris.barker@noaa.gov wrote: [Thibault]
To sum up:
Right. But it addresses all of the points you make. The functionality is there. Making a new literal will buy a slight improvement in writability and performance.
Is that worth much in a dynamic language like python?
I think so, as consider this question: how do you write a script that accepts a user-supplied string (e.g. from a CSV file) and treats it as hex floating point if it has the 0x prefix, and decimal floating point otherwise?
You can't just blindly apply float.fromhex(), as that will also treat unprefixed strings as hexadecimal:
>>> float.fromhex("0x10")
16.0
>>> float.fromhex("10")
16.0
So you need to do the try/except dance with ValueError instead:
try:
    float_data = float(text)
except ValueError:
    float_data = float.fromhex(text)
At which point you may wonder why you can't just write "float_data = float(text, base=0)" the way you can for integers:
>>> int("10", base=0)
10
>>> int("0x10", base=0)
16
And if the float() builtin were to gain a "base" parameter, then it's only a short step from there to allow at least the "0x" prefix on literals, and potentially even "0b" and "0o" as well.
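Until then, the prefix detection has to be done by hand. A sketch of a hypothetical helper (parse_float is not an existing API) mimicking int's base=0 behaviour for floats:

```python
def parse_float(text):
    """Parse hex only with an explicit 0x/0X prefix, decimal otherwise.

    Hypothetical helper, standing in for a float(text, base=0)."""
    stripped = text.strip().lstrip("+-")  # allow a leading sign before the prefix
    if stripped[:2].lower() == "0x":
        return float.fromhex(text)
    return float(text)

assert parse_float("0x10") == 16.0
assert parse_float("10") == 10.0
assert parse_float("-0x1.8p1") == -3.0
```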
So I'm personally +0 on the idea - it would improve interface consistency between integers and floating point values, and make it easier to write correctness tests for IEEE754 floating point hardware and algorithms in Python (where your input & output test vectors are going to use binary or hex representations, not decimal).
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
-- --Guido van Rossum (python.org/~guido)