I'd like to propose two minor changes to float and complex
formatting, for 3.1. I don't think either change should prove
controversial.
(1) Currently, '%f' formatting automatically changes to '%g' formatting for
numbers larger than 1e50. For example:
>>> '%f' % 2**166.
'93536104789177786765035829293842113257979682750464.000000'
>>> '%f' % 2**167.
'1.87072e+50'
I propose removing this feature for 3.1.
More details: The current behaviour is documented (standard
library->builtin types). (Until very recently, it was actually
misdocumented as changing at 1e25, not 1e50.)
"""For safety reasons, floating point precisions are clipped to 50; %f
conversions for numbers whose absolute value is over 1e50 are
replaced by %g conversions.  All other errors raise exceptions."""
There's even a footnote:
""" These numbers are fairly arbitrary. They are intended to
avoid printing endless strings of meaningless digits without
hampering correct use and without having to know the exact
precision of floating point values on a particular machine."""
I don't find this particularly convincing, though---I just don't see
a really good reason not to give the user exactly what she/he
asks for here. I have a suspicion that at least part of the
motivation for the '%f' -> '%g' switch is that it means the
implementation can use a fixed-size buffer. But Eric has
fixed this (in 3.1, at least) and the buffer is now dynamically
allocated, so this isn't a concern any more.
Other reasons not to switch from '%f' to '%g' in this way:
- the change isn't gentle: as you go over the 1e50 boundary,
the number of significant digits produced suddenly changes
from 56 to 6; it would make more sense to me if it
stayed fixed at 56 sig digits for numbers larger than 1e50.
- now that we're using David Gay's 'perfect rounding'
code, we can be sure that the digits aren't entirely
meaningless, or at least that they're the 'right' meaningless
digits. This wasn't true before.
- C doesn't do this, and the %f, %g, %e formats really
owe their heritage to C.
- float formatting is already quite complicated enough; no
need to add to the mental complexity
- removal simplifies the implementation :-)
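To make the digit-count difference concrete, here's what '%f' and '%g' do with the same value (a sketch using a number below the 1e50 threshold, so it behaves identically on all versions):

```python
x = 2.0 ** 100  # about 1.27e30, well below the 1e50 threshold

# '%f' keeps 6 digits after the decimal point, so the number of
# significant digits grows with the magnitude (37 here)...
print('%f' % x)   # 1267650600228229401496703205376.000000

# ...while '%g' keeps 6 significant digits in total
print('%g' % x)   # 1.26765e+30
```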
On to the second proposed change:
(2) complex str and repr don't behave like float str and repr, in that
the float version always adds a trailing '.0' (unless there's an
exponent), but the complex version doesn't:
>>> 4., 10.
(4.0, 10.0)
>>> 4. + 10.j
(4+10j)
I propose changing the complex str and repr to behave like the
float version. That is, repr(4. + 10.j) should be "(4.0 + 10.0j)"
rather than "(4+10j)".
Mostly this is just about consistency, ease of implementation,
and aesthetics. As far as I can tell, the extra '.0' in the float
repr serves two closely-related purposes: it makes it clear to
the human reader that the number is a float rather than an
integer, and it makes sure that e.g., eval(repr(x)) recovers a
float rather than an int. The latter point isn't a concern for
the current complex repr, but the former is: 4+10j looks to
me more like a Gaussian integer than a complex number.
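The type-preservation point is easy to check directly (current CPython behaviour):

```python
# float repr keeps the trailing '.0', so eval() round-trips the type
assert repr(4.0) == '4.0'
assert type(eval(repr(4.0))) is float
assert type(eval('4')) is int  # without the '.0', we'd get back an int

# complex repr omits the '.0', which is the inconsistency at issue
assert repr(4.0 + 10.0j) == '(4+10j)'
```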
Floating point printing is tricky, as I'm sure you know. You might
want to refresh your understanding by consulting the literature---I
know I would. For example, Steele and White's paper is a classic and
worth a read:

Guy L. Steele Jr. and Jon L. White, "How to Print Floating-Point
Numbers Accurately", ACM SIGPLAN Notices, v.39 n.4, April 2004.
The bugs.python.org site seems to be down. ping gives me
the following (from Ireland):
Macintosh-4:py3k dickinsm$ ping bugs.python.org
PING bugs.python.org (18.104.22.168): 56 data bytes
36 bytes from et.2.16.rs3k6.rz5.hetzner.de (22.214.171.124):
Destination Host Unreachable
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 5400 77e1 0 0000 3a 01 603d 192.168.1.2 126.96.36.199
Various others on #python-dev have confirmed that it's not working for them.
Does anyone know what the problem is?
You might want to note in the PEP that the problem that's being solved
is known as the "loop and a half" problem.
> Author: raymond.hettinger
> Date: Sun Apr 26 02:34:36 2009
> New Revision: 71946
> Revive PEP 315.
> Modified: peps/trunk/pep-0315.txt
> --- peps/trunk/pep-0315.txt (original)
> +++ peps/trunk/pep-0315.txt Sun Apr 26 02:34:36 2009
> @@ -2,9 +2,9 @@
> Title: Enhanced While Loop
> Version: $Revision$
> Last-Modified: $Date$
> -Author: W Isaac Carroll <icarroll(a)pobox.com>
> - Raymond Hettinger <python(a)rcn.com>
> -Status: Deferred
> +Author: Raymond Hettinger <python(a)rcn.com>
> + W Isaac Carroll <icarroll(a)pobox.com>
> +Status: Draft
> Type: Standards Track
> Content-Type: text/plain
> Created: 25-Apr-2003
Assuming that Mark's and my changes in the py3k-short-float-repr branch
get checked in shortly, I'd like to deprecate PyOS_ascii_formatd. Its
functionality is largely being replaced by PyOS_double_to_string, which
we're introducing on our branch.
PyOS_ascii_formatd was introduced to fix the issue in PEP 331.
PyOS_double_to_string addresses all of the same issues, namely a
non-locale aware double-to-string conversion. PyOS_ascii_formatd has an
unfortunate interface. It accepts a printf-like format string for a
single double parameter. It must parse the format string into the
parameters it uses. All uses of it inside Python already know the
parameters and must build up a format string using sprintf, only to turn
around and have PyOS_ascii_formatd reparse it.
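For context, the locale problem PEP 331 tackles is visible from pure Python (a sketch: locale.format_string is the locale-aware path, while Python's own float conversions always use '.'):

```python
import locale

# Python's own conversions never consult LC_NUMERIC:
print(repr(3.14))           # 3.14
print(format(3.14, '.2f'))  # 3.14

# C's printf-style formatting does; under e.g. a de_DE locale this
# would print '3,14' instead
print(locale.format_string('%.2f', 3.14))
```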
In the branch I've replaced all of the internal calls to
PyOS_ascii_formatd with PyOS_double_to_string.
My proposal is to deprecate PyOS_ascii_formatd in 3.1 and remove it in 3.2.
The 2.7 situation is trickier, because we're not planning on backporting
the short-float-repr work to 2.7. In 2.7 I guess we'll leave
PyOS_ascii_formatd around, unfortunately.
FWIW, I didn't find any external callers of it using Google code search.
And as a reminder, the py3k-short-float-repr changes are on Rietveld at
http://codereview.appspot.com/33084/show. So far, no comments.
Does anyone have any ideas about what to do with issue 5830 and handling the problem in a general way (not just for sched)?
The basic problem is that decorate/compare/undecorate patterns no longer work when the primary sort keys are equal and the secondary
keys are unorderable (which is now the case for many callables).
>>> tasks = [(10, lambda: 0), (20, lambda: 1), (10, lambda: 2)]
>>> sorted(tasks)
Traceback (most recent call last):
  ...
TypeError: unorderable types: function() < function()
Would it make sense to provide a default ordering whenever the types are the same?
def object.__lt__(self, other):    # illustrative, not valid syntax
    if type(self) == type(other):
        return id(self) < id(other)
    return NotImplemented
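One workaround available today is a tie-breaking sequence number between the primary key and the payload, which keeps the unorderable callables out of the comparison entirely. A sketch:

```python
import heapq
from itertools import count

tasks = [(10, lambda: 0), (20, lambda: 1), (10, lambda: 2)]

# Insert a monotonically increasing counter between the priority and
# the callable, so ties are broken before the functions are compared.
counter = count()
heap = [(priority, next(counter), fn) for priority, fn in tasks]
heapq.heapify(heap)

priority, _, fn = heapq.heappop(heap)
print(priority, fn())  # 10 0
```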
I have the following code:
# len(all_svs) == 10
# then I call a function with 2 list parameters
def proc_line(line, all_svs):
    # inside the function the length of the list "all_svs" is 1 more -> 11
    # I had to work around it
    for i in range(len(all_svs) - 1):  # somehow the length of all_svs is incremented!
Is this a compiler bug??
Or is it because this is my first try at Python?
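It is not a compiler bug: Python passes the list object itself, not a copy, so any mutation inside the function is visible to the caller. A minimal sketch of the likely cause (names hypothetical, since the original code is incomplete):

```python
def proc_line(line, all_svs):
    all_svs.append(line)  # mutates the caller's list in place

svs = list(range(10))     # len(svs) == 10
proc_line('extra', svs)
print(len(svs))           # 11 -- same list object, one element longer
```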
I've recently subscribed to this list and received my first "Summary of
Python tracker Issues". What I find annoying are the dates, for example:
ACTIVITY SUMMARY (04/17/09 - 04/24/09)
3 x double-digits (have we learned nothing from Y2K? :-)) with the
_middle_ ones changing fastest!
I know it's the US standard, but Python is global. Could we have an
'international' style instead, say, year-month-day:
ACTIVITY SUMMARY (2009-04-17 - 2009-04-24)
Thank you for your attention, etc.
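For the curious, producing the ISO 8601 form is a one-liner with the datetime module:

```python
from datetime import date

d = date(2009, 4, 17)
print(d.strftime('%m/%d/%y'))  # 04/17/09   (current US-style summary)
print(d.isoformat())           # 2009-04-17 (unambiguous ISO 8601)
```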
Is there a reason that the PyEval_CallFunction() and
PyEval_CallMethod() convenience functions remain undocumented? (i.e.,
would a doc-and-test patch to correct this be rejected?)
I didn't see any mention of this coming up in python-dev before.
Also, despite its name, PyEval_CallMethod() is quite useful for
calling module-level functions or classes (given that it's just a
PyObject_GetAttrString plus the implementation of
PyEval_CallFunction). Is there any reason (beyond its undocumented
status) to believe this use case would ever be deprecated?
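In Python terms, what PyEval_CallMethod does is roughly the following (a hypothetical pure-Python analogue, not the C implementation):

```python
import math

def call_method(obj, name, *args):
    # attribute lookup (PyObject_GetAttrString) followed by a call
    return getattr(obj, name)(*args)

# works for methods and, as noted, for module-level callables too
assert call_method([3, 1, 2], 'index', 2) == 2
assert call_method(math, 'sqrt', 4.0) == 2.0
```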
Tim Lesher <tlesher(a)gmail.com>