[CC back to the list because you posted the same argument there but without
the numerical example, and my working through that might help others
understand your point]
On Fri, Mar 7, 2014 at 9:18 PM, Andrew Barnert <abarnert(a)yahoo.com> wrote:
> The main point I'm getting at is that by rounding 0.100000000000000012 to
> 0.1 instead of 0.10000000000000000555..., you're no longer rounding it to
> the nearest binary float, but instead to the second nearest
> Decimal(repr(binary float)) (since 0.10000000000000002 is closer than
OK, let me walk through that carefully. Let's name the exact mathematical
values and assign them to strings:
>>> a = '0.100000000000000012'
>>> b = '0.1000000000000000055511151231257827021181583404541015625'
>>> c = '0.10000000000000002'
Today, Decimal(float(a)) == Decimal(b). Under my proposal,
Decimal(float(a)) == Decimal('0.1'). The difference between float('0.1')
and float(c) is 1 ulp (2**-56), and a is between those, but closer to c;
but it is even closer to b (in the other direction). IOW for the
mathematical values, 0.1 < b < a < c, where a is closer to b than to c. So
if the choices for rounding a were b or c, b would be preferred. So far so
good. (And still good if we replace c with the slightly smaller exact value
of float(c).)
And your point is that if we change the allowable choices to '0.1' or c, we
find that float(b) == float('0.1'), but a is closer to c than to 0.1. This
is less than 1 ulp, but more than 0.5 ulp.
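Working through the same values in today's Python (every assertion below holds under the current Decimal(float) behavior):

```python
from decimal import Decimal

a = '0.100000000000000012'
b = '0.1000000000000000055511151231257827021181583404541015625'
c = '0.10000000000000002'

# float(a) rounds a to the nearest binary double, which is float('0.1'):
assert float(a) == float('0.1') == float(b)

# Today, Decimal(float) exposes that double's full binary expansion:
assert Decimal(float(a)) == Decimal(b)

# The shortest repr of that double is '0.1'; under the proposal
# Decimal(float(a)) would become Decimal('0.1') instead.
assert repr(float(a)) == '0.1'

# Exact mathematical ordering: 0.1 < b < a < c, with a closer to b
# than to c...
assert Decimal(a) - Decimal(b) < Decimal(c) - Decimal(a)
# ...but a closer to c than to '0.1':
assert Decimal(c) - Decimal(a) < Decimal(a) - Decimal('0.1')
```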
I find the argument intriguing, but I blame it more on what happens in
float(a) than in what Decimal() does to the resulting value. If you
actually had the string a, and wanted to convert it to Decimal, you would
obviously write Decimal(a), not Decimal(float(a)), so this is really only a
problem when someone uses a as a literal in a program that is passed to
Decimal, i.e. Decimal(0.100000000000000012).
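Concretely, the quotes are the fix; the extra digits survive only in the string form:

```python
from decimal import Decimal

# The float literal rounds to binary *before* Decimal ever sees it:
unquoted = Decimal(0.100000000000000012)

# The string form keeps all 18 digits exactly:
quoted = Decimal('0.100000000000000012')

assert unquoted != quoted
assert str(quoted) == '0.100000000000000012'
```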
That's slightly unfortunate, but easy to fix by adding quotes. The only
place where I think something like this might occur in real life is when
someone copies a numerical recipe involving some very precise constants,
and mindlessly applies Decimal() without string quotes to the constants.
But that's a "recipe" for failure anyway, since if the recipe really uses
more precision than IEEE double can handle, *with* the quotes the recipe
would be calculated more exactly anyway. Perhaps another scenario would be
if the constant was calculated (by the recipe-maker) within 0.5 ulp using
IEEE double and rendered with exactly the right number of digits.
But these scenarios sound like either they should use the quotes anyway, or
the calculation would be better off done in double rather than Decimal. So
I think it's still pretty much a phantom problem.
> Of course that's not true for all reals (0.1 being the obvious
> counterexample), but it's true for some with your proposal, while today
> it's true for none. So the mean absolute error in Decimal(repr(f)) across
> any range of reals is inherently higher than Decimal.from_float(f). Put
> another way, you're adding additional rounding error. That additional
> rounding error is still less than the rule-of-thumb cutoff that people use
> when talking about going through float, but it's nonzero and not guaranteed
> to cancel out.
> On top of that, the distribution of binary floats is uniform (well, more
> complicated than uniform because they have an exponent as well as a
> mantissa, but you know what I mean); the distribution of closest-repr
> values to binary floats is not.
> I have no idea whether either of these are properties that users of
> Decimal (or, rather, Decimal and float together) care about. But they are
> properties that Decimal(float) has today that would be lost.
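The additional rounding error described above can be made concrete for the earlier example value (a sketch of what the proposal's Decimal(repr(f)) would yield):

```python
from decimal import Decimal

f = float('0.100000000000000012')   # rounds to the double 0.1

exact = Decimal(f)                  # today: exact expansion, zero error
via_repr = Decimal(repr(f))         # the proposal: Decimal('0.1')

error = exact - via_repr
assert error > 0                    # the extra rounding error is nonzero...
assert error < Decimal(2) ** -57    # ...but below half an ulp (ulp = 2**-56)
```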
--Guido van Rossum (python.org/~guido)
This is my first post here, following a recommendation from Alexander
Belopolsky to use this list, to try to convince the Python developers to
reopen a ticket. I am a long-time Python user, and a Django committer.
http://bugs.python.org/issue13936 is a complaint about the fact that midnight
-- datetime.time(0,0,0) -- is evaluated as False in Boolean contexts. It was
closed as invalid, under the claim that """It is odd, but really no odder than
"zero values" of other types evaluating to false in Boolean contexts""".
I would like to ask for this to be reconsidered. Since the ticket was closed,
two main arguments have been raised against the current behavior:
1) The practical argument (paraphrasing Danilo Bergen and Andreas Pelme): the
current semantics is surprising and useless; users do not expect valid times
to be falsey, and do not normally want code to have special behavior at
midnight. Users who ask for Boolean evaluation of a variable that's supposed
to hold a time value usually write "if var" as shorthand for "if var is not
None".
2) The principled argument (which I think is at the root of the practical
argument); quoting myself from the ticket: """Midnight is not a "zero value",
it is just a value. It does not have any special qualities analogous to those
of 0, "", or the empty set. ... Midnight evaluating to false makes as much
sense as date(1,1,1) -- the minimal valid date value -- evaluating to
false."""
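The practical argument in code (bool(time(0, 0, 0)) was False at the time of this post, which made the "if var" pattern silently misbehave; the sketch avoids asserting that truth value so it stays valid either way):

```python
import datetime

midnight = datetime.time(0, 0, 0)

def buggy(when=None):
    # At the time of this discussion bool(midnight) was False, so this
    # branch was skipped for a perfectly valid time value:
    if when:
        return when.isoformat()
    return 'no time given'

def fixed(when=None):
    # Explicit None test: correct for every time, including midnight.
    if when is not None:
        return when.isoformat()
    return 'no time given'

print(fixed(midnight))  # '00:00:00'
```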
Just for the record, I never called them bills just because they contain a
dollar sign! I was thinking of a short word which describes an expression
whose evaluation changes depending on the local circumstances. Like a legal
bill, not a dollar bill.
The name macro doesn't really work, as it's only an expression, not a full
macro.
The idea was never to start passing complex expressions around the place.
It was just to allow a function to take an expression containing about one
or two names, and evaluate the expression internally, so this...
func(lambda foo: foo > 0)
...becomes this:
func($foo > 0)
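For comparison, the status-quo spelling already works with lambda; `func` below is a hypothetical filter-style helper, not a real API:

```python
# Hypothetical helper: takes a one-argument predicate and applies it
# to some internal data -- the pattern the $-syntax aims to shorten.
def func(pred):
    return [x for x in (-2, -1, 0, 1, 2) if pred(x)]

print(func(lambda foo: foo > 0))  # [1, 2]
```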
Other uses are there, and might be helpful from time to time, but it's
mainly about cleaning up APIs. I personally work with user facing Python
APIs all day, and users are generally programming interactively, so perhaps
my take is atypical.
What the hell is a thunk anyway? It's a horrible name.
On the jQuery like syntax, $(foo), that might work and might do away with
the issue of expressions containing bills always evaluating to bills. I
took that too far. Good point.
Starting new thread because this bike has a different shape and color.
Yesterday I was thinking that just making the keyword lambda assignable,
like True, False, and None, would be enough. But the issue with that is
that lambda isn't a name bound to an actual object or type. That was the
seed for
this idea. How to get lambda like functionality into some sort of object
that would be easy to use and explain.
This morning I thought we could have, in a function's definition, something
like "*" and "**", to take an expression. Similar to Nick's idea with =:,
but more general.
The idea is to have "***" used in a def mean: take "any" call expression
and don't evaluate it until "***" is used on it, i.e. the same rules as
"*", which packs a tuple when used in a def and unpacks one when used
outside a def. So "***" used in a def stores the call expression at call
time, and "***" used later evaluates it.
A function call that captures an expression may be tricky to do. Here's one
approach that requires sugar when a function defined with "***" is called.
class TriStar:
    def __init__(self, expr):
        """ expr is a callable that takes no arguments. """
        self.expr = expr
    def resolve(self):  # placeholder name; the proposal spells this ***obj
        """ ***obj --> result """
        return self.expr()
(Any other suggestions for how to do this would be good.)
And at call time....
fn(...) --> fn(TriStar(expr=lambda:...))
So presuming we can do something like the above, the first case is ...
def star_fn(***expr): return ***expr
... = star_fn(...)
Which is a function that just returns whatever its input is, and is even
more general than using *args, **kwds.
The call signature stored in expr isn't evaluated until it's returned with
***expr. So the evaluation is delayed, or lazy, but it's still explicit
and very easy to read.
This returns a lambda-like function.
def star_lambda(***expr): return expr
And is used this way...
result = star_lambda(a * b + c) # captures expression.
actual_result = ***result # *** resolves "result" here!
The resolution is done with ***name, rather than name().
That's actually very good because it can pass through callable tests. So
you can safely pass callable objects around without them getting called at
the wrong time or place.
We can shorten the name because star_lambda is just a function.
L = star_lambda
To me this is an exceptionally clean solution. Easy to use, and not too
hard to explain. It seems a lot more like a Python solution to me as well.
Hoping it doesn't get shot down too quickly,
The tentatively proposed idea here is using dollar signed expressions to
define 'bills'. A bill object is essentially an expression which can be
evaluated any number of times, potentially in different scopes.
The following expression [a bill literal] would be pointless, but would
define a bill that always evaluates to 1:
$1
Some better examples...
* assign a bill to `a` so that `a` will evaluate to the value of the name
`foo` any time that `a` is evaluated, in the scope of that evaluation
a = $foo
* as above, but always plus one
a = $foo + 1
* make `a` a bill that evaluates to the value of the name `foo` at the time
that `a` is evaluated, in that scope, plus the value of `bar` **at the time
and in the scope of the assignment to `a`**
a = $foo + bar
Note. Similarly to mixing floats with ints, any expression that contains a
bill evaluates to a bill, so if `a` is a bill, `b=a+1` makes `b` a bill
too. Passing a bill to eval should be the obvious way to get the value.
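The mixed binding in `a = $foo + bar` can be approximated today with a closure over an explicit scope dict; this only illustrates the intended semantics (the bill syntax itself doesn't exist):

```python
bar = 10

# `foo` is late-bound: looked up in the supplied scope at evaluation
# time. `bar` is early-bound: captured right now via a default argument.
a = lambda scope, _bar=bar: scope['foo'] + _bar

bar = 999                  # rebinding bar later has no effect on `a`
print(a({'foo': 1}))       # 11
```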
The point? It allows functions to accept bills to use internally. The
function would specify any names the bill can reference in the function's
API, like keywords.
def f(b):  # the bill arg `b` can reference `item`
    for item in something:
        if eval(b): return True
f($item < 0)
You could also use a function call, for example `$foo()` would evaluate to
a bill that evaluates to a call to `foo` in the scope and at the time of
any evaluation of the bill.
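The function-API use case can be imitated today by passing the expression as a string and supplying the allowed names through eval's namespace arguments; `something` here is hypothetical sample data:

```python
def f(bill, something=(3, 1, -2)):
    # the bill can reference `item`, supplied via eval's local namespace
    for item in something:
        if eval(bill, {}, {'item': item}):
            return True
    return False

print(f('item < 0'))    # True: -2 is negative
print(f('item > 10'))   # False
```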
I've no idea if this is even possible in Python, and have no hope of
implementing it, but thought I'd share :)
Python 3.3 Decimal Library v0.3 has been released here:
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions
for the C-accelerated decimal module written by Stefan Krah. The
library is open source, GPLv3, and comprises two .py files.
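For context, the C-accelerated decimal module these functions build on already supports arbitrary user-set precision; a stdlib-only sketch (this is not pdeclib itself):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # 50 significant digits

print(Decimal(2).sqrt())          # square root of 2 to 50 digits
print(Decimal(1).exp())           # e to 50 digits
```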
My idea for Python is really two things: 1) make floating point decimal
the default floating point type in python4.x, and 2) make
these functions ( pdeclib.py ) or equivalent available in python4.x by
default.
Thank you for your consideration.
Mark H. Harris
On Tue, Mar 4, 2014 at 8:51 AM, Simon Kennedy <sffjunkie(a)gmail.com> wrote:
> On Monday, 3 March 2014 18:55:17 UTC, Ziad Sawalha wrote:
>> Thanks, Guido.
>> I'll follow up with updates to common tools as I come across them (ex.
>> pep257: https://github.com/GreenSteam/pep257/pull/64).
> The footnote's still in the PEP text.
--Guido van Rossum (python.org/~guido)
PEP-257 includes this recommendation:
"The BDFL recommends inserting a blank line between the last paragraph in a multi-line
docstring and its closing quotes, placing the closing quotes on a line by themselves. This way,
Emacs' fill-paragraph command can be used on it."
I believe Emacs no longer has this limitation. "If you do fill-paragraph in emacs in Python mode
within a docstring, emacs already ignores the closing triple-quote. In fact, the most recent version
of emacs supports several different docstring formatting styles and gives you the ability to switch
between them." - quoting Kevin L. Mitchell, who is more familiar with Emacs than I am.
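For reference, the two styles side by side (illustrative only):

```python
def old_style(x):
    """Return x doubled.

    PEP 257's footnote: a blank line before the closing quotes,
    which sit on a line by themselves.

    """
    return 2 * x

def plain_style(x):
    """Return x doubled.

    Modern Emacs' fill-paragraph ignores the closing triple-quote,
    so the extra blank line is no longer needed.
    """
    return 2 * x
```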
I’m considering removing that recommendation and updating some of the examples in PEP-257,
but I’d like some thoughts from this group before I submit the patch. Any thoughts or references to
conversations that may have already been had on this topic?