On Tue, May 13, 2008 at 11:03 AM, Leonardo Santagada email@example.com wrote:
On 13/05/2008, at 11:04, Guido van Rossum wrote:
On Tue, May 13, 2008 at 6:52 AM, Joshua Spoerri firstname.lastname@example.org wrote:
Should decimal be the default for floating point literals?
E.g. 1.2 would actually be decimal.Decimal("1.2"), and float(1.2) would be used to get a traditional binary floating point number.
Not in 3.0, there are too many things that are subtly different. Perhaps at some point post 3.0 we can invent a mechanism whereby modules can enable this feature on a per-module basis, and then some number of revisions later we can change the default.
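(A minimal sketch of the kind of subtle difference at stake here: binary floats cannot represent most decimal fractions exactly, and the two types deliberately refuse to mix in arithmetic.)

```python
from decimal import Decimal

# The literal 0.1 is a binary float and cannot represent 0.1 exactly,
# so binary rounding error accumulates.
binary = 0.1 + 0.2
exact = Decimal("0.1") + Decimal("0.2")

print(binary)  # 0.30000000000000004
print(exact)   # 0.3

# Mixing Decimal and float in arithmetic raises TypeError, one of the
# incompatibilities a silent change of literal type would expose.
try:
    Decimal("1.2") + 1.2
except TypeError as e:
    print("mixing raises TypeError")
```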
I would be happier with a d prefix in front of the number, following the scheme of raw strings, either to mean Decimal or to mean Double, or maybe an f prefix. This way things would work for both scientists and the rest of the users.
I would prefer "from __future__ import default_decimals", with the target version for enforcement being 6.0 or later; we'll all be carrying quantum laptops by then ;-)