
Michael McLay wrote:
> I had originally expected the context for decimal calculations to be the module in which a statement is defined. If a function defined in another module is called, the rules of that other module would be applied to that part of the calculation. My expectations of how Python would work with decimal numbers don't seem to match what Guido said about his conversation with Tim, or what you said in this message.
> How can the rules for using decimals be stated so that a newbie can understand what to expect? We could set a default precision of 17 digits, and all calculations that were not exact would be rounded to 17 digits. This would match how their calculators work. I would think this would be the model with the least surprises. For someone needing to be more precise, or less precise, how would this rule be modified?
I intend to have more discussions with Cowlishaw once I finish implementing his spec, but I suspect his answer will be that whoever calls the module should set the precision.

--
Aahz (@pobox.com)  <*>  http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6
Androgynous poly kinky vanilla queer het Pythonista

I don't really mind a person having the last whine, but I do mind someone else having the last self-righteous whine.
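[For readers following the thread later: the decimal module that eventually shipped with Python adopted exactly this "caller sets the precision" model. Precision is a property of a thread-local context, settable globally via getcontext() or scoped to a block via localcontext(). A minimal sketch with the released decimal API -- not the implementation under discussion here, just an illustration of the model:]

```python
from decimal import Decimal, getcontext, localcontext

# The calling code -- not the module doing the arithmetic -- owns the context.
# Here the caller picks 17 significant digits, matching the calculator-like
# default McLay proposes above.
getcontext().prec = 17
wide = Decimal(1) / Decimal(3)
print(wide)  # 0.33333333333333333

# A caller needing less precision narrows it locally; the change is
# automatically undone when the block exits.
with localcontext() as ctx:
    ctx.prec = 5
    narrow = Decimal(1) / Decimal(3)
print(narrow)  # 0.33333

# Back outside the block, the global context is untouched.
print(getcontext().prec)  # 17
```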