#- Me too. But let's promise that the docs will include an example
#- of how to use quantize() to do rounding, since the name will
#- not be obvious to the uninitiated.
I'll do my best with the docs. I have in mind a concept explanation of
Decimal and Context, plus the syntax and examples for each method.
But let's promise that you'll give me feedback on them. ;)
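For the record, rounding with quantize() in the decimal module as it
eventually shipped looks like this (a minimal sketch; the rounding-mode
constant is the stdlib's):

```python
from decimal import Decimal, ROUND_HALF_UP

# quantize() rounds a value to the exponent of its argument,
# which is the idiomatic way to round to a fixed number of places.
price = Decimal('7.325')
cents = price.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(cents)  # 7.33
```

Because decimal arithmetic is exact, 7.325 rounds up to 7.33 reliably,
where the binary float 7.325 might not.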
> > #- c.using_context(input)
> You mean adding a method to the context, to create a Decimal using
> itself as context and not the one from the thread?
> If yes, what about c.create_decimal(number)?
> And with floats? c.create_decimal_from_float(number)? Or the same
> method as before?
+1 on Context.create_decimal(float) and Context.create_decimal(string)
(okay, so it's really just one method that takes two input types).
The name just fits my brain. The fact that it uses the context is
obvious from the fact that it's a Context method.
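Both names did in fact land in the stdlib (create_decimal_from_float
arrived later, in Python 3.2). A quick sketch of each:

```python
from decimal import Context, ROUND_DOWN

c = Context(prec=4, rounding=ROUND_DOWN)

# The string form is rounded using c, not the thread's context.
d1 = c.create_decimal('123456.789')
print(d1)  # 1.234E+5

# The float form converts the binary float exactly, then rounds with c.
d2 = c.create_decimal_from_float(1.1)
print(d2)  # 1.100
```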
-- Michael Chermside
From: Barry Warsaw <barry(a)python.org>
>Date: Thu, 22 Apr 2004 09:10:21 -0400
>On Thu, 2004-04-22 at 09:02, John Belmonte wrote:
> > It may also be useful to compare with other config alternatives, such as
> > ZConfig.
>ZConfig takes a completely different approach than ConfigParser; it
>rocks but you do need to do more work (define a schema first, then a
>configuration file, although I usually do that in the reverse order ;).
>I'm hoping someday ZConfig itself (or something close to it) will make
>it into the standard library.
Anyone who has an interest in ZConfig is strongly encouraged to look at
my alternative method, as it is a lot easier to set up and use and
provides more flexibility. I'm hoping to see some cross-pollination of
ideas, so your feedback would be greatly appreciated.
I will be making an open source project on SourceForge in the near
future (unless I hear compelling arguments not to) and will make
announcements to advertise its existence (any suggestions on posting
locations are appreciated as well).
#- >> Work which I heartily thank Facundo for doing.
#- > Hear, hear! (For non-English readers, that means "garbanzo
#- > magnifico blotski!" <wink>.)
#- I agree entirely.
Thank you all. Very much.
And especially to Tim, for whole-heartedly defending the PEP's
adherence to the Spec. As I am a non-native English speaker and a
numbers newbie, you're
From: Tim Peters
>> For this pass, simply staying with the spec is more than sufficient
> Indeed, it's a near crushing amount of work!
>> Work which I heartily thank Facundo for doing.
> Hear, hear! (For non-English readers, that means "garbanzo magnifico
> blotski!" <wink>.)
I agree entirely.
#- [Batista, Facundo]
#- > ...
#- > - Methods like round() don't need to be discussed: the Spec
#- > defines how they work, and the PEP is for implementing the Spec.
#- Actually, there is no round() operation in the spec. I don't
#- remember whether there used to be, but there definitely isn't now.
You're right. My fault.
#- Doesn't mean we can't supply .round(), does mean we have to spell
#- out what it does. I assume decimal.round(whatever) acts the same
#- as the spec's plus() would act if "whatever" were temporarily (for
#- the duration of plus()) folded into context. If so, that's all it
#- needs to say.
Well, I think we must decide how it works:

>>> d = Decimal('12345.678')
>>> d
Decimal( (0, (1, 2, 3, 4, 5, 6, 7, 8), -3) )

And the syntax being Decimal.round(n), we have the following options:

a) n is the quantity of relevant (significant) digits of the final
number (must be non-negative):

>>> d.round(4)
Decimal( (0, (1, 2, 3, 5), 1L) )

b) n has the same behaviour as in the built-in round():

>>> d.round(1)
Decimal( (0, (1, 2, 3, 4, 5, 7), -1L) )
>>> d.round(-1)
Decimal( (0, (1, 2, 3, 5), 1L) )

Which option do you all like more?
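Neither spelling survived as a round() method; in the module as
eventually shipped, option (a) is expressed through a context's
precision and option (b) through quantize(). A sketch of both, with the
rounding mode made explicit:

```python
from decimal import Decimal, Context, ROUND_HALF_UP

d = Decimal('12345.678')

# Option (a): n significant digits, via a context with prec=n.
ctx = Context(prec=4, rounding=ROUND_HALF_UP)
a = ctx.plus(d)  # unary plus rounds to the context's precision

# Option (b): built-in-round() semantics, via quantize().
b1 = d.quantize(Decimal('0.1'), rounding=ROUND_HALF_UP)   # 1 place
b2 = d.quantize(Decimal('1E+1'), rounding=ROUND_HALF_UP)  # -1 places
print(a, b1, b2)  # 1.235E+4 12345.7 1.235E+4
```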
[Jewett, Jim J]
#- This is effectively saying
#- (1) Create a decimal using the default context.
#- (2) Change the context to my custom context.
#- (3) Perform various rounding and scaling operations.
#- (4) Change the context back to the default.
#- instead of:
#- (1) Create a decimal using my custom context.
#- The four-step procedure may (or may not) be done just as
#- efficiently under the covers, but it is ugly.
#- Is there any reason why input and output should be the only
#- operations that do not honor an optional local context?
I haven't reviewed all the mails to record the community's will, but as
far as I recall, you *could* use a context at creation time.
I think it's still not clear which to use...
...(I prefer the latter), but I think you could use it.
But: there is no such thing as "scale" in the context.
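The four-step dance above can also be collapsed with a temporary
context; in the module as later shipped this is spelled
decimal.localcontext() (added after this thread, in Python 2.5):

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:  # work on a copy of the thread context
    ctx.prec = 3
    x = +Decimal('2.34567')  # unary plus applies the active context
print(x)  # 2.35
```

On exit from the with-block the thread's original context is restored
automatically, which is steps (2) and (4) done under the covers.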
#- >[Jewett, Jim J]
#- >#- Under the current implementation:
#- >#- (0, (2, 4, 0, 0, 0), -4)
#- >#- is not quite the same as
#- >#- (0, (2, 4), -1)
#- >#- Given this, it should be possible for the user to specify
#- >#- (at creation) which is desired.
#- >It *is* possible:
#- >Decimal( (0, (2, 4, 0, 0, 0), -4) )
#- >Decimal( (0, (2, 4), -1) )
#- <sarcasm>Great!</sarcasm> One of my previous posts specifically
#- listed that I didn't want to have to pre-parse and reformulate
#- string literals to achieve the desired precision and scale. The
#- "external"
<lost> what? </lost> :p
I still don't understand why you want that.
#- >If you construct using precision, and the precision is smaller
#- >than the quantity of digits you provide, you'll get rounded, but
#- >if the precision is greater than the quantity of digits you
#- >provide, you don't get filled with
#- Rounding is exactly what should be done if one exceeds the desired
#- precision. Using less than the desired precision (i.e., not
#- filling in zeros) may be okay for many applications.
#- This is because any operations on the value will have to be with
#- the precision defined in the decimal context. Thus, the results
#- will be other than that the Decimal instance may not store the
#- maximum precision available by the
If I don't misunderstand, you're saying that storing additional zeroes
is important for your future operations?
Let's make an example.
If I have '2.4000', I go into decimal and get:
Decimal( (0, (2, 4, 0, 0, 0), -4) )
If I have '2.4', I go into decimal and get:
Decimal( (0, (2, 4), -1) )
Are you trying to say that you want Decimal to fill up that number with
zeroes:
>>> Decimal('2.4', scale=4)  # behaviour not intended, just an example
Decimal( (0, (2, 4, 0, 0, 0), -4) )
...just to represent that you have that precision in your measurements
and reflect it in future arithmetic operations?
If yes, I think that: a) '2.4' and '2.4000' will behave identically in
future operations; b) why do you need to represent the precision of
your measurement in the number itself?
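For what it's worth, in the decimal module as it finally shipped, the
two values compare equal but do not behave identically: the trailing
zeros are kept in the coefficient and propagate through arithmetic:

```python
from decimal import Decimal

a = Decimal('2.4')
b = Decimal('2.4000')

print(a == b)        # True: equal as numbers
print(a * 2)         # 4.8
print(b * 2)         # 4.8000 -- the trailing zeros survive
print(b.as_tuple())  # DecimalTuple(sign=0, digits=(2, 4, 0, 0, 0), exponent=-4)
```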
>> For all other names, the compiler may assume that
>> the nearest enclosing binding of this name will
>> always be in the same namespace. (Names need not
>> all be in a single namespace, but once a particular
>> name is found, that namespace will always be the
>> correct place to look for that name.)
> This actually isn't that different from my proposal
> for builtins,
I had been assuming that class (and instance) attribute
resolution would be subject to the same speedup.
If this is really only about globals and builtins,
then you can just initialize each module's dictionary
with a copy of builtins. (Or cache them in the module
__dict__ on the first lookup, since you know where it
would have gone.) This still won't catch updates to
builtins, but it will eliminate the failed lookup and
the second dictionary lookup.
If you really want to track changes to builtins, it is
still faster to echo builtin changes across each module
than it would be to track every name's referrers in
every module (as in PEP 266).
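The priming idea sketched above could look roughly like this
(prime_namespace is a hypothetical name, not an actual CPython hook):

```python
import builtins

def prime_namespace(module_dict):
    # Copy every builtin into the module's own dictionary, so a global
    # lookup succeeds on the first probe instead of falling through to
    # __builtins__. setdefault() preserves names the module already
    # shadows; dunder names are skipped to avoid clobbering metadata.
    for name, value in vars(builtins).items():
        if not name.startswith('__'):
            module_dict.setdefault(name, value)
    return module_dict

ns = {'len': 'shadowed'}
prime_namespace(ns)
print(ns['len'])         # 'shadowed' -- existing shadowing wins
print(ns['max'] is max)  # True -- builtin is now one lookup away
```

As the text notes, this still won't see later updates to builtins, but
it removes both the failed first lookup and the second dictionary probe.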
> Instead, names that are determined to be builtin are
> not allowed to be bound via __setattr__, and
> are never looked up in the globals dictionary.
Some of the bugs that got the global tracking backed
out involved changing __builtins__. If you only add
to it, then I suppose the current method (which allows
shadowing) is a reasonable fallback. It doesn't work
so well if you want to remove names from builtin.
In fairness, the language spec does warn that a new
builtin dict may contain more than you expect, and I
suppose it could be created with extra names pointed
to NotImplemented instead of just raising a NameError.
>>Question: Is there any reason that this should apply
>>only to builtins, rather than to any namespace?
> Simplicity. Functions today do only three kinds of
> lookups: LOAD_CELL(?), LOAD_FAST and LOAD_GLOBAL.
> LOAD_CELL is an indirect load from a known, specific
> nested scope. LOAD_FAST loads from an array offset
> into the current frame object. LOAD_GLOBAL checks
> globals and then builtins.
It could be converted to LOAD_CELL (or perhaps even
LOAD_FAST) if the compiler were allowed to assume no
changes in shadowing. (Including an assumption that
the same dictionaries will continue to represent the
globals and builtin namespaces for this code object.)
> if it was desired to disallow shadowing by modifiying
> globals(), then perhaps globals() and module.__dict__
> could simply return a dictionary proxy that prevented
> modification of the disallowed values. (But the bare
> dictionary would still be used by the eval loop.)
Why not just make NameDict a subclass that has a different
__setitem__? The times when eval cares should be exactly
the times when you need to do extra checks. This also lets
you use something like DLict (PEP 267) to move most lookup
overhead to compile-time.
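A minimal sketch of that subclass idea (NameDict and its protected set
are hypothetical, not anything in CPython):

```python
class NameDict(dict):
    """A globals-style dict whose __setitem__ refuses to shadow a
    chosen set of builtin names (hypothetical sketch)."""
    PROTECTED = frozenset({'len', 'max', 'min'})

    def __setitem__(self, key, value):
        if key in self.PROTECTED:
            raise NameError('cannot shadow builtin %r' % key)
        dict.__setitem__(self, key, value)

g = NameDict()
g['spam'] = 1              # an ordinary global: fine
try:
    g['len'] = lambda x: 0
except NameError as e:
    print(e)               # cannot shadow builtin 'len'
```

The eval loop would keep indexing the dict directly (no __setitem__
call on the hot path); only name-binding statements pay for the check.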
At 04:39 PM 4/21/04 -0400, Jewett, Jim J wrote:
> >> If this is really only about globals and builtins,
> >> then you can just initialize each module's dictionary
> >> with a copy of builtins. (Or cache them in the module
> >> __dict__ on the first lookup, since you know where it
> >> would have gone.)
>Phillip J. Eby:
> > Interesting thought. The same process that currently
> > loads the __builtins__ member could instead update the
> > namespace directly.
> > There's only one problem with this idea, and it's a big
> > one: 'import *' would now include all the builtins,
> > causing one module's builtins (or changes thereto) to
> > propagate to other modules.
>Why is this bad?
Because some modules are examined by software, and only the expected names
belong there. For example, I believe if you run 'pydoc' on such a module,
it will proceed to document all the builtins.
>The reason to import * is that you intend to use the
>module's members as if they were your own. If the
>other module actually has modified a builtin, you'll
>need to do the same, or the imported members won't
>work quite right.
*shudder* I'm glad the language really doesn't work in the way you just
described. :) No, just because one module shadows a builtin, doesn't mean
you have to follow suit.
> > ... declare that any builtin used in a module
> > that's known to be a builtin, is allowed to be
> > optimized to the meaning of that builtin.
> > In effect, '__builtins__' should be considered an
> > implementation detail, not part of the language,
>Many builtins (None, True, KeyError) are effectively
>keywords, and I would agree.
>Others, like __debug__, are really used for
>intermodule communication, because there isn't
>any other truly global namespace. (Perhaps
>there should be a conventional place to look,
>such as a settings module?)
__debug__ is also a builtin, in the sense of being optimizable by the
compiler, so I don't see any reason to look at it differently. In fact,
isn't __debug__ *already* treated as a constant by the compilers?
Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.8 -- press F1 for help
SyntaxError: can not assign to __debug__ (<pyshell#1>, line 1)
Yep, I guess so.
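The same compile-time rejection can be shown without an interactive
shell (behaviour of current CPython; the exact message wording varies
by version):

```python
# Assigning to __debug__ is rejected when the code is compiled,
# before it ever runs -- which is what makes it a compiler constant.
try:
    compile('__debug__ = 0', '<test>', 'exec')
except SyntaxError as e:
    print('SyntaxError:', e.msg)
```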