decimal by default
Alex Martelli
aleax at mac.com
Wed Jun 28 23:03:50 EDT 2006
Daniel <4daniel at gmail.com> wrote:
...
> Ideally I'd like to have a way to tell the interpreter to use Decimal
> by default instead of float (but only in the eval() calls). I
> understand the performance implications and they are of no concern. I'm
> also willing to define a single global Decimal context for the
> expressions (not sure if that matters or not). Is there a way to do
> what I want without rolling my own parser and/or interpreter? Is there
> some other alternative that would solve my problem?
What about:
c = compile(thestring, '<string>', 'eval')
cc = new.code( ...all args from c's attributes, except the sixth
one, the constants tuple, which should instead be
decimalize(c.co_consts)... )
i.e.
import new

cc = new.code(c.co_argcount, c.co_nlocals, c.co_stacksize, c.co_flags,
              c.co_code, decimalize(c.co_consts), c.co_names,
              c.co_varnames, c.co_filename, c.co_name,
              c.co_firstlineno, c.co_lnotab)
where
def decimalize(tuple_of_consts):
    return tuple(maydec(c) for c in tuple_of_consts)
and
import decimal

def maydec(c):
    if isinstance(c, float):
        c = decimal.Decimal(str(c))
    return c
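(For a concrete feel of what gets decimalized -- the expression here is
just an illustrative one of my own, not from your post:)

>>> c = compile('2.5 * x + 0.1', '<string>', 'eval')
>>> c.co_consts
(2.5, 0.1)
>>> decimalize(c.co_consts)
(Decimal("2.5"), Decimal("0.1"))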
Yeah, the new.code call IS very verbose, just because it needs to
tediously repeat every attribute as a boilerplate argument, but you can
easily wrap the boilerplate in an auxiliary function and forget about
it. (If you often want to change one or two tiny things in a code
object, you probably already have a suitable auxiliary function around.)
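For instance, a minimal sketch of such an auxiliary function (the name
code_with_consts is just my pick for illustration; it only swaps
co_consts, which is all we need here):

import new

def code_with_consts(c, new_consts):
    # Same code object as c in every respect, except that the
    # constants tuple is replaced by new_consts.
    return new.code(c.co_argcount, c.co_nlocals, c.co_stacksize,
                    c.co_flags, c.co_code, new_consts, c.co_names,
                    c.co_varnames, c.co_filename, c.co_name,
                    c.co_firstlineno, c.co_lnotab)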
Now, you can eval(cc) [[I suggest you also explicitly pass dictionaries
for globals and locals, but, hey, that's just a good idea with eval in
general!-)]] and, ta-da!
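Putting it all together, roughly (again with an illustrative expression
and variable of my own choosing):

thestring = '2.5 * x + 0.1'
c = compile(thestring, '<string>', 'eval')
cc = code_with_consts(c, decimalize(c.co_consts))
result = eval(cc, {}, {'x': decimal.Decimal(2)})
print repr(result)   # Decimal("5.1"), not the float 5.1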
Alex