[Tutor] greater precision?
John Collins
john at netcore.com.au
Mon Oct 29 11:32:36 CET 2012
Hi Dave
> Did you really leave two very-similar messages 3 minutes apart? Or are
> you using a broken gateway, like Google-groups, and it hiccoughed?
Sorry, I didn't intend to - faulty LH mouse microswitch!
> Without knowing the type of the arguments being passed to the
> crosspoint() function, we cannot say. I can GUESS that they are int,
> long, or float. But you can find that out by examining the source
> that's calling it. Judging from the comments, I might guess they're int
> or long, and if so, only the division being done in the function
> produces floats. If that's the case, then you're limited to float's
> precision. If they are some other type (e.g. Decimal), then indeed there
> might be special functions being called.
Well, the two scripts are each only about an email page long - shall
I post them?
> That import gets you access to the particular symbols it imports.
> Normal arithmetic on floats is already built in, and doesn't need an import.
Right! Thanks.
> I'm assuming you're using CPython, and you say it's on XP. So
> presumably you're running an Intel (or Intel-compatible) processor with
> binary floating point built-in. That processor is the limit of float
> values in normal use. It's good to about 15-17 significant digits.
Sure - 32-bit microprocessor, intrinsic binary limit.
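For the record, that hardware limit can be inspected directly; a minimal sketch on CPython 3, standard library only:

```python
import sys

# IEEE-754 binary doubles carry a 53-bit mantissa, which works out to
# roughly 15-17 significant decimal digits.
print(sys.float_info.mant_dig)  # 53: mantissa bits
print(sys.float_info.dig)       # 15: decimal digits guaranteed exact

# Digits beyond that are silently lost:
x = 0.12345678901234567890
print(repr(x))  # the stored value is not what was typed
```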
> Exactly. It's a side-effect of the first import of the module. On
> subsequent imports, the pyc file is used, unless the py file has been
> modified meanwhile.
Ah! Thanks!
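On a current CPython 3 the bytecode cache lands in a __pycache__ directory rather than beside the source as on the 2.7 in this thread; importlib can report where the cache for a given source file would go. A small sketch (the filename "mymodule.py" is made up for illustration):

```python
import importlib.util

# CPython compiles a module to bytecode on first import and caches the
# result, so later imports skip the compile step unless the .py has
# been modified since. importlib can report where that cache would
# live ("mymodule.py" is just a placeholder name):
cache = importlib.util.cache_from_source("mymodule.py")
print(cache)  # e.g. __pycache__/mymodule.cpython-312.pyc
```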
> 18 digits is what you should get if the code is as I describe. But if
> there are lots of fp operations you're not showing, then an error can
> gradually get larger. And with any finite precision, you have the risk
> from things such as subtracting two nearly-equal values, which will
> reduce the final precision.
AFAIK, it's all FP. Inputs are integers; outputs range from
-2.000000000000 to 2.000000000000.
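The subtraction hazard mentioned above ("catastrophic cancellation") is easy to demonstrate; a minimal sketch:

```python
# Subtracting two nearly-equal floats cancels the leading digits and
# leaves only the noisy trailing ones (catastrophic cancellation).
a = 1.0000001
b = 1.0000000
diff = a - b
print(diff)  # not exactly 1e-07

# The relative error of the result is enormously larger than the
# rounding error in either input:
print(abs(diff - 1e-07) / 1e-07)
```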
> If you need more than that, then go to the Decimal module, which lets you
> set the precision to arbitrary levels. You will see a substantial
> slowdown, of course, if you set the precision very high. If that
> becomes a problem, consider CPython 3.3, which has optimized that
> module. But try not to change too many things at once, as there are
> lots of changes between 2.7 and 3.3.
I think I'll need to, from what you have said / pointed out - i.e., for
precision in excess of 'machine' precision, one needs to convert the
binary floats to Decimal, do foo, then convert back. Sounds daunting!
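No base change is needed, for what it's worth - the stdlib decimal module holds values in base 10 at whatever precision you set on its context. A minimal sketch (works on 2.7 and 3.x):

```python
from decimal import Decimal, getcontext

# Ask for 50 significant digits instead of float's ~15-17.
getcontext().prec = 50

# Build Decimals from ints or strings - not from floats, which would
# drag float's binary rounding error along with them.
q = Decimal(1) / Decimal(3)
print(q)        # fifty 3s after the point

# Compare with a plain binary float:
print(1.0 / 3)  # float's ~16 digits
```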
John.