> And these days anybody who is using Decimal for Money (which ought to be everybody,
I'm not so sure about that -- using base-10 is nice, but it doesn't automatically buy you the appropriate rounding rules, etc. that you need to do "proper" accounting. And, as MA pointed out, in much "finance" work, the approximations of FP are just as appropriate as they are for science. (Which, of course, floats are not always appropriate for...)
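A quick sketch of what I mean (illustrative only): Decimal stores the value exactly, but which rounding rule applies is still something you have to spell out yourself:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    price = Decimal("2.665")

    # Exact base-10 storage, but different rounding policies give
    # different answers -- picking the right one is on you:
    print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 2.66
    print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 2.67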
> still wants to grab the SciPy stack so they can use pandas to analyse the data, and matplotlib to graph it, and bokeh to turn the results into an all-singing-and-dancing interactive graph.
There's no technical reason Numpy couldn't have a decimal dtype -- someone "just" has to write the code. The fact that no one has tells me that no one needs it that badly. (Or that numpy's dtype system is inscrutable :-) )

But while we're on Numpy -- there is a lesson there. Numpy supports many different precisions of various types: int8, int16, int32, ..., float32, float64, .... Back in the day, the coercion rules would tend to push users' arrays to larger dtypes: say you added a Python float (float64) to a Numpy array of float32 -- you'd get a float64 array. But the fact is that people choose a smaller dtype for a reason, so numpy's casting rules were changed to make it less likely that you'd accidentally upcast your arrays.

A similar principle applies here. If someone is working with Decimals, they have a reason to do so. Likewise if they are not working with Decimals... So it's all good...

-CHB
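PS: to make the casting point concrete -- a quick sketch against a recent NumPy (the exact rules have shifted across releases, so treat the dtypes shown as illustrative):

    import numpy as np
    from decimal import Decimal

    a = np.zeros(3, dtype=np.float32)

    # A plain Python float scalar no longer drags the array up to float64:
    print((a + 1.5).dtype)                              # float32

    # Mixing with an actual float64 array does upcast, as you'd expect:
    print((a + np.zeros(3, dtype=np.float64)).dtype)    # float64

    # And if you really do want Decimals in an array today, an object
    # dtype works element-wise (slow, but correct):
    d = np.array([Decimal("1.10"), Decimal("2.20")])
    print(d.dtype, d.sum())                             # object 3.30

Which is just the "people chose the smaller dtype for a reason" rule made visible.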