On Wed, Dec 4, 2019 at 3:16 PM Steven D'Aprano <steve@pearwood.info> wrote:

> On Wed, Dec 04, 2019 at 01:47:53PM +1100, Chris Angelico wrote:

> > Integer sizes are a classic example of this. Is it acceptable to limit your integers to 2^16? 2^32? 2^64? Python made the choice to NOT limit its integers, and I haven't heard of any non-toy examples where an attacker causes you to evaluate 2**2**100 and eats up all your RAM.

> Do self-inflicted attacks count? I've managed to bring down a production machine, causing data loss, *twice*, by thoughtlessly running something like 10**100**100 at the interactive interpreter. (Neither case was a server, just a desktop machine, but the data loss was still very real.)

Hmm, and you couldn't Ctrl-C it? I tried and was able to. There ARE a few situations where I'd rather get a simple and clean MemoryError than have it drive my system into the swapper, but there are at least as many situations where you'd rather be able to use virtual memory instead of being forced to manually break a job up. But even there, you can't enshrine a limit in the language definition, since the actual threshold depends on the running system. (And can be far better enforced externally, at least on a Unix-like OS.)
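As a sketch of that external enforcement on a Unix-like OS, the stdlib resource module can cap the process's address space so a runaway allocation fails cleanly instead of thrashing. The 1 GiB figure here is arbitrary, purely for illustration:

```python
import resource

# Cap this process's virtual address space (soft limit only), so a
# runaway allocation raises MemoryError instead of driving the
# machine into swap.  The 1 GiB cap is arbitrary -- tune to taste.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (1024**3, hard))

try:
    huge = bytearray(10**10)   # ~10 GB request: refused cleanly
except MemoryError:
    print("MemoryError, not a thrashing machine")
finally:
    # Restore the original soft limit.
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

The same cap can be imposed from outside the process entirely, e.g. via ulimit in the invoking shell, which is what I mean by enforcing it externally.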

> > OTOH, being able to do arbitrary precision arithmetic and not worry about an arbitrary limit to your precision is a very good thing.

> I'll remind you of Guido's long-ago experience with ABC, which used arbitrary precision rationals (fractions) as their numeric type. That sounds all well and good, until you try doing a bunch of calculations and your numbers start growing to unlimited size. Do you really want a hundred billion digits of precision for a calculation based on measurements made to one decimal place?
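(The growth you describe is easy to reproduce with fractions.Fraction. Newton's method for sqrt(2) is a handy demonstration: every step is exact, but the denominator roughly squares on each iteration, so the numbers balloon even though the input fits in one digit:)

```python
from fractions import Fraction

# Newton's method for sqrt(2) with exact rationals: every step is
# exact, but the denominator roughly squares each iteration.
x = Fraction(1)
for i in range(8):
    x = (x + 2 / x) / 2
    print(i, len(str(x.denominator)), "digit denominator")
```

After just eight iterations the denominator is already dozens of digits long, and it keeps doubling in length from there.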

Sometimes, yes! But if I don't, it's more likely that I want to choose a limit within the program rather than run into a hard limit defined by the language. I've done a lot of work with fractions.Fraction and made good use of its immense precision.

The Python float type makes a significant tradeoff of precision for performance. But decimal.Decimal lets you choose exactly how much precision to retain, rather than baking it into the language as "no more than 1,000,000 digits of precision, ever". The solution to "do you really want a hundred billion digits of precision" is "use the decimal context to choose", not "hard-code a limit". The value of the hard-coded limit in a float is that floats are way WAY faster than Decimals.

ChrisA
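A concrete sketch of "use the decimal context to choose" — localcontext scopes the precision choice to a block, so the program, not the language, decides how many digits to carry:

```python
from decimal import Decimal, localcontext

# You pick the precision per context; the language doesn't pick it.
with localcontext() as ctx:
    ctx.prec = 50                     # carry 50 significant digits
    print(Decimal(1) / Decimal(7))    # 50 digits of 0.142857...

with localcontext() as ctx:
    ctx.prec = 5                      # or just 5, if that's enough
    print(Decimal(1) / Decimal(7))    # → 0.14286
```

Nothing stops you from cranking prec up to a hundred billion if you truly want that, which is exactly the point: it's a program-level choice, not a language-level limit.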