On Wed, Dec 04, 2019 at 01:47:53PM +1100, Chris Angelico wrote:

> Integer sizes are a classic example of this. Is it acceptable to limit your integers to 2^16? 2^32? 2^64? Python made the choice to NOT limit its integers, and I haven't heard of any non-toy examples where an attacker causes you to evaluate 2**2**100 and eats up all your RAM.

Do self-inflicted attacks count? I've managed to bring down a production machine, causing data loss, *twice*, by thoughtlessly running something like 10**100**100 at the interactive interpreter. (Neither case was a server, just a desktop machine, but the data loss was still very real.)
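For scale, here's a back-of-the-envelope sketch (my own, using nothing but the stdlib) of why that expression is hopeless: the exponent itself is a perfectly modest integer, but the result would need on the order of 10**200 decimal digits.

```python
import math

# The exponent 100**100 is itself harmless: a 665-bit integer.
exp = 100 ** 100
print(exp.bit_length())  # 665

# But 10**exp would need about exp decimal digits -- roughly 1e200
# of them -- so estimate the cost instead of evaluating it:
digits = exp * math.log10(10)        # ~1e200 digits
bytes_needed = exp * math.log2(10) / 8  # ~4e199 bytes of RAM
```

Estimating via logarithms costs microseconds; evaluating the power exhausts memory long before it finishes.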

> OTOH, being able to do arbitrary precision arithmetic and not worry about an arbitrary limit to your precision is a very good thing.

I'll remind you of Guido's long-ago experience with ABC, which used arbitrary precision rationals (fractions) as its numeric type. That sounds all well and good, until you try doing a bunch of calculations and your numbers start growing to unlimited size. Do you really want a hundred billion digits of precision for a calculation based on measurements made to one decimal place?

-- 
Steven
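P.S. The blow-up is easy to reproduce with Python's own `fractions.Fraction` (a toy sketch of mine, not ABC itself): ten iterations of a one-line recurrence already produce a denominator nearly 500 digits long.

```python
from fractions import Fraction

# Iterate the logistic map x -> r*x*(1 - x) with exact rationals.
# Each step roughly squares the denominator, so its digit count
# doubles every iteration -- unbounded "precision" from inputs
# that each had one significant digit.
x, r = Fraction(1, 3), Fraction(7, 2)
for _ in range(10):
    x = r * x * (1 - x)

print(len(str(x.denominator)))  # 489 digits after just 10 steps
```

With floats the same loop stays at a constant 64 bits per value; with exact rationals the representation grows exponentially, which is exactly the trap ABC fell into.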