On Wed, Dec 4, 2019 at 1:31 PM Gregory P. Smith <greg@krypto.org> wrote:
> Overall I like the idea of limits... But... in my experience, limits like this tend to impact generated source code or generated bytecode, and thus any program that transitively uses those.
Overall, I *dislike* the idea of limits, but accept them when there's a true and demonstrable benefit :) I've worked with a lot of systems - languages (or interpreters/compilers), file formats, etc, etc, etc - that have arbitrary limits in them. The usual problem is that, a few years down the track, what used to be "wow, that's crazy huge" becomes "ugh, now I'm hitting this silly limit". For instance, PostgreSQL is limited to 1600 columns per table. Is that an insanely high limit you'll never hit, or an actual real limiting factor?

Integer sizes are a classic example of this. Is it acceptable to limit your integers to 2^16? 2^32? 2^64? Python made the choice to NOT limit its integers, and I haven't heard of any non-toy examples where an attacker causes you to evaluate 2**2**100 and eat up all your RAM. OTOH, being able to do arbitrary-precision arithmetic without worrying about an arbitrary limit on your precision is a very good thing.

IMO the limit is, in itself, a bad thing. If it guarantees that some exploit can't bring down your system, sure. If it permits a significant and measurable performance benefit, sure. But the advantage isn't the limit itself - it's what the limit enables, and that's what I'd like to see more evidence of. :)

ChrisA
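
PS. To put the int example in concrete terms, here's roughly what any stock CPython session shows (illustrative snippet only):

>>> n = 2 ** 64        # doesn't fit in a 64-bit register
>>> n
18446744073709551616
>>> n.bit_length()     # 65 bits - no overflow, no limit hit
65

2**2**100 is a different beast: merely *storing* the result takes about 2**100 bits, so evaluating it exhausts memory long before any arbitrary language limit would come into play.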