Steven D'Aprano wrote:
I'm going to second Chris's comment about efficiency. The purposes of this PEP (as I read it) are: (1) security (less chance of code intentionally or accidentally exceeding low-level machine limits that allow a security exploit); (2) improved memory use; (3) such improved memory use will lead to faster code. 1 and 2 seem to be obviously true, but like Chris, I think it's a bit much to expect us to take 3 on faith until after the PEP is accepted:
"Reference Implementation: None, as yet. This will be implemented in CPython, once the PEP has been accepted." I think the change you are asking for is akin to asking us to accept the GILectomy on the promise that "trust me, it will speed up CPython, no reference implementation is needed". It's a big thing to ask. Most of us are Python programmers, not experts on the minutiae of the interaction between C code and CPU cache locality, branch prediction, etc.
While I personally am okay with putting in limits where necessary for security, or where there's a clear performance win and enough headroom for users, I agree with Steven that the PEP currently doesn't lay that out beyond conjecture that this should have some benefit. I can also see the argument for making the limits part of the language standard rather than per-interpreter, so that one can know their code will run everywhere. But I would argue most auto-generated code is probably not library code that will approach any proposed limit; it's more likely app code, which is already tied to a specific interpreter. In that case I would rather let the interpreters manage their own limits, since their performance characteristics will differ and these caps may bring some of them no benefit. Artificial constraints in the name of interpreter consistency go against "practicality beats purity".
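(For readers wondering how auto-generated code would bump into a cap like the one discussed: the sketch below generates a module programmatically and compiles it with CPython's built-in compile(). The helper name and the 10,000-line sample size are mine; the one-million figure is the limit the PEP proposes for lines per module.)

```python
# Hypothetical probe: auto-generate a module with many simple
# statements and verify the interpreter still compiles and runs it.
# Under the PEP's proposal, pushing n_lines past 1,000,000 would
# become a compile-time error; today it is only bounded by memory.

def make_module(n_lines: int) -> str:
    """Return source text containing n_lines assignment statements."""
    return "\n".join(f"x_{i} = {i}" for i in range(n_lines))

source = make_module(10_000)  # well under any proposed cap
code = compile(source, "<generated>", "exec")

namespace = {}
exec(code, namespace)
print(namespace["x_9999"])  # → 9999
```

A code generator that wanted to stay portable under per-interpreter limits would need to query or document each interpreter's cap, which is part of the consistency argument above.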