I was thinking the same thing. We should distinguish limits on the codegen process, which seem reasonable, from limits at runtime. Classes and coroutines are objects, and as with objects in general, a program should have the option of filling its heap with arbitrarily many of them. (Whether or not that is wise, it is not a design choice for us to limit arbitrarily. For example, I recall that Eve Online is/was running large numbers of stackless coroutines, possibly well in excess of 1M.)
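To make the codegen-vs-runtime distinction concrete, here is a minimal sketch using plain generators as stand-ins for coroutine objects (the names and the count are just illustrative, not anything from the PEP):

    def worker():
        while True:
            yield  # a suspended coroutine is just another heap object

    # Creating a million suspended coroutines is purely a question of heap
    # memory, not of any per-module or per-code-object limit.
    fleet = [worker() for _ in range(1_000_000)]
    for g in fleet:
        next(g)  # advance each one to its first suspension point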
For some comparison:
Note that since Java 8 the JVM has made it easier to tune the use of native memory for class metadata, in part to relax the earlier constraints around "permgen" allocation: by default, class metadata is allocated from native memory without any fixed limit (this is managed by Metaspace). I suppose that if this were exposed as a tunable option it might be useful, but probably not: Java's ClassLoader design is prone to leaking classes, as we know from our work on Jython, and to my knowledge there is nothing comparable in CPython that would make class objects any more of a problem than other objects.
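(For what it's worth, anyone who really wants such a cap can still impose one explicitly with a JVM option, for example

    java -XX:MaxMetaspaceSize=256m -jar app.jar

where app.jar is just a placeholder. By default there is no maximum, and -XX:MetaspaceSize only sets the threshold that triggers the first collection of class metadata.)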
I would also suggest for PEP 611 that any limits be discoverable (maybe in sys) so that they can be used by other implementations, such as Jython. There's no direct correspondence between lines of code and the size of the generated Python or Java bytecode, but discoverable limits could still be helpful for some codegen systems. Jython is limited to 2**15 bytes per method because of label offsets, although we do have workarounds for certain scenarios, and we could always fall back to compiling to, and then running, CPython bytecode for large methods. (Currently we use CPython to do that workaround compilation, thanks!)
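To make the discoverability suggestion concrete, something along these lines could work; the attribute and key names below are invented purely for illustration and are not part of PEP 611:

    import sys

    # Hypothetical: a mapping of implementation-specific limits.
    limits = getattr(sys, "implementation_limits", {})
    max_code_size = limits.get("bytecode_per_code_object")

    if max_code_size is None:
        print("No documented limit; assume effectively unbounded")
    else:
        # A codegen system could use this to decide, for example, whether to
        # fall back to compiling and running CPython bytecode for a method
        # that would be too large to emit directly.
        print(f"Code objects are capped at {max_code_size} bytes")

That way Jython (or any tool generating Python code or bytecode) could query the limits rather than hard-coding CPython's values.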
Lastly, PEP 611 currently conjectures, erroneously, that "For example, Jython might need to use a lower class limit of fifty or sixty thousand becuase [sic] of JVM limits." Given the Metaspace behavior described above, the JVM imposes no such limit on the number of classes.
- Jim