On Fri, 6 Dec 2019 at 09:33, Steven D'Aprano <steve@pearwood.info> wrote:
> Although I am cautiously and tentatively in favour of setting limits if the benefits Mark suggests are correct, I have thought of at least one case where a million classes may not be enough.
>
> I've seen people write code like this:
>
>     for attributes in list_of_attributes:
>         obj = namedtuple("Spam", "fe fi fo fum")(*attributes)
>         values.append(obj)
>
> not realising that every obj is a singleton instance of a unique class. They might end up with a million dynamically created classes, each with a single instance, when what they wanted was a single class with a million instances.
But isn't that the point here? A limit would catch this and prompt them to rewrite the code as

    cls = namedtuple("Spam", "fe fi fo fum")
    for attributes in list_of_attributes:
        obj = cls(*attributes)
        values.append(obj)
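For what it's worth, a quick interpreter check makes the underlying problem concrete - every call to namedtuple() manufactures a brand-new class, even with identical arguments (a minimal sketch, names purely illustrative):

    from collections import namedtuple

    # Two calls with identical arguments still produce two distinct classes.
    A = namedtuple("Spam", "fe fi fo fum")
    B = namedtuple("Spam", "fe fi fo fum")

    print(A is B)                          # False: each call builds a fresh class
    print(A(1, 2, 3, 4) == B(1, 2, 3, 4))  # True: the instances still compare equal as tuples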
> Could there be people doing this deliberately? If so, it must be nice to have so much RAM that we can afford to waste it so prodigiously: a namedtuple with ten items uses 64 bytes, but the associated class uses 444 bytes, plus the sizes of the methods etc. But I suppose there could be a justification for such a design.
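(Figures like those are easy enough to sanity-check with sys.getsizeof - a rough sketch only, since the exact numbers vary with the CPython version and build, and getsizeof doesn't count the methods and class dict hanging off the class:

    import sys
    from collections import namedtuple

    # One ten-field namedtuple class and a single instance of it.
    Spam = namedtuple("Spam", ["f%d" % i for i in range(10)])
    instance = Spam(*range(10))

    print(sys.getsizeof(instance))  # size of the instance (a tuple subclass)
    print(sys.getsizeof(Spam))      # size of the class object itself

Either way, the class costs several times what a single instance does.)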
You're saying that someone might have a justification for deliberately creating a million classes, based on an example that on the face of it is a programmer error (creating multiple classes when a single shared class would be better), and presuming that there *might* be a reason why this isn't an error? Well, yes - but I could just as easily say that someone might have a justification for creating a million classes in one program, and leave it at that. Without knowing (roughly) what the justification is, there's little we can take from this example.

Having said that, I don't really have an opinion on this change. Basically, I feel that it's fine, as long as it doesn't break any of my code (which I can't imagine it would) - but that's not very helpful! https://xkcd.com/1172/ ("Every change breaks someone's workflow") comes to mind here.

Paul