Okay, it's clearer. I would still add a *little* bit of implementation to the examples, e.g.
```python
@typing.dataclass_transform()
def create_model(cls: type) -> Callable[[T], T]:
    cls.__init__ = ...
    cls.__eq__ = ...
    return cls
```
One of the things that threw me off was the return type, Callable[[T], T]. That's not really the type a class decorator should return -- but apparently `type` is sufficiently vague that it matches this, *and* mypy ignores class decorators (assuming they return the original class), so this passes mypy. But a callable isn't usable as a base class, so e.g.
```python
class ProUser(User):
    plan: str
```
wouldn't work (and in fact pyright rejects this, at least if I leave the @dataclass_transform() call out -- I haven't tried installing that version yet).
Wouldn't it make more sense if create_model was typed like this?
```python
T = TypeVar("T", bound=type)

def create_model(cls: T) -> T:
    cls.__init__ = ...
    cls.__eq__ = ...
    return cls
```
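To make that concrete, here's a minimal runnable sketch (the method bodies are just stand-ins I made up, not anything from the PEP) showing that with this signature the decorator returns the original class, which therefore remains usable as a base class:

```python
from typing import TypeVar

T = TypeVar("T", bound=type)

def create_model(cls: T) -> T:
    # Stand-in bodies; a real library would synthesize these
    # from the class's annotations.
    cls.__init__ = lambda self, **kwargs: self.__dict__.update(kwargs)
    cls.__eq__ = lambda self, other: self.__dict__ == other.__dict__
    return cls

@create_model
class User:
    id: int
    name: str

# Because create_model returns the class itself, subclassing works:
class ProUser(User):
    plan: str
```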
(Is it possible that you were just accidentally using the signature of dataclass_transform() as the signature of the create_model() class decorator? In the "Runtime Behavior" section you show that it returns Callable[[T], T].)
Another thing that surprises me is that apparently the names of the various allowable keyword arguments (eq, final, etc.) are fixed, and you can only specify whether they are supported and what their default setting is? That seems super dependent on the current set of popular "dataclass-like" functions. (And unlike Paul, I don't think that now that dataclasses exist in the stdlib, the other libraries should defer to its interface or implementation -- I think there's plenty of room for new class decorators along these lines.)
I haven't really tried to understand what you're doing with the field descriptors; I suppose it's similarly narrow?