Hm, using slices to indicate *real* intervals sounds odd, given the pretty strong association of slices with linear progressions of integers. Would you write slice(0, 0.5) for 0 <= p < 0.5? There's also the issue of bounds. E.g. probabilities should use the *closed* interval [0, 1], but slice() is traditionally used to mean a *half-open* interval, so slice(0, 1) would be [0, 1). Whereas you wrote 0 < p < 1, which excludes both endpoints and seems just wrong.
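For anyone who wants to check the half-open convention quickly, slicing a range shows it directly (a small illustrative snippet, not part of the proposal):

```python
# slice(start, stop) excludes the stop bound, just like xs[start:stop]
xs = list(range(10))
assert xs[slice(0, 5)] == [0, 1, 2, 3, 4]   # index 5 is not included
assert xs[slice(0, 5)] == xs[0:5]           # identical to the familiar syntax

# so reading slice(0, 1) as the closed interval [0, 1] would be a
# departure from how slices behave everywhere else in Python
```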
I'm really looking forward to using this!
Annotations like `ValueRange` and `MaxLen` (and `MinLen`) will be very useful indeed for tools like Hypothesis and CrossHair, allowing us to automatically infer much more precise tests for other people's code than just the type.
Of course, that's only going to work if the ecosystem is reasonably consistent in how we represent numeric and size bounds, and to that end I'd like to propose that we standardise on the `slice` object. For example:
from numbers import Real
from typing import Annotated, List, Tuple, TypeVar

T = TypeVar("T")

# numeric bounds
Probability = Annotated[Real, slice(0, 1)]  # all probabilities must be 0 < p < 1
Nat = Annotated[int, slice(1, None)]  # natural numbers 1, 2, 3, 4, ...

# collection size bounds
NonEmptyList = Annotated[List[T], slice(1, None)]  # self-explanatory, I hope
NumpyShape = Annotated[Tuple[int, ...], slice(0, 32)]  # Numpy arrays can have at most 32 dimensions
I would actually encourage the use of semantically meaningful aliases or wrappers (to e.g. ensure that `start is not None` for collection sizes), but using a `slice` object at runtime ensures that these common annotations are interoperable between libraries.
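To make the interoperability point concrete, here's a minimal sketch of how a consumer library could read these bounds back out of an annotation. `get_origin`/`get_args` are the standard typing introspection helpers; the `bounds_of` and `check` functions are my own illustration, not a proposed API:

```python
from typing import Annotated, get_args, get_origin

Nat = Annotated[int, slice(1, None)]

def bounds_of(annotation):
    """Return the first slice found in an Annotated type's metadata, else None."""
    if get_origin(annotation) is Annotated:
        for meta in get_args(annotation)[1:]:
            if isinstance(meta, slice):
                return meta
    return None

def check(value, annotation):
    """Return True iff value satisfies the annotated slice bounds.

    For simplicity this sketch treats both bounds as inclusive,
    sidestepping the open/closed question raised in the reply above.
    """
    b = bounds_of(annotation)
    if b is None:
        return True  # no bounds metadata: nothing to check
    if b.start is not None and value < b.start:
        return False
    if b.stop is not None and value > b.stop:
        return False
    return True

assert check(3, Nat)
assert not check(0, Nat)
```

The same loop works unchanged whether the annotation came from my library or yours, which is the whole point of agreeing on a common runtime object.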
Thoughts? If nobody objects I'm happy to write up a short PR against the docs.
Typing-sig mailing list -- firstname.lastname@example.org