On Sat, Aug 8, 2020 at 16:41 Dominik Vilsmeier <dominik.vilsmeier@gmx.de> wrote:

On 08.08.20 05:48, David Mertz wrote:

On Fri, Aug 7, 2020, 6:03 PM Paul Moore <p.f.moore@gmail.com> wrote:
> x: int[0:]  # any ints greater than or equal to zero would match, others would fail
> x: int[:101]  # any ints less than 101 match
> x: int[0:101:2]  # even ints less than 101 match

I suspect the biggest issue with this is that it's likely to be
extremely hard (given the dynamic nature of Python) to check such type assertions statically.

Yes, it's hard in the sense that it would require solving the halting problem.

How is it any more difficult than already existing type checks? The proposal talks about type hints for static type analysis and hence implies using literals for the slices. There's no dynamic nature to it. You can define `int[x]` to be a subtype of `int[y]` if the numbers defined by `x` are a subset of those defined by `y`.

Referring to one of the examples including `random.randint`:

    def foo(x: int[0:11]):
        pass

    foo(random.randint(0, 10))

Regarding static type analysis, the only information about `random.randint` is that it returns an `int` and hence this should be an error. If you want to use it that way, you'd have to manually cast the value:

    foo(typing.cast(int[0:11], random.randint(0, 10)))

This of course raises questions about the usefulness of the feature.

By the way, I would find it more intuitive if the `stop` boundary were included in the interval, i.e. `int[0:10]` means all the integers from 0 to 10 (including 10). I don't think that conflicts with the current notion of slices, since slices always indicate positions in a sequence, not the values themselves (otherwise `int[0:10]` would mean "the first 10 integers", but where would those even start?).


> This of course raises questions about the usefulness of the feature.

I feel like this discussion boils down to two arguments: one says the checking is doable as long as you keep it to something with very limited usefulness (static literal ranges), and the other says that to make it useful you'd need dependent typing and a sophisticated theorem prover.
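To make the "limited usefulness" end of that concrete: if a tool only had to handle literal bounds, the subset rule Dominik describes is genuinely easy. Here is a rough, self-contained sketch, just an illustration of the rule; `IntRange` and `is_subrange` are made-up names, not part of typing or mypy:

    from typing import Optional, Tuple

    # (start, exclusive stop); None means unbounded on that side, like int[:101]
    IntRange = Tuple[Optional[int], Optional[int]]

    def is_subrange(sub: IntRange, sup: IntRange) -> bool:
        # int[x] is a subtype of int[y] iff every value allowed by x
        # is also allowed by y.
        lower_ok = sup[0] is None or (sub[0] is not None and sub[0] >= sup[0])
        upper_ok = sup[1] is None or (sub[1] is not None and sub[1] <= sup[1])
        return lower_ok and upper_ok

    assert is_subrange((0, 11), (0, None))      # int[0:11] is usable as int[0:]
    assert not is_subrange((0, None), (0, 11))  # but not the other way around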

I think about dependent typing when I look at TypeVar and Literal, but for it to work you'd have to be able to do gnarly things like add and compare two TypeVars according to the operations of their literal types, as in this case:

    from typing import Literal, TypeVar

    A = TypeVar('A', bound=int)
    B = TypeVar('B', bound=int)

    def randint(a: A, b: B) -> int[A:Literal[1]+B]:  # hypothetical syntax
        ...

Then `randint(1, 1_000_000_001)` means `B = Literal[1_000_000_001]`, but `plusone`'s parameter type `int[1:1_000_000_000]` requires `B < Literal[1_000_000_000]`, so the call would fail to type check.
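To spell out what a checker would actually have to do there (totally hypothetical; the helper name below is made up and this is just the arithmetic once A and B are solved to concrete Literal values):

    # Once A = Literal[1] and B = Literal[1_000_000_001] are solved, the
    # symbolic return type int[A:Literal[1]+B] collapses to a plain literal
    # range, and checking the call is just a subset test again.

    def randint_return_range(a, b):
        # models the made-up annotation int[A:Literal[1]+B]
        return (a, 1 + b)

    inferred = randint_return_range(1, 1_000_000_001)  # (1, 1_000_000_002)
    expected = (1, 1_000_000_000)                       # plusone's int[1:1_000_000_000]

    # The inferred range is not a subset of the expected one, so
    # plusone(randint(1, 1_000_000_001)) would be rejected.
    assert not (inferred[0] >= expected[0] and inferred[1] <= expected[1])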

This kind of thing is so powerful, and I would love to see tooling capable of it in the Python ecosystem. I believe folks who say it's very hard to implement correctly, but I don't know that that's a good reason not to make the proposed range syntax legal for some primitive types like `int`. Tools like mypy can choose not to handle `int[A:B]` until they are ready, or forever, but someone else can still work on it.

Are we just talking about the language features here, or does this list include ideas for tooling like mypy?

- Mike Smith