On Sat, Aug 8, 2020, 7:40 PM Steven D'Aprano wrote:
Any checker capable of doing bounds checks would know that the range of possible ints is unbounded in both directions, and therefore an int does not fit into the range [1:10**9]. Hence that will be a static bounds-check failure:

Just so you can win a bet at the library or the bar: Python ints are not truly unbounded. CPython stores an int's digit count in a fixed-size C struct member (ob_size, a Py_ssize_t), so on a 64-bit build the largest int is roughly 2**(30 * (2**63 - 1)), and the smallest is its negation. :-)
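You can compute that cap from within Python itself. A quick back-of-envelope sketch, assuming CPython's usual 30-bit-digit representation (sys.int_info reports the actual value on your build):

```python
import sys

# On CPython, an int's digit count lives in a fixed-size C field
# (ob_size, a Py_ssize_t), so there is a real, if absurd, upper bound.
max_digits = sys.maxsize                      # 2**63 - 1 on 64-bit builds
bits_per_digit = sys.int_info.bits_per_digit  # typically 30
max_bits = max_digits * bits_per_digit

# The largest representable int is 2**max_bits - 1; no machine that
# fits in the physical universe could actually hold it.
print(f"theoretical cap: about 2**{max_bits}")
```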

A computer capable of storing that number cannot, of course, fit in the physical universe.

Let's be less vague:

def step() -> int[-3:4]:
    pass

n: int[-100:101] = 0
for i in range(N):
    n += step()

You and I can reason that after N steps the maximum possible magnitude of n is 3*N, which could exceed 100 unless N is provably less than 34, which it may not be. And so the bounds check fails.
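A checker could carry out exactly this reasoning with simple interval arithmetic. A minimal sketch in plain Python (the Interval class is made up for illustration; the int[lo:hi] annotations it models are the hypothetical syntax above, following the half-open convention, so int[-3:4] means -3 through 3 inclusive):

```python
class Interval:
    """The range of values a variable might hold, inclusive at both ends."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of two intervals: add the corresponding endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def fits(self, other):
        # Does every value in self lie within other?
        return other.lo <= self.lo and self.hi <= other.hi

step_range = Interval(-3, 3)    # models step() -> int[-3:4]
declared = Interval(-100, 100)  # models n: int[-100:101]

def check(n_iterations):
    n = Interval(0, 0)  # n starts at exactly 0
    for _ in range(n_iterations):
        n = n + step_range
    return n.fits(declared)

print(check(33))  # True: after 33 steps n is within [-99, 99]
print(check(34))  # False: [-102, 102] escapes the declared bounds
```

Since the checker cannot prove N < 34, it has to assume the worst case and report a possible bounds violation.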

Yes, this is absolutely doable! I guess my argument in several posts is that this simple level of analysis of "possible bounds violation" is rarely useful (at least in a Python context[*]). Vastly more complicated formal proofs might be useful... But, y'know, way more work for tools.

[*] For languages with bounded data types, this is more compelling. If I think a variable will *definitely* fit in a uint8, having the static tool tell me it might not is powerful.
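For instance, the same interval idea applied to a fixed-width type (fits_uint8 is a made-up helper, not any real checker's API):

```python
def fits_uint8(lo, hi):
    """Does the inferred range [lo, hi] fit in an unsigned 8-bit int?"""
    return 0 <= lo and hi <= 255

# Adding two arbitrary uint8 values yields the inferred range [0, 510],
# which overflows, so a static tool would warn at the addition site.
print(fits_uint8(0, 255 + 255))  # False
print(fits_uint8(0, 200))        # True
```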