# Challenge: optimizing isqrt

Steven D'Aprano steve+comp.lang.python at pearwood.info
Sat Nov 1 02:29:40 CET 2014

There is an algorithm for calculating the integer square root of any
positive integer using only integer operations:

```python
def isqrt(n):
    if n < 0:
        raise ValueError
    if n == 0:
        return 0
    bits = n.bit_length()
    a, b = divmod(bits, 2)
    x = 2**(a + b)
    while True:
        y = (x + n//x)//2
        if y >= x:
            return x
        x = y
```
This returns the integer part of the square root of n, that is, the greatest
whole number less than or equal to the square root of n:

```
py> isqrt(35)
5
py> isqrt(36)
6
```
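For what it's worth, the loop is just Newton's method carried out in integer division, started from a power of two at least as large as the square root. Here is a quick trace (`isqrt_trace` is a hypothetical helper, not part of the original function) showing the iterates for a couple of small inputs:

```python
def isqrt_trace(n):
    """Same algorithm as isqrt above, but printing each Newton iterate.
    Illustration only; assumes n is a positive integer."""
    bits = n.bit_length()
    a, b = divmod(bits, 2)
    x = 2**(a + b)              # starting guess: 2**ceil(bits/2) >= sqrt(n)
    while True:
        print("x =", x)
        y = (x + n//x)//2       # integer Newton step for x**2 - n
        if y >= x:              # iterates stopped decreasing: x is the answer
            return x
        x = y

isqrt_trace(36)   # iterates 8, 6; returns 6
isqrt_trace(35)   # iterates 8, 6, 5; returns 5
```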

That makes it equivalent to int(math.sqrt(n)), which also happens to be
much, much faster, at least for small values of n. However, for large
values of n, using floating point intermediate calculations fails:

```
py> import math
py> int(math.sqrt(2**3000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: long int too large to convert to float
```

Another problem is that, above a certain size, the resolution of floats is
larger than 1, so you can't convert every int into a float without loss:

```
py> float(2**90-1) == 2**90-1
False
```
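The exact cutoff is 2**53: an IEEE-754 double has a 53-bit significand, so every integer up to 2**53 converts to float exactly, but above that consecutive doubles are at least 2 apart. A quick demonstration:

```python
# 2**53 is the last point below which doubles can represent every integer
print(float(2**53) == 2**53)          # True: exactly representable
print(float(2**53 + 1) == 2**53 + 1)  # False: rounds down to 2**53
print(float(2**53 + 2) == 2**53 + 2)  # True: doubles are spaced 2 apart here
```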

which means that using math.sqrt is not correct:

```
py> isqrt(2**90-1)
35184372088831
py> int(math.sqrt(2**90-1))  # Off by one.
35184372088832
```

So, the challenge is to identify when it is safe to optimise isqrt(n) as
int(math.sqrt(n)):

Q1: What is the largest value of M, such that

```
all(isqrt(n) == int(math.sqrt(n)) for n in range(M))
```

returns True?

I have done a brute force test, and in nine hours confirmed that M is at
least 7627926244. At that rate, I expect that a brute force test of every
int representable as a float would take *months* of processing time.
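If anyone wants to push the brute force further, note that a candidate r is correct exactly when r*r <= n < (r+1)**2, so each n can be verified with two integer multiplications instead of a call to the slow integer-only isqrt. A sketch (`first_mismatch` is a hypothetical helper name, not anything from the standard library):

```python
import math

def first_mismatch(start, stop):
    """Return the first n in range(start, stop) where int(math.sqrt(n))
    is not the true integer square root, or None if there is none.
    Correctness test: r is isqrt(n) iff r*r <= n < (r+1)**2."""
    for n in range(start, stop):
        r = int(math.sqrt(n))
        if not (r * r <= n < (r + 1) ** 2):
            return n
    return None
```

For example, `first_mismatch(2**90 - 1, 2**90)` returns 2**90 - 1, reproducing the off-by-one shown earlier, while small ranges come back clean.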

Q2: For values above M, is there a way of identifying which values of n are
okay to use the optimized version?

Q3: What is the largest value of n beyond which you can never use the float
optimization?

You can assume that Python floats are IEEE-754 C doubles, and that
math.sqrt() is correctly rounded.
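For what it's worth, one way to sidestep Q2 entirely is never to trust the float result as-is: use it only where the conversion of n to float is exact (below 2**52, to stay comfortably inside the 2**53 limit), repair any off-by-one with exact integer arithmetic, and fall back to the pure-integer method elsewhere. A sketch under those assumptions, not a definitive answer to the challenge:

```python
import math

def isqrt_checked(n):
    """Integer square root: float fast path with an exact integer
    correction, pure-integer Newton's method for large n."""
    if n < 0:
        raise ValueError
    if n == 0:
        return 0
    if n < 2**52:
        # float(n) is exact and math.sqrt is correctly rounded, so the
        # guess is off by at most 1 in either direction; walk it home.
        r = int(math.sqrt(n))
        while r * r > n:
            r -= 1
        while (r + 1) ** 2 <= n:
            r += 1
        return r
    # pure-integer Newton's method (the original isqrt) for big n
    bits = n.bit_length()
    a, b = divmod(bits, 2)
    x = 2**(a + b)
    while True:
        y = (x + n//x)//2
        if y >= x:
            return x
        x = y
```

The correction loops run at most once each on the fast path; the threshold 2**52 is deliberately conservative.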

--
Steven
