Working with the set of real numbers
rosuav at gmail.com
Tue Mar 4 23:18:33 CET 2014
On Wed, Mar 5, 2014 at 9:02 AM, Oscar Benjamin
<oscar.j.benjamin at gmail.com> wrote:
> On 4 March 2014 21:18, Chris Angelico <rosuav at gmail.com> wrote:
>> On Wed, Mar 5, 2014 at 7:55 AM, Oscar Benjamin
>> <oscar.j.benjamin at gmail.com> wrote:
>>> I don't quite follow your reasoning here. By "cut-and-try" do you mean
>>> bisection? If so it gives the first N decimal digits in N*log2(10)
>>> iterations. However each iteration requires a multiply and when the
>>> number of digits N becomes large the multiplication is worse than
>>> linear. So the result is something like N**2 log(N)log(log(N)),
>> By "cut and try" I'm talking about the really REALLY simple algorithm
>> for calculating square roots. It's basically brute force.
>> epsilon = 0.0001
>> def sqrt(n):
>>     guess1, guess2 = 1, n
>>     while abs(guess1 - guess2) > epsilon:
>>         guess1 = n / guess2
>>         guess2 = (guess1 + guess2) / 2
>>     return guess1
> That's the exact same algorithm I showed! How on earth would you call
> that brute force?
It uses a lot of division. There are various refinements that can be
done that replace some of that division with multiplication, but I'd
have to go do some research to figure it out.
This is the purest form of attempted-division algorithm. If you're
describing it on a blackboard, you would write it pretty much like
this. At each iteration, you have to divide by a number that's n
digits long, and then do some additional arithmetic.
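For example (this is my own sketch of the idea from memory, not code from any
reference, so treat the seeding as illustrative): Newton's method applied to
f(x) = 1/x**2 - n converges to 1/sqrt(n) with an update that needs only
multiplication and halving, and then sqrt(n) = n * (1/sqrt(n)), so the loop
itself contains no real division at all.

```python
import math

def sqrt_by_mult(n, iterations=10):
    # Newton's method on f(x) = 1/x**2 - n gives the update
    #     x <- x * (3 - n*x*x) / 2
    # which converges to 1/sqrt(n) using only multiplication
    # (the /2 is a halving, far cheaper than a full division).
    m, e = math.frexp(n)             # n = m * 2**e, with 0.5 <= m < 1
    x = math.ldexp(1.0, -(e // 2))   # rough seed within 2x of 1/sqrt(n)
    for _ in range(iterations):
        x = x * (3 - n * x * x) / 2
    return n * x                     # sqrt(n) = n * (1/sqrt(n))
```

With a seed that close, a handful of iterations reaches full float precision.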
>> It's generally going to take roughly O(n*n) time to generate n digits,
>> give or take.
> It does not take O(n*n) time. This is Newton iteration and for
> well-behaved problems such as this it generates more than n digits
> after n iterations. I modified my code to show the error (x**2 - y) at
> each iteration:
> $ python3.3 root.py
> The number of correct digits doubles at each iteration so after n
> iterations you have 2**n digits (I misstated this as n**2 before).
> This means that it takes log(N) iterations to get N digits.
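(That doubling is easy to see with the decimal module - this is my own quick
sketch of the same check, not Oscar's actual root.py:)

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # work with 50 significant digits

y = Decimal(2)
x = Decimal(1)
for i in range(7):
    x = (x + y / x) / 2        # the same Newton/Heron update
    print(i, abs(x * x - y))   # error roughly squares each iteration
```

The printed error shrinks quadratically until it hits the precision floor.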
It seems I'm partly mistaken, though not entirely. Let's compare two
versions. In the first, you set the precision so that you get 10
digits. (I'm talking in terms of REXX's "NUMERIC DIGITS" statement,
where anything beyond that many digits is rounded and, if necessary,
represented exponentially; I'm not sure whether decimal.Decimal's
precision works the same way.) Each iteration requires division by a
10-digit number, which takes a certain amount of time, and it's going
to take some number of iterations to get to the final answer.
In the second version, you set the precision so that you get 20
digits. Now it's going to take you approximately one more iteration to
reach the final answer. (This bit I was mistaken on; I thought it
would take something like 25% or 50% more iterations.) But each
iteration will take longer. The complexity of division depends on the
algorithm - grade school long division would be O(n) with a
fixed-length dividend, I think, but you could probably pull that down
a bit.
So that's why I said it'd be very roughly O(n*n) - because the
division in each step is O(n), and I thought it'd take O(n) steps.
Turns out it's O(n*log(n)), which is a lot better.
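A quick experiment bears that out (my own sketch, using decimal's precision
in place of REXX's NUMERIC DIGITS): doubling the number of digits adds
roughly one iteration, rather than doubling the iteration count.

```python
from decimal import Decimal, getcontext

def iterations_needed(n, digits):
    # Count Newton/Heron iterations until x stops moving by more
    # than a few units in the last digit, at the given precision.
    getcontext().prec = digits
    n = Decimal(n)
    tol = Decimal(10) ** (2 - digits)
    x, last, count = Decimal(1), Decimal(0), 0
    while abs(x - last) > tol:
        x, last = (x + n / x) / 2, x
        count += 1
    return count

for digits in (10, 20, 40, 80):
    print(digits, iterations_needed(2, digits))
```

Each doubling of the precision costs about one extra iteration, which is the
log(N)-iterations behaviour Oscar described.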
>> That's the baseline against which anything else can be
>> compared. There are plenty of better ways to calculate them.
> Such as?
Improved versions of the above, and I was under the impression that
there were some completely different techniques that converged a lot
more quickly. But it may be that I was mistaken, as I'd been expecting
this to converge in O(n) steps. Reducing the amount of division would
speed things up significantly, although it probably won't change the
overall complexity.
So, put it down to misremembering and thinking the simple algorithm
was worse than it actually is :)