[Python-3000] Math in Python 3.0

Giovanni Bajo rasky at develer.com
Fri May 12 20:25:37 CEST 2006


Tim Peters wrote:

> [Giovanni Bajo]
>>> Since we're at it, do you have any idea on why Python is so slow
>>> when doing such bit-full code?
>
> Is it, compared to the other interpreted languages?  Doesn't look like
> it.

Well, it seems like one of the few areas where Python is slower than other
scripting languages.

> The cost of going around the eval loop once utterly swamps the
> cost of doing a bitwise operation on native ints, so relative
> interpreter overhead is massive.

Yup. I was wondering if there's some other known issue that could be
optimized. It really "feels" slower than it should be.
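
For what it's worth, a rough way to see the effect is something like this
(just a sketch I made up for illustration; the exact numbers depend on the
machine and interpreter version):

    import timeit

    # One bitwise AND per iteration...
    t_and = timeit.Timer("x = n & 0xff", "n = 12345").timeit(1000000)
    # ...versus the same loop doing nothing but the assignment.
    t_plain = timeit.Timer("x = n", "n = 12345").timeit(1000000)

    # Most of the time is the same in both cases: it is interpreter
    # overhead per iteration, not the AND itself.
    print t_and, t_plain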

>> See for instance
>> http://shootout.alioth.debian.org/debian/benchmark.php?test=nsievebits&lang=all,
>> where the Python version needs to play caching tricks to get some
>> speed.
>
> So they mandate use of an algorithm that's naive in several ways.
> What a waste of time ;-)  You can get a strong speedup (at the cost of
> strongly increased memory use) by using a vector of bits instead of
> playing C-ish chunking games:
>
> def primes_in_range(M) :
>     bits = [1] * M
>     count = 0
>     for prime in xrange(2, M):
>         if bits[prime]:
>             count += 1
>             bits[2*prime: M: prime] = [0] * ((M-1)//prime - 1)
>     return count
>
> That does "the inner loop" (turned into one slice assignment) at C
> speed, so is much faster.

Yeah, that's another benchmark in fact:
http://shootout.alioth.debian.org/debian/benchmark.php?test=nsieve&lang=all
and the existing Python solution is very similar to your proposed version :)
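
Just to make the comparison concrete, your function can be sanity-checked
directly (25 and 1229 are the well-known prime counts below 100 and 10000):

    # Quick check of the slice-assignment sieve quoted above.
    print primes_in_range(100)      # 25 primes below 100
    print primes_in_range(10000)    # 1229 primes below 10000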

The point of mandating bits is to measure performance on algorithms that
actually require them. For instance, years ago I implemented several ciphers
in pure Python, and they were a little slow. Don't get me wrong, I knew they
were going to be slow, but just not *that* slow. Even though I've been using
Python as my primary language for some years now, it still feels like this
kind of bit manipulation is much slower than the "average" slowdown you
expect when using Python.
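
To give an idea of the kind of code I mean, here is the typical shape of
such an inner loop (not one of my actual ciphers, just an invented round
function with made-up names and constants, for illustration only):

    def rotl32(x, n):
        # 32-bit rotate-left, the bread and butter of many ciphers.
        return ((x << n) | (x >> (32 - n))) & 0xffffffff

    def mix(state, words):
        # Invented round function: xor, add mod 2**32, rotate.
        # Every one of these cheap operations costs a full trip
        # around the eval loop in pure Python.
        for w in words:
            state = rotl32(((state ^ w) + 0x9e3779b9) & 0xffffffff, 7)
        return state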

I mean, it's just my feeling, so I might be very wrong. But the fact that
nsievebits is one of the *few* benchmarks where Python doesn't rate well
among other scripting languages makes me believe there's something going on.
Also this one:
http://shootout.alioth.debian.org/debian/benchmark.php?test=partialsums&lang=all.
Even using the sum() builtin with a genexp is slower than the *naive*
for-loop implementation in other scripting languages.
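
For example, taking just one of the simpler terms of that benchmark (the
1/k^2 series, picked only for illustration), this is the comparison I have
in mind:

    n = 1000000

    # sum() over a generator expression...
    s1 = sum(1.0 / (k * k) for k in xrange(1, n + 1))

    # ...versus the naive explicit loop the other languages use.
    s2 = 0.0
    for k in xrange(1, n + 1):
        s2 += 1.0 / (k * k)

    print s1, s2    # same result; the interesting part is the timing
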
-- 
Giovanni Bajo


