Hi Ian,

On 20 February 2014 12:53, Ian Ozsvald <ian@ianozsvald.com> wrote:
> def calculate_z(maxiter, zs, cs, output):
>     """Calculate output list using Julia update rule"""
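For context, since only the first two lines are quoted above: the body of that function is essentially the standard Julia escape-time loop. The following is a sketch from memory of Ian's published example, not a verbatim quote:

    def calculate_z(maxiter, zs, cs, output):
        """Calculate output list using Julia update rule"""
        for i in range(len(zs)):
            n = 0
            z = zs[i]
            c = cs[i]
            # iterate z <- z*z + c until z escapes the radius-2 circle
            while n < maxiter and abs(z) < 2:
                z = z * z + c
                n += 1
            output[i] = n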
This particular example uses numpy in a very strange and useless way, as I'm sure you know. It builds a regular list of 1'000'000 items; then it converts it to a pair of numpy arrays; then it iterates over the arrays one element at a time. It's obviously better to just iterate over the original list (also in CPython). But I know that's not really the point; the point is rather that numpy is slow with PyPy, slower than we would expect. This is known, basically, but it is a good reminder that we need to look at numpy from the performance point of view; so far, we have focused on completeness.
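To make the pattern concrete, here it is reduced to its essentials; the names and the toy reduction are mine, not Ian's actual benchmark:

    import numpy as np

    zs_list = [complex(n, n) for n in range(1000000)]  # a regular 1'000'000-item list
    zs = np.array(zs_list, complex)                    # converted to a numpy array...

    acc = 0.0
    for i in range(len(zs)):    # ...then read back one element at a time;
        acc += abs(zs[i])       # each zs[i] creates a fresh scalar object

    # Iterating over the original list does the same work without the
    # round-trip through numpy, and is faster also on CPython:
    acc = 0.0
    for z in zs_list:
        acc += abs(z)

Nothing here uses numpy's vectorized operations, which is what makes the conversion useless.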
[ Just for reference, I killed numpy from your example and got a 4x speed-up (down from 5s to 1.25s). Afterwards, I expanded the math, i.e. enabled the variant that was left commented out in your code:

        # expanding the math makes it 2 seconds slower
        #while n < maxiter and (z.real * z.real + z.imag * z.imag) < 4:
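The expansion is valid because both sides of "abs(z) < 2" are non-negative, so squaring preserves the comparison: it becomes "z.real * z.real + z.imag * z.imag < 4", with no square root. A quick self-contained spot check:

    # the squared test agrees with the abs() test on points inside,
    # on, and outside the radius-2 circle
    for z in (0j, 1 + 1j, 1.3 - 0.7j, 2 + 0j, 3 - 4j):
        assert (abs(z) < 2) == (z.real * z.real + z.imag * z.imag < 4)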
Expanding the math this way is good in theory, because abs() requires a square root (the comment about it being 2 seconds slower presumably refers to CPython). As it turns out, on PyPy it is very good indeed: this results in another 5x speed-up, to 0.25s. This is close enough to Cython speed (which is probably mostly gcc's speed in this example) that I'd say we are done. ]

A bientôt,

Armin.