python simply not scaleable enough for google?

sturlamolden sturlamolden at
Sat Nov 14 11:22:54 CET 2009

On 14 Nov, 09:47, "Alf P. Steinbach" <al... at> wrote:

> > Python is slow is really a misconception.
> Sorry, no, I don't think so.

No, I really think a lot of the perceived slowness of Python comes
from bad programming practices. Sure, we can demonstrate that C or
LuaJIT is faster by orders of magnitude for CPU-bound tasks like
comparing DNA sequences or calculating the value of pi.

But let me give a counter-example from graphics programming, one that
we often run into when using OpenGL. This is not a toy benchmark
problem but one that is frequently encountered in real programs.

We all know that calling functions in Python has a big overhead. There
is a dictionary lookup for the attribute name, and arguments are
packed into a tuple (and sometimes a dictionary). Thus calling
glVertex* repeatedly from Python will hurt. Doing it from C or Fortran
might still be OK (albeit not always recommended). So should we
conclude that Python is too slow and use C instead?
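A rough way to see this overhead without any GL context at all: replace
glVertex* with a stand-in Python function (submit_vertex below is made
up for illustration) and compare calling it once per vertex against
handing the whole batch over in one call.

```python
import timeit

import numpy as np

# Hypothetical stand-in for a per-vertex call like glVertex3f: a tiny
# Python function invoked once per vertex, paying call overhead each time.
def submit_vertex(x, y, z, out, i):
    out[i, 0] = x
    out[i, 1] = y
    out[i, 2] = z

n = 10_000
verts = [(float(i), float(i), float(i)) for i in range(n)]

def per_call():
    # One Python-level call per vertex -- the glVertex* style.
    out = np.empty((n, 3), dtype=np.float32)
    for i, (x, y, z) in enumerate(verts):
        submit_vertex(x, y, z, out, i)
    return out

def batched():
    # One call that converts the whole batch -- the vertex-array style.
    return np.asarray(verts, dtype=np.float32)

t_loop = timeit.timeit(per_call, number=20)
t_batch = timeit.timeit(batched, number=20)
assert np.allclose(per_call(), batched())
print(f"per-call: {t_loop:.4f}s  batched: {t_batch:.4f}s")
```

Both produce the same buffer; only the number of Python-level calls
differs, and that is where the time goes.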


What if we use glVertexArray or a display list instead? In the case of
a vertex array (e.g. using a NumPy ndarray for storage), there is
practically no difference in performance between C and Python. With a
display list, there is a difference on creation, but not on
invocation. So slowness from calling glVertex* many times over is
really slowness from bad Python programming. I use NumPy ndarrays to
store vertices and pass them to OpenGL as vertex arrays, instead of
hammering on glVertex* in a tight loop. And speed-wise, it does not
really matter whether I use C or Python.
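A minimal sketch of the vertex-array idea: the vertices live in one
contiguous float32 ndarray, and the whole buffer is handed to OpenGL in
a single call. The GL calls themselves need a live context, so they are
shown only as comments (PyOpenGL names).

```python
import numpy as np

# A triangle stored the vertex-array way: one contiguous float32 buffer
# of shape (n_vertices, 3), ready to pass to OpenGL in one call.
vertices = np.array([
    [ 0.0,  1.0, 0.0],
    [-1.0, -1.0, 0.0],
    [ 1.0, -1.0, 0.0],
], dtype=np.float32)

# OpenGL expects a C-contiguous buffer of the declared element type.
assert vertices.flags["C_CONTIGUOUS"]
assert vertices.dtype == np.float32

# The GL side would be (illustration only -- requires a live GL context):
#   glEnableClientState(GL_VERTEX_ARRAY)
#   glVertexPointer(3, GL_FLOAT, 0, vertices)
#   glDrawArrays(GL_TRIANGLES, 0, len(vertices))
```

The per-vertex work all happens in C inside NumPy and the driver; Python
only sets up and hands over the buffer.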

But what if we need some computation in the graphics program as well?
We might use OpenCL, DirectCompute or OpenGL vertex shaders to control
the GPU. Will C be better than Python for this? Most likely not. A
program for the GPU is compiled by the graphics driver at run-time
from a text string passed to it. It is much better to use Python than
C to generate these. Will C on the CPU be better than OpenCL or a
vertex shader on the GPU? Most likely not.
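Since shader source is just a string handed to the driver, Python's
string tools are a natural fit for generating it. A small sketch (the
N_LIGHTS parameter and the function name are made up for illustration;
the GLSL targets the old fixed-function-compatible profile):

```python
# Generate GLSL vertex shader source as a plain string. The driver, not
# Python, compiles it at run time via glShaderSource/glCompileShader.
def make_vertex_shader(n_lights: int) -> str:
    return f"""
#version 120
#define N_LIGHTS {n_lights}
void main() {{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}}
""".strip()

src = make_vertex_shader(4)
print(src)
# Handing src to the driver (needs a live GL context, not shown):
#   shader = glCreateShader(GL_VERTEX_SHADER)
#   glShaderSource(shader, src)
#   glCompileShader(shader)
```

Generating, specializing, or templating such strings is exactly the kind
of text manipulation Python is better at than C.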

So we might perhaps conclude that Python (with numpy) is better than C
for high-performance graphics? Even though Python is slower than C, we
can do just as well as C programmers by not falling into a few stupid
pitfalls. Is Python really slower than C for practical programming
like this? Superficially, perhaps yes. In practice, only if you use it
badly. But that's not Python's fault.

But if you make a CPU-bound benchmark like those in the Debian
language shootout, or time thousands of calls to glVertex*, then yes,
it will look like C is much better. But that does not directly
translate to the performance of a real program. The slower can be the
faster; it all depends on the programmer.

Two related issues:

- For the few cases where a graphics program really needs C, we can
always resort to using ctypes, f2py or Cython. Gluing Python with C or
Fortran is very easy using these tools. That is much better than
keeping it all in C++.
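To illustrate the ctypes route: calling a C function from a shared
library takes only a few lines. Here we call sqrt() from the system C
math library; the library-name lookup (and the "libm.so.6" fallback)
assumes a POSIX system.

```python
import ctypes
import ctypes.util

# Locate and load the C math library (POSIX assumption).
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# Declare the C signature: double sqrt(double), so ctypes converts
# arguments and the return value correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

The same pattern works for any C API exposed through a shared library,
which is why dropping down to C for a hotspot does not mean rewriting
the whole program in C.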

- I mostly find myself using Cython instead of Python for OpenGL. That
is because I am unhappy with PyOpenGL. It was easier to expose the
whole of OpenGL to Cython than to create a full or partial wrapper for
Python. With Cython there is no extra overhead from calling glVertex*
in a tight loop, so we get the same performance as C in this case.
But because I store vertices in NumPy arrays on the Python side, I
mostly end up using glVertexArray anyway.
