Speed of data structures in python

Dave dmieluk at exemail.com.au
Mon Mar 13 15:13:40 CET 2006

As the OP, I thought I'd follow up with my experience, in case anyone else  
is learning PyOpenGL and is as mystified as I was (am?).

Thank you to all the posters who responded, especially the one who  
mentioned display lists...

Initially, I had built a very simple prototype and got 5 fps. It was  
very poorly designed, though, calculating normals and positions (sqrt, sin,  
pow) on the fly.

When I posted the OP, I suspected that the biggest improvement would be  
achieved by storing the normals and positions in the fastest data  
structure available.

After posting, I implemented NumPy arrays to hold the normals and positions,  
and got 18 fps.
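For anyone following along, the idea was roughly this: do the expensive maths (sqrt etc.) once up front, in vectorised NumPy, instead of per frame. A minimal sketch below; the `(N, 3, 3)` layout and the name `triangle_normals` are my own choices for illustration, not from my actual code:

```python
import numpy as np

# Hedged sketch: precompute per-triangle unit normals once into a NumPy
# array, rather than calling sqrt/pow for every triangle on every frame.
# `tris` has shape (N, 3, 3): N triangles, 3 vertices each, xyz coords.
def triangle_normals(tris):
    e1 = tris[:, 1] - tris[:, 0]              # first edge of each triangle
    e2 = tris[:, 2] - tris[:, 0]              # second edge
    n = np.cross(e1, e2)                      # unnormalised face normals
    lengths = np.linalg.norm(n, axis=1, keepdims=True)
    return n / lengths                        # unit normals, shape (N, 3)

# One right triangle in the xy-plane; its face normal points along +z.
tris = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=np.float32)
normals = triangle_normals(tris)
```

The arrays can then be handed to the GL drawing code each frame with no per-vertex Python arithmetic.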

I then increased the complexity of the prototype to the bare-minimum specs  
required, and got 1-2 fps.

I then implemented display lists (OpenGL stuff), and am now getting 190  
fps (on the larger prototype).
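In case display lists are new to you too: you compile the static geometry into the driver once, then replay it each frame with a single call, so Python never loops over the triangles again. A rough sketch, assuming PyOpenGL's legacy fixed-function API; the function name and the `(v0, v1, v2, normal)` tuple layout are mine, and it needs an active GL context (GLUT, pygame, etc.) before you call it:

```python
# Hedged sketch: caching a static scene in an OpenGL display list.
def build_scene_list(triangles):
    """Compile (v0, v1, v2, normal) triangles into a display list.

    Requires an active OpenGL context. Returns the list id; draw the
    whole scene later with glCallList(list_id) once per frame.
    """
    from OpenGL.GL import (glGenLists, glNewList, glEndList, glBegin,
                           glEnd, glNormal3fv, glVertex3fv,
                           GL_COMPILE, GL_TRIANGLES)
    list_id = glGenLists(1)            # reserve one display-list name
    glNewList(list_id, GL_COMPILE)     # record the calls, don't draw yet
    glBegin(GL_TRIANGLES)
    for v0, v1, v2, normal in triangles:
        glNormal3fv(normal)            # one normal per flat-shaded face
        glVertex3fv(v0)
        glVertex3fv(v1)
        glVertex3fv(v2)
    glEnd()
    glEndList()
    return list_id
```

The per-frame render loop then shrinks to a single `glCallList(list_id)`, which is where the jump from 1-2 fps to 190 fps came from for me.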

So, my point is: the limiting factor had nothing to do with the speed of data  
structures in Python, but with the way data was being consumed by OpenGL (and  
my absolute newbieness at OpenGL ;-)

I hope this helps anyone who is learning similar material.


On Sat, 11 Mar 2006 16:54:06 +1100, Steven D'Aprano  
<steve at REMOVETHIScyber.com.au> wrote:

> On Fri, 10 Mar 2006 21:06:27 -0600, Terry Hancock wrote:
>> On Sat, 11 Mar 2006 13:12:30 +1100
>> "Steven D'Aprano" <steve at REMOVETHIScyber.com.au> wrote:
>>> On Fri, 10 Mar 2006 23:24:46 +1100, Dave wrote:
>>> > Hi. I am learning PyOpenGL and I am working with a
>>> > largish fixed scene composed of several thousand
>>> > GLtriangles. I plan to store the coords and normals in
>>> > a NumPy array.
>>> >
>>> > Is this the fastest solution in python?
>>> Optimization without measurement is at best a waste of
>>> time and at worst counter-productive. Why don't you time
>>> your code and see if it is fast enough?
>>> See the timeit module, and the profiler.
>> Talk about knee-jerk reactions. ;-)
> Yes, let's.
>> It's a *3D animation* module -- of course it's going to be
>> time-critical.  Sheesh.  Now *that* is stating the obvious.
> Did I say it wasn't? I asked if the current solution is fast enough. If
> the current solution is fast enough, then why waste time trying to speed
> it up? Does the Original Poster think that PCs will get slower in the
> future?
>> The obvious solution is actually a list of tuples.
> But that's not the solution being asked about, nor did I suggest it.
>> But
>> it's very possible that that won't be fast enough, so the
>> NumPy approach may be a significant speedup. I doubt you
>> need more than that, though.
> I didn't argue against the NumPy approach. I suggested that, instead of
> *asking* if there was something faster, the O.P. should actually *try it*
> and see if it is fast enough.
> If you think that is bad advice, please tell us what you consider good
> advice.

