Hi,

This is not supposed to be an evil question; rather, I am hoping for the answer: "No, generally we get >= 95% of the speed of a pure C/Fortran implementation" ;-)

As the strongest Python/numarray advocate in our group, I often get the reply that Matlab is (of course) also very convenient, but that its memory handling and overall execution performance are so poor that for the final implementation one generally has to reimplement in C. We are a biophysics group at UCSF developing new algorithms for deconvolution (often in 3D), and our data sets are regularly several hundred megabytes. When I decided on numarray, I assumed that the "Hubble crowd" had a similar situation and that all the operations are therefore very much optimized for this type of data. Is 95% a reasonable number to hope for?

I did wrap my own version of FFTW (with "plan-caching"), which should run at essentially 100% of C speed. But concerns arise from expressions like "a = b + c*a" (think "convenience"!): if a, b, and c are each 3D data stacks, creating a temporary array for 'c*a' AND then another for 'b + ...' would have to be very costly. (I think this is at least what happens in Numeric; I don't know about Matlab or numarray.)

Hoping for comments,
Thanks,

Sebastian Haase
UCSF, Sedat Lab
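
P.S. For concreteness, here is roughly what I mean by "plan-caching". This is only a sketch: fftw_create_plan and fftw_execute are stand-ins for whatever entry points the actual wrapper exposes, and the type lookup would differ between numarray and Numeric.

    _plan_cache = {}

    def cached_fft(a):
        # FFTW planning is expensive; executing an existing plan runs at
        # C speed, so plans are reused, keyed on array shape and type.
        key = (a.shape, a.type())        # numarray; Numeric would use a.typecode()
        plan = _plan_cache.get(key)
        if plan is None:
            plan = fftw_create_plan(a)   # hypothetical wrapper call
            _plan_cache[key] = plan
        return fftw_execute(plan, a)     # hypothetical wrapper call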
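
P.P.S. I suppose the temporaries could be avoided by giving up some of the convenience and using the ufuncs' optional output argument (assuming numarray supports the three-argument form the way Numeric does):

    # convenient form -- allocates two temporaries the size of the data stack:
    a = b + c*a

    # in-place form -- reuses a as the output buffer, no temporaries:
    multiply(c, a, a)   # a <- c*a
    add(b, a, a)        # a <- b + a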