I use norm() frequently in my own codes and I was recently surprised to
see it near the top of my profiling results. It seems that both SciPy
and NumPy incur a large amount of overhead in this operation.

Consider the following script:

=================================
from time import clock
from numpy import inner, sqrt, ones
import numpy.linalg
import scipy.linalg
from scipy.lib.blas import get_blas_funcs

x = ones(5*1000*1000)

scipy_norm = scipy.linalg.norm
numpy_norm = numpy.linalg.norm

def adhoc_norm(x):
    return sqrt(inner(x, x))

blas_norm = get_blas_funcs(('nrm2',), (x,))[0]

for fn in [scipy_norm, numpy_norm, adhoc_norm, blas_norm]:
    start = clock()
    for i in range(10):
        n = fn(x)
    end = clock()
    print "%f seconds per call" % ((end - start)/10.0)
=================================

which outputs the following on my laptop:

0.407000 seconds per call
0.089000 seconds per call
0.013000 seconds per call
0.012000 seconds per call

--
Nathan Bell wnbell@gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
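[Editor's note: the script above is Python 2 and uses the old `scipy.lib.blas` module. A rough modern port, assuming current SciPy where the BLAS wrappers live in `scipy.linalg.blas` and `time.perf_counter` replaces the removed `time.clock`, might look like this:]

```python
# Modernized sketch of Nathan's benchmark (Python 3, current SciPy).
# time.clock was removed in Python 3.8; perf_counter is the replacement.
from time import perf_counter
import numpy as np
import numpy.linalg
import scipy.linalg
from scipy.linalg.blas import get_blas_funcs  # scipy.lib.blas moved here

x = np.ones(5 * 1000 * 1000)

def adhoc_norm(v):
    # sqrt of the plain inner product -- valid for real vectors only
    return np.sqrt(np.inner(v, v))

# Pick the BLAS nrm2 routine matching x's dtype (dnrm2 for float64)
nrm2 = get_blas_funcs(('nrm2',), (x,))[0]

for fn in (scipy.linalg.norm, numpy.linalg.norm, adhoc_norm, nrm2):
    start = perf_counter()
    for _ in range(10):
        n = fn(x)
    print("%f seconds per call" % ((perf_counter() - start) / 10.0))
```

The absolute numbers will differ from the 2008 figures, but all four should agree on the result itself.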
On Wed, Mar 19, 2008 at 2:22 AM, Nathan Bell <wnbell@gmail.com> wrote:
> I use norm() frequently in my own codes and I was recently surprised to
> see it near the top of my profiling results. It seems that both SciPy
> and NumPy incur a large amount of overhead in this operation.
They are general functions for computing a variety of norms, not just
the L2 norm. The numpy implementation of the L2 norm avoids some
unnecessary function calls and could replace the scipy one. Both use a
general formulation that handles both complex and real arrays, which is
probably what is costing us. If you would like to code up the extra
test for realness to evaluate the L2 norm faster, go ahead.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
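[Editor's note: the "extra test for realness" Robert suggests could be sketched as below. The function name `norm_l2` and the exact dispatch are hypothetical illustrations, not the implementation that eventually landed in numpy/scipy:]

```python
import numpy as np

def norm_l2(x):
    """Hypothetical fast L2 norm: dispatch on realness so real arrays
    skip the complex-safe formulation."""
    x = np.asarray(x)
    if np.iscomplexobj(x):
        # complex case: sum of x * conj(x); vdot conjugates its first
        # argument, so the result is real and nonnegative
        return np.sqrt(np.vdot(x, x).real)
    # real case: plain dot product, no conjugation or abs() needed
    return np.sqrt(np.dot(x, x))
```

The real branch reduces to the `sqrt(inner(x, x))` trick from Nathan's benchmark, while the complex branch keeps correctness for complex input.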
participants (2)

- Nathan Bell
- Robert Kern