Can I do this faster?

Bernhard Herzog herzog at online.de
Wed Aug 9 08:43:10 EDT 2000


Horst Gassner <horst at proceryon.at> writes:

> Thank you for your help.
> Unfortunately your solution is much slower.
> 
> Profiler says:
> old approach: 17033 calls, cumtime 2.49
> new approach:  2597 calls, cumtime 16.91

That's probably because, for one thing, filter/map & co. with a lambda
are usually slower than an inline for loop (function calls in Python are
relatively expensive), and, more importantly, because the original
version returns as soon as the item is found while the filter approach
always looks at all items.
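A minimal sketch of that difference, with illustrative names (`row_dict`
stands in for `__rowDict`; the data values are made up):

```python
# Hypothetical stand-in for __rowDict: key -> {'data': row}
row_dict = {i: {'data': 'row%d' % i} for i in range(1000)}

def get_row_id_loop(row):
    # Mirrors the original approach: returns as soon as a match is found.
    for key in row_dict:
        if row_dict[key]['data'] == row:
            return key

def get_row_id_filter(row):
    # The filter approach: one lambda call per item, and it always
    # examines every item before returning.
    matches = list(filter(lambda k: row_dict[k]['data'] == row, row_dict))
    return matches[0] if matches else None
```

Both return the same key, but the loop stops early while the filter pays
a function call for every entry.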

> > > def __GetRowID (s, row):
> > >       for key in s.__rowDict.keys():
> > >               if s.__rowDict[key]['data'] == row:
> > >                       return key

Would it make sense to build up a cache of values? E.g:

def __GetRowID (s, row):
	try:
		return s.__cache[row]
	except KeyError:
		for key in s.__rowDict.keys():
			if s.__rowDict[key]['data'] == row:
				s.__cache[row] = key
				return key

That's assuming that row can be used as a dictionary key, and there'd
have to be some way to clear the cache whenever it becomes invalid.
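One way the invalidation could look, sketched as a small class with
illustrative names (`RowTable`, `set_row`, and single-underscore
attributes are all made up for the example):

```python
class RowTable:
    def __init__(self):
        self._row_dict = {}   # key -> {'data': row}
        self._cache = {}      # row -> key, filled lazily

    def set_row(self, key, row):
        self._row_dict[key] = {'data': row}
        # Any change to the row dict may invalidate cached lookups,
        # so the simplest safe policy is to clear the whole cache.
        self._cache.clear()

    def get_row_id(self, row):
        try:
            return self._cache[row]
        except KeyError:
            for key in self._row_dict:
                if self._row_dict[key]['data'] == row:
                    self._cache[row] = key
                    return key
```

Clearing everything on each mutation is crude but correct; if mutations
are rare compared to lookups, it still wins.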

An alternative might be to construct a complete row->key dictionary at
one point after __rowDict has been filled and before you start to call
__GetRowID. But whether that would be feasible depends on your program.
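If the rows are hashable and unique, building that reverse mapping is a
one-liner (again with `row_dict` standing in for `__rowDict`):

```python
# Hypothetical stand-in for __rowDict after it has been filled.
row_dict = {i: {'data': 'row%d' % i} for i in range(5)}

# Build the complete row -> key mapping once, up front.
row_to_key = {v['data']: k for k, v in row_dict.items()}

def get_row_id(row):
    # O(1) dictionary lookup instead of a linear scan per call.
    return row_to_key.get(row)
```

After that, every lookup is constant-time, at the cost of keeping the
reverse mapping in sync with __rowDict.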

-- 
Bernhard Herzog   | Sketch, a drawing program for Unix
herzog at online.de  | http://sketch.sourceforge.net/
