ineffective optimization: method tables

Well, I wasted a fair amount of my time for no apparent gain. The idea was to have function pointer tables, indexed by type, that could be used for common operations. First, how do we index things by type? Here's my solution:

    #define PyType_UNKNOWN 0
    #define PyType_NONE 1
    #define PyType_INT 2

    #define PyType_Ord(t) ((t)->tp_ord)
    #define PyObject_TypeOrd(o) PyType_Ord((o)->ob_type)

Here is an example of the methods for PyObject_IsTrue:

    int
    int_istrue(PyObject *v)
    {
            return ((PyIntObject *)v)->ob_ival != 0;
    }

    int
    none_istrue(PyObject *v)
    {
            return 0;
    }

    inquiry istrue_table[] = {
            PyObject_IsTrue,        /* PyType_UNKNOWN */
            none_istrue,            /* PyType_NONE */
            int_istrue,             /* PyType_INT */
    };

    #define PyObject_IS_TRUE(v) istrue_table[PyObject_TypeOrd(v)](v)

There is a patch at:

    http://arctrix.com/nas/python/method_table1.diff

I did have 2-D tables for binary operations, but since they were quite sparse I took them out in favor of arrays and case statements. Unfortunately, all my benchmarks show this patch to be ineffective in terms of speeding up the interpreter. Anyone know why?

Neil
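[For readers who want to picture the 2-D tables Neil mentions, here is a minimal sketch of what one for integer addition might have looked like. It reuses the PyType_* ordinals and PyObject_TypeOrd macro from the post above; the names int_int_add, add_table, and BINARY_ADD_FAST are illustrative only and do not appear in the actual patch.]

    #include "Python.h"

    static PyObject *
    int_int_add(PyObject *v, PyObject *w)
    {
            /* Hypothetical fast path; real code would also need the
               overflow check that int_add in intobject.c performs. */
            return PyInt_FromLong(((PyIntObject *)v)->ob_ival +
                                  ((PyIntObject *)w)->ob_ival);
    }

    /* One row per left-operand ordinal, one column per right-operand
       ordinal.  Every slot except INT/INT falls back to the generic
       PyNumber_Add, which is why such tables end up sparse. */
    static binaryfunc add_table[3][3] = {
            /*            UNKNOWN        NONE           INT      */
            /* UNKNOWN */ {PyNumber_Add, PyNumber_Add, PyNumber_Add},
            /* NONE    */ {PyNumber_Add, PyNumber_Add, PyNumber_Add},
            /* INT     */ {PyNumber_Add, PyNumber_Add, int_int_add},
    };

    #define BINARY_ADD_FAST(v, w) \
            (add_table[PyObject_TypeOrd(v)][PyObject_TypeOrd(w)]((v), (w)))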

> Well, I wasted a fair amount of my time for no apparent gain. [...]
> Unfortunately, all my benchmarks show this patch to be ineffective in
> terms of speeding up the interpreter. Anyone know why?
Probably you're optimizing something that is already quite fast. While your code saves a C call and a few tests, those kinds of operations are rarely what slows down Python these days. My suspicion is that most of the time goes into (a) allocating and deallocating objects, and (b) calling methods...

--Guido van Rossum (home page: http://www.python.org/~guido/)
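[For context on Guido's point: the generic path being replaced was already short. PyObject_IsTrue in object.c of that era was roughly the following (a paraphrase from memory, not an exact copy of the source), so the table lookup only saves a handful of pointer tests and one function call per check.]

    int
    PyObject_IsTrue(PyObject *v)
    {
            int res;
            /* A few cheap pointer tests select the right nonzero/length
               slot; this chain is all the table dispatch would replace. */
            if (v == Py_None)
                    res = 0;
            else if (v->ob_type->tp_as_number != NULL &&
                     v->ob_type->tp_as_number->nb_nonzero != NULL)
                    res = (*v->ob_type->tp_as_number->nb_nonzero)(v);
            else if (v->ob_type->tp_as_mapping != NULL &&
                     v->ob_type->tp_as_mapping->mp_length != NULL)
                    res = (*v->ob_type->tp_as_mapping->mp_length)(v);
            else if (v->ob_type->tp_as_sequence != NULL &&
                     v->ob_type->tp_as_sequence->sq_length != NULL)
                    res = (*v->ob_type->tp_as_sequence->sq_length)(v);
            else
                    res = 1;
            return res > 0 ? 1 : res;
    }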