
Here are some results; the dictionaries have 1000 entries:
timing for strings              old =   5.097   new =  5.088
timing for bad integers (<<10)  old = 101.540   new = 12.610
timing for bad integers (<<16)  old = 571.210   new = 19.220
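
For reference, the huge win on the degenerate integer keys presumably comes
from letting the otherwise unused high hash bits perturb the probe sequence:
with the table mask alone, keys like (i << 16) all collide in a small table.
A minimal sketch of that idea (my reading of it, not the actual patch):

    #include <stdio.h>

    /* Sketch of perturbed open addressing: each collision mixes a
       few more of the high hash bits into the next probe, breaking
       up the long chains that keys like (i << 16) create. */
    #define PERTURB_SHIFT 5

    int main(void)
    {
        unsigned long mask = 7;              /* 8-slot table */
        unsigned long hash = 3UL << 16;      /* a "bad" integer key */
        unsigned long perturb = hash;
        unsigned long slot = hash & mask;
        int n;

        for (n = 0; n < 5; n++) {
            printf("probe %d: slot %lu\n", n, slot);
            slot = (slot * 5 + perturb + 1) & mask;
            perturb >>= PERTURB_SHIFT;
        }
        return 0;
    }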
Even though I think concentrating on string keys would provide more of a
performance boost for Python in general, I think you have a point there.
+1 from here.

BTW, would changing the hash function on strings from the simple XOR scheme
to something a little smarter help improve performance too (e.g. most
strings used in programming never use the 8th bit)?

I also think that we could inline the string compare function in
dictobject:lookdict_string to achieve even better performance. Currently it
uses an external function call which the compiler doesn't inline.

And finally: I think a generic PyString_Compare() API would be useful in a
lot of places where strings are being compared (e.g. dictionaries and
keyword parameters). Unicode already has such an API (along with dozens of
other useful APIs which are not available for strings).

(Rough sketches of these ideas follow below, after my sig.)

-- 
Marc-Andre Lemburg

______________________________________________________________________
Company:       http://www.egenix.com/
Consulting:    http://www.lemburg.com/
Python Pages:  http://www.lemburg.com/python/
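
P.S.: Some rough sketches of the above, just to show what I mean.

On the hash function: since most program text never sets the 8th bit, a
plain XOR of the characters leaves much of the hash word unused. A
multiply-and-add scheme smears every character across the full word (a
minimal sketch only; the constant is just an odd multiplier picked for
illustration):

    /* Sketch only: multiply-and-add string hash.  Each step
       multiplies by an odd constant before mixing in the next
       character, so even 7-bit ASCII input fills all hash bits. */
    static unsigned long
    hash_string(const char *s, int len)
    {
        unsigned long h = (unsigned long)*s << 7;
        int i;

        for (i = 0; i < len; i++)
            h = (1000003UL * h) ^ (unsigned char)s[i];
        return h ^ (unsigned long)len;
    }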
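On inlining the compare: the lookup loop in lookdict_string already knows
both objects are strings with cached hashes, so the comparison can be
opened up into a few cheap checks before touching the character data
(hypothetical helper, not the current code):

    #include <string.h>

    /* Hypothetical inlined check for the string-only lookup loop:
       compare the cached hashes and lengths first; only run
       memcmp() when both match. */
    static int
    str_eq(long hash_a, int len_a, const char *a,
           long hash_b, int len_b, const char *b)
    {
        if (hash_a != hash_b || len_a != len_b)
            return 0;
        return memcmp(a, b, (size_t)len_a) == 0;
    }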
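And the generic API I have in mind would simply mirror PyUnicode_Compare(),
returning -1, 0 or +1 like strcmp(). A sketch, assuming the usual
PyStringObject layout (ob_size length plus ob_sval character buffer):

    #include "Python.h"
    #include <string.h>

    /* Sketch of a generic string compare, modelled on
       PyUnicode_Compare(): returns -1, 0 or +1.  Dict lookups and
       keyword argument matching could then share one entry point. */
    static int
    string_compare(PyStringObject *a, PyStringObject *b)
    {
        int min = (a->ob_size < b->ob_size) ? a->ob_size : b->ob_size;
        int cmp = memcmp(a->ob_sval, b->ob_sval, (size_t)min);

        if (cmp != 0)
            return (cmp < 0) ? -1 : 1;
        if (a->ob_size == b->ob_size)
            return 0;
        return (a->ob_size < b->ob_size) ? -1 : 1;
    }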