[issue11303] b'x'.decode('latin1') is much slower than b'x'.decode('latin-1')
Alexander Belopolsky
report at bugs.python.org
Fri Feb 25 20:34:02 CET 2011
Alexander Belopolsky <belopolsky at users.sourceforge.net> added the comment:
Committed issue11303.diff and doc change in revision 88602.
I think the remaining ideas are best addressed in issue11322.
> Given that we are starting to have a whole set of such aliases
> in the C code, I wonder whether it would be better to make the
> string comparisons more efficient, e.g.
I don't think we can do much better than a chain of strcmp()s. Even if a more efficient algorithm can be found, it would certainly be less readable. Moving the strcmp()s before normalize_encoding() (and either forgoing optimization for alternative capitalizations or using case-insensitive comparison) may be a more promising optimization strategy. In any case, all these micro-optimizations are dwarfed by the gain from bypassing Python calls and are probably not worth pursuing.
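To illustrate the idea being discussed, here is a hedged sketch (in Python, not the actual C code) of checking common exact spellings before falling back to normalization. The names `_FAST_PATH`, `normalize_encoding`, and `is_fast_path` are illustrative stand-ins, not CPython's implementation:

```python
# Hypothetical sketch: compare the raw encoding name against common
# spellings first (the strcmp() chain in C), and only normalize
# (lowercase, strip hyphens/underscores/spaces) on a miss.

_FAST_PATH = {"utf-8", "utf8", "ascii", "latin-1", "latin1", "iso-8859-1"}

def normalize_encoding(name):
    # Simplified stand-in for CPython's normalize_encoding():
    # lowercase and drop hyphens, underscores, and spaces.
    return "".join(ch for ch in name.lower() if ch not in "-_ ")

def is_fast_path(name):
    # Cheap exact comparisons first...
    if name in _FAST_PATH:
        return True
    # ...then the slower normalization for alternative capitalizations.
    return normalize_encoding(name) in {"utf8", "ascii", "latin1", "iso88591"}

print(is_fast_path("latin1"))   # exact hit, no normalization needed
print(is_fast_path("Latin-1"))  # hit only after normalization
```

The trade-off mentioned above is visible here: putting the exact comparisons first makes the common spellings fast, at the cost of either missing alternative capitalizations or paying for a case-insensitive comparison.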
----------
assignee: -> belopolsky
resolution: -> fixed
stage: -> committed/rejected
status: open -> pending
superseder: -> encoding package's normalize_encoding() function is too slow
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue11303>
_______________________________________