[Python-Dev] PEP 393 decode() oddity

Antoine Pitrou solipsis at pitrou.net
Sun Mar 25 19:01:37 CEST 2012


Hi,

On Sun, 25 Mar 2012 19:25:10 +0300
Serhiy Storchaka <storchaka at gmail.com> wrote:
> 
> But decoding is not so good.

The general problem with decoding is that you don't know up front what
width (1, 2 or 4 bytes) is required for the result. The solution is
either to compute the width in a first pass (and decode in a second
pass), or decode in a single pass and enlarge the result on the fly
when needed. Both incur a slowdown compared to a single-size
representation.
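To illustrate the trade-off, here is a minimal Python sketch of the two strategies (this is not CPython's actual C implementation; the function names and list-based buffer are purely illustrative assumptions):

```python
# Illustrative sketch of PEP 393 width selection during decoding.
# NOT CPython's real decoder (which is written in C); names are made up.

def width_for(cp):
    """Storage width in bytes needed for code point `cp` (PEP 393 kinds)."""
    if cp < 0x100:
        return 1      # latin-1 range
    if cp < 0x10000:  # basic multilingual plane
        return 2
    return 4          # astral code points

def decode_two_pass(codepoints):
    """Pass 1: scan to find the widest character; pass 2: fill the buffer."""
    width = max((width_for(cp) for cp in codepoints), default=1)
    buf = list(codepoints)  # second pass: store at the chosen width
    return width, buf

def decode_single_pass(codepoints):
    """Decode once, enlarging the representation when a wider char appears."""
    width, buf = 1, []
    for cp in codepoints:
        w = width_for(cp)
        if w > width:
            width = w  # in C this means reallocating and copying the buffer
        buf.append(cp)
    return width, buf
```

Either way there is extra work that a fixed-width representation (as in 3.2's UCS-2/UCS-4 builds) never pays: an extra scan in the first case, potential reallocation-and-copy in the second.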

> The first oddity is that characters from the second half of the 
> Latin-1 table decode faster than characters from the first half. I 
> think the characters from the first half of the table should decode 
> just as quickly.

It's probably a measurement error on your part.

> The second sad oddity is that UTF-16 decoding in 3.3 is much slower than 
> even in 2.7. Compared with 3.2, decoding is 2-3 times slower. This is 
> a considerable regression. UTF-32 decoding is also slowed down by 1.5-2 times.

I don't think UTF-32 is used a lot.
As for UTF-16, if you can optimize it then why not.

Regards

Antoine.
