unicode() vs. s.decode()

Thorsten Kampe thorsten at thorstenkampe.de
Fri Aug 7 11:13:07 EDT 2009


* alex23 (Fri, 7 Aug 2009 06:53:22 -0700 (PDT))
> Thorsten Kampe <thors... at thorstenkampe.de> wrote:
> > Bollocks. No one will even notice whether a code sequence runs 2.7 or
> > 5.7 seconds. That's completely artificial benchmarking.
> 
> But that's not what you first claimed:
> 
> > I don't think any measurable speed increase will be
> > noticeable between those two.
> 
> But please, keep changing your argument so you don't have to admit you
> were wrong.

Bollocks. Please note the word "noticeable": "noticeable" as in 
recognisable, as in something a user could reasonably experience, or 
whatever you want to call it.

One guy claims he gets times between 2.7 and 5.7 seconds when 
benchmarking more or less randomly generated "one million different 
lines". Spread over a million lines, a three-second difference is 
three microseconds per line. That *is* *exactly* nothing.

Another guy claims he gets times between 2.9 and 6.2 seconds when 
running decode/unicode in various manifestations over "18 million 
words" (or is it 600 million?) and says "the differences are pretty 
significant". I don't think I have to comment on that.

If you increase the number of loops to one million or one billion or 
whatever, even the slightest, completely negligible per-call difference 
will show up in the total. The same thing happens if you just increase 
the corpus of words to a million, a trillion or whatever. The 
real-world performance implications of that are exactly none.
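To put numbers on it, here is a minimal timing sketch (Python 2 only, 
since unicode() is gone in Python 3; the sample string and the loop 
count are made-up illustrations, not anyone's actual benchmark). Divide 
the total by the number of calls and the "difference" shrinks to 
microseconds:

    # Minimal sketch, Python 2 only -- unicode() does not exist in Python 3.
    # The sample byte string and loop count are arbitrary illustrations.
    import timeit

    s = 'm\xc3\xbcll'  # "muell" as UTF-8 encoded bytes
    n = 1000000        # one million calls, as in the benchmarks above

    t_unicode = timeit.timeit("unicode(s, 'utf-8')",
                              setup="s = %r" % s, number=n)
    t_decode = timeit.timeit("s.decode('utf-8')",
                             setup="s = %r" % s, number=n)

    # The number that matters is the difference per call, not the total.
    print "unicode(): %.2f s total" % t_unicode
    print "decode():  %.2f s total" % t_decode
    print "difference per call: %.3f microseconds" % (
        abs(t_unicode - t_decode) / n * 1e6)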

Thorsten


