.readline() - VERY SLOW compared to PERL
aleaxit at yahoo.com
Thu Nov 16 14:32:56 CET 2000
<cbarker at jps.net> wrote in message news:8uus6b$psp$1 at nnrp1.deja.com...
> for line in whole_file:
> data = map(float,string.split(string.strip(line),','))
> data = tuple(map(lambda x: 6*x, data))
> outfile.write('%12g, %12g, %12g, %12g, %12g, %12g, %12g\n'%data )
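For context, the quoted loop in full might look like the following self-contained sketch (file objects and input data are hypothetical, and the `string` module calls are written as string methods so it runs on modern Python):

```python
# Hypothetical, runnable version of the quoted loop: read comma-separated
# floats, multiply each by 6, write them back formatted with %12g.
import io

infile = io.StringIO("1,2,3,4,5,6,7\n8,9,10,11,12,13,14\n")
outfile = io.StringIO()

for line in infile:
    data = map(float, line.strip().split(','))
    data = tuple(map(lambda x: 6 * x, data))
    outfile.write('%12g, %12g, %12g, %12g, %12g, %12g, %12g\n' % data)

print(outfile.getvalue())
```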
Just wondering -- what happens with some Python 2 version
of the processing, such as...:
data = ['%12g'%(6.0*float(x)) for x in line.split(',')]
(the .strip should be useless here, as float takes no
notice of leading/trailing whitespace anyway, I think)...?
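Spelled out as a runnable sketch (the sample line is made up, and the pieces still need joining back into one output record):

```python
# List-comprehension variant: split on commas, scale, and format in one pass.
# float() ignores leading/trailing whitespace, so no explicit strip is needed.
line = "1,2,3,4,5,6,7\n"
data = ['%12g' % (6.0 * float(x)) for x in line.split(',')]
out = ','.join(data) + '\n'
print(out)
```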
I do realize that the list-comprehension version is not as
'optimizing' as one would think (it can even make things
slower, or at least I recall seeing results to that effect
posted), but I wonder if removing the multiple map calls,
old-style string.split (which now delegates to the string
object's split method -- gotta be some overhead in that),
redundant string.strip, lambda, and tuple-building, has
a discernible effect on the overall resulting performance.
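One way to test that speculation is a rough micro-benchmark of the two styles; the sample line is made up, and absolute timings (and which variant wins) will depend on the interpreter version:

```python
# Compare the map/lambda/tuple pipeline against the list comprehension
# on a single sample line, using timeit.
import timeit

line = "1,2,3,4,5,6,7"

def with_map():
    data = map(float, line.strip().split(','))
    return tuple(map(lambda x: 6 * x, data))

def with_listcomp():
    return ['%12g' % (6.0 * float(x)) for x in line.split(',')]

t_map = timeit.timeit(with_map, number=100000)
t_comp = timeit.timeit(with_listcomp, number=100000)
print('map/lambda: %.3fs   list comp: %.3fs' % (t_map, t_comp))
```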