speedy Python strings?

Uwe Mayer merkosh at hadiko.de
Tue Jan 20 01:43:24 CET 2004


Thanks to previous help, my wrapper for an abstract file type with variable
record length is working now.
I did some test runs and it's awfully slow:

I didn't want to read in the whole file at once, and I didn't want to read it
step by step either (it contains 32-bit length-prefixed strings (Delphi) and
32-bit integers), so I read in e.g. 2 MB, work on that buffer, and when the
buffer runs empty I load another 2 MB, and so on.
For this buffer I use ordinary strings:

class myFile(file):
        def __init__(self, *args):
                file.__init__(self, *args)
                self.buffer = ""  # accumulates data not yet consumed
        def read(self, *args):
                self.buffer += file.read(self, *args)

and after extracting information from the buffer I remove the consumed part
from its front:

        (text,) = struct.unpack("L", self.buffer[:4])
        self.buffer = self.buffer[4:]
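A file-like wrapper around the buffer would avoid this front-slicing entirely, since it advances an internal position instead of rebuilding the string. A minimal sketch using the modern io.BytesIO (cStringIO.StringIO played the same role in the Python of the time); the sample data is made up for illustration:

```python
import io
import struct

# Hypothetical buffer contents: two little-endian 32-bit integers.
data = struct.pack("<LL", 42, 7)

buf = io.BytesIO(data)                    # file-like view of the buffer
(a,) = struct.unpack("<L", buf.read(4))   # read() advances an internal
(b,) = struct.unpack("<L", buf.read(4))   # position; nothing is re-copied
```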

During debugging I saw the program accelerating as the buffer size
(len(self.buffer)) grew smaller, hence my conclusion: since strings are
immutable, every self.buffer = self.buffer[4:] copies the whole remaining
buffer, and that is what makes the string operations so slow.
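A back-of-the-envelope cost model (an assumption about the slowdown, not something I measured): consuming an N-byte buffer in 4-byte records by re-slicing copies roughly N**2/8 bytes in total, versus N bytes when an offset is advanced instead. A sketch:

```python
def bytes_copied_by_slicing(n, record=4):
    """Total bytes copied when an n-byte buffer is consumed
    record bytes at a time via buffer = buffer[record:]."""
    total = 0
    remaining = n
    while remaining >= record:
        remaining -= record
        total += remaining   # each slice copies everything still left
    return total

# A 2 MB buffer of 4-byte records re-copies about half a terabyte:
# bytes_copied_by_slicing(2 * 1024 * 1024) is just under 2**39 bytes.
```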

I tested different buffer sizes, and it performed best when I didn't use the
buffering system at all (which is sad, since I spent so much time on it).

Is there anything else I can use instead of ordinary strings that will speed
this up?
How do you suggest I deal with this situation? Should I drop the buffering
of the data entirely?
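One direction an answer might take (a sketch under my own assumptions, with hypothetical names, not code from the original wrapper): keep the buffer string intact and track how far it has been consumed, so nothing is ever sliced off the front:

```python
import struct

class BufferReader(object):
    """Hypothetical helper: consume a buffer by advancing an offset
    instead of rebuilding the string after every read."""

    def __init__(self, data):
        self.data = data   # buffered bytes, never modified
        self.pos = 0       # how many bytes have been consumed

    def read_uint32(self):
        # little-endian 32-bit unsigned integer at the current offset
        (value,) = struct.unpack("<L", self.data[self.pos:self.pos + 4])
        self.pos += 4
        return value

    def read_string(self):
        # Delphi-style string: 32-bit length prefix, then the payload
        n = self.read_uint32()
        s = self.data[self.pos:self.pos + n]
        self.pos += n
        return s
```

When pos reaches len(data), the buffer can be refilled in one step and pos reset to zero, so the 2 MB chunking scheme is preserved without the per-record copies.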

Thanks for any comments
