How to read from a file to an arbitrary delimiter efficiently?

Paul Rubin at nospam.invalid
Thu Feb 25 02:48:59 EST 2016

Steven D'Aprano <steve+comp.lang.python at> writes:
>     while b:
>         buffer.append(b)

This looks bad because of the per-element overhead of a list, and also
because it reads one character at a time.  If it's bytes that you're
reading, try using bytearray instead of list:

    def chunkiter(f, delim):
        buf = bytearray()
        bufextend = buf.extend   # avoid an attribute lookup when calling
        fread = f.read           # similar
        while True:
            c = fread(1)
            if not c:            # EOF: flush any trailing data and stop
                if buf:
                    yield bytes(buf)
                return
            if c in delim:
                yield bytes(buf)
                del buf[:]
            else:
                bufextend(c)
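
A quick way to exercise that generator (this is a self-contained copy of
it; the io.BytesIO source and the b",;" delimiter set are just for
illustration):

    import io

    def chunkiter(f, delim):
        buf = bytearray()
        fread = f.read
        while True:
            c = fread(1)
            if not c:            # EOF: flush whatever is left
                if buf:
                    yield bytes(buf)
                return
            if c in delim:       # one-byte substring test against delim
                yield bytes(buf)
                del buf[:]
            else:
                buf.extend(c)

    data = io.BytesIO(b"spam,eggs;ham")
    print(list(chunkiter(data, b",;")))   # → [b'spam', b'eggs', b'ham']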

If that's still not fast enough, you could do a more hacky thing of
reading large chunks of input at once ( or whatever),
splitting on the delimiter set with re.split, and yielding the split
output, refilling the buffer when you don't find more delimiters.  That
doesn't tell you what delimiters actually match: do you need that?
Maybe there is a nicer way to get at it than adding up the lengths of
the chunks to index into the buffer.  How large do you expect the chunks
to be?
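
The chunked approach might look something like this (a sketch only; the
name chunkiter2, the 8192-byte block size, and the b",;" delimiter set
are my assumptions, not anything fixed):

    import io
    import re

    def chunkiter2(f, delims=b",;", blocksize=8192):
        # Character class matching any single delimiter byte
        pattern = re.compile(b"[" + re.escape(delims) + b"]")
        partial = b""
        while True:
            block = f.read(blocksize)
            if not block:            # EOF: what's left is the last token
                if partial:
                    yield partial
                return
            pieces = pattern.split(partial + block)
            partial = pieces.pop()   # possibly incomplete; carry it over
            for piece in pieces:
                yield piece

    data = io.BytesIO(b"spam,eggs;ham")
    print(list(chunkiter2(data, blocksize=4)))  # → [b'spam', b'eggs', b'ham']

If you do need to know which delimiter matched, wrapping the character
class in a capturing group makes re.split return the matched delimiters
interleaved with the tokens.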
