CSV performance

psaffrey at googlemail.com
Wed Apr 29 14:52:13 CEST 2009

> rows = fh.read().split()
> coords = numpy.array(map(int, rows[1::3]), dtype=int)
> points = numpy.array(map(float, rows[2::3]), dtype=float)
> chromio.writelines(map(chrommap.__getitem__, rows[::3]))
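For anyone following along, the stride-slicing idea in the quoted code can be sketched roughly like this, using a small hypothetical three-column file (chromosome label, integer coordinate, float value) in place of the real data; note that numpy can parse the string tokens directly, which sidesteps the explicit map() calls:

```python
import io
import numpy as np

# Hypothetical sample data: three whitespace-separated columns per row
# (chromosome label, integer coordinate, float value).
fh = io.StringIO("chr1 100 0.5\nchr2 200 1.5\nchr1 300 2.5\n")

rows = fh.read().split()        # one flat list of tokens
chroms = rows[0::3]             # every third token, starting at index 0
coords = np.array(rows[1::3], dtype=int)    # numpy converts str -> int
points = np.array(rows[2::3], dtype=float)  # numpy converts str -> float
```

The win comes from doing a single read() and split() over the whole file instead of parsing line by line, then letting slicing and numpy's bulk conversion handle the columns.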

My original version takes about 15 seconds; this version takes about 9. The
chunked version Scott posted takes about 11 seconds with a chunk size
of 16384.

When integrated into the overall code, which reads all 28 files, it
improves performance by about 30%.

Many thanks to everybody for their help,


More information about the Python-list mailing list