Memory usage per top 10x usage per heapy
Tim Chase
python.list at tim.thechases.com
Mon Sep 24 19:22:20 EDT 2012
On 09/24/12 16:59, MrsEntity wrote:
> I'm working on some code that parses a 500kb, 2M line file line
> by line and saves, per line, some derived strings into various
> data structures. I thus expect that memory use should
> monotonically increase. Currently, the program is taking up so
> much memory - even on 1/2 sized files - that on 2GB machine I'm
> thrashing swap.
It might help to know what those "various data
structures" actually are.  I do a lot of ETL work on far larger
files, with similar machine specs, and rarely touch swap.
> 2) How can I diagnose (and hopefully fix) what's causing the
> massive memory usage when it appears, from heapy, that the code
> is performing reasonably?
I seem to recall that CPython holds on to memory after your
objects release it, rather than handing it back to the OS, but
that it *should* reuse that memory later.  So you'd see the
symptom of memory usage only ever increasing, never
decreasing.
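If you want to watch that effect for yourself, here's a rough
sketch (Linux-only, since it reads /proc/self/status; the list
size is just a placeholder):

    def rss_kb():
        """Current resident set size in kB, from /proc/self/status."""
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])
        return 0

    print('before:    %d kB' % rss_kb())
    big = [str(i) * 10 for i in range(1000000)]
    print('allocated: %d kB' % rss_kb())
    del big
    print('after del: %d kB' % rss_kb())  # often still well above "before"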
Things that occur to me:
- check how you're reading the data: are you iterating over
  the file a line at a time, or are you using
  .read()/.readlines() to pull the whole file into memory and
  then operating on that?  (see the first sketch after this
  list)
- check how you're storing them: are you holding onto more
  than you think you are?  Would it hurt to switch from a
  dict (I'm assuming here) to the anydbm module, temporarily
  persisting the bulk of the data to disk to keep memory
  usage lower?  (see the second sketch below)
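A minimal sketch of the reading difference (the filename and
the process() stub are hypothetical stand-ins for your own
code):

    def process(line):
        pass   # stand-in for whatever derives strings from the line

    # memory-friendly: the file object yields one line at a time
    with open('big_input.txt') as f:
        for line in f:
            process(line)

    # memory-hungry: .readlines() materializes a list of every line first
    with open('big_input.txt') as f:
        for line in f.readlines():
            process(line)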
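And a minimal sketch of the anydbm idea, assuming you key the
derived strings by line number (filenames are placeholders;
anydbm is the Python 2 name, it's dbm in Python 3):

    import anydbm

    db = anydbm.open('derived.db', 'c')   # 'c' = create if missing
    with open('big_input.txt') as f:
        for n, line in enumerate(f):
            db[str(n)] = line.strip()     # keys/values must be strings
    db.close()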
Without actual code, it's hard to do a more detailed
analysis.
-tkc