How to safely maintain a status file

Michael Hrivnak mhrivnak at
Mon Jul 9 22:47:22 CEST 2012

Please consider batching this data and doing larger writes.  Thrashing
the hard drive is not a good plan for performance or hardware
longevity.  For example, crawl an entire FQDN and then write out the
results in one operation.  If your job fails in the middle and you
have to start that FQDN over, no big deal.  If that's too big of a
chunk for your purposes, perhaps break each FQDN up into top-level
directories and crawl each of those in one operation before writing to disk.

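To make the idea concrete, here is one way to do the batched write (a sketch of my own, not anything you have to copy): collect results for the whole FQDN in memory, then dump the remaining queue in a single atomic operation, so a crash mid-write can never leave a half-written status file.

```python
import json
import os
import tempfile


def checkpoint(path, pending_urls):
    """Write the whole pending-URL list in one atomic batch.

    The data goes to a temporary file in the same directory first;
    os.replace() then swaps it into place in one step, so the status
    file on disk is always either the old complete copy or the new
    complete copy, never a truncated one.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(pending_urls, f)
            f.flush()
            os.fsync(f.fileno())  # make sure the batch hits the platter
        os.replace(tmp, path)     # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)
        raise

# Usage: crawl everything for one FQDN, then call
# checkpoint("status.json", list(queue)) exactly once.
```

Calling this once per FQDN instead of once per URL turns hundreds of tiny writes per second into one write per batch, which is the whole point.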
There are existing solutions for managing job queues, so you can
choose what you like.  If you're unfamiliar, maybe start by looking at

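And if a full job-queue system feels like overkill, even the standard library's sqlite3 module gives you a durable queue with batched commits. A minimal sketch (the class and method names here are my own invention, just to show the shape):

```python
import sqlite3


class DurableQueue:
    """Tiny persistent URL queue backed by sqlite3 (illustrative only).

    Each commit is one batched write, so SQLite absorbs the I/O
    instead of your code appending to a file per URL.  Rows survive
    a crash or restart, which is what a resumable crawler needs.
    """

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS queue "
            "(id INTEGER PRIMARY KEY, url TEXT UNIQUE)"
        )
        self.db.commit()

    def put_many(self, urls):
        # One transaction for the whole batch; duplicates are ignored.
        self.db.executemany(
            "INSERT OR IGNORE INTO queue (url) VALUES (?)",
            ((u,) for u in urls),
        )
        self.db.commit()

    def get(self):
        # Pop the oldest URL, or None when the queue is empty.
        row = self.db.execute(
            "SELECT id, url FROM queue ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        self.db.execute("DELETE FROM queue WHERE id = ?", (row[0],))
        self.db.commit()
        return row[1]
```

For a single-process crawler this is often enough; the dedicated queue systems earn their keep once you have multiple workers on multiple machines.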

On Mon, Jul 9, 2012 at 1:52 AM, Plumo <richardbp at> wrote:
>> What are you keeping in this status file that needs to be saved
>> several times per second?  Depending on what type of state you're
>> storing and how persistent it needs to be, there may be a better way
>> to store it.
>> Michael
> This is for a threaded web crawler. I want to cache what URLs are
> currently in the queue so if terminated the crawler can continue next
> time from the same point.

More information about the Python-list mailing list