Tremendous slowdown due to garbage collection
Tue Apr 15 18:55:41 CEST 2008
Aaron Watters <aaron.watters at gmail.com> writes:
> Even with Btree's if you jump around in the tree the performance can
> be awful.
The Linux file cache really helps. The simplest approach is to just
"cat" the index files to /dev/null a few times an hour. Slightly
faster (what I do with Solr) is to mmap the files into memory and read
a byte from each page now and then. Assuming (as in Lucene) that the
index file format is compressed, this approach is far more
RAM-efficient than actually unpacking the index into data structures,
though of course you pay the overhead (a few microseconds) of a couple
of system calls at each access to the index even when it's all in
cache.
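The mmap trick above can be sketched in Python roughly as follows.
This is a minimal illustration, not production code: the helper name
`warm_file_cache` is made up, and it simply touches one byte per page
so the kernel faults the file's pages into (or keeps them in) the
page cache:

```python
import mmap
import os

def warm_file_cache(path):
    """Touch one byte per page of a file via a read-only mmap,
    pulling its pages into the OS file cache. Returns the number
    of pages touched."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        if size == 0:
            return 0
        pages = 0
        with mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ) as mm:
            # Reading one byte per page is enough to fault the page in;
            # no need to read the whole file through the Python layer.
            for offset in range(0, size, mmap.PAGESIZE):
                _ = mm[offset]
                pages += 1
        return pages
```

Run periodically (say, from a cron job or a background thread) over
the index files, this keeps the compressed on-disk representation hot
in the page cache without duplicating it in application memory.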