[Twisted-Python] Twistd memory usage
I have an application, run as a twistd plugin. If I run it as a daemon, twistd consumes all available memory (of the 4GB available), then after a few seconds of swapping twistd is killed by the kernel's OOM killer. I can't reproduce this when I run the application in non-daemon mode (twistd -n); the memory footprint is constant and does not exceed 150MB. Is this normal behaviour? -- Jarek Zgoda "We read Knuth so you don't have to."
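[A first step in narrowing down growth like this is to log the process's resident memory over time and compare the daemonized and -n runs. A minimal stdlib-only sketch; the helper name is mine, not part of Twisted or the original application:]

```python
import resource
import sys

def max_rss_bytes():
    """Return this process's peak resident set size in bytes.

    getrusage() reports ru_maxrss in kilobytes on Linux but in bytes
    on macOS, so normalize to bytes here.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        return rss
    return rss * 1024

# Call this periodically (e.g. from a twisted.internet.task.LoopingCall)
# and log the value; diverging curves between the daemon and -n runs
# show when the growth starts, independent of what twistd logs.
print(max_rss_bytes())
```

[Logging this from inside the process sidesteps the fact that the daemonized twistd died without writing anything to its own log.]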
On Tue, 2007-03-20 at 15:34 +0100, Jarek Zgoda wrote:
> I have an application, run as a twistd plugin. If I run it as a daemon, twistd consumes all available memory (of the 4GB available), then after a few seconds of swapping twistd is killed by the kernel's OOM killer. I can't reproduce this when I run the application in non-daemon mode (twistd -n); the memory footprint is constant and does not exceed 150MB. Is this normal behaviour?
That is not normal behavior. Maybe you could expand a bit on what the program does, what the logs say, etc.?
Itamar Shtull-Trauring wrote:
>> I have an application, run as a twistd plugin. If I run it as a daemon, twistd consumes all available memory (of the 4GB available), then after a few seconds of swapping twistd is killed by the kernel's OOM killer. I can't reproduce this when I run the application in non-daemon mode (twistd -n); the memory footprint is constant and does not exceed 150MB. Is this normal behaviour?
> That is not normal behavior. Maybe you could expand a bit on what the program does, what the logs say, etc.?
The application is a search/indexing server using PyLucene, exposed as a Twisted web resource (.rpy). Due to a PyLucene limitation, the application must use a special thread class (PyLucene.PythonThread) to access the index content; this thread is started from the reactor's after-startup event trigger.

There is one really large data structure in the application: a Queue.Queue holding dictionaries of documents to be indexed. Since memory consumption grows while documents are added to the queue, I suspect some problem with the garbage collector, and the fact that I observe this behaviour only in the daemonized application only adds to the confusion.

There's nothing special in the logs, neither twistd's nor the system's. The OOM killer activation is of course reflected in /var/log/messages, but with no apparent clues about possible causes; twistd itself did not log anything (not even its own death). For now, twistd is run in no-daemon mode (-ny), but I know that's suboptimal... -- Jarek Zgoda "We read Knuth so you don't have to."
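[If the PyLucene indexing thread cannot keep up with producers, an unbounded Queue.Queue will grow without limit no matter what the garbage collector does. One defensive option (a suggestion of mine, not something the original code necessarily does) is to give the queue a maxsize so producer threads block instead of piling up documents; on modern Python the module is spelled `queue` rather than `Queue`:]

```python
import queue  # spelled Queue.Queue on the Python 2 of the original post

# Bound the backlog: put() blocks once 1000 documents are pending,
# applying back-pressure to producers instead of growing without limit.
pending = queue.Queue(maxsize=1000)

def enqueue_document(doc):
    """Add a document dict to the indexing backlog.

    Blocks for up to 5 seconds when the queue is full, then raises
    queue.Full so the caller can shed load or retry.  This should be
    called from a worker thread, never from the reactor thread, since
    blocking the reactor would stall all Twisted I/O.
    """
    pending.put(doc, block=True, timeout=5)

enqueue_document({"id": 1, "body": "example text"})
print(pending.qsize())  # -> 1
```

[With a bound in place, a stuck or slow indexing thread shows up as producers blocking or raising queue.Full, which is far easier to diagnose than the process quietly eating 4GB and being OOM-killed.]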
participants (2)
- Itamar Shtull-Trauring
- Jarek Zgoda