[Mailman-Users] mailman python-2.4 using 96% cpu

Goodman, William wgoodman at jcvi.org
Sat Feb 7 00:46:29 CET 2009


Cool, Mike, that helped a lot...

I was so frustrated I set it to:

QRUNNER_SLEEP_TIME = seconds(10)

That seems to calm it down a bit.
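For context on why a small sleep value burns CPU: each qrunner repeatedly scans its queue directory and only sleeps when there is nothing to process. A rough sketch of that loop (simplified for illustration; this is not Mailman's actual Runner code):

```python
import time

QRUNNER_SLEEP_TIME = 10  # seconds, matching the mm_cfg.py setting above

def run_once(process_queue, sleep_time=QRUNNER_SLEEP_TIME):
    """One pass of a simplified runner loop: drain the queue, then
    sleep only if there was nothing to do. With a zero or tiny sleep,
    an idle runner re-scans constantly and spins at near 100% CPU."""
    handled = process_queue()   # returns the number of messages processed
    if handled == 0:
        time.sleep(sleep_time)  # idle: wait before re-scanning the queue
    return handled
```

With a larger sleep, an idle runner wakes far less often, which matches the calmer top output below.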

top - 18:38:02 up 56 min,  2 users,  load average: 1.25, 1.20, 1.83
Tasks: 109 total,   1 running, 108 sleeping,   0 stopped,   0 zombie
Cpu(s): 20.3%us,  0.3%sy,  0.0%ni, 79.2%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3866604k total,   494800k used,  3371804k free,   175632k buffers
Swap:  4194296k total,        0k used,  4194296k free,   172108k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
19021 mailman   25   0  150m  12m 2752 S   41  0.3   1:59.92 python2.4
    8 root      10  -5     0    0    0 S    0  0.0   0:03.72 events/0
19797 root      16   0 12584 1068  800 S    0  0.0   0:00.30 top
    1 root      15   0 10324  692  580 S    0  0.0   0:00.40 init

But I still see 99% spikes from time to time. Are there BOUNCERUNNER
and INCOMINGRUNNER parameters?

Bill

-----Original Message-----
From: Mark Sapiro [mailto:mark at msapiro.net] 
Sent: Friday, February 06, 2009 5:19 PM
To: Goodman, William; mailman-users at python.org
Subject: RE: [Mailman-Users] mailman python-2.4 using 96% cpu

Goodman, William wrote:

>Sorry to say, Mike, this is after applying all patches... but it is now
>archiving


OK. That's good.


>top - 16:33:18 up 1 day, 23:08,  2 users,  load average: 2.87, 2.24,
>1.32
>Tasks: 106 total,   3 running, 103 sleeping,   0 stopped,   0 zombie
>Cpu(s): 58.7%us,  3.5%sy,  0.0%ni, 37.4%id,  0.2%wa,  0.0%hi,  0.2%si, 
>0.0%st
>Mem:   3866604k total,  1600176k used,  2266428k free,   442080k buffers
>Swap:  4194296k total,        4k used,  4194292k free,   373852k cached
>
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 8969 mailman   25   0  150m  12m 2748 R   93  0.3   5:37.84 python2.4
> 8971 mailman   16   0  147m 9952 2808 S    9  0.3   1:42.04 python2.4
> 8967 mailman   16   0  151m  12m 2748 S    1  0.3   0:20.62 python2.4
>    2 root      RT  -5     0    0    0 S    0  0.0   0:06.35 migration/0
>10760 postfix   16   0 54212 2624 2064 S    0  0.1   0:00.12 local
>10895 postfix   15   0 54252 2356 1820 S    0  0.1   0:00.11 cleanup
>
># ps -fwp 8969
>UID        PID  PPID  C STIME TTY          TIME CMD
>mailman   8969  8965 73 16:25 ?        00:06:43 /usr/bin/python2.4
>/opt/software/mailman/bin/qrunner --runner=IncomingRunner:0:1 -s
>
># ps -fwp 8971
>UID        PID  PPID  C STIME TTY          TIME CMD
>mailman   8971  8965 22 16:25 ?        00:02:10 /usr/bin/python2.4
>/opt/software/mailman/bin/qrunner --runner=OutgoingRunner:0:1 -s
>
># ps -fwp 8967
>UID        PID  PPID  C STIME TTY          TIME CMD
>mailman   8967  8965  4 16:25 ?        00:00:28 /usr/bin/python2.4
>/opt/software/mailman/bin/qrunner --runner=BounceRunner:0:1 -s
>
>Any other suggestions are welcome.


I hope you don't have anything like

QRUNNER_SLEEP_TIME = 0

in mm_cfg.py. It's unlikely that that would cause a pattern like this,
though; more likely, all the runners would be using approximately equal CPU.

You could try putting

QRUNNER_SLEEP_TIME = seconds(5)

in mm_cfg.py (the default is 1) and restarting Mailman to see if that
changes things. If that doesn't help, you may need to strace the PID of
IncomingRunner to see what it's doing. It should be spending almost all
of its time waiting for

select(0, NULL, NULL, NULL, {1, 0})

(or maybe

select(0, NULL, NULL, NULL, {5, 0})

if you made the QRUNNER_SLEEP_TIME change).
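That strace step might look like the following (a sketch, not from the thread: the PID-matching pattern is an assumption based on the ps output quoted above, and strace typically needs root privileges to attach):

```shell
# Find the IncomingRunner PID from the process list, then attach strace.
# A healthy runner should spend most of each interval blocked in select().
PID=$(ps -efw | awk '/qrunner --runner=IncomingRunner/ && !/awk/ {print $2}')
if [ -n "$PID" ]; then
    strace -p "$PID" -e trace=select   # interrupt with Ctrl-C when done
else
    echo "IncomingRunner is not running"
fi
```

If the runner is instead looping through open/read/stat calls with no pause, that points at a stuck message or queue entry rather than the sleep setting.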

-- 
Mark Sapiro <mark at msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan


