
Hello Listers,

I migrated from an old P3/512 MB Mailman server (Red Hat/Sendmail, with Mailman upgraded to 2.1.x, I think) to a VMware ESXi environment. The new environment is CentOS 6.2 with Postfix and Mailman 2.1.16rc2. The old server was a rock; the new Mailman environment is a processor sponge (clearly not an all-natural hand-collected sponge either). Let me go over the whole process.

After quite a bit of work on the migration (mostly because of my learning curve with Postfix), I got Mailman working. Right off the bat we noticed Mailman was maxing the processor and filling memory, and later I found it was also filling the hard drive. I had the ESXi admin add another processor and more memory, which fixed the memory problem. Then all of a sudden messages stopped going out to the lists; that was when I discovered the drive was filling up. The data directory was large, dare I say bloated beyond recognition. It looked like old held messages were filling the drive (possibly left over from my Postfix testing). I ran "bin/discard data/heldmsg-mailman-1*" and restarted Mailman, which seemed to fix the problem. I also noticed a large owner-bounces.mbox:

-rw-------. 1 mailman mailman 51198511 May 8 10:25 owner-bounces.mbox

The drive-filling issue appears to be under control now. Further troubleshooting, plus reports of bounce notices saying Google was not allowing that many messages per SMTP connection, led me to add these lines to my mm_cfg.py file:

SMTP_MAX_RCPTS = 1
VERP_CONFIRMATIONS = Yes

Now the processor is close to 100% and messages are going out slowly. I have followed too many troubleshooting guides to mention.
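For reference, here is roughly what I did to get the disk back under control. The paths assume the stock CentOS/RPM package layout (/usr/lib/mailman, /var/lib/mailman, /var/spool/mailman); adjust for your install:

    # see what is actually eating the disk
    du -sh /var/lib/mailman/data /var/spool/mailman/*

    # count, then discard, the piled-up held messages
    ls /var/lib/mailman/data/heldmsg-mailman-* | wc -l
    cd /usr/lib/mailman
    bin/discard data/heldmsg-mailman-1*

    # restart Mailman afterwards
    service mailman restart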
So I put it to you, oh masters of the Mailman universe. Thanks for any help you can provide.

Morgan Ecklund

On 05/08/2014 07:51 AM, Ecklund, Morgan wrote:
Now the processor is close to 100% and messages are going out slowly. I have followed too many troubleshooting guides to mention.
What does 'top' show about the processes using the CPU? If the process(es) is/are python processes, do 'ps -fw' on their PIDs to see exactly which runners they are.
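For example, something along these lines (the PID here is only an illustration; substitute the one top reports):

    top -b -n 1 | head -20    # which processes are using the CPU
    ps -fw -p 5288            # for a python process, the command line shows which qrunner it is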
Is your 'out' queue backlogged? The symptoms are a lot of files in qfiles/out (if this is the CentOS/RedHat package that's probably /var/spool/mailman/out).
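A quick way to check, assuming those package paths:

    ls /var/spool/mailman/out | wc -l    # more than a handful of .pck files usually means a backlog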
Also, Mailman's 'smtp' log which contains entries like
May 08 08:39:11 2014 (5288) <message_id> smtp to listname for nnn recips, completed in tt.ttt seconds
If the queue is backlogged, each timestamp will be tt.ttt seconds following the preceding entry.
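One rough way to gauge the delivery rate from that log (assuming the package's /var/log/mailman/smtp location; the awk simply averages the "completed in" times from the last 100 entries):

    tail -n 100 /var/log/mailman/smtp | \
      awk '/completed in/ { s += $(NF-1); n++ }
           END { if (n) printf "%d deliveries, %.3f seconds average\n", n, s/n }'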
If the queue is backlogged, see some results of <https://www.google.com/#q=site:mail.python.org+inurl:mailman+out+queue+backl...> including <https://mail.python.org/pipermail/mailman-users/2012-January/072778.html>.
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan
