As an experiment in analysing my system loads I added the following to my exim.conf:
# When this option is set, a delivery process is started whenever a
# message is received, directing and routing is performed, and local
# deliveries take place. However, if any SMTP deliveries are
# required for domains that match queue_smtp_domains, they are not
# immediately delivered, but instead the message waits on the queue
# for the next queue run. Since routing of the message has taken
# place, Exim knows to which remote hosts it must be delivered, and
# so when the queue run happens, multiple messages for the same host
# are delivered over a single SMTP connection. This option is
# checked against the domains supplied in the incoming addresses,
# before any widening is done (because that is part of routing). The
# -odqs command line option causes all SMTP deliveries to be queued
# in this way, and is equivalent to setting queue_smtp_domains to
# "*". See also queue_remote_domains, which is subtly different.
queue_smtp_domains = "*"
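For completeness: with deliveries deferred like this, something still has to start the queue runs that push the mail out. A minimal sketch of the two usual ways to drive them (the binary path and the 15-minute interval here are illustrative assumptions, not my actual values):

    # Run Exim as a daemon, starting a queue runner every 15 minutes:
    /usr/sbin/exim -bd -q15m

    # Or drive queue runs from cron instead (crontab entry; each
    # invocation performs a single queue run):
    0,15,30,45 * * * * /usr/sbin/exim -q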
Note that all of my lists are hand-moderated, resulting in outbound mail occurring in batches, typically in the range of 5K-10K MTA spool entries per batch.
The primary effect of adding this config item is that (in general) a complete moderation batch is enqueued by the MTA before delivery attempts start on any of its component messages. This turns out to have a significant effect on delivery-time performance.
Loosely, what seems to be happening is that without the above config Exim and Mailman's qrunner end up competing for CPU resources, with system loads climbing into the 30s and 40s and parallel deliveries to the same MX being infrequent to the point of nonexistence. [1] With the above config, however, Exim has the opportunity to batch all mail targeted at a given MX, and parallel deliveries become the default. [2] This shows clearly in the logs as blocks of deliveries to the same MX grouped together. The apparent result of this batching is that the total delivery time [3] for a moderation batch is now reduced by a factor of almost 5. [4]
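For the curious, the Exim knobs that govern this behaviour are roughly as follows (Exim 4 spellings; the values shown are illustrative assumptions, not tuning advice):

    # Main section: how many remote deliveries a single message may
    # run in parallel.
    remote_max_parallel = 10

    # In the transports section: how many queued messages may be
    # sent down a single SMTP connection before it is closed.
    remote_smtp:
      driver = smtp
      connection_max_messages = 100

While a moderation batch is being injected, exim -bp will show the batch sitting on the queue waiting for the next queue run.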
Not a bad side effect for a little idle poking about.
--
J C Lawrence
---------(*) Satan, oscillate my metallic sonatas.
claw@kanga.nu He lived as a devil, eh?
http://www.kanga.nu/~claw/ Evil is a name of a foeman, as I live.
[1] I use 20 queue runners with up to 10 parallel deliveries per MX.
[2] I have low rates of duplicate MX use in my subscriber base: AOL + MSN + Hotmail together form less than 3% of it. Outside of resource contention, this low collision rate is the prime factor in parallel deliveries not happening.
[3] Loosely defining total delivery time as the time taken to handle all deliveries to all fast MXes (no errors, warnings, or responses taking more than a second).
[4] This depends heavily on whether Mailman's qrunner can get the complete moderation queue dumped to the MTA before it starts another queue run. Typically for my setup I can get ~4K spool entries queued between MTA queue runs. I *should* be able to get at least an order of magnitude more, and almost two (as determined by trivial test code doing a canned loop; a sketch of that sort of loop is below). I haven't yet analysed why the qrunner path is so slow and am unlikely to before I deploy v2.1. However, as a general observation, Mailman v2.0's qrunner seems absurdly slow in iterating across the membership roster and dumping RCPT TO bundles to the MTA.
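A canned loop of the sort mentioned in [4] might look like the following hedged sketch (the binary path, message count, recipient, and message file are all placeholders, not the actual test code):

    # Time how fast the MTA alone accepts queue-only injections,
    # with Mailman out of the path entirely. -odq queues each
    # message without attempting immediate delivery.
    time for i in $(seq 1 4000); do
        /usr/sbin/exim -odq test@example.com < canned-message.txt
    done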
On Mon, Dec 09, 2002 at 08:00:25PM -0800, J C Lawrence wrote:
> together. The apparent result of this batching is that the total delivery time [3] for a moderation batch is now reduced by a factor of almost 5. [4]
>
> Not a bad side effect for a little idle poking about.
Indeed, it will increase overall throughput. However, you are sacrificing turnaround time as a result: several minutes or more, up from mere seconds with mm 2.1. (For that matter, with some of my lists, when I save my message in vim, by the time I get back to the mutt index the message has been sent, processed by the MTA, handed to mailman, which exploded it and sent it back to my mailbox, so my message shows up in my index within the half second it took mutt to come back from vim and rescan my mailbox.)
If you ask me, that's pretty cool :-)
Now I'm curious: in your case you get bursts of messages with lots of subscribers on your list. How about a list server with many lists, so that you always have 50 to 100 exims running, trying to clean the queue, deliver mail, or talk to mailman? In that case (i.e. a really loaded server), I'm not convinced that spooling everything to disk and having exim process the message on the next queue run is going to be that much more effective. Is it?
Marc
"A mouse is a device used to point at the xterm you want to type in" - A.S.R. Microsoft is to operating systems & security .... .... what McDonalds is to gourmet cooking Home page: http://marc.merlins.org/ | Finger marc_f@merlins.org for PGP key
On Tue, 2002-12-10 at 07:36, Marc MERLIN wrote:
> How about a list server with many lists, so that you always have 50 to 100 exims running, trying to clean the queue, deliver mail, or talk to mailman? In that case (i.e. a really loaded server), I'm not convinced that spooling everything to disk and having exim process the message on the next queue run is going to be that much more effective. Is it?
Well, the answer is I don't know... However, an interesting point is that all the hints files will be updated, meaning that when the queue runner hits the first message/address routed to mx1.example.com it will then normally pick up the other messages with addresses waiting on mx1.example.com and push them down the same SMTP connection.
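Those hints live in Exim's hints databases: each remote transport has a wait-<transport> database recording which messages are waiting for which hosts. It can be inspected with the exim_dumpdb utility (the spool path and transport name below are conventional defaults, and assumptions on my part):

    # List the messages recorded as waiting for each remote host,
    # for a transport named remote_smtp:
    exim_dumpdb /var/spool/exim wait-remote_smtp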
How this interacts with the max_parallel settings and with TLS/SSL is an interesting set of questions which, early in the morning after a very late night, I am not equipped to deal with :-/ [Switching off all outgoing TLS/SSL support on a mailing-list machine may well be a good plan, although I *like* the idea of making things more difficult to look into.]
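For anyone who does want to switch outgoing TLS off, Exim 4's smtp transport has a knob for exactly that (a sketch only; the transport name is an assumption):

    # Never attempt TLS when delivering to any host:
    remote_smtp:
      driver = smtp
      hosts_avoid_tls = *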
Nigel.
--
[ Nigel Metheringham           Nigel.Metheringham@InTechnology.co.uk ]
[ - Comments in this message are my own and not ITO opinion/policy - ]