Regarding Handlers/SMTPDirect.py and "chunkify"
Hello there,
I don't really know how to start this topic. I'm fairly new to Mailman and I don't speak anything but the most basic Python, but I wonder whether the function "def chunkify(recips, chunksize):", defined in "Mailman/Handlers/SMTPDirect.py", might have an adverse effect on mail delivery performance.
The problem is: at the moment, I'm looking at a specific setup without knowing anything about the "historical" development of "chunkify". There is a comment which mentions an initial suggestion made by Chuq Von Rospach and further improvements made by Barry Warsaw. Unfortunately, I was unable to find these original discussions, and frankly, without knowing why "chunkify" is implemented the way it is, I don't think I'm qualified to discuss my concern (which involves VERP delivery using a specific MTA's VERP implementation on a highly I/O-saturated mail server) on this list.
What I found out from the archives is that "chunkify" has two purposes:
- Make sure that not more than mm_cfg.SMTP_MAX_RCPTS are passed to the MTA in a single transaction.
- To improve delivery performance by grouping destinations, preventing messages addressed to dead/congested destinations from blocking delivery of messages to working destinations.
Is this assumption of mine basically correct? If it is and if I didn't miss any important parts, I would like to point out a specific scenario in which the way "chunkify" is implemented and the way in which it is called based on delivery style (verp, bulk) and the setting of mm_cfg.SMTP_MAX_RCPTS might actually worsen overall mail delivery performance.
If I am totally wrong and "chunkify" serves a totally different purpose, please feel free to ignore this posting.
Cheers Stefan
Stefan Förster wrote:
I don't really know how to start this topic. I'm fairly new to Mailman and I don't speak anything but the most basic Python, but I wonder whether the function "def chunkify(recips, chunksize):", defined in "Mailman/Handlers/SMTPDirect.py", might have an adverse effect on mail delivery performance.
The problem is: at the moment, I'm looking at a specific setup without knowing anything about the "historical" development of "chunkify". There is a comment which mentions an initial suggestion made by Chuq Von Rospach and further improvements made by Barry Warsaw. Unfortunately, I was unable to find these original discussions, and frankly, without knowing why "chunkify" is implemented the way it is, I don't think I'm qualified to discuss my concern (which involves VERP delivery using a specific MTA's VERP implementation on a highly I/O-saturated mail server) on this list.
See <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.012.htp> for some background.
What I found out from the archives is that "chunkify" has two purposes:
- Make sure that not more than mm_cfg.SMTP_MAX_RCPTS are passed to the MTA in a single transaction.
- To improve delivery performance by grouping destinations, preventing messages addressed to dead/congested destinations from blocking delivery of messages to working destinations.
The first is correct.
The second is only partially correct. The purpose of grouping is to try to minimize the number of transactions between the outgoing MTA and the remote MTA. E.g., if all the aol.com addresses are passed from Mailman to the outgoing MTA in one SMTP transaction, then the MTA is able (if so configured, etc., ...) to deliver the message to aol.com in one transaction with the same recipients. It has nothing to do with trying to prevent blocking, as the chunks are delivered serially anyway.
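For readers without the source handy, the grouping logic is roughly the following (a condensed paraphrase from memory, not the verbatim SMTPDirect.py code):

    # Recipients are binned by top-level domain, then each bin is split
    # into chunks of at most `chunksize` addresses.
    def chunkify(recips, chunksize):
        # TLDs mapped to the same bin are assumed to behave similarly;
        # everything else falls into bin 0.
        chunkmap = {'com': 1, 'net': 2, 'org': 2, 'edu': 3, 'us': 3, 'ca': 3}
        buckets = {}
        for r in recips:
            tld = r[r.rfind('.') + 1:].lower() if '.' in r else None
            buckets.setdefault(chunkmap.get(tld, 0), []).append(r)
        chunks = []
        for bucket in buckets.values():
            # Each bin is flushed separately, so each of the (at most
            # four) bins can end in a partially filled chunk.
            for i in range(0, len(bucket), chunksize):
                chunks.append(bucket[i:i + chunksize])
        return chunks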
None of this is relevant if your MTA is VERPing the outgoing message, because the MTA will send each VERPed message to a single recipient because each recipient's message has a different envelope from (separate SMTP MAIL FROM command).
Is this assumption of mine basically correct? If it is and if I didn't miss any important parts, I would like to point out a specific scenario in which the way "chunkify" is implemented and the way in which it is called based on delivery style (verp, bulk) and the setting of mm_cfg.SMTP_MAX_RCPTS might actually worsen overall mail delivery performance.
If I am totally wrong and "chunkify" serves a totally different purpose, please feel free to ignore this posting.
I wouldn't say you are totally wrong, but you are not totally correct. In any case, if you have 'tuning' suggestions, either in general or for specific cases, we would be glad to receive them and either incorporate them in the code or add them to the FAQ as appropriate.
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan
Mark Sapiro <mark@msapiro.net> wrote:
Stefan Förster wrote:
I don't think I'm qualified to discuss my concern (which involves VERP delivery using a specific MTA's VERP implementation on a highly I/O-saturated mail server) on this list.
See <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.012.htp> for some background.
I apologize, I should have found that entry myself. It sure makes for interesting reading. Was bandwidth really that expensive back in 2001? :)
The second is only partially correct. The purpose of grouping is to try to minimize the number of transactions between the outgoing MTA and the remote MTA. E.g., if all the aol.com addresses are passed from Mailman to the outgoing MTA in one SMTP transaction, then the MTA is able (if so configured, etc., ...) to deliver the message to aol.com in one transaction with the same recipients. It has nothing to do with trying to prevent blocking, as the chunks are delivered serially anyway.
I can fully understand the thinking behind this. OTOH, as the FAQ entry you mentioned points out, if you do parallel deliveries to a single location, you get greater throughput: "Cut network bandwidth but slow delivery to the larger domains." I remember a posting on this list about cuddling MTAs or trying to outsmart them ;-) [1]
In any case, if you have 'tuning' suggestions, either in general or for specific cases, we would be glad to receive them and either incorporate them in the code or add them to the FAQ as appropriate.
I don't know whether this is really a "tuning" suggestion. Let me please describe my specific setup in hopes that the reason for my request becomes clearer:
I recently had to take care of a server which was MX for a large company and their outgoing mail relay (a common setup). This machine was doing fine until another server, dedicated to handling the company's many mailing lists, broke down and it had to take care of mailing list deliveries, too. The added load of permanent VERPed deliveries was totally killing the machine in terms of I/O capacity, a situation which could finally be resolved yesterday by letting the MTA do the VERPing.
Now, the reason this change was so effective was that the number of (disk) I/O operations was greatly reduced: Where a posting to a list with 25000 subscribers would inevitably lead to the creation of 25000 queue files (even more if message content and control information are kept in different files) before the change, only 25000/500 were created afterwards (or "should have been", see next paragraph). Additionally (but this doesn't have all that much influence), a larger number of destinations becomes available to the MTA much faster, allowing its queue manager to make decisions in a "more educated" way.
Since the server is still quite close to its I/O limits, I would very much like to further reduce the number of I/O transactions. Therefore, I would like to see a configurable parameter, something like FLAT_RECIPIENT_CHUNKIFY: Right now, if mm_cfg.SMTP_MAX_RCPTS is not set <= 0, "chunkify" will always (i.e. for any real-world recipient distribution) return more than one chunk of recipients, even if the total number of recipients is smaller than mm_cfg.SMTP_MAX_RCPTS. With the addition of the aforementioned tunable, we could have chunkify simply split recipients into batches of mm_cfg.SMTP_MAX_RCPTS and not do any grouping. This way, a delivery to 25000 recipients would really only involve (25000/mm_cfg.SMTP_MAX_RCPTS)*(number of queue files per message) file creation operations.
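To illustrate, the flat variant I have in mind could be as simple as this (just a sketch; "chunkify_flat" is a name I made up):

    # Hypothetical flat chunking: no TLD grouping, just fixed-size batches.
    # A posting to 25000 recipients with a chunksize of 500 yields exactly
    # 50 full chunks.
    def chunkify_flat(recips, chunksize):
        return [recips[i:i + chunksize]
                for i in range(0, len(recips), chunksize)]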
Let me please emphasize that I'm fully aware that I will need better disks or a new server, or will have to get rid of VERP in the long run. I was just thinking that the addition of a tunable like that _might_ be a reasonable thing to propose if one absolutely wants to reduce the number of I/O operations during a "bulkdeliver" without having to set mm_cfg.SMTP_MAX_RCPTS to <= 0. But I think having one more option (and I don't think this is hard to code - even I could do that, and I read my first lines of Python on Friday evening) to adapt Mailman (or its delivery strategy) to a local site's needs would not really hurt. If this option defaults to false, nobody would even notice it.
OTOH, it is one more configuration parameter, and one should not add these things lightly, to avoid unnecessary configuration complexity.
If you feel this is an unreasonable request, please do say so. If not, I'd be glad to get some feedback from the people on this list regarding "optimal" delivery strategies to the local MTA in this (or other) cases.
Cheers Stefan
[1] This was meant to be a joke. Just to be safe.
Stefan Förster http://www.incertum.net/ Public Key: 0xBBE2A9E9
"When it comes to money, everyone belongs to the same religion." -- Voltaire
Stefan Förster wrote:
Now, the reason this change was so effective was that the number of (disk) I/O operations was greatly reduced: Where a posting to a list with 25000 subscribers would inevitably lead to the creation of 25000 queue files (even more if message content and control information are kept in different files) before the change, only 25000/500 were created afterwards (or "should have been", see next paragraph). Additionally (but this doesn't have all that much influence), a larger number of destinations becomes available to the MTA much faster, allowing its queue manager to make decisions in a "more educated" way.
Since the server is still quite close to its I/O limits, I would very much like to further reduce the number of I/O transactions. Therefore, I would like to see a configurable parameter, something like FLAT_RECIPIENT_CHUNKIFY: Right now, if mm_cfg.SMTP_MAX_RCPTS is not set <= 0, "chunkify" will always (i.e. for any real-world recipient distribution) return more than one chunk of recipients, even if the total number of recipients is smaller than mm_cfg.SMTP_MAX_RCPTS. With the addition of the aforementioned tunable, we could have chunkify simply split recipients into batches of mm_cfg.SMTP_MAX_RCPTS and not do any grouping. This way, a delivery to 25000 recipients would really only involve (25000/mm_cfg.SMTP_MAX_RCPTS)*(number of queue files per message) file creation operations.
I understand what you are saying, but I wonder what the real world difference would be. As currently written, chunkify returns at most 4 partially filled chunks. Granted, 4 is significantly bigger than one, but given that the MTA is VERPing the deliveries, it may ultimately create an outgoing queue entry for each recipient anyway, so the extra 3 on the inbound side doesn't seem that significant (and it might increase parallelism in the MTA).
Given your 25000 member list, and assuming SMTP_MAX_RCPTS = 500, you would have at most 54 chunks (and more likely 53 or 52) instead of 50.
In any case, if I were coding this, I would be inclined not to make it an option, but just to change chunkify so it still grouped, but continued to fill the last chunk of a group from the next group, so there would be at most one partial chunk.
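Something along these lines (an untested sketch; "groupkey" stands in for the existing TLD binning):

    # Keep the grouped ordering, but fill chunks continuously across
    # group boundaries so at most the final chunk is partially filled.
    def chunkify_grouped(recips, chunksize, groupkey):
        groups = {}
        for r in recips:
            groups.setdefault(groupkey(r), []).append(r)
        ordered = []
        for group in groups.values():
            ordered.extend(group)  # concatenate groups, no per-group flush
        return [ordered[i:i + chunksize]
                for i in range(0, len(ordered), chunksize)]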
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan
On 12.05.2008 at 23:20, Mark Sapiro wrote:
I understand what you are saying, but I wonder what the real world difference would be. As currently written, chunkify returns at most 4 partially filled chunks. Granted, 4 is significantly bigger than one, but given that the MTA is VERPing the deliveries, it may ultimately create an outgoing queue entry for each recipient anyway, so the extra 3 on the inbound side doesn't seem that significant (and it might increase parallelism in the MTA).
First of all, I just noticed that the official code does indeed only create at most 4 partially filled buckets. That's the problem when you have to jump in for someone else: My SMTPDirect.py contains 26 TLDs.
Two thoughts:
- Even with only four buckets, given a real-world distribution of recipient addresses, this is four times the I/O needed. The ratio gets better as the number of list subscribers grows, but if there are fewer recipients than SMTP_MAX_RCPTS, it's exactly 1:4.
- Why even split recipients the way it's done now at all? You have to either add new buckets (add new TLDs) or have all recipients outside the hard-coded TLDs thrown into the same bucket. I could understand it if you first created a list of the TLDs involved and sorted by those - though I don't know if it's a good idea if you run a really large list and have to examine all recipients...
I didn't understand what you said about VERPing and outgoing queue entries - surely any MTA will keep track of recipients on a per-message basis? As for parallelism, I think the best way to ensure fast delivery is to make all target destinations known to the MTA as fast as possible.
Given your 25000 member list, and assuming SMTP_MAX_RCPTS = 500, you would have at most 54 chunks (and more likely 53 or 52) instead of 50.
In any case, if I were coding this, I would be inclined not to make it an option, but just to change chunkify so it still grouped, but continued to fill the last chunk of a group from the next group, so there would be at most one partial chunk.
At the moment, I changed the code to simply return SMTP_MAX_RCPTS recipients per chunk - or all recipients if there are fewer than that. Hardcoded, not configurable. The way it is done now, I can't see any real advantages - especially living outside the U.S. Either improve the sorting algorithm (all TLDs, don't return partial chunks) or make it configurable to skip sorting altogether. Or at least that's what I feel would be an improvement. Have it default to flat chunking. It saves CPU time and I/O operations, and gives the MTA's queue manager more time to do its job.
Cheers Stefan
Stefan Förster http://www.incertum.net/ Public Key: 0xBBE2A9E9
Written on OSX. Who ate my ~/.signature?
Stefan Förster wrote:
On 12.05.2008 at 23:20, Mark Sapiro wrote:
I understand what you are saying, but I wonder what the real world difference would be. As currently written, chunkify returns at most 4 partially filled chunks. Granted, 4 is significantly bigger than one, but given that the MTA is VERPing the deliveries, it may ultimately create an outgoing queue entry for each recipient anyway, so the extra 3 on the inbound side doesn't seem that significant (and it might increase parallelism in the MTA).
First of all, I just noticed that the official code does indeed only create at most 4 partially filled buckets. That's the problem when you have to jump in for someone else: My SMTPDirect.py contains 26 TLDs. Two thoughts:
- Even with only four buckets, given a real-world distribution of recipient addresses, this is four times the I/O needed. The ratio gets better as the number of list subscribers grows, but if there are fewer recipients than SMTP_MAX_RCPTS, it's exactly 1:4.
True.
- Why even split recipients the way it's done now at all? You have to either add new buckets (add new TLDs) or have all recipients outside the hard-coded TLDs thrown into the same bucket. I could understand it if you first created a list of the TLDs involved and sorted by those - though I don't know if it's a good idea if you run a really large list and have to examine all recipients...
This predates my experience with Mailman. It is based on the statistics provided by Chuq and outlined in the FAQ. It's true that these statistics may only be applicable to lists with primarily US members, and may be outdated in any case, but I can't provide any more information on why it's done that way. Perhaps it's an idea that's outlived its usefulness.
I didn't understand what you said about VERPing and outgoing queue entries - surely any MTA will keep track of recipients on a per-message basis?
I wasn't thinking clearly. I'm sure you're correct.
As for parallelism, I think the best way to ensure fast delivery is to make all target destinations known to the MTA as fast as possible.
Given your 25000 member list, and assuming SMTP_MAX_RCPTS = 500, you would have at most 54 chunks (and more likely 53 or 52) instead of 50.
In any case, if I were coding this, I would be inclined not to make it an option, but just to change chunkify so it still grouped, but continued to fill the last chunk of a group from the next group, so there would be at most one partial chunk.
At the moment, I changed the code to simply return SMTP_MAX_RCPTS recipients per chunk - or all recipients if there are fewer than that. Hardcoded, not configurable. The way it is done now, I can't see any real advantages - especially living outside the U.S. Either improve the sorting algorithm (all TLDs, don't return partial chunks) or make it configurable to skip sorting altogether. Or at least that's what I feel would be an improvement. Have it default to flat chunking. It saves CPU time and I/O operations, and gives the MTA's queue manager more time to do its job.
I think you make a good argument. I'd like to hear from others on this.
--
Mark Sapiro <mark@msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California    better use your sense - B. Dylan
On May 12, 2008, at 6:57 PM, Mark Sapiro wrote:
This predates my experience with Mailman. It is based on the statistics provided by Chuq and outlined in the FAQ. It's true that these statistics may only be applicable to lists with primarily US members, and may be outdated in any case, but I can't provide any more information on why it's done that way. Perhaps it's an idea that's outlived its usefulness.
It may have. I see from the NEWS file that SMTP_MAX_RCPTS was introduced on the last day of 1998, and these chunking tricks were essential back then given the state of the art in MTAs. A decade is a long time in this field, so it makes sense to re-evaluate Mailman's MTA hand-off machinery.
What makes the most sense for us to do today? In principle, I would like to simplify this code rather than complicate it with yet another configuration option. So what use cases do we need to support? Can we get closer interaction with the upstream MTA to hand it the outgoing message and recipients more efficiently? The more we can push these decisions into the MTA, the better off Mailman will be.
As an example of an efficiency I'm likely to add to MM3: I believe I can avoid calculating the full recipient list and having to store it in memory or in a queue file. I think I can build it so that we only iterate through the recipients one at a time, as we're talking to the upstream MTA. That would be a big I/O and memory footprint savings on the Mailman side. Are there other things we can do to push most of the processing, chunking, and efficiency calculations into the MTA, where they belong (IMO)? To gain this benefit, we may have to ditch chunking, or only support what grouping is available in SQL.
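To sketch the idea (illustrative only, not actual MM3 code; it relies on smtplib's low-level mail/rcpt/data calls to stream recipients without ever building a list):

    import smtplib

    def stream_deliver(msgtext, envsender, recipient_iter, host='localhost'):
        # recipient_iter can be anything lazy, e.g. a database cursor, so
        # the full recipient list never has to exist in memory or in a
        # queue file.  Response-code checking is elided for brevity.
        conn = smtplib.SMTP(host)
        try:
            conn.mail(envsender)
            for rcpt in recipient_iter:
                conn.rcpt(rcpt)
            conn.data(msgtext)
        finally:
            conn.quit()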
We clearly need to support personalization[1] in Mailman, although it would be nice if there were extensions we could rely on in the MTA to push even that out farther. I'd like to hear from any Sendmail, Postfix, or Exim experts on this list to see if the chunking even makes sense any more.
I will note that we still get requests to batch deliveries to the outgoing MTA. People want to be able to throttle Mailman's outgoing bandwidth to X number of messages per hour or day. Ideally, this shouldn't be Mailman's task, but it's a valid request and one we should keep in mind.
I would be happy if we could determine the two or three strategies that address 80% of Mailman's users, package them up as simple-to-configure defaults, and then provide a robust plugin architecture that people can use to build all kinds of wacky strategies for the other 20%.
-Barry
[1] Technically, not VERP, but close enough for our purposes.
--On 13 May 2008 00:26:08 -0400 Barry Warsaw <barry@list.org> wrote:
We clearly need to support personalization[1] in Mailman, although it would be nice if there were extensions we could rely on in the MTA to push even that out farther. I'd like to hear from any Sendmail, Postfix, or Exim experts on this list to see if the chunking even makes sense any more.
Yes, it does. At least, it makes sense to offer the option. There are two cases - (a) Mailman delivers all its mail to a smarthost, (b) Mailman delivers some or all of its mail directly to the MX hosts of the recipients.
Exim, by default, doesn't care about the number of recipients per message. It can be configured to care using "recipients_max", and in that case will either defer the excess recipients or reject them ("recipients_max_reject"). Both these configuration variables are global. Fine tuning of similar limits is available in ACLs, though, and can be configured differently for a known Mailman host.
If Mailman is configured to route mail through a smarthost, then an Exim smarthost should be configured to cope with whatever Mailman throws at it.
Otherwise, as the Exim docs caution, you might fall foul of RFC 2821, section 4.5.3.1 (Size limits and minimums):
recipients buffer: The minimum total number of recipients that must be buffered is 100 recipients. Rejection of messages (for excessive recipients) with fewer than 100 RCPT commands is a violation of this specification. The general principle that relaying SMTP servers MUST NOT, and delivery SMTP servers SHOULD NOT, perform validation tests on message headers suggests that rejecting a message based on the total number of recipients shown in header fields is to be discouraged. A server which imposes a limit on the number of recipients MUST behave in an orderly fashion, such as to reject additional addresses over its limit rather than silently discarding addresses previously accepted. A client that needs to deliver a message containing over 100 RCPT commands SHOULD be prepared to transmit in 100-recipient "chunks" if the server declines to accept more than 100 recipients in a single message. [...]
--
Ian Eiloart
IT Services, University of Sussex x3148
Barry Warsaw <barry@list.org> wrote:
On May 12, 2008, at 6:57 PM, Mark Sapiro wrote:
This predates my experience with Mailman. It is based on the statistics provided by Chuq and outlined in the FAQ. It's true that these statistics may only be applicable to lists with primarily US members, and may be outdated in any case, but I can't provide any more information on why it's done that way. Perhaps it's an idea that's outlived its usefulness.
It may have. I see from the NEWS file that SMTP_MAX_RCPTS was introduced on the last day of 1998, and these chunking tricks were essential back then given the state of the art in MTAs. A decade is a long time in this field, so it makes sense to re-evaluate Mailman's MTA hand-off machinery.
As Ian Eiloart already pointed out, Mailman MUST comply with RFC 2821. I'd even say we should lower the default setting of SMTP_MAX_RCPTS to 100 (marking this change in the NEWS or changelog file, and adding MTA-specific instructions to the README.<MTA> files).
We clearly need to support personalization[1] in Mailman, although it would be nice if there were extensions we could rely on in the MTA to push even that out farther. I'd like to hear from any Sendmail, Postfix, or Exim experts on this list to see if the chunking even makes sense any more.
My inquiry on postfix-users on this matter hasn't yielded any satisfying answers yet, and I'm by no means a Postfix "expert", but I have several years of experience and I can read the documentation, available at http://www.postfix.org/SCHEDULER_README.html - especially the part about how the preemptive scheduler picks delivery jobs. From what's written there, it seems Postfix's queue manager - qmgr - is able to group recipients on a roughly "per destination" basis even if those recipients arrive in different messages. That means, if the third batch contains an entry for "foo@aol.com.invalid" and the sixth batch has a recipient "bar@aol.com.invalid", by the time qmgr begins to deliver to "@aol.com.invalid" it will correctly group delivery to those two recipients, applying concurrency feedback (see the first part of SCHEDULER_README.html) as needed. Therefore, in an environment with zero latency, zero message delivery time, an unlimited number of delivery agents and an unlimited amount of memory for recipient entries (well, the last two don't need to be "unlimited", just large enough that every recipient of every incoming message can be processed instantly), we would lose performance by NOT grouping those "@aol.com.invalid" recipients.
In reality, the number of delivery agents, the maximum number of concurrent connections to a destination and the number of recipients which can be kept in Postfix's active queue are limited, and queue file generation, setting up a TCP connection and transmitting a message all take more than zero time (the last two steps may even fail!). If in the above example batch six enters the active queue before all delivery to "@aol.com.invalid" is done, the only drawback of NOT grouping recipients would be the overhead of either keeping the actual message content in memory twice or doing two "read file" operations to get that content into memory for transmission.
OTOH, if by grouping we increase the overall number of "DATA" commands (and therefore messages) transmitted to Postfix by Mailman, we might have even higher overhead.
Please don't take that statement for granted - I didn't write qmgr. And to be honest, I haven't even started to think about network bandwidth usage. I strongly believe chunking the way it's done now doesn't make sense anymore, unless you are running some old qmail ;)
I will note that we still get requests to batch deliveries to the outgoing MTA. People want to be able to throttle Mailman's outgoing bandwidth to X number of messages per hour or day. Ideally, this shouldn't be Mailman's task, but it's a valid request and one we should keep in mind.
I would be happy if we could determine the two or three strategies that address 80% of Mailman's users, package them up as simple-to- configure defaults, and then provide a robust plugin architecture that people can use to build all kinds of wacky strategies for the other 20%
Ok, from my point of view, I'd say 80% of users are made up of two groups:
The first group doesn't care at all. They want their MTA to do the work, they fine tune it (or don't), but basically, as long as their machines don't crash, they couldn't care less about the order in which messages are delivered to the MTA. For those people, Mailman needs a way to get messages out to the downstream MTA quickly, which implies two constraints:
- The recipient list must be stored in a way that allows quick generation of the necessary SMTP commands.
- Either the MTA or Mailman must have an option similar to SMTP_MAX_RCPTS to avoid unnecessary delays, and another option to avoid exhausting the maximum number of concurrent connections the MTA will accept.
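To make the hand-off for this first group concrete, here is a minimal sketch of what the delivery loop could look like (assuming a local MTA listening on localhost and flat SMTP_MAX_RCPTS-sized batches; per-recipient error handling elided):

    import smtplib

    SMTP_MAX_RCPTS = 500  # stay below the MTA's per-transaction limit

    def hand_off(msgtext, envsender, recips, host='localhost'):
        conn = smtplib.SMTP(host)
        try:
            for i in range(0, len(recips), SMTP_MAX_RCPTS):
                # One SMTP transaction per batch, reusing a single
                # connection so we don't pile up concurrent connections.
                conn.sendmail(envsender, recips[i:i + SMTP_MAX_RCPTS],
                              msgtext)
        finally:
            conn.quit()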
Advantages of this approach:
- It is never Mailman's fault.
- All "tuning" can be done in the MTA's configuration, if "tuining" is needed.
- Recipient ordering is only donw once, saving a (tiny) amount of CPU time.
Drawbacks of ths approach:
- Mailman has absolutely zero control about the order or time messages are delivered to their final recipients.
- If, for any reason, delivery to the MTA fails, we (actually, you *g*) need to have code in place to handle a queue of outgoing messages anyways.
The second group of those 80% wants finer control, and from what you describe, they all want "per timeslice" behaviour. Delivering only a certain number of messages, delivering messages only during off-hours (I don't know if that's the right word - I mean evening and early morning), delivering only a certain volume of messages: all of those are basically timeslice constraints. To handle those, Mailman needs to:
- Know what time it is. (Obviously!)
- Considering that all delivery strategies are <N>/time_slice, Mailman obviously needs to know <N>, which can be:
  - the total number of messages, as in "multiply every outgoing message by the number of list subscribers and sum that up"
  - for volume-based delivery, the total number of messages and at least an educated guess at their size
  - the number of recipients for a single destination
  - ...
So if Mailman knew the concept of a "timeslice" and the characteristics of outgoing mail, you would already have a very flexible interface in place allowing refined control of message flow. OTOH, aside from some very special cases (UUCP, ETRN-only), Mailman doesn't know the internal state of the MTA it delivers to. If there were, for example, a network outage and the MTA queued the contents of 10 timeslices until the network was restored, all that refined flow control would have been for nothing.
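As a toy illustration of such a timeslice constraint (all names are hypothetical, and a real implementation would need to persist its state across qrunner restarts):

    import time

    class SliceThrottle:
        # Allow at most max_per_slice deliveries per slice_seconds.
        def __init__(self, max_per_slice, slice_seconds):
            self.max_per_slice = max_per_slice
            self.slice_seconds = slice_seconds
            self.slice_start = time.time()
            self.sent = 0

        def wait_for_slot(self):
            # Block until the next timeslice if the current one is used up.
            if self.sent >= self.max_per_slice:
                remaining = (self.slice_start + self.slice_seconds
                             - time.time())
                if remaining > 0:
                    time.sleep(remaining)
                self.slice_start = time.time()
                self.sent = 0
            self.sent += 1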
Honestly, I don't know what kind of "wacky" things the remaining 20% of Mailman users could want, but I can think of some things which are normally done in the MTA, for example concurrency control towards certain destinations. I have no clue how that could be achieved, because, again, Mailman does NOT know the state of the MTA it delivers to, and implementing an ESMTP engine and queue manager in Mailman seems like doing the job twice.
Cheers Stefan
P.S.: While writing this message, I didn't have access to a dictionary. I apologize in advance.
Stefan Förster http://www.incertum.net/ Public Key: 0xBBE2A9E9
218 DSA/RSA host keys to go... Worst. Nightmare. Ever.