[Mailman-Developers] (no subject)
J C Lawrence
Mon, 11 Dec 2000 23:15:31 -0800
On Mon, 11 Dec 2000 20:49:36 -0800
Chuq Von Rospach <email@example.com> wrote:
> At 7:51 PM -0800 12/11/00, J C Lawrence wrote:
>> ObTheme: All config files should be human readable unless those
>> files are dynamically created and contain data which will be
>> easily and automatically recreated.
> ObTheme: All configuration should be possible via the web, even if
> the system is misconfigured and non-functional. Anything that can
> NOT be safely reconfigured without breaking the system should not
> be configurable via the web. (in other words, anything you can
> change, you should be able to change remotely, unless you can
> break the system. If you can break the system, you shouldn't be
> allowed near it trivially...)
I'm currently working under the generous assumption that it's
possible to cook up a web interface design for almost anything, so
I'm punting there for now.
>> 1) Using multiple simultaneous processes/threads to parallelise a
>> given task.
>> 2) Using multiple systems running in parallel to parallelise a
>> given task.
>> 3) Using multiple systems, each one dedicated to some portion(s)
>> or sub-set of the overall task (might be all working in parallel
>> on the entire problem (lock contention! failure modes!)).
> that's my model perfectly, although I think for 2 and 3
> it's cleaner architecturally to go to divesting and distributing
> functionality before 'clustering'. In fact, I'm not sure
> clustering (which I'll use to term multiple Mailman systems
> running in parallel) is needed until a system gets really, really
> large, once you realize that the primary resource eaters (like
> delivery) can effectively be infinitely distributed.
Yup, I've accounted for that so far in the design.
> I'm not sure how big a Mailman system you'd need to require
> parallelizing the core process, as long as you can divest off
> other pieces to a farm that could grow without bounds. So maybe we
> don't need that next (complicated) step: make it parallelized
> and distributable for everything except that core control process,
> but manage the complexity of that control process to keep
> everything out of it except the absolute necessities.
I'm working on the principle that there is no core process, and there
are no musical conductors or other time beaters, just discrete nodes
and processes competing for resources.
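To make that concrete, here's a rough sketch of a conductor-less queue
runner (all names and paths here are illustrative, not actual Mailman
code): each worker claims a message by atomically renaming a spool
file, so N independent runners can compete for one queue with no
central scheduler at all.

```python
import os

def claim_next(spool, worker_id):
    """Try to claim one queued message; return its new path or None.

    Hypothetical sketch: the spool directory layout and the
    '.claimed' suffix convention are assumptions, not Mailman's.
    """
    for name in sorted(os.listdir(spool)):
        if name.endswith(".claimed"):
            continue  # already claimed by some worker
        src = os.path.join(spool, name)
        dst = src + "." + worker_id + ".claimed"
        try:
            os.rename(src, dst)  # atomic on POSIX: only one claimant wins
            return dst
        except OSError:
            continue  # another worker renamed it first; try the next file
    return None
```

Any number of these runners can point at the same spool; the rename is
the only "lock", so there's nothing resembling a conductor to fail.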
>> Observation: MLMs are primarily IO bound devices, and are
>> specifically IO bound on output. Internal processing on mail
>> servers, even given crypto authentication and expensive
>> membership generation processes (e.g. heavy SQL DB joins etc.), is
>> an order of magnitude smaller a problem than just getting the
>> outbound mail off the system.
> some of that is the MUA's problem, actually, but they get tied
> together. You don't, for instance, want an MLM that will dump 50K
> pieces of email an hour into the queues of an MUA that can only
> process 40K...
I think you mean MTA above and below.
> But in general, you're correct. Especially if you define DNS
> delays and SMTP protocol delays caused by the receiving machine to
> be "output" (grin)
Or just the simple FS commit requirements for MTA spools.
>> Sites with large numbers of lists with large numbers of members
>> (and presumably large numbers of messages per list) are the
>> pessimal case, and not one Mailman is currently targeting.
> but if you define the distribution capabilities correctly, this
> case is solved by throwing even more hardware at it, and the
> owners of this pessimal case presumably have a budget for it. If
> you see someone trying to run Sourceforge on a 486 and a 128K DSL
> line, you laugh at them.
True, except that lock contention becomes a major problem and
scheduling strategies become critical.
>> Observation: Traffic bursts are bad. Minimally the MLM should
>> attempt to smooth out delivery rates to a given MTA to be no
>> higher than N messages/time.
> The obverse of that is that end-users seriously dislike delays,
> especially on conversational lists. It turns into the old "user
> expectation" problem -- it's better to hold ALL mail for 15
> minutes so users come to expect it than to normally deliver mail
> in 2 minutes, except during the worst bulges... But in general,
> the MLM should deliver as fast as it reasonably can without
> overloading the MUA, which implies some kind of monitoring setup
> for the MUA, or some user-controlled throttling system. The
> latter, unfortunately, implies teaching admins how to monitor and
> adjust -- a support issue. The former implies writing an interface
> for every MTA -- a development AND support issue.
My intent so far is just "deliver no more than N messages per minute"
per outbound queue runner. It knocks the peaks off the problem, and
the base structure is easy to extend from there (and I don't want to
think about that now).
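A minimal sketch of that per-runner throttle (the class and its names
are mine, not Mailman's): pure pacing, no token bucket, just enough to
knock the peaks off.

```python
import time

class Throttle:
    """Cap deliveries at n_per_minute by sleeping between sends.

    Deliberately simple: each call to wait() paces the caller so no
    more than n_per_minute calls complete per minute.  The clock and
    sleep functions are injectable so the behaviour can be tested.
    """

    def __init__(self, n_per_minute, sleep=time.sleep, clock=time.monotonic):
        self.interval = 60.0 / n_per_minute  # seconds between sends
        self.sleep = sleep
        self.clock = clock
        self.next_ok = clock()

    def wait(self):
        """Block until the next send is allowed."""
        now = self.clock()
        if now < self.next_ok:
            self.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval
```

Each outbound queue runner would own one Throttle and call wait()
before every delivery; running N runners in parallel then gives you
N*n_per_minute aggregate, which is exactly the knob you'd extend later.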
>> 1) Receipt of message by local MTA
> 1a) passthrough of message via a security wrapper from MTA to list
> server... (I think it's important we remember that, because we
> can't lose it, and it involves a layer of passthrough and a
> process spawning, so it's somewhat heavyweight -- but ...)
I should note that my base design is very heavy in terms of process
forks (which happen to be quite lightweight under Linux, but that's
another matter). The general structural approach I'm taking is:
There's a directory full of scripts/programs.
Run them all, in directory sort order, on this message to
determine if we should do XXX with it.
Now the default case could have those directories empty, meaning
that Mailman will default to internal/cheap implementations, but it's
much easier to just have default implementations of the scripts for
those directories and then punt normally.
ObNote: Of course the default scripts could be Python scripts and
could be processed in-line as modules rather than forking children.
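A sketch of that directory-of-scripts dispatcher, run-parts style (the
exit-code convention and names are my assumption, nothing Mailman
defines): every executable in the directory runs in sort order against
the message, and any one of them can veto.

```python
import os
import subprocess

def run_hooks(hook_dir, message_bytes):
    """Run every executable in hook_dir in directory-sort order,
    feeding it the message on stdin.

    Convention (assumed, not Mailman's): exit status 0 means
    "approve", anything else vetoes the message and stops the run.
    """
    for name in sorted(os.listdir(hook_dir)):
        path = os.path.join(hook_dir, name)
        if not os.access(path, os.X_OK):
            continue  # skip non-executables (READMEs, backups, etc.)
        result = subprocess.run([path], input=message_bytes)
        if result.returncode != 0:
            return False  # any hook can veto the message
    return True
```

The default install could just ship trivial always-approve scripts in
each directory, matching the "default implementations and punt" idea.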
>> 2) Receipt by list server 3) Approval/editing/moderation 4)
>> Processing of message and emission of any resultant message(s) 5)
>> Delivery of message to MTA for final delivery.
> 6) delivery of message to non-MTA recipients (the archiver,
> the logging thing, the digester, the bounce processor....)
I'm actually doing archiving by injecting new messages back into the
inbound queue which are addressed to the archiver. Digest
processing should probably be handled this way, but I've currently
got it as a pre-post script (not entirely keen on that). Bounces
are handled entirely OOB to the rest of the MLM, rather similarly in
fact to request messages.
>> #1 is significant only because we can rely on the MTA to
>> distinguish between valid list-related addresses and non-list
>> addresses.
> although one thing I've toyed with is to give a subdomain to the
> MLM, and simply pass everything to it (in sendmail terms, using
> virtusertable to pass @list.foo.bar to firstname.lastname@example.org). Then you
> take the MLM out of having to know what lists exist, and out of
> the administrative need to keep that interface in sync.
There are a couple other list servers that demand that approach.
The problem is that it really doesn't fit well with people/sites
that don't control their own DNS.
> I've more or less decided that when I rewrite my internal
> corporate mail list, I'll do that rather than generate alias
> listings (for, oh, 12,000 groups) and the hassles and overheads of
> all that. That'll be especially useful if we do what I hope, which
> is set it up so the server has no data at all, but authenticates
> via LDAP to get list information on demand out of the corporate
> databases. There are some definite advantages to not knowing
> whether something exists until the need to know exists -- and as
> Mailman starts edging towards interfacing to non-Mailman data
> sources for list information, that ability grows in importance.
FWIW the design can do that right now. A message comes in, various
parameters are extracted from it, and the parameters are handed to
a directory of scripts, the accumulated stdout of which forms the
distribution list for that message. The distribution list can be
passed thru a pre-processor (dupe removal, domain sorting, MX
sorting, whatever) to do any final processing of the distribution
list before attaching it to the message and putting it in the
outbound queue.
So, want LDAP? Want SQL? Want local DBM? Want all three? No
problem.
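For illustration, the final pre-processing pass over that accumulated
stdout might look like this (a sketch under my own conventions; the
real pipeline could add MX sorting, suppression lists, and whatever
else a site plugs in):

```python
def preprocess(raw_lines):
    """Turn the accumulated stdout of the membership scripts into a
    final distribution list: strip blanks, drop duplicates, and sort
    by domain so recipients at the same domain batch together.

    Minimal sketch -- real address handling (case folding of local
    parts, MX lookups, etc.) is deliberately glossed over here.
    """
    seen = set()
    addrs = []
    for line in raw_lines:
        addr = line.strip().lower()
        if addr and addr not in seen:
            seen.add(addr)
            addrs.append(addr)
    # Sort by domain first, then mailbox, so per-domain batches
    # come out contiguous for the outbound queue runners.
    return sorted(addrs, key=lambda a: (a.rsplit("@", 1)[-1], a))
```

Because the scripts only have to write addresses to stdout, LDAP, SQL,
and local DBM backends all look identical from this side of the pipe.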
> 6) is the processing needed to support other functions that act
> on messages. The idea is that instead of delivering to the MTA, we
> have a suite of functions that deliver the message to whatever
> needs to process it. Those can be asynchronous and don't need to
> be as timely as (5), and have different enough design needs that I
> split them out from the MTA delivery (although traditionally,
> stuff like digests are managed by doing an MTA transfer out of the
> MLM and back in to a different program...)
I have one set of queue processing functions geared solely for list
posts. There are then several parallel queue sequences for
processing non-posts (such as bounces, command requests, etc).
Additionally, you can trivially set things up so that post
explosions occur on N machines, while command processing and bounce
processing occur only on, say, machine X (which is perhaps on the
other side of the DMZ and has access rights to your internal backing
store).
I don't see the different queues needing markedly different designs,
but needing their process support to be cleanly divisible.
divisible. The base structures end up markedly similar after that.
> It also assumes that these non-delivery things are separate
> processes from the act of making them available to those things,
> to keep (6) lightweight as possible.
Process fork overhead is a problem I've not confronted yet. It's
going to need looking at. That and distributed lock contention.
Both are pretty ugly with my current model.
> and besides, they are basically independent, asynchronous
> processes that don't need to be managed by any of the core logic,
> other than handing messages into their queue and making sure they
> stay running. same with, IMHO, storing messages for archives,
BTW I'd like to have the MLM archive messages such that a member can
request, "SEND ME POST XXX" and have the MLM send it to him. Ditto
for digests. This is in addition to any web archiving.
> storing messages for digests, updating archives, processing
> digests (but the processed digest is fed back into the core logic
> for delivery), and whatever else we decide it needs to do that
> isn't part of the core, time-sensitive code base.
I've been thinking about this. I *REALLY* don't think there's much
time-sensitive code in an MLM. There's a lot of data you want to get
out of the way as quickly as possible, because if you don't it's just
going to build up and make a bigger problem, but the actual speed
with which individual bits prior to the explosion occur seems
arbitrary outside of a latency viewpoint.
Okay, it takes N ticks for a post to start exploding versus it takes
5N ticks to start exploding? Am I really going to care when
handling the explosion takes several hundred N? Even with say 50
forks per inbound list post prior to explosion, the comparative
overhead compared to explosion is still trivial.
> (in fact, there's no reason why you couldn't have multiple flavors
> of these things, feeding archives into an mbox, another archiver
> into mhonarc or pipermail, something that updates the search
> engine indexes, and text and MIME digesters... by turning them
> into their own logic streams with their own queues, you
> effectively have just made them all plug-in swappable, because
> you're writing to a queue, and not worrying about what happens
> once its there. you merely need to make sure it goes in the right
> queue, in the approved format.
Hurm. Good point. I like that idea. Just inject messages back
into the system targeted for the appropriate stream. Nice.
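A sketch of that re-injection convention (the JSON envelope and the
directory layout are purely illustrative, nothing Mailman actually
specifies): producers and consumers agree only on the queue directory
and the envelope format, so the consumer behind each queue stays
swappable.

```python
import json
import os

def inject(queue_dir, stream, body, name):
    """Re-inject a message into the system targeted at a named stream
    (archiver, digester, search indexer, ...).

    Hypothetical convention: one subdirectory per stream, one JSON
    envelope per message, written via a temp file so consumers never
    see a half-written message.
    """
    stream_dir = os.path.join(queue_dir, stream)
    os.makedirs(stream_dir, exist_ok=True)
    path = os.path.join(stream_dir, name + ".msg")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"stream": stream, "body": body}, f)
    os.rename(tmp, path)  # the rename makes the message visible atomically
    return path
```

Swapping pipermail for mhonarc, or adding a search-engine indexer,
then means pointing a new consumer at a queue directory -- the
producers never change.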
Heck, perhaps I should shove this thing as-is into Barry's ZWiki.
> We're visiting the relatives. Cover us.
I missed you. Please wait while I reload.
J C Lawrence email@example.com
---------(*) : http://www.kanga.nu/~claw/
--=| A man is as sane as he is dangerous to his environment |=--