[Mailman-Developers] UI for Mailman 3.0 update
barry at list.org
Sat Jun 19 23:31:34 CEST 2010
On Jun 18, 2010, at 01:46 PM, Stephen J. Turnbull wrote:
>Barry Warsaw writes:
> > It's an interesting idea, but I'm not quite sure how a webserver pipeline
> > would work. The way the list server pipeline works now is by treating
> > messages as jobs that flow through the system. A web request is kind of a
> > different beast.
>Why? Abstractly, both web requests and mail messages are packages of
>data divided into metadata and payload. You line up a sequence of
>Handlers, each one looks at the metadata and decides whether it wants
>a crack at the package or not. If no, back to the pipeline. If yes,
>it may process metadata or payload (possibly modifying them), then
>decide to (a) do something final (reject/discard it or send something
>back out to the outside world), or (b) punt it back to the pipeline
>for further processing.
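That handler-pipeline pattern could be sketched roughly like this (the `Package` and `Handler` names and methods here are illustrative, not Mailman's actual API):

```python
# A minimal sketch of the pipeline-of-handlers pattern: each handler
# inspects a package's metadata, optionally processes it, and may take
# final action (reject/discard/send), which stops the pipeline.

class Package:
    """A unit of work: metadata plus payload."""
    def __init__(self, metadata, payload):
        self.metadata = metadata
        self.payload = payload
        self.done = False          # set by a handler that takes final action

class Handler:
    def wants(self, pkg):
        """Decide whether this handler gets a crack at the package."""
        return True

    def process(self, pkg):
        """Process metadata/payload; may set pkg.done to take final action."""
        raise NotImplementedError

class MarkChecked(Handler):
    """Example handler: stamp the metadata once, then pass the job along."""
    def wants(self, pkg):
        return 'checked' not in pkg.metadata

    def process(self, pkg):
        pkg.metadata['checked'] = True

def run_pipeline(handlers, pkg):
    for handler in handlers:
        if not handler.wants(pkg):
            continue               # punt it back to the pipeline
        handler.process(pkg)
        if pkg.done:               # final action taken: stop here
            break
    return pkg
```

The same loop works whether the package started life as an email message or an HTTP request, which is the abstraction being argued for here.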
The primary difference is that with email jobs, we can handle them
asynchronously and there's little demand to handle them quickly. Web
requests are synchronous and must be handled immediately, or the browser will
time out. It's certainly possible to turn a web request into an asynchronous
job, but it's much more complicated, and we don't often have the same access
to the underlying jobs. With the email pipeline, the MTA hands us a message
and that (plus an accompanying metadata dictionary) *is* the job. Usually
with a web request we don't have quite the same thing, even in a WSGI
environment.
>You can also keep state across requests. If it's request-specific, the
>nature of both HTTP and email requires cookies (aka one-time keys).
>So, what's so different?
I'm not sure I follow about email requiring cookies.
>It seems to me that it might also make communication between the
>webserver and the mail server(s) easier to organize (eg, when the user
>sends email to list-subscribe, then confirms by clicking on the web URL
>in the response) if these "jobs" had a unified format.
I don't quite see how that follows.
>It's possible that having a thousand handlers all looking at
>everything would be horribly inefficient; in that case you could divide
>up into subpipelines (in the Linux kernel firewall they're called
>chains), with master Handlers in the toplevel pipeline dispatching to
>lower level subpipelines.
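The subpipeline idea could be sketched as a dispatcher in the top-level pipeline that routes a job to a named chain (all names here are illustrative, not kernel or Mailman code):

```python
# A rough sketch of dispatching from a top-level pipeline to named
# subpipelines ("chains"), so handlers only see jobs routed to them.
# A job is a plain dict: {'metadata': {...}, 'payload': ...}.

def make_dispatcher(subpipelines):
    """Return a top-level handler that routes a job to a subpipeline
    selected by job['metadata']['chain'], if any."""
    def dispatch(job):
        chain = subpipelines.get(job['metadata'].get('chain'))
        if chain is None:
            return job             # no matching chain: pass it through
        for handler in chain:
            job = handler(job)
        return job
    return dispatch

def add_seen_header(job):
    """Example subpipeline handler: mark the job as processed."""
    job['metadata']['seen'] = True
    return job
```

Only jobs whose metadata names the `incoming` chain would be touched by `add_seen_header`; everything else flows through the top level untouched, which is how the cost of "a thousand handlers" gets contained.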
>I thought that was how Mailman 3 was organized. I know that Mailman
>2's mail pipeline has inspired a lot of my thoughts about how
>Roundup's internal implementation could be improved. (Roundup
>"auditors" and "reactors" look a lot like Handlers.) I guess I'd
>better go look more closely at Mailman 3.
Mailman 3 has the same basic architecture, except that the rule-checking
handlers live in a different pipeline and have slightly different semantics
and interfaces. Please do take a look!