This very message came to me with the following header:
All my bounces come to the list admin address, set in
the admin webpage, in the second field of General Options.
Do you have that set to something, and bounces still come to root?
> 2. Bounces are sent to the poor postmaster instead of a -admin address.
> I'm not entirely certain, but I think an Errors-To: header or something
> like that in all Mailman messages might allow one to distribute that load
> On Tue, May 09, 2000 at 09:12:49PM +0200, Harald Meland wrote:
> > > > Please write a patch which puts the string "Cookie could not be set" on the
> > > > web page so that I can see that pressing submit will not work :-)
> > > I think that's a good point... it would save some user questions if
> > > MM tells exactly why the authorisation failed.
> > While I agree that such a warning would be nice, I don't think it's
> > possible to do such things with cookies.
> it's possible to set a test cookie to see if cookies are
Ahh, I didn't even think of using multiple cookies :)
If I understand you correctly, you're proposing something like this:
Whenever Mailman is about to write a login page (i.e. the user is
not already authenticated), it first issues a
Set-Cookie: Mailman_cookie_test="This cookie is only used to test whether your browser will be able to authenticate with Mailman"; Version=1
HTTP header (if other Mailman cookies set attributes like Path or
Domain, the test cookie should mirror these to make the test reflect
reality).
Next, once the user has pressed the "Let me in..." button, Mailman
checks whether the Cookie has been sent back. If it hasn't,
authentication fails (as the user won't be able to make any changes
anyway), and Mailman instructs the user to enable Cookies in her
browser before retrying login.
If the test Cookie is present, Mailman should issue a
Set-Cookie: Mailman_cookie_test="clickety click"; Max-Age=0; Version=1
HTTP header (to delete the test cookie, so that the test cookie
isn't later confused with test cookies for login attempts at other
lists).
Finally, Mailman proceeds with password authentication as usual,
possibly resulting in an authentication cookie.
Hmmmm... I guess the test cookie should contain info on what list it
is for, as well.
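A rough sketch of that flow in Python (illustrative only: the cookie name, the Path value, and the helper functions are my assumptions, not actual Mailman code):

```python
from http.cookies import SimpleCookie

TEST_COOKIE = "Mailman_cookie_test"  # name taken from the proposal above

def issue_test_cookie(listname):
    # Sent alongside the login page, before any authentication attempt.
    # The value embeds the list name so that login attempts against
    # different lists don't confuse each other.
    c = SimpleCookie()
    c[TEST_COOKIE] = listname
    c[TEST_COOKIE]["path"] = "/mailman"  # mirror whatever Path the real cookies use
    return c.output()

def check_test_cookie(environ, listname):
    # Called when the user presses "Let me in...".  If the browser never
    # echoed the cookie back, authentication cannot succeed anyway.
    c = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    morsel = c.get(TEST_COOKIE)
    return morsel is not None and morsel.value == listname

def delete_test_cookie():
    # Max-Age=0 asks the browser to discard the test cookie immediately.
    c = SimpleCookie()
    c[TEST_COOKIE] = ""
    c[TEST_COOKIE]["max-age"] = 0
    return c.output()
```

If check_test_cookie() returns false, Mailman would show the "enable cookies and retry" page instead of attempting password authentication.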
Have I understood you correctly? Does anyone think that implementing
this (apart from my misunderstandings, of course :) would be a bad
idea?
And, while we're talking about cookies: Does anyone know whether
switching from the cookie attribute "Expires" (which was part of the
original Netscape cookie proposal) to the RFC2109 cookie attribute
"Max-Age" is likely to cause any problems?
I've had a look at Cookie.py, and the value part of the Expires
attribute isn't enclosed in double quotes (in accordance with the
original Netscape cookie proposal), which I believe might confuse
Mailman in some situations where the browser sends back more than one
cookie.
Of course, if there are any (major) browsers in use out there that
don't understand Max-Age, it would be a bad idea to change Mailman.
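If Max-Age support does turn out to be spotty, one belt-and-braces option is to emit both attributes and let each browser honor the one it understands (a sketch using Python's standard cookie module; the cookie name and date are illustrative):

```python
from http.cookies import SimpleCookie

def session_cookie_header():
    # Emit both the RFC 2109 Max-Age and the original Netscape Expires
    # attribute; a browser that ignores one should still honor the other.
    c = SimpleCookie()
    c["session"] = "abc123"
    c["session"]["max-age"] = 3600
    c["session"]["expires"] = "Thu, 01 Jun 2000 12:00:00 GMT"
    return c.output()
```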
OS: Solaris 5.7
I don't know if this has been fixed in the CVS version, but I thought I'd
submit it anyways. There seems to be a small authentication bug with the
chunking in the membership management page. The page will load up correctly
the first time it is accessed with the first chunk size shown. However, when
you request a different chunk, it requires a reauthentication and then gives
the chunk=0 page instead of the requested chunk. If you then click on the
alternate chunk it loads fine. However, the additional click and
authentication is annoying.
Ted Cabeen http://www.pobox.com/~secabeen secabeen(a)pobox.com
Check Website or finger for PGP Public Key secabeen(a)midway.uchicago.edu
"I have taken all knowledge to be my province." -F. Bacon cococabeen(a)aol.com
"Human kind cannot bear very much reality."-T.S.Eliot 73126.626(a)compuserve.com
On Wed, 24 May 2000 20:47:21 -0700
Chuq Von Rospach <chuqui(a)plaidworks.com> wrote:
> At 6:38 PM -0700 5/24/2000, J C Lawrence wrote:
>> True. My curiosity however is what MTA's do MX sorting, and more
>> particularly, MX collapsing (eg for two different targets that
>> share an MX's among their lowest level). The potential gains
>> there are likely not huge, but could be (guesstimate) noticable
>> for high volume servers with broad standard deviations in their
>> target lists.
>> I'll have to check into that some time.
> but -- as the experts say, the first $500 buys you 90% of the
> stereo response, and the rest of the money goes into getting you
> as close to 100% as you can get.
Umm, true. Looking at it again, and doing a quick check of my user
base's MXing, I suspect we're dealing with a less than 1% gain.
Bigger fish are available. Methinks my brain was farting.
> MX sorting is definitely far up into that 90% range,
> computationally and time expensive, and lots of other stuff can be
> done first, with more gain, and less effort.
I don't believe that a list server has any business handling MX
sorting unless it is also taking responsibility for being the list
MTA. As Mailman isn't, it's a moot point.
> Maybe one thing we need is a definition of what Mailman is and
> what it isn't. Some kind of target for the size of lists it wants
> to reasonably support. If it's 5,000 users, it doesn't matter what
> you do. If it's 50,000, or 500,000, you definitely have different
> problems.
While I really have no say here, were I Barry and Co I'd be
comfortable with targeting Mailman as able to handle a mid/high
six-digit subscriber base list on mid-range PC-class hardware given
suitable system configuration. That wouldn't be the target of
course, just the "it must be physically able to work here" metric.
> (being able to handle a moderately busy 25,000 user list, say
> 15-30 messages a day...
Average traffic levels are never the problem. It's the bursts you
have to worry about, especially given the enforced latency of a
moderated list and the resultant grouping of broadcasts. I
usually end up moderating/approving messages in groups of 5 making
bursts of ~5K messages to the MTA (current largest list has a little
under 1K subscribers). It is the burst aspect that's possibly the
main reason the MTA delivery process needs to be made asynchronous
from the rest of the list server.
> It'd be nice to be able to say "5 million subscribers in 2
> minutes!", but focus on a solid "do most things for most folks"
> now, and add the high performance/huge list support in 2.5. But
> leave the hooks in, so we don't have to rewrite later....)
Were delivery to the MTA separated from the receipt or CGI process
(i.e. mail is received, the RCPT list attached to it, and the tuple
placed on a queue for background processing via forked process or
cron job), we wouldn't be having this discussion. It's a fairly
invasive change to the current Mailman architecture, but making the
whole receipt/broadcast aspect asynchronous offers some really
pleasant future avenues.
>> While its a cheap logic, its easy to note that none of the very
>> high volume commercial email sites out there are based on
>> Sendmail (Critical Path, Hotmail, Onelist, EGroups, etc).
> Valid points.
> Postfix looks like a *real* win, but until I run it through its
> paces, I won't use it.
Exactly where I'm at on it. I'm about to roll my desktops over to
it, and let it stew there for a couple weeks.
> I'm doing 400-500,000 an hour out of my mail system without trying
> too hard, using sendmail 8.9.3, and peaks approaching 900K.
My traffic on Kanga.Nu (hobby lists: http://www.kanga.nu/lists/listinfo/)
is bursty and low enough that I just never get any hours with
solidly active spools. I average around 30K - 40K deliveries per
hour with the MTA sitting idle for much of that hour. 96% of
messages are delivered within 60 seconds of hitting the queue, 98.1%
within the hour -- we're basically talking a pretty idle mail
system.
> So eking out more performance by swapping MTAs is not a priority)
That's one of the main reasons I've been so lackadaisical about
moving to Postfix -- I don't really need to. The only thing driving
it is my own interest.
>>> As someone who deals with email for a living...
>> I should probably note at this point that I'm working for
>> Critical Path on their mail systems.
NB as a contractor.
> As long as we're into disclosure, I run a bunch of hobby lists at
Just been poking around there and noticed that your archives seem to
be inop (dead disk). If you're interested I've been messing about
with MHonArc and PHP in my spare time and have almost finished
getting a setup that:
-- Allows archived messages to be replied to on the web via the
archive page (replies post to the list).
-- Templates (PHPLIB) the entire archive appearance. All MHonArc
does is the parsing and data extraction.
-- Supports archive searching by MessageID. I've an MTA hack that
inserts a MessageID-based URL into all outgoing Mailman
list traffic so the user can just hit the URL and be taken to
that message in the archives (searches the MHonArc DB, useful
for thread reference etc).
Hopefully I'll get something worth public viewing sometime next
J C Lawrence Home: claw(a)kanga.nu
----------(*) Other: coder(a)kanga.nu
--=| A man is as sane as he is dangerous to his environment |=--
I'd like to do a lot more testing of huge mailing lists. I think the
largest I've seen people try (with not much success) is about 250k
recipients. My goal would be for stock Mailman using a fairly simple
installation on relatively common hardware to easily handle 1M
recipients.
However, I'm at a slight disadvantage right now with my development
environment. What I think would help would be for some of you to
`donate' large blocks of fake addresses, which I could subscribe to
email lists of varying sizes. I'd like to create lists of 1k, 10k,
100k, 250k, 500k, and 1M recipients. Do you think between us we can
gather 1M fake recipients for a test list?
Best would be to have a mix of addresses on a number of sites, and I
think for now, those addrs can just discard messages they receive. At
some point it might be nice to collate receptions and send back a
summary (e.g. "97k of 100k addresses received the message") -- but
let's not worry about that for now.
What do you think? How many addresses do you think you could donate?
On Wed, 24 May 2000 17:08:18 -0700
Chuq Von Rospach <chuqui(a)plaidworks.com> wrote:
> At 3:41 PM -0700 5/24/2000, J C Lawrence wrote:
>> Yep, its second guessing the MTA, but its a cheap, cost
>> effective, minimal impact guess that has nearly NO punitive
>> effect on mailman itself.
> yes and no. By bunching stuff together, you help the MTA optimize,
> since it's a safe guess that it's going to (at least) sort by
> domain if it does any kind of connection caching at all.
True. My curiosity however is which MTAs do MX sorting, and more
particularly, MX collapsing (e.g. for two different targets that share
an MX among their lowest level). The potential gains there are
likely not huge, but could be (guesstimate) noticeable for high
volume servers with broad standard deviations in their target lists.
I'll have to check into that some time.
> You could make a good argument that the best way to optimize is to
> create one mail batch per unique hostname, up to SMTP-MAX-RCPTS,
> at which point you split it into num_addrs/SMTP-MAX-RCPTS batches
> for that hostname, and then let the MTA sort it out from there.
True, this would be a useful optimisation for most of the MTA
architectures I know of. It's also quite cheap and easy to do, which
makes it even more tempting.
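The batching scheme Chuq describes might look roughly like this (a sketch: SMTP_MAX_RCPTS is the name of the real Mailman knob, but the helper itself is hypothetical):

```python
from collections import defaultdict

SMTP_MAX_RCPTS = 500  # Mailman's existing ceiling on RCPTs per transaction

def batch_by_domain(recips, max_rcpts=SMTP_MAX_RCPTS):
    # One batch per unique hostname, split further whenever a single
    # domain exceeds max_rcpts; each batch becomes one SMTP transaction
    # and the MTA sorts it out from there.
    by_domain = defaultdict(list)
    for addr in recips:
        by_domain[addr.rsplit("@", 1)[-1].lower()].append(addr)
    batches = []
    for addrs in by_domain.values():
        for i in range(0, len(addrs), max_rcpts):
            batches.append(addrs[i:i + max_rcpts])
    return batches
```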
>> > I guess that we need a per MTA tuning/configuration document.
>> Aaaargh. Yes.
> Definitely. Since most of the "performance" issues involve the
> MTA, and the MLM only affects it based on how it stuffs things
> into the MTA.
There gets to be a point however where it really exceeds Mailman's
charter. Mailman is a list server, not a training course on how to
build and configure a high volume mail system. While I don't think
we've crossed or even approached that line, in general I'd rather
spend time on Mailman than high end server considerations which are
adequately (?) documented elsewhere.
>> Without going and re-reading it, about the only thing I can think
>> to add to it would be turning off domain checking for localhost
>> RCPTs as per our recent comments if that's not there already.
> By the way, I suggest that before people *assume* this is an
> improvement that it be tested, because the domain checking has to
> be done somewhere...
I've tested it here under Exim (as of about 2 years ago). The gains
from leaving it to the MTA for connection-time resolution were quite
noticeable. Mostly, I suspect, because Exim didn't cache (or
pre-stuff) the DNS results from the validity check for MX delivery.
Actually, I don't think Exim maintains a significant DNS cache
across delivery attempts in the first place, assuming, quite rightly
in the general case, that the local nameserver can be trusted to do
that caching for it. I haven't checked this tho, as my need (I had
a 140K member list) disappeared (the company sponsoring the list
>> How about Postfix? Anybody know?
> Postfix is "on the list" for later this summer for me...
I followed Postfix actively in its early days, up till about a year
after first public release when I got distracted elsewhere (I used
to publicly archive all the Postfix lists here at Kanga.Nu). I
figure I'll probably roll everything over to Postfix sometime in a
couple months, tho I'll miss Exim's nice log analysis and queue
tools.
> Right now, I generally recommend sites doing a lot of mail-list
I generally recommend heartily against Sendmail for such sites. I
just don't see it as worth the extra effort (or obscurity) when
newer MTAs such as Exim (wot I use currently), QMail or Postfix in
general offer the same or better performance and configurability
with the added benefit of human readable/auditable config files.
While it's cheap logic, it's easy to note that none of the very high
volume commercial email sites out there are based on Sendmail
(Critical Path, Hotmail, Onelist, EGroups, etc).
>> Of course not. Everybody knows that Microsoft Exchange is the
>> one true MTA and all else are but pale imitations.
> don't even JOKE about that.
You don't know how many times I've nearly uncommented the Exim rule
that would auto-bounce (during SMTP receipt) any message with an
Exchange entry in the Received headers. It has been tempting.
The only mail software out there that draws more ire from me is
Outlook. Pathetic. Absolutely pathetic. Of course I also have a
still-commented-out procmail rule in place before Mailman that would
auto-bounce messages from Outlook, and the only reason I haven't
uncommented it is that I have too many valued list members who
cannot use anything else (corporate standards).
> As someone who deals with email for a living...
I should probably note at this point that I'm working for Critical
Path on their mail systems.
> ...the only system that comes *close* to Exchange in the braindead
> category is Lotus notes.
Sorry, entirely different orders of magnitude there. Notes is bad,
certainly, and there are few things even close to being as bad as Notes
or CC Mail (tho they've gotten a lot better in recent years (which
isn't saying much)), but Exchange/Outlook make them look positively
angelic in comparison.
> And that's not really close. I have seen so much braindamage out
> of Exchange servers I wish I could simply reject any mail that
> ever touched one....
I got some nice filters...
> You might as well drive your computers with a squirrel on a wheel.
Nope. That's Notes. Exchange? Remember the dead parrot skit...?
J C Lawrence Home: claw(a)kanga.nu
----------(*) Other: coder(a)kanga.nu
--=| A man is as sane as he is dangerous to his environment |=--
> I was thinking last night that what would REALLY, REALLY be useful
> here is an extended SMTP protocol that allows the VERPing to be
> introduced by the receiving SMTP server, rather than the delivery
> server or MLM. And after thinking about it, I went and laid down in a
> dark room until I got over it... (snicker). But if you think about
> it, the downside to VERP is you lose the efficiency of batching
> multiple addresses into a single transaction, so the solution is to
> extend SMTP to allow us to maintain that effeciency while building in
> the VERPing data at time of delivery...
However, isn't this to some extent reimplementing DSN -- which a lot of
people are currently not implementing because the spec is complex and opaque?
Exim doesn't do DSN right now... I think I will spend time looking at
this, since up to now I have been accepting other people's comments on
it.
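For reference, the VERP idea quoted above boils down to giving every outgoing copy a unique envelope sender that encodes its recipient, so a bounce identifies the failing address by itself. A minimal sketch (the address format here is my assumption, not a spec; it naively assumes no '+' or '=' in the original localpart):

```python
def verp_sender(listname, listdomain, recip):
    # Encode the recipient into the envelope sender, e.g.
    # user@example.com -> mylist-admin+user=example.com@lists.example.org
    local, _, domain = recip.partition("@")
    return "%s-admin+%s=%s@%s" % (listname, local, domain, listdomain)

def verp_decode(sender):
    # Recover the original recipient from a bounced VERP address.
    local = sender.split("@", 1)[0]
    _, _, encoded = local.partition("+")
    user, _, domain = encoded.partition("=")
    return "%s@%s" % (user, domain)
```

The cost, as Chuq notes, is one SMTP transaction per recipient, which is exactly the batching efficiency the proposed SMTP extension would try to win back.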
> the problem seems to be sites that are running really downrev
> versions of things that nobody's watching or upgrading.
which is the problem with adding mods to SMTP or MTAs - we have to
carry the people who don't follow the upgrade path.
> I realize the last couple of days we've brought forward a lot of neat
> stuff and then decided it's best NOT to do it, but sometimes the best
> thing you can do to make a project work is define what it's NOT, so
> you can focus on what it is. And put everything else down in the TODO
> for a future generation to wonder about.
Definitely. Can we have a NOT_TODO as well, with a list of those things
and excerpts of the relevant list discussion, so that we don't have the
same discussions every few months?
[ - Opinions expressed are personal and may not be shared by VData - ]
[ Nigel Metheringham Nigel.Metheringham(a)VData.co.uk ]
[ Phone: +44 1423 850000 Fax +44 1423 858866 ]
> On 18 May 2000, Harald Meland wrote:
> > Try again, but only after you have put
> > LIST_LOCK_DEBUGGING = 1
> > in your Mailman/mm_cfg.py, as current CVS Mailman has debug lock
> > logging turned off by default.
> Did that and got this..
> [mailman@dogpound logs]$ cat locks
> May 18 14:46:29 2000 (9011) davis.lock laying claim
> May 18 14:46:29 2000 (9011) davis.lock unexpected linkcount <> 2: 1
> May 18 14:46:29 2000 (9011) davis.lock lifetime has expired, breaking
> May 18 14:46:30 2000 (9011) davis.lock got the lock
> May 18 14:46:30 2000 (9011) davis.lock laying claim
> May 18 14:46:30 2000 (9011) davis.lock already locked
> [mailman@dogpound logs]$
The above strangeness was probably caused by a buglet in the current
CVS LockFile.py; I believe it gets the order of unlink() calls wrong.
As I see it, it is more important for the lock file never to have a
link count other than 0 or 2 than it is to make sure there are no
tempfile turds. This implies that the real lock file should be
unlink()ed before the tempfile, and not the other way around. Here's
an (untested) patch (which also touches on some other issues I noticed
while I was at it :):
I noticed another apparent inconsistency in LockFile.py, too: The
comment at the start of __break() seems to imply that calling
__touch() will totally remove the race condition, while I believe all
it does is make the race condition a little less likely.
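For context, the hard-link locking scheme under discussion works roughly like this (a simplified sketch, not the actual LockFile.py; it omits lock lifetimes and lock breaking entirely):

```python
import os
import tempfile

def acquire(lockfile):
    # Create a unique temp file, then try to hard-link it to the shared
    # lock name.  link() is atomic even over NFS: success means both
    # names now refer to one inode, so its link count is exactly 2.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(lockfile) or ".")
    os.close(fd)
    try:
        os.link(tmp, lockfile)
        return tmp  # keep this name around to release the lock later
    except FileExistsError:
        os.unlink(tmp)  # somebody else holds the lock
        return None

def release(lockfile, tmp):
    # Unlink the shared lock name *first*, then the temp file, so the
    # lock is never observed with a link count of 1.
    os.unlink(lockfile)
    os.unlink(tmp)
```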
However, I don't think that either the order of unlink calls or any
race condition is responsible for the problem you're having -- in
fact, I believe everyone running current CVS Mailman will have the
exact same problem. Here's another (untested) patch which tries to
address the problem (these patches are not supposed to depend on the
above patch, BTW):
Using current CVS code (as of May 12 2000 19:50 EST) I get the following
error when I hit the 'view other subscriptions' option with the correct
password. Sometimes it would take a while (maybe a minute or 2) for it to
even display this. And I know the system is not loaded down.
Bug in Mailman version 2.0beta3
We're sorry, we hit a bug!
If you would like to help us identify the problem, please email a copy of
this page to the webmaster for this site with a description of what
happened.
Traceback (innermost last):
File "/home/mailman/scripts/driver", line 89, in run_main
File "/home/mailman/Mailman/Cgi/handle_opts.py", line 80, in main
File "/home/mailman/Mailman/MailList.py", line 864, in Save
File "/home/mailman/Mailman/LockFile.py", line 204, in refresh
Matt Davis - ICQ# 934680
Those are my principles. If you don't like them I have others.
Last night, I added some code to queue messages that fail delivery
when using SMTPDirect. What happens is this:
If a message either totally fails delivery (e.g. the smtp socket
connect fails) or partial delivery fails for some, but not all,
recipients, then the message is stored on the file system for a re-try
later.
For every failed message, two files are created. The base name of
these files is the SHA hexdigest of the message text. This
should be nearly guaranteed unique. A new directory contains these
files, called `qfiles'. The first file created is the complete plain
text of the failed message. The second file is a marshal of useful
information related to the failed delivery. This contains the
listname and the failed recip list along with a few other moderately
useful bits of info.
There's a new cron script called `qrunner' which cruises the files in
qfiles. It claims a lock (to prevent multiple qrunner processes) and
then goes through each file it finds, attempting redelivery. If there
are any problems reading a qfile file, it skips it for next time
(assumes it's a transient problem with the file, but logs a message).
When qrunner notices that the message has been handed off to the smtp
daemon for all outstanding recipients, it deletes the two message
files.
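The two-file layout described above can be sketched like so (an illustration, not the checked-in code: the .msg/.db extensions and the dict keys are my guesses, and hashlib's sha1 stands in for the old sha module):

```python
import hashlib
import marshal
import os

QDIR = "qfiles"  # queue directory named in the description

def enqueue(msgtext, listname, recips):
    # Base name is the SHA hexdigest of the message text, which should
    # be nearly guaranteed unique per message.
    base = hashlib.sha1(msgtext.encode()).hexdigest()
    os.makedirs(QDIR, exist_ok=True)
    with open(os.path.join(QDIR, base + ".msg"), "w") as f:
        f.write(msgtext)  # complete plain text of the failed message
    with open(os.path.join(QDIR, base + ".db"), "wb") as f:
        # marshal of the delivery metadata: listname, failed recips, etc.
        marshal.dump({"listname": listname, "recips": recips}, f)
    return base

def dequeue(base):
    # qrunner's side: read both files back for a redelivery attempt.
    with open(os.path.join(QDIR, base + ".msg")) as f:
        msgtext = f.read()
    with open(os.path.join(QDIR, base + ".db"), "rb") as f:
        info = marshal.load(f)
    return msgtext, info
```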
I've moderately tested this stuff with total delivery failure by
shutting off my smtp daemon, attempting some deliveries, turning it
back on and running qrunner. I don't have the time right now to test
partial delivery failures, but I still claim that without DSN support,
these will be unlikely. Hopefully some of you can help look at this.
I'm about to check all this stuff in. Let me know what you think.