(2.0.6) pipermail takes >1 minute to rebuild indexes on large lists

On a massive list (Mailman 2.0.6) I run that regularly gets a few hundred or more emails every day, things begin to slow down to molasses after a week or two each month, with the qrunner process taking literally HUNDREDS of megabytes of RAM, and 100% CPU, all the time.
It's gotten so bad that the pipermail databases have gotten massive:
-rw-rw-r-- 1 list list 40866061 Sep 13 19:24 2001-September-article
-rw-rw-r-- 1 list list   686273 Sep 13 19:24 2001-September-author
-rw-rw-r-- 1 list list   561479 Sep 13 19:24 2001-September-date
-rw-rw-r-- 1 list list   796534 Sep 13 19:24 2001-September-subject
-rw-rw-r-- 1 list list   569238 Sep 13 19:24 2001-September-thread
And, of course, pipermail takes over a *MINUTE* to rebuild the indexes now. This is with a bunch of syslog() debugging in pipermail.py. Notice the timestamps -- this is how long it takes a *single* message to make it through the ToArchive step of the pipeline..
Sep 13 19:24:02 2001 (29357) in pipermail._update_simple_index() now
Sep 13 19:24:02 2001 (29357) opening index as stdout
Sep 13 19:24:02 2001 (29357) done opening index, writing index header
Sep 13 19:24:02 2001 (29357) getting article 200109010000.JAA26551@admin.interq.or.jp from db
Sep 13 19:24:03 2001 (29357) writing index entry 0
Sep 13 19:24:03 2001 (29357) going to next entry
(snip LOTS of index entry writing -- note the timestamp and the count below)
Sep 13 19:24:52 2001 (29357) getting article 3B95AB3EA.4F08HONDA@192.168.1.190 from db
Sep 13 19:24:52 2001 (29357) writing index entry 4794
Sep 13 19:24:52 2001 (29357) going to next db
Sep 13 19:24:52 2001 (29357) calling self.write_index_footer()
Sep 13 19:24:52 2001 (29357) in pipermail._update_thread_index
Sep 13 19:24:53 2001 (29357) in pipermail.write_TOC()
Sep 13 19:24:53 2001 (29357) done writing TOC
Sep 13 19:24:53 2001 (29357) picking archive state into /var/lib/mailman/archives/private/sysadmin/pipermail.pck
While it does this, the size of the python process that's doing the qrunner grows by *megabytes* each second. I've seen it get to 200MB before it finishes, and it's always taking up 99% CPU.
As you can guess, things rapidly get out of hand, and messages start arriving faster than pipermail can deal with them. And since it's first in the pipeline, nothing gets delivered until pipermail archives the mail.. and mail queues up and queues up, until it's taking 7-8 hours to send out each mail -- no joke.
What can I do to help solve this? I know I can use an external archiver, but I don't know anything about that. I've commented out the ToArchive step from the pipeline for now, but I don't know what else I can do. There *MUST* be a way to not have to loop over every single article in the simple index every time a message arrives, right?
Other people must have run into this, and I'd like to know what I can do to only have to write the changed articles in the simple index.
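To illustrate, here's a grossly simplified sketch of the shape I'm seeing -- hypothetical names, *not* pipermail's real code, but the log above shows the same full walk (entry 0 through entry 4794) on every single incoming message:

def rewrite_whole_index(articles, index_path):
    # Every new message triggers a complete rewrite: open the index,
    # walk all N archived articles, format each one, write the footer.
    # O(N) work per message means O(N**2) for a month of N messages,
    # which is why week three is so much worse than day one.
    f = open(index_path, 'w')
    f.write('[index header]\n')    # stand-in for the real HTML header
    for msgid, subject in articles:
        f.write('%s: %s\n' % (msgid, subject))
    f.write('[index footer]\n')    # stand-in for the real HTML footer
    f.close()

What I'm hoping for is the append-only equivalent: write just the new entry (and patch the footer) instead of re-rendering all 4794 existing ones.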
Thanks,
Ben
--
Brought to you by the letters F and O and the number 12.
"Johnny! Don't go! It's too dangerous!" "I don't care!"
Debian GNU/Linux maintainer of Gimp and GTK+ -- http://www.debian.org/

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> On a massive list (Mailman 2.0.6) I run that regularly gets a
BG> few hundred or more emails every day, things begin to slow
BG> down to molasses after a week or two each month, with the
BG> qrunner process taking literally HUNDREDS of megabytes of RAM,
BG> and 100% CPU, all the time.
FWIW, the combined {zope,python}.org gets many hundreds of messages a day on the python-list and zope mailing lists alone. I'm not online at the moment, so I can't check, but I know I receive several digests every day from both lists -- they're very high traffic. Needless to say, I haven't noticed any such performance problems...
Here are a few things you can do to improve matters. First and foremost, make sure GZIP_ARCHIVE_TXT_FILES is 0 -- this should be the default though. Don't even try to gzip your .txt files in real-time since this will absolutely clobber your system. Use cron/nightly_gzip instead (it doesn't have to be nightly, that's up to your crontab).
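For reference, that's this one setting (it lives in Defaults.py; override it in mm_cfg.py if your copy differs):

# In $prefix/Mailman/mm_cfg.py: leave on-the-fly gzipping off and let
# cron/nightly_gzip build the .txt.gz files out-of-band instead.
GZIP_ARCHIVE_TXT_FILES = 0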
If your system still can't handle things, then the next step is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will simply append the message to the .mbox file, which ought to be extremely quick, but it won't attempt to run the Pipermail archiver in real time. Then you can use whatever archiving scheme you want (e.g. bin/arch nightly, or an external archiver).
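In mm_cfg.py terms, that's just:

# Append each message to the flat .mbox file only (a cheap operation)
# and skip the in-process Pipermail run entirely.  The HTML archive
# can then be rebuilt out-of-band, e.g. bin/arch from cron, or the
# mbox can be handed to an external archiver.
ARCHIVE_TO_MBOX = 1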
In MM2.1, you'll be able to lower the priority of your archive qrunner, so that it processes messages, say, once per hour. It also won't be in-line with the normal thru-path of messages, so even if the archiver is slow, it won't gum up delivery of list messages.
And yes, the Pipermail stuff is a mess. I don't have time to rewrite it, so it'll have to wait for volunteers.
-Barry

"BAW" == Barry A Warsaw <barry@zope.com> writes: "BG" == Ben Gertzfield <che@debian.org> writes:
BG> On a massive list (Mailman 2.0.6) I run that regularly gets a
BG> few hundred or more emails every day, things begin to slow
BG> down to molasses after a week or two each month, with the
BG> qrunner process taking literally HUNDREDS of megabytes of RAM,
BG> and 100% CPU, all the time.
BAW> FWIW, the combined {zope,python}.org gets many hundreds of
BAW> messages a day on the python-list and zope mailing lists
BAW> alone. I'm not online at the moment, so I can't check, but I
BAW> know I receive several digests every day from both lists --
BAW> they're very high traffic. Needless to say, I haven't
BAW> noticed any such performance problems...
The problem was when the mbox got up to about 200-300 megs; I can send you the traces of the function calls with timestamps, and you can see exactly how slow things get.
BAW> If your system still can't handle things, then the next step
BAW> is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will
BAW> simply append the message to the .mbox file, which ought to
BAW> be extremely quick, but it won't attempt to run the Pipermail
BAW> archiver in real time. Then you can use whatever archiving
BAW> scheme you want (e.g. bin/arch nightly, or an external
BAW> archiver).
Yes, this is probably the right solution. In fact, I'm actually leaning towards suggesting that Mailman just come with or depend upon hypermail for archiving; we're just re-inventing the wheel by trying to modify pipermail over and over, and it's really not going to scale.
Ben
--
Brought to you by the letters D and Z and the number 19.
"He's like.. some sort of.. non-giving up.. school guy!"
Debian GNU/Linux maintainer of Gimp and GTK+ -- http://www.debian.org/

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
My biggest lists are python-list at ~280MB followed by the zope mailing list which is at about 150MB, and I've got a dozen in the 10-100MB range.
You're sure you're not gzipping on the fly, right?
It would be interesting to see some profiler output.
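Something along these lines inside the handler would do it -- an untested sketch, where mlist, msg, and msgdata are whatever the handler already has in scope:

import profile, pstats

# Profile just the archiving call, then dump the 20 hottest functions
# by cumulative time.
p = profile.Profile()
p.runcall(mlist.ArchiveMail, msg, msgdata)
p.dump_stats('/tmp/archive.prof')
pstats.Stats('/tmp/archive.prof').sort_stats('cumulative').print_stats(20)

That would tell us whether the time goes into the database lookups, the pickling, or the index rendering.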
BAW> If your system still can't handle things, then the next step
BAW> is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will
BAW> simply append the message to the .mbox file, which ought to
BAW> be extremely quick, but it won't attempt to run the Pipermail
BAW> archiver in real time. Then you can use whatever archiving
BAW> scheme you want (e.g. bin/arch nightly, or an external
BAW> archiver).
BG> Yes, this is probably the right solution. In fact, I'm
BG> actually leaning towards suggesting that Mailman just come
BG> with or depend upon hypermail for archiving; we're just
BG> re-inventing the wheel by trying to modify pipermail over and
BG> over, and it's really not going to scale.
So far, I've resisted this. I've no problem recommending an external archiver for serious sites, and making it as easy as possible to integrate Mailman with external archivers, but I really don't want to distribute one with Mailman.
I feel it'll tie us too closely to some other project, with its own agenda, schedule, compatibility issues, tool chain, etc. etc. I'm under no illusions about making Pipermail a killer archiver, but I also don't think that most sites need much more. I'd rather give folks a moderately useful, bundled archiver and tell them where to go if they're running a high traffic site.
-Barry

"BAW" == Barry A Warsaw <barry@zope.com> writes: "BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
BAW> My biggest lists are python-list at ~280MB followed by the
BAW> zope mailing list which is at about 150MB, and I've got a
BAW> dozen in the 10-100MB range.
BAW> You're sure you're not gzipping on the fly, right?
Absolutely.
[ben@yuubin:/usr/lib/mailman/Mailman]% grep -i gzip Defaults.py
# Set this to 1 to enable gzipping of the downloadable archive .txt file.
# night to generate the txt.gz file. See cron/nightly_gzip for details.
GZIP_ARCHIVE_TXT_FILES = 0
[ben@yuubin:/usr/lib/mailman/Mailman]% grep -i gzip mm_cfg.py
BAW> It would be interesting to see some profiler output.
Here's an example. There are megs and megs where this came from..
Sep 13 19:38:02 2001 (29454) pipelining: ToArchive
Sep 13 19:38:02 2001 (29454) forking...
Sep 13 19:38:02 2001 (29454) forked, pid 29454. calling handler func ToArchive...
Sep 13 19:38:04 2001 (29458) in Message.enqueue() now
Sep 13 19:38:04 2001 (29458) opening file: 733417dfede9cc5f09bf35f40d6c3d279830f653
Sep 13 19:38:04 2001 (29458) opening db /var/lib/mailman/qfiles/733417dfede9cc5f09bf35f40d6c3d279830f653.db
Sep 13 19:38:04 2001 (29458) exception in msg
Sep 13 19:38:04 2001 (29458) msgdata.update newdata
Sep 13 19:38:04 2001 (29458) msgdata.update kws
Sep 13 19:38:04 2001 (29458) writing data file
Sep 13 19:38:04 2001 (29458) done writing data file
Sep 13 19:38:04 2001 (29458) writing dirty/new msg to disk
Sep 13 19:38:04 2001 (29458) done writing dirty/new msg to disk
Sep 13 19:38:06 2001 (29462) in Message.enqueue() now
Sep 13 19:38:06 2001 (29462) opening file: 4a2589b46405fdf1691bb83cba6d638e718b932a
Sep 13 19:38:06 2001 (29462) opening db /var/lib/mailman/qfiles/4a2589b46405fdf1691bb83cba6d638e718b932a.db
Sep 13 19:38:06 2001 (29462) exception in msg
Sep 13 19:38:06 2001 (29462) msgdata.update newdata
Sep 13 19:38:06 2001 (29462) msgdata.update kws
Sep 13 19:38:06 2001 (29462) writing data file
Sep 13 19:38:06 2001 (29462) done writing data file
Sep 13 19:38:06 2001 (29462) writing dirty/new msg to disk
Sep 13 19:38:06 2001 (29462) done writing dirty/new msg to disk
Sep 13 19:38:59 2001 (29454) done with handler func ToArchive.
I can explain in more detail, but it's pretty obvious that ToArchive starts to thrash pretty badly with a big mbox file.
BAW> I feel it'll tie us to closely to some other project, with
BAW> its own agenda, schedule, compatibility issues, tool chain,
BAW> etc. etc. I'm under no illusions about making Pipermail a
BAW> killer archiver, but I also don't think that most sites need
BAW> much more. I'd rather give folks a moderately useful,
BAW> bundled archiver and tell them where to go if they're running
BAW> a high traffic site.
If we go this route, we must do a big overhaul on pipermail. It tries to do way too much as it is, and fails spectacularly on systems like mine when the mbox file gets too big.
Ben
--
Brought to you by the letters Y and P and the number 12.
"Porcoga daisuki!"
Debian GNU/Linux maintainer of Gimp and GTK+ -- http://www.debian.org/

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
"BAW" == Barry A Warsaw <barry@zope.com> writes:
BAW> My biggest lists are python-list at ~280MB followed by the
BAW> zope mailing list which is at about 150MB, and I've got a
BAW> dozen in the 10-100MB range.
BAW> It would be interesting to see some profiler output.
BG> Here's an example. There are megs and megs where this came
BG> from..
[profiling deleted]
BG> I can explain in more detail, but it's pretty obvious that
BG> ToArchive starts to thrash pretty badly with a big mbox file.
I think you need to investigate this more. I'd like to see exactly how you instrumented ToArchive.py to get these numbers. I think something else is going on with your system.
Here's what I did: I took python-list.mbox from mail.python.org. This is about 280MB. I installed that as the mbox file for a local test list, and ran bin/arch on it to initialize the archive.
Then I instrumented MM2.0.6's ToArchive.py like so:
# (imports shown here for completeness; in ToArchive.py they belong at
# the top of the module -- time is stdlib, syslog is Mailman's logger)
import time
from Mailman.Logging.Syslog import syslog

# TBD: this needs to be converted to the new pipeline machinery
t0 = time.time()
mlist.ArchiveMail(msg, msgdata)
t1 = time.time()
syslog('debug', 'ArchiveMail time: %s seconds' % (t1 - t0))
On an unloaded system, this took 1.08 seconds -- much less than the 53 seconds between these two lines in your output:
-------------------- snip snip --------------------
Sep 13 19:38:06 2001 (29462) done writing dirty/new msg to disk
Sep 13 19:38:59 2001 (29454) done with handler func ToArchive.
-------------------- snip snip --------------------
When I send 3 or 4 messages into the queue at the same time, the average time in ArchiveMail() is 0.2 seconds. I could try instrumenting ToArchive.py on the live site, but I suspect I'll get very similar numbers.
Also, your output implies there's some forking going on. Where's that happening? The only forking the MM2.0.6 code base does is in the ToUsenet.py handler (oh and the test cases for LockFile.py but that obviously doesn't count).
-Barry

At 9:58 +0900 10/10/2001, Ben Gertzfield wrote:
A problem here is that Hypermail is far from the only game in town. I don't know its current state: hopefully much better than when we tossed it out about 5 years ago.
Should Mailman be picking the outside archiver to use, or should it just make it easy to use SOME outside archiver? (If there is some archiver which is "the GNU archiver" in the sense that Mailman is "the GNU mailing list manager," I suppose that one could reasonably be favored.)
--John
-- John Baxter jwblist@olympus.net Port Ludlow, WA, USA

"JWB" == John W Baxter <jwblist@olympus.net> writes:
JWB> Should mailman be picking the outside archiver to use, or
JWB> should it just make it easy to use SOME outside archiver?
It should, and it does, AFAIK. If there are specific problems with external archive integration, let me know.
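Concretely, it's a pair of mm_cfg.py settings; the command string below is made up -- substitute whatever archiver you like. Mailman pipes each message to the command's stdin, with substitutions like %(listname)s interpolated:

# Hypothetical external archiver hook-up; both the public and private
# archives can be delegated this way.
PUBLIC_EXTERNAL_ARCHIVER = '/usr/local/bin/my-archiver --list=%(listname)s'
PRIVATE_EXTERNAL_ARCHIVER = '/usr/local/bin/my-archiver --list=%(listname)s'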
-Barry

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> On a massive list (Mailman 2.0.6) I run that regularly gets a
BG> few hundred or more emails every day, things begin to slow
BG> down to molasses after a week or two each month, with the
BG> qrunner process taking literally HUNDREDS of megabytes of RAM,
BG> and 100% CPU, all the time.
FWIW, the combined {zope,python}.org gets many hundreds of messages a day on the python-list and zope mailing lists alone. I'm not online at the moment, so I can't check, but I know I receive several digests every day from both lists -- they're very high traffic. Needless to say, I haven't noticed any such performance problems...
Here are a few things you can do to improve matters. First and foremost, make sure GZIP_ARCHIVE_TXT_FILES is 0 -- this should be the default though. Don't even try to gzip your .txt files in real-time since this will absolutely clobber your system. Use cron/nightly_gzip instead (it doesn't have to be nightly, that's up to your crontab).
If your system still can't handle things, then the next step is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will simply append the message to the .mbox file, which ought to be extremely quick, but it won't attempt to run the Pipermail archiver in real time. Then you can use whatever archiving scheme you want (e.g. bin/arch nightly, or an external archiver).
In MM2.1, you'll be able to lower the priority of your archive qrunner, so that it processes messages, say once per hour. It also won't be in-line with the normal thru-path of messages, so even if the archiver is slow, it won't gum up delivery of list messages.
And yes, the Pipermail stuff is a mess. I don't have time to rewrite it, so it'll have to wait for volunteers.
-Barry

"BAW" == Barry A Warsaw <barry@zope.com> writes: "BG" == Ben Gertzfield <che@debian.org> writes:
BG> On a massive list (Mailman 2.0.6) I run that regularly gets a
BG> few hundred or more emails every day, things begin to slow
BG> down to molasses after a week or two each month, with the
BG> qrunner process taking literally HUNDREDS of megabytes of RAM,
BG> and 100% CPU, all the time.
BAW> FWIW, the combined {zope,python}.org gets many hundreds of
BAW> messages a day on the python-list and zope mailing lists
BAW> alone. I'm not online at the moment, so I can't check, but I
BAW> know I receive several digests every day from both lists --
BAW> they're very high traffic. Needless to say, I haven't
BAW> noticed any such performance problems...
The problem was when the mbox got up to about 200-300 megs; I can send you the traces of the function calls with timestamps, and you can see exactly how slow things get.
BAW> If your system still can't handle things, then the next step
BAW> is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will
BAW> simply append the message to the .mbox file, which ought to
BAW> be extremely quick, but it won't attempt to run the Pipermail
BAW> archiver in real time. Then you can use whatever archiving
BAW> scheme you want (e.g. bin/arch nightly, or an external
BAW> archiver).
Yes, this is probably the right solution. In fact, I'm actually leaning towards suggesting that Mailman just come with or depend upon hypermail for archiving; we're just re-inventing the wheel by trying to modify pipermail over and over, and it's really not going to scale.
Ben
-- Brought to you by the letters D and Z and the number 19. "He's like.. some sort of.. non-giving up.. school guy!" Debian GNU/Linux maintainer of Gimp and GTK+ -- http://www.debian.org/

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
My biggest lists are python-list at ~280MB followed by the zope mailing list which is at about 150MB, and I've got a dozen in the 10-100MB range.
You're sure you're not gzipping on the fly, right?
It would be interesting to see some profiler output.
BAW> If your system still can't handle things, then the next step
BAW> is to set ARCHIVE_TO_MBOX to 1. This way, Mailman will
BAW> simply append the message to the .mbox file, which ought to
BAW> be extremely quick, but it won't attempt to run the Pipermail
BAW> archiver in real time. Then you can use whatever archiving
BAW> scheme you want (e.g. bin/arch nightly, or an external
BAW> archiver).
BG> Yes, this is probably the right solution. In fact, I'm
BG> actually leaning towards suggesting that Mailman just come
BG> with or depend upon hypermail for archiving; we're just
BG> re-inventing the wheel by trying to modify pipermail over and
BG> over, and it's really not going to scale.
So far, I've resisted this. I've no problem recommending an external archiver for serious sites, and making it as easy as possible to integrate Mailman with external archivers, but I really don't want to distribute one with Mailman.
I feel it'll tie us to closely to some other project, with its own agenda, schedule, compatibility issues, tool chain, etc. etc. I'm under no illusions about making Pipermail a killer archiver, but I also don't think that most sites need much more. I'd rather give folks a moderately useful, bundled archiver and tell them where to go if they're running a high traffic site.
-Barry

"BAW" == Barry A Warsaw <barry@zope.com> writes: "BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
BAW> My biggest lists are python-list at ~280MB followed by the
BAW> zope mailing list which is at about 150MB, and I've got a
BAW> dozen in the 10-100MB range.
BAW> You're sure you're not gzipping on the fly, right?
Absolutely.
[ben@yuubin:/usr/lib/mailman/Mailman]% grep -i gzip Defaults.py 2:40PM # Set this to 1 to enable gzipping of the downloadable archive .txt file. # night to generate the txt.gz file. See cron/nightly_gzip for details. GZIP_ARCHIVE_TXT_FILES = 0 [ben@yuubin:/usr/lib/mailman/Mailman]% grep -i gzip mm_cfg.py 2:40PM
BAW> It would be interesting to see some profiler output.
Here's an example. There are megs and megs where this came from..
Sep 13 19:38:02 2001 (29454) pipelining: ToArchive Sep 13 19:38:02 2001 (29454) forking... Sep 13 19:38:02 2001 (29454) forked, pid 29454. calling handler func ToArchive... Sep 13 19:38:04 2001 (29458) in Message.enqueue() now Sep 13 19:38:04 2001 (29458) opening file: 733417dfede9cc5f09bf35f40d6c3d279830f653 Sep 13 19:38:04 2001 (29458) opening db /var/lib/mailman/qfiles/733417dfede9cc5f09bf35f40d6c3d279830f653.db Sep 13 19:38:04 2001 (29458) exception in msg Sep 13 19:38:04 2001 (29458) msgdata.update newdata Sep 13 19:38:04 2001 (29458) msgdata.update kws Sep 13 19:38:04 2001 (29458) writing data file Sep 13 19:38:04 2001 (29458) done writing data file Sep 13 19:38:04 2001 (29458) writing dirty/new msg to disk Sep 13 19:38:04 2001 (29458) done writing dirty/new msg to disk Sep 13 19:38:06 2001 (29462) in Message.enqueue() now Sep 13 19:38:06 2001 (29462) opening file: 4a2589b46405fdf1691bb83cba6d638e718b932a Sep 13 19:38:06 2001 (29462) opening db /var/lib/mailman/qfiles/4a2589b46405fdf1691bb83cba6d638e718b932a.db Sep 13 19:38:06 2001 (29462) exception in msg Sep 13 19:38:06 2001 (29462) msgdata.update newdata Sep 13 19:38:06 2001 (29462) msgdata.update kws Sep 13 19:38:06 2001 (29462) writing data file Sep 13 19:38:06 2001 (29462) done writing data file Sep 13 19:38:06 2001 (29462) writing dirty/new msg to disk Sep 13 19:38:06 2001 (29462) done writing dirty/new msg to disk Sep 13 19:38:59 2001 (29454) done with handler func ToArchive.
I can explain in more detail, but it's pretty obvious that ToArchive starts to thrash pretty badly with a big mbox file.
BAW> I feel it'll tie us to closely to some other project, with
BAW> its own agenda, schedule, compatibility issues, tool chain,
BAW> etc. etc. I'm under no illusions about making Pipermail a
BAW> killer archiver, but I also don't think that most sites need
BAW> much more. I'd rather give folks a moderately useful,
BAW> bundled archiver and tell them where to go if they're running
BAW> a high traffic site.
If we go this route, we must do a big overhaul on pipermail. It tries to do way too much as it is, and fails spectacularly on systems other than mine when the mbox file gets too big.
Ben
-- Brought to you by the letters Y and P and the number 12. "Porcoga daisuki!" Debian GNU/Linux maintainer of Gimp and GTK+ -- http://www.debian.org/

"BG" == Ben Gertzfield <che@debian.org> writes:
BG> The problem was when the mbox got up to about 200-300 megs; I
BG> can send you the traces of the function calls with timestamps,
BG> and you can see exactly how slow things get.
"BAW" == Barry A Warsaw <barry@zope.com> writes:
BAW> My biggest lists are python-list at ~280MB followed by the
BAW> zope mailing list which is at about 150MB, and I've got a
BAW> dozen in the 10-100MB range.
BAW> It would be interesting to see some profiler output.
BG> Here's an example. There are megs and megs where this came
BG> from..
[profiling deleted]
BG> I can explain in more detail, but it's pretty obvious that
BG> ToArchive starts to thrash pretty badly with a big mbox file.
I think you need to investigate this more. I'd like to see exactly how you instrumented ToArchive.py to get these numbers. I think something else is going on with your system.
Here's what I did: I took python-list.mbox from mail.python.org. This is about 280MB. I installed that as the mbox file for a local test list, and ran bin/arch on it to initialize the archive.
Then I instrumented MM2.0.6's ToArchive.py like so:
# TBD: this needs to be converted to the new pipeline machinery
t0 = time.time()
mlist.ArchiveMail(msg, msgdata)
t1 = time.time()
syslog('debug', 'ArchiveMail time: %s seconds' % (t1 - t0))
On an unloaded system, this took 1.08 seconds. Much less than the 53 seconds between these two lines in your output:
-------------------- snip snip -------------------- Sep 13 19:38:06 2001 (29462) done writing dirty/new msg to disk Sep 13 19:38:59 2001 (29454) done with handler func ToArchive. -------------------- snip snip --------------------
When I send 3 or 4 messages into the queue at the same time, the average time in ArchiveMail() is 0.2 seconds. I could try instrumenting ToArchive.py on the live site, but I suspect I'll get very similar numbers.
Also, your output implies there's some forking going on. Where's that happening? The only forking the MM2.0.6 code base does is in the ToUsenet.py handler (oh and the test cases for LockFile.py but that obviously doesn't count).
-Barry

At 9:58 +0900 10/10/2001, Ben Gertzfield wrote:
A problem here is that Hypermail is far from the only game in town. I don't know its current state: hopefully much better than when we tossed it out about 5 years ago.
Should mailman be picking the outside archiver to use, or should it just make it easy to use SOME outside archiver? (If there is some archiver which is "the GNU archiver" in the sense that Mailman is "the GNU mailing list manager, I suppose that one could reasonably be favored.)
--John
-- John Baxter jwblist@olympus.net Port Ludlow, WA, USA

"JWB" == John W Baxter <jwblist@olympus.net> writes:
JWB> Should mailman be picking the outside archiver to use, or
JWB> should it just make it easy to use SOME outside archiver?
It should, and it does, AFAIK. If there are specific problems with external archive integration, let me know.
-Barry
participants (3)
-
barry@zope.com
-
Ben Gertzfield
-
John W Baxter