[Mailman-Developers] about qrunner and locking
Barry A. Warsaw
barry@digicool.com
Fri, 8 Dec 2000 10:29:24 -0500
>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:
TW> I'm afraid that there isn't a good solution to your problem,
TW> right now. In all honesty, and I say this with all my
TW> professional years of experience in this area, NFS sucks large
TW> granite elephant testicles through a very thin straw. (To
TW> butcher a Pratchett quote, "NFS is like a vampire; it bites,
TW> it sucks, and it leaves you lifeless")
TW> Which is really a pity, since Barry did a lot of work to get
TW> NFS locking right.
Yeah, sigh.
I've been playing a lot with ZODB/ZEO lately, and I think it's the
right long-term solution to move to.
Background for those who don't know: ZODB is the Zope Object Database;
ZEO is Zope Enterprise Objects. Think of them this way: ZODB is a
framework for making Python objects transparently persistent. It's an
object database for Python programs with some very nice features, like
transaction support, pluggable backend storages, mountable databases,
etc. ZEO is a client/server storage that lets multiple processes, even
on different machines, share one database, which enables things like
replication and more.
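To make that concrete, here's a minimal sketch of what transparent
persistence looks like from the programmer's side. The exact spelling
of the API has shifted between ZODB releases; the imports here follow
one recent layout, and the MailingList class is purely illustrative,
not real Mailman code:

    import persistent
    import transaction
    import ZODB
    import ZODB.FileStorage

    class MailingList(persistent.Persistent):
        """Attribute rebindings on instances are tracked automatically."""
        def __init__(self, name):
            self.name = name
            self.members = ()

        def add_member(self, addr):
            # Rebind the attribute so ZODB registers the change.
            self.members = self.members + (addr,)

    # FileStorage keeps everything in a single local file; swapping in
    # a ZEO ClientStorage pointed at a shared storage server is what
    # lets several processes (or machines) open the same database.
    storage = ZODB.FileStorage.FileStorage('Data.fs')
    db = ZODB.DB(storage)
    conn = db.open()
    root = conn.root()

    root['mailman-developers'] = MailingList('mailman-developers')
    root['mailman-developers'].add_member('barry@digicool.com')
    transaction.commit()

    conn.close()
    db.close()

The attraction for Mailman is that list state becomes plain attribute
assignments plus a commit; the locking and consistency headaches move
into the storage layer instead of flock-over-NFS.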
It's all open source and very cool stuff. It's got the support of a
fairly large community, and will likely be supported by folks at
Pythonlabs. We've talked about all this stuff before, but the
question now is: is it better to jump in sooner rather than later?
There are some other short-term gains we can make by splitting the
qfiles directory into three queues: incoming, outgoing, and bounces.
We've talked about that before too. The advantage is that a list's
database does not need to be touched while a message sits in outgoing
-- except to handle SMTP errors, but I think we can hack around that.
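Here's a hypothetical sketch of that split, just to pin the idea down.
The qfiles/ layout, the pickle-per-message format, and the hash-based
filenames are assumptions for illustration, not Mailman's actual
on-disk format:

    import hashlib
    import os
    import pickle
    import time

    QUEUE_DIR = '/var/mailman/qfiles'
    INCOMING = os.path.join(QUEUE_DIR, 'incoming')
    OUTGOING = os.path.join(QUEUE_DIR, 'outgoing')
    BOUNCES = os.path.join(QUEUE_DIR, 'bounces')

    def enqueue(queue, msgtext, metadata):
        """Write one message (plus routing metadata) as a single file."""
        os.makedirs(queue, exist_ok=True)
        # Hash the message text plus the arrival time so filenames are
        # unique and effectively random -- which is also what makes the
        # hash-space splitting idea further down possible.
        digest = hashlib.sha1(
            msgtext.encode() + repr(time.time()).encode()).hexdigest()
        path = os.path.join(queue, digest + '.pck')
        tmp = path + '.tmp'
        with open(tmp, 'wb') as fp:
            pickle.dump((msgtext, metadata), fp)
        # rename() is atomic, so a reader never sees a half-written file.
        os.rename(tmp, path)
        return path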
So the idea is that a message is received by Mailman and goes into
incoming. It flows through the pipeline to get prepared for delivery,
and then goes into outgoing. From there, qrunner (probably a separate
outgoing-qrunner) simply moves messages from outgoing to the smtpd.
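Under the same assumptions as the sketch above, a dedicated outgoing
qrunner could be as dumb as this -- note that it never has to open the
list's database at all. The SMTP-error case is hand-waved here by just
leaving the file in place for a later pass to retry:

    import os
    import pickle
    import smtplib

    def run_outgoing_once(queue=OUTGOING, smtp_host='localhost'):
        """One pass over the outgoing queue: deliver and delete."""
        server = smtplib.SMTP(smtp_host)
        try:
            for name in os.listdir(queue):
                if not name.endswith('.pck'):
                    continue
                path = os.path.join(queue, name)
                with open(path, 'rb') as fp:
                    msgtext, metadata = pickle.load(fp)
                try:
                    server.sendmail(metadata['sender'],
                                    metadata['recips'],
                                    msgtext)
                except smtplib.SMTPException:
                    # The smtp-error wrinkle from above: leave the file
                    # alone and let a later pass retry it.
                    continue
                os.unlink(path)
        finally:
            server.quit()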
We'd have to handle collisions between multiple qrunner processes,
potentially on separate machines. One way that doesn't involve locking
shenanigans is to divide up the hash space and assign a segment to each
out-qrunner process. Since messages get hashed into that space
essentially at random, the load should spread fairly evenly across
multiple out-qrunners. You could also resize the segments based on
relative performance, NFS delays, and other factors.
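A minimal sketch of that hash-space split, reusing the hypothetical
filename convention from the enqueue() sketch above: each out-qrunner
is started as "slice i of n" and only touches files whose digest falls
inside its own segment, so no locking between qrunners is needed.

    import os

    def my_files(queue, slice_num, num_slices):
        """Yield only the queue entries assigned to this qrunner."""
        # SHA-1 digests are 160 bits; carve [0, 2**160) into contiguous
        # segments, one per qrunner. Unequal boundaries would give the
        # performance-based weighting mentioned above.
        lower = slice_num * 2**160 // num_slices
        upper = (slice_num + 1) * 2**160 // num_slices
        for name in os.listdir(queue):
            if not name.endswith('.pck'):
                continue
            digest = int(name[:-len('.pck')], 16)
            if lower <= digest < upper:
                yield os.path.join(queue, name)

So the third of four out-qrunners would iterate over
my_files(OUTGOING, 2, 4), and two qrunners can never grab the same
file.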
-Barry