Load-balancing mailman between two servers

Hi there,
I run Mailman 2.1.9 on two load-balanced RHEL3 servers. The load balancer is an LVS director (http://www.linuxvirtualserver.org), a layer 4 software load balancer. The load balancer balances requests to sendmail on the two RHEL3 servers, and to apache running the Mailman web interface. Because of this, there's no guarantee that a user will hit the Mailman web interface on the same server as the one that has received an email for a Mailman list. I do this load balancing for redundancy, not for load; this way, I can bring down one of the two servers at any time and the Mailman service is still maintained.
By the way, I've been doing this since Mailman 2.1.5, and since then I've had the archives, data and lists directories on an NFS-shared disk, like this:
[root@slap mailman-prod]# pwd
/usr/local/inst/mailman-2.1.9
[root@slap mailman-prod]# ls -al
total 72
drwxrwsr-x  18 mailman mailman 4096 Oct  6 14:26 .
drwxr-xr-x  16 root    root    4096 Oct  9 10:26 ..
lrwxrwxrwx   1 root    mailman   35 Oct  6 14:25 archives -> /var/mailman-share/mailman-archives
drwxrwsr-x   2 root    mailman 4096 Oct  6 14:28 bin
drwxrwsr-x   2 root    mailman 4096 Oct  6 14:28 cgi-bin
drwxrwsr-x   2 root    mailman 4096 Oct  9 09:00 cron
lrwxrwxrwx   1 root    mailman   31 Oct  6 14:25 data -> /var/mailman-share/mailman-data
drwxrwsr-x   2 root    mailman 4096 Oct  6 14:27 icons
lrwxrwxrwx   1 root    mailman   32 Oct  6 14:25 lists -> /var/mailman-share/mailman-lists
drwxrwsr-x   2 root    mailman 4096 Aug 13  2004 local-data
drwxrwsr-x   2 root    mailman 4096 Nov 23 12:04 locks
drwxrwsr-x   2 root    mailman 4096 Sep  9  2005 logs
drwxrwsr-x   2 root    mailman 4096 Oct  6 14:28 mail
drwxrwsr-x  11 root    mailman 4096 Oct  6 14:27 Mailman
drwxrwsr-x  34 root    mailman 4096 Oct  6 14:28 messages
drwxrwsr-x   6 root    mailman 4096 Jun 29  2004 pythonlib
drwxrwsr-x  11 root    mailman 4096 Jun 30  2004 qfiles
drwxrwsr-x   2 root    mailman 4096 Oct  6 14:49 scripts
drwxrwsr-x   2 root    mailman 4096 Jun 29  2004 spam
drwxrwsr-x  37 root    mailman 4096 Oct  6 14:28 templates
drwxrwsr-x   4 root    mailman 4096 Oct  6 14:28 tests
I'm wondering whether there are any other directories I should be sharing between the two servers. I'm particularly thinking of the qfiles directory, but in general I guess I should be NFS-sharing any directory containing dynamic data that could be needed by the Mailman cron jobs (which run on only one of the two servers) or by actions arising from the web interface.
Can anyone suggest which directories I should be NFS-sharing?
Thanks, Guy.
--
Guy Waugh
Unix System Administrator
IT&TS, Southern Cross University
Lismore, NSW, Australia
Email: gwaugh@scu.edu.au
Ph.: +61 2 6620 3196  Fax: +61 2 6620 3033

Guy Waugh wrote:
> ...
> Can anyone suggest which directories I should be NFS-sharing?
How/where do you share the incoming mail list aliases that sendmail checks?
Also, when you create a new list, how do you update the other hosts' aliases?
cheers,
Kim
Operating Systems, Services and Operations
Information Technology Services, The University of Adelaide
kim.hawtin@adelaide.edu.au

Hi Kim, list,
Kim Hawtin wrote:
> Guy Waugh wrote:
>> ...
>> Can anyone suggest which directories I should be NFS-sharing?
>
> How/where do you share the incoming mail list aliases that sendmail checks?
> Also, when you create a new list, how do you update the other hosts' aliases?
On each server, in the sendmail aliases file. So, when adding or removing a list, I have to do the alias changes on each of the two servers.
One thing I am a bit concerned about is contention on the shared files between the two servers. If, for example, both servers wanted to update the same .pck file at the same time, I'm not sure what would happen... would one server lock the file and the other server wait for the lock to be released, does anyone know? Or would chaos ensue?
Cheers, Guy.
--
Guy Waugh
Unix System Administrator
IT&TS, Southern Cross University
Lismore, NSW, Australia
Email: gwaugh@scu.edu.au
Ph.: +61 2 6620 3196  Fax: +61 2 6620 3033

At 3:03 PM +1100 11/27/06, Guy Waugh quoted Kim Hawtin:
>> How/where do you share the incoming mail list aliases that sendmail checks?
>> Also, when you create a new list, how do you update the other hosts' aliases?
>
> On each server, in the sendmail aliases file. So, when adding or removing a list, I have to do the alias changes on each of the two servers.
Hmm. With Postfix, you can specify multiple alias files, some of which will get auto-rebuilt as necessary by Postfix, and others that can be rebuilt by other processes (such as Mailman, with the standard tools it provides).
It's been a while since I mucked around with sendmail, but I have to believe that the same is possible there. Indeed, I believe that the technique that is currently used to completely automate this process with postfix was adapted from the technique that previously worked only with sendmail.
You could put that second set of aliases in the NFS-shared space, so that you wouldn't have to maintain it separately on the two boxes.
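Concretely, that arrangement boils down to a couple of mm_cfg.py settings in Mailman 2.1. A minimal sketch, assuming the stock /usr/local/mailman prefix; the POSTFIX_ALIAS_CMD value is just the shipped default, shown for illustration:

# mm_cfg.py -- with MTA set to 'Postfix', bin/newlist and bin/rmlist
# maintain data/aliases themselves and run postalias to rebuild
# data/aliases.db, so nothing has to be edited by hand per list.
MTA = 'Postfix'

# Shipped default; override only if postalias lives somewhere unusual.
POSTFIX_ALIAS_CMD = '/usr/sbin/postalias'

# Postfix itself then needs the extra map in main.cf (shown as a comment,
# since main.cf is not Python; path assumes the default prefix):
#
#   alias_maps = hash:/etc/aliases, hash:/usr/local/mailman/data/aliases

Since data/ is already on the NFS share in the layout described earlier in the thread, both boxes would see the same data/aliases and data/aliases.db.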
> One thing I am a bit concerned about is contention on the shared files between the two servers. If, for example, both servers wanted to update the same .pck file at the same time, I'm not sure what would happen... would one server lock the file and the other server wait for the lock to be released, does anyone know? Or would chaos ensue?
The developers of Mailman have gone to great lengths to make the system as NFS-proof as possible. That is to say, it should be possible to share the entire /usr/local/mailman structure via NFS (or wherever you put all the Mailman files, including archives, queue directories, etc...). It should "just work".
That said, file locking on NFS is problematic during the very best of times, and I believe that the Mailman developers have gone to great lengths to try to work around that. But, by putting that stuff on NFS, you are increasing the chances that you'll run into a situation where you get lock contention, or maybe stale locks that will need to be cleared out. You will increase the amount of system maintenance that you have to do -- that's simply unavoidable.
And a very great deal depends on your NFS server. Putting all this stuff on NFS can cut the throughput you can handle by a great deal -- orders of magnitude or more, and that's if you've got a high-end, mega-expensive dedicated NFS fileserver from the likes of EMC, Auspex, or Network Appliance. Just keep this in mind as you plan for redundancy.
I'm not saying that you can't put things on NFS, or that it's even "unwise". I am saying that everything is a set of trade-offs, and you've got to understand what it is that you're trading for what.
-- Brad Knowles, <brad@shub-internet.org>
Trend Micro has announced that they will cancel the stop.mail-abuse.org mail forwarding service as of 15 November 2006. If you have an old e-mail account for me at this domain, please make sure you correct that with the current address.

Brad Knowles wrote:
> ...
> You could put that second set of aliases in the NFS-shared space, so that you wouldn't have to maintain it separately on the two boxes.

Yeah, thanks... given I've only got two servers, and we don't create/remove a lot of lists, I've never bothered to think about this 8-)

> ...
> I'm not saying that you can't put things on NFS, or that it's even "unwise". I am saying that everything is a set of trade-offs, and you've got to understand what it is that you're trading for what.
OK, thanks Brad...
I'm still wondering whether I should be NFS-sharing the qfiles directory. I haven't delved into the Mailman source code to try to figure this out, but...
If, for example, a list post is held for (say) moderation in one server's qfiles directory (if this is in fact where held posts are kept?), and a list administrator accesses the Mailman web interface on the other server and approves the post, will the second (web interface) server be able to find the held post on the first server? I can't imagine that it would be able to find the post, if it is the case that posts held for moderation (and for other reasons, like posts by non-members to a member-only list etc. etc.) are held in the qfiles directory.
Maybe I should change my setup like you outline above, such that the *entire* Mailman installation is NFS-shared between both servers? The only anomaly to be dealt with here, AFAIK, is that Mailman writes a master-qrunner.pid file to the data directory, but I can get around that. I'd really prefer to leave it roughly the way I already have it, as I can upgrade Mailman on one server and test it while the old version is still running on the other server.
Thanks, Guy.
--
Guy Waugh
Unix System Administrator
IT&TS, Southern Cross University
Lismore, NSW, Australia
Email: gwaugh@scu.edu.au
Ph.: +61 2 6620 3196  Fax: +61 2 6620 3033
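One way around the master-qrunner.pid clash Guy mentions, if data/ stays on the shared disk, is to point the pid file somewhere host-local from mm_cfg.py. A sketch, assuming Defaults.py in 2.1.9 derives the location via a PIDFILE setting (worth checking against your own Defaults.py) and using /var/run/mailman purely as an example path:

import os

# Hypothetical mm_cfg.py override on each host; verify the setting name
# (PIDFILE) against your Defaults.py before relying on this.
PIDFILE = os.path.join('/var/run/mailman', 'master-qrunner.pid')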

On Nov 28, 2006, at 6:08 PM, Guy Waugh wrote:
> I'm still wondering whether I should be NFS-sharing the qfiles directory. I haven't delved into the Mailman source code to try to figure this out, but...
You should be able to NFS share the qfiles directory, but you want to be careful about how you set up your qrunners. However, this probably won't help you with what I think you really want to do (IIUC), which is load balance the web interface.
First, pending messages are not kept in qfiles -- that's only for messages that are being processed by the mail delivery subsystem. A message that's waiting for moderation will get dequeued until it's approved, at which point it will be re-queued into the appropriate qfile directory.
Access to the "databases" which manage these pending files is all protected by Mailman's lockfile implementation, which has had a long, stable history and a high probability of being NFS-safe, modulo bugs in specific NFS implementations of course. So as long as your web requests can be completed within the lock timeouts, you should be able to load-balance admindb management across multiple web servers. Of course, while one server is accessing a list, no other processing for that list will occur on any other machine, as those other machines wait for the first machine's list lock to be released. However, processing involving other lists can still occur, as can outgoing mail delivery, which does not need to acquire a list lock.
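To make that serialization concrete, here is a minimal sketch of the standard withlist-style locking pattern in Mailman 2.1. The list name 'announce' and the 30-second timeout are made-up values, and the script assumes it is run from Mailman's bin/ directory so that "import paths" can find the installation:

import paths                          # Mailman's bin/paths.py sets sys.path
from Mailman import MailList
from Mailman.LockFile import TimeOutError

def touch(listname):
    mlist = MailList.MailList(listname, lock=0)   # open unlocked
    try:
        # Blocks until every other process -- on this host or any other
        # NFS client -- drops the list lock, or until the timeout expires.
        mlist.Lock(timeout=30)
    except TimeOutError:
        print 'could not acquire the lock for', listname
        return
    try:
        mlist.description = 'updated under the list lock'
        mlist.Save()                  # rewrite config.pck while locked
    finally:
        mlist.Unlock()                # always release the lock

if __name__ == '__main__':
    touch('announce')

Any other process that calls Lock() on the same list, locally or on the other NFS client, will sit inside that call until Unlock() runs or its own timeout fires.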
The story with qfiles is this: every qfile lives in its own little slice of SHA1 hash space, and each hash slice is (supposed to be) owned by exactly one qrunner process. This allows the qrunner to process the messages in its hash slice without having to deal with pesky locks, which slow things down as contentions are serialized (a good thing when dealing with databases, a bad thing when you're trying to churn out a stream of messages). Thus, if you're looking to load balance qfile directory processing, you can still do that if you assign each qrunner process on each machine a unique slice of the hash space -- it must be unique across all machines. IOW, machine 1 could handle the odd slices of qfiles/in while machine 2 could handle the even slices. Or you could have 8 qrunners on each machine and slice up qfiles/in 16 ways (the implementation requires a factor of 2 in the number of hash slices).
Of course, if machine 1 went down, all the messages in its hash slices would sit unprocessed, but it would be a fairly simple matter to reconfigure machine 2 to handle machine 1's slices, or to bring up a fallback machine to handle those slices in the meantime.
That's the intent anyway <wink>. I hope this makes sense and helps you better plan your operational environment.
-Barry
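A sketch of how the slicing Barry describes maps onto Mailman 2.1 configuration. The QRUNNERS list is the stock Defaults.py/mm_cfg.py setting; the per-host commands in the trailing comment are an assumption about how one might wire this up, since mailmanctl always starts every slice on the local machine and bin/qrunner describes itself as primarily a debugging tool:

# mm_cfg.py -- QRUNNERS is a list of (runner, count) pairs; each of the
# `count` processes owns 1/count of the hash space.  Bumping IncomingRunner
# to 2 gives two slices of qfiles/in that could, in principle, be split
# between the two hosts.
QRUNNERS = [
    ('ArchRunner', 1),
    ('BounceRunner', 1),
    ('CommandRunner', 1),
    ('IncomingRunner', 2),      # two slices of qfiles/in
    ('NewsRunner', 1),
    ('OutgoingRunner', 1),
    ('VirginRunner', 1),
    ('RetryRunner', 1),
    ]

# Splitting the slices across hosts would need hand-started runners (or a
# patched mailmanctl); with bin/qrunner's runner:slice:range syntax it
# would look roughly like:
#
#   host1$ bin/qrunner --runner=IncomingRunner:0:2
#   host2$ bin/qrunner --runner=IncomingRunner:1:2
#
# If host1 dies, host2 could temporarily run slice 0:2 as well.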

Barry Warsaw wrote:
> That's the intent anyway <wink>. I hope this makes sense and helps you better plan your operational environment.

Great, thanks Barry...
So, it sounds like all information about pending posts etc. is held in the databases in the directories I'm already NFS-sharing (i.e. archives, data and lists), and the qfiles directory is specific to the qrunner running on that machine, so I don't need to worry about NFS-sharing any of the other Mailman directories.
Yeehoo!
Thanks again, Guy.
--
Guy Waugh
Unix System Administrator
IT&TS, Southern Cross University
Lismore, NSW, Australia
Email: gwaugh@scu.edu.au
Ph.: +61 2 6620 3196  Fax: +61 2 6620 3033

At 6:30 PM -0500 11/28/06, Barry Warsaw wrote:
> Of course, if machine 1 went down, all the messages in its hash slices would sit unprocessed, but it would be a fairly simple matter to reconfigure machine 2 to handle machine 1's slices, or to bring up a fallback machine to handle those slices in the meantime.
Ahh, okay. Cool. I knew that there was a hashing scheme, but I had thought the intent was to use that for allowing multiple queue runners for each queue, on a single machine. I wasn't aware that the same mechanism would be used for splitting the queues across servers via NFS -- allowing you to avoid the locking problems I mentioned earlier.
Cool.
-- Brad Knowles, <brad@shub-internet.org>

Is this in the FAQ anywhere?
On Tue, 28 Nov 2006, Brad Knowles wrote:
> Ahh, okay. Cool. I knew that there was a hashing scheme, but I had thought the intent was to use that for allowing multiple queue runners for each queue, on a single machine. I wasn't aware that the same mechanism would be used for splitting the queues across servers via NFS -- allowing you to avoid the locking problems I mentioned earlier.

At 6:10 PM -0600 11/28/06, Gadi Evron wrote:
> Is this in the FAQ anywhere?
Searching the FAQ wizard for "NFS" doesn't turn up any hits, so I would venture a guess to say that this issue is not addressed anywhere. I'll summarize Barry's response and put in a link.
-- Brad Knowles, <brad@shub-internet.org>

On Nov 28, 2006, at 7:14 PM, Brad Knowles wrote:
> At 6:10 PM -0600 11/28/06, Gadi Evron wrote:
>> Is this in the FAQ anywhere?
>
> Searching the FAQ wizard for "NFS" doesn't turn up any hits, so I would venture a guess to say that this issue is not addressed anywhere. I'll summarize Barry's response and put in a link.

Thanks Brad! Oh and please do let us know how the NFS arrangement works for y'all. Some of this will change in the next release in that I'm hoping we'll be able to remove the quotes around "database" in my previous message. :) The idea being that if you wanted to, you'd stick most, if not all, the stuff currently being protected by a list lock into A Real Client/Server Database.
-Barry

At 6:14 PM -0600 11/28/06, Brad Knowles quoted Gadi Evron:
>> Is this in the FAQ anywhere?
>
> Searching the FAQ wizard for "NFS" doesn't turn up any hits, so I would venture a guess to say that this issue is not addressed anywhere. I'll summarize Barry's response and put in a link.
Okay, FAQ 4.75 has been created for this subject, see <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.075.htp>.
I would appreciate any feedback you may have.
-- Brad Knowles, <brad@shub-internet.org>

At 10:08 AM +1100 11/29/06, Guy Waugh wrote:
> I'm still wondering whether I should be NFS-sharing the qfiles directory. I haven't delved into the Mailman source code to try to figure this out, but...
In theory, it should work. I wouldn't do it myself, because of the contention and locking issues, but it should work.
> If, for example, a list post is held for (say) moderation in one server's qfiles directory (if this is in fact where held posts are kept?), and a list administrator accesses the Mailman web interface on the other server and approves the post, will the second (web interface) server be able to find the held post on the first server? I can't imagine that it would be able to find the post, if it is the case that posts held for moderation (and for other reasons, like posts by non-members to a member-only list etc. etc.) are held in the qfiles directory.
I believe that you are correct -- if the post is held on only one server, and you happen to log into the other server to approve the post, then the second machine would not see that post to approve it.
> Maybe I should change my setup like you outline above, such that the *entire* Mailman installation is NFS-shared between both servers?
My understanding is that it should work, modulo the additional problems caused by putting things like shared queues on NFS (e.g., file contention, locking, etc...).
> The only anomaly to be dealt with here, AFAIK, is that Mailman writes a master-qrunner.pid file to the data directory, but I can get around that. I'd really prefer to leave it roughly the way I already have it, as I can upgrade Mailman on one server and test it while the old version is still running on the other server.
To be honest, I've never done it myself, and we don't use it on python.org (the home of the mailman-* lists).
So, I can't tell you how well it will or will not work, or what the precise little quirks will be. I can tell you about the typical types of issues that I know about in general with NFS, and how I would expect those to apply with this type of usage.
That said, I believe that there are some people on the list who are doing this sort of thing (I mean, this topic has come up a few times before), and I believe that most of them are doing the simple thing of just sharing the entire /usr/local/mailman/ hierarchy between the target systems.
-- Brad Knowles, <brad@shub-internet.org>

On Nov 28, 2006, at 6:32 PM, Brad Knowles wrote:
> I believe that you are correct -- if the post is held on only one server, and you happen to log into the other server to approve the post, then the second machine would not see that post to approve it.

Brad thanks, that reminds me of one other thing! Say you and I both hit the admindb page at the same time and we each get the same list of all pendings, but you're quicker on the draw than I am in submitting your approvals, discards, etc. When I click submit, any message you didn't defer will be gone by the time my web request is processed. Mailman should be robust enough (now ;) to just ignore those and move on.
This isn't really any different when you're load balancing though.
-Barry

Hello,
On Mon, 27 Nov 2006, Brad Knowles wrote:
> ...
> It's been a while since I mucked around with sendmail, but I have to believe that the same is possible there. Indeed, I believe that the technique that is currently used to completely automate this process with postfix was adapted from the technique that previously worked only with sendmail.

I'm doing this with Sendmail - you can add something like the following to your .mc file (e.g. sendmail.mc):

define(`ALIAS_FILE', `/etc/mail/aliases,/mail/mailman/sendmail/sendmail-aliases')dnl
The first file mentioned (/etc/mail/aliases) is typically where sendmail looks for its aliases. The second (/mail/mailman/sendmail/sendmail-aliases) lives on shared storage (your NFS server). When you run newaliases, or sendmail -bi (same thing), on either machine, the shared sendmail-aliases.db file will be rebuilt.
You would of course have to move your mailman aliases to the other aliases file, and change your procedure for creating new lists so that the new aliases also end up in the new aliases file.
I believe you're running RHEL? If you're wanting to give this a try with sendmail, you need to have the sendmail-cf RPM installed. Then you can back up sendmail.mc and sendmail.cf, change sendmail.mc, and run make sendmail.cf to recreate the .cf file.
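A rough sketch of a helper along those lines -- regenerate the shared alias file from whatever lists currently exist, then run newaliases on either host. The ten alias suffixes and the mail/mailman wrapper follow the stock Mailman 2.1 layout; the output path, the "import paths" trick (run it from Mailman's bin/ directory) and the final newaliases step are assumptions to adapt:

# Hypothetical helper: rebuild the shared sendmail alias file from the
# lists that exist right now.  Run it after bin/newlist or bin/rmlist,
# then run newaliases / sendmail -bi on either host to rebuild the .db.
import os
import paths                    # Mailman's bin/paths.py sets sys.path
from Mailman import mm_cfg, Utils

# Assumption: the second file named in the ALIAS_FILE define above.
SHARED_ALIASES = '/mail/mailman/sendmail/sendmail-aliases'

# The standard Mailman 2.1 per-list aliases, all piped to the mail wrapper.
SUFFIXES = ('', '-admin', '-bounces', '-confirm', '-join', '-leave',
            '-owner', '-request', '-subscribe', '-unsubscribe')
# Adjust if your wrapper was installed under a different exec-prefix.
WRAPPER = os.path.join(mm_cfg.PREFIX, 'mail', 'mailman')

fp = open(SHARED_ALIASES + '.tmp', 'w')
names = Utils.list_names()
names.sort()
for listname in names:
    for suffix in SUFFIXES:
        if suffix:
            command = suffix[1:]        # admin, bounces, owner, ...
        else:
            command = 'post'            # the bare list address
        # e.g.  mylist-owner: "|/usr/local/mailman/mail/mailman owner mylist"
        fp.write('%s%s: "|%s %s %s"\n'
                 % (listname, suffix, WRAPPER, command, listname))
fp.close()
os.rename(SHARED_ALIASES + '.tmp', SHARED_ALIASES)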
Thanks,
Ivan.
Participants (6): Barry Warsaw, Brad Knowles, Gadi Evron, Guy Waugh, Ivan Fetch, Kim Hawtin