I sometimes get irritating "cannot unlink ..." error messages from queue processing, so I suggest this modification of the dequeue function:
def dequeueMessage(msg):
    import os
    try:
        os.unlink(msg)
    except:
        from Logging.StampedLogger import StampedLogger
        l = StampedLogger("queue", "DequeueMessage", immediate=1)
        l.write("Cannot remove:\t %s\n" % msg)
        l.flush()
I currently don't know why this error happens; I am still investigating.
cheers dan
________________________________________
DDDDDD
DD DD Dan Ohnesorg, supervisor on POWER
DD OOOO Dan@feld.cvut.cz
DD OODDOO Dep. of Power Engineering
DDDDDD OO CTU FEL Prague, Bohemia
OO OO work: +420 2 24352785;+420 2 24972109
OOOO home: +420 311 679679;+420 311 679311
________________________________________
A computer differs from a TV viewer in that it has its own program.
On Fri, 25 Sep 1998, Dan Ohnesorg, admin of POWER wrote:
> I sometimes get irritating "cannot unlink ..." error messages from queue
> processing, so I suggest this modification of the dequeue function:
>
> def dequeueMessage(msg):
>     import os
>     try:
>         os.unlink(msg)
>     except:
>         from Logging.StampedLogger import StampedLogger
>         l = StampedLogger("queue", "DequeueMessage", immediate=1)
>         l.write("Cannot remove:\t %s\n" % msg)
>         l.flush()
>
> I currently don't know why this error happens; I am still investigating.
Oh, I know why this happens. It's related to the duplicate
delivery problem. Another process has delivered the queued message out from under the first one (and deleted the queue file, which is why the first process cannot delete the message).
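The race described above is easy to reproduce in miniature: once one process has unlinked a queue entry, a second unlink of the same path raises an error. A minimal sketch in modern Python (the temporary file is just a stand-in for a queue entry):

```python
import os
import tempfile

def dequeue(path):
    """Try to remove a queue entry; report whether this process won the race."""
    try:
        os.unlink(path)
        return True       # we removed the file
    except OSError:
        return False      # another process got there first

# demo: one queue entry, dequeued twice -- the second attempt finds it gone
entry = tempfile.NamedTemporaryFile(delete=False).name
first = dequeue(entry)
second = dequeue(entry)
```

The second call is exactly the "cannot unlink" case: the file was deleted out from under it.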
I've actually fixed this problem, by the simple expedient of
making all of the Mailman programs simply queue, and not try to deliver, outgoing messages. Then I have a modified run_queue program that runs continuously, dequeueing and delivering messages. It has worked great so far. If anyone's interested in the diffs, let me know. (I mentioned this when I first did this, about a month ago, but no one seemed interested then.)
-The Dragon De Monsyne
On 25 Sep 98, at 16:16, The Dragon De Monsyne wrote:
> On Fri, 25 Sep 1998, Dan Ohnesorg, admin of POWER wrote:
>
> > I sometimes get irritating "cannot unlink ..." error messages from queue
> > processing, so I suggest this modification of the dequeue function:
> >
> > def dequeueMessage(msg):
> >     import os
> >     try:
> >         os.unlink(msg)
> >     except:
> >         from Logging.StampedLogger import StampedLogger
> >         l = StampedLogger("queue", "DequeueMessage", immediate=1)
> >         l.write("Cannot remove:\t %s\n" % msg)
> >         l.flush()
> >
> > I currently don't know why this error happens; I am still investigating.
>
> Oh, I know why this happens. It's related to the duplicate delivery
> problem. Another process has delivered the queued message out from under
> the first one (and deleted the queue file, which is why the first process
> cannot delete the message).
Yes, that probably explains it, but I think some of the messages affected by this error aren't sent in duplicate.
> I've actually fixed this problem, by the simple expedient of making all of
> the Mailman programs simply queue, and not try to deliver, outgoing
> messages. Then I have a modified run_queue program that runs continuously,
> dequeueing and delivering messages. It has worked great so far. If
> anyone's interested in the diffs, let me know. (I mentioned this when I
> first did this, about a month ago, but no one seemed interested then.)
Send me this, please, but I am working on another suggestion. I would do it like this: when one process begins delivery, it creates a lock file mm_l.1 and so on. Locked files are skipped by other processes. A successful delivery removes both the lock and data files; an unsuccessful one removes only the lock file. The danger in my solution is that when Mailman dies, the file stays locked forever (though I can probably delete lock files older than X from crond).
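The scheme just described can be sketched in modern Python. This is an illustration, not Mailman code: the helper names and the 15-minute stale-lock age are assumptions (the "X" above is unspecified). Creating the lock with O_CREAT|O_EXCL makes taking it atomic, so two processes cannot both win:

```python
import os
import tempfile
import time

LOCK_MAX_AGE = 15 * 60   # assumed value for "X"; tune to taste

def lock_path(queue_entry):
    # mm_q.NNN -> mm_l.NNN, mirroring the naming scheme in the post
    head, tail = os.path.split(queue_entry)
    return os.path.join(head, tail.replace('mm_q.', 'mm_l.', 1))

def try_lock(queue_entry):
    """Atomically create the lock file; return True if we now hold it."""
    lock = lock_path(queue_entry)
    try:
        # O_CREAT|O_EXCL guarantees exactly one process wins the race
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        # stale-lock reaping: if Mailman died, remove locks older than X
        try:
            if time.time() - os.path.getmtime(lock) > LOCK_MAX_AGE:
                os.unlink(lock)
                return try_lock(queue_entry)
        except OSError:
            pass             # lock vanished between the two checks
        return False
    os.write(fd, b"%d" % os.getpid())
    os.close(fd)
    return True

def unlock(queue_entry, delivered):
    # success removes both the lock and the data file; failure only the lock
    if delivered:
        os.unlink(queue_entry)
    os.unlink(lock_path(queue_entry))

# demo: one queue entry, two contenders
qdir = tempfile.mkdtemp()
entry = os.path.join(qdir, 'mm_q.1')
open(entry, 'w').close()
got_first = try_lock(entry)     # wins and writes its pid into the lock
got_second = try_lock(entry)    # skips: lock exists and is fresh
unlock(entry, delivered=True)   # removes both files
```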
Any opinions on this?
cheers dan
________________________________________
DDDDDD
DD DD Dan Ohnesorg, supervisor on POWER
DD OOOO Dan@feld.cvut.cz
DD OODDOO Dep. of Power Engineering
DDDDDD OO CTU FEL Prague, Bohemia
OO OO work: +420 2 24352785;+420 2 24972109
OOOO home: +420 311 679679;+420 311 679311
________________________________________
A pessimist sees only the holes in the Emmental cheese.
I personally like the idea of locking the files more than the idea of a daemon delivering queued mail, because there will always be the worry that something accidentally kills the daemon process while no admin is around.
Lock timeouts sound like a good solution and are already available in the file locking module.
scott
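Mailman's own file locking module isn't shown in this thread; as a generic illustration of a lock with a timeout, the sketch below (modern Python, Unix-only, names are illustrative) polls a non-blocking flock until a deadline passes. A kernel flock also sidesteps the stale-lock worry, because the kernel releases it automatically when the holding process dies:

```python
import fcntl
import tempfile
import time

def acquire_with_timeout(path, timeout=10.0, poll=0.1):
    """Poll a non-blocking flock until it succeeds or the deadline passes.
    Returns the open file holding the lock, or None on timeout."""
    f = open(path, 'a')
    deadline = time.time() + timeout
    while True:
        try:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return f          # lock is held until f is closed (or we die)
        except BlockingIOError:
            if time.time() >= deadline:
                f.close()
                return None
            time.sleep(poll)

# demo: while one handle holds the lock, a second contender times out
lockfile = tempfile.NamedTemporaryFile(delete=False).name
holder = acquire_with_timeout(lockfile, timeout=1.0)
loser = acquire_with_timeout(lockfile, timeout=0.3)
```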
On Sat, Sep 26, 1998 at 09:38:30PM +0200, Dan Ohnesorg, admin of POWER wrote:
| On 25 Sep 98, at 16:16, The Dragon De Monsyne wrote:
|
| > On Fri, 25 Sep 1998, Dan Ohnesorg, admin of POWER wrote:
| >
| > > I sometimes get irritating "cannot unlink ..." error messages from
| > > queue processing, so I suggest this modification of the dequeue
| > > function:
| > >
| > > def dequeueMessage(msg):
| > > import os
| > > try:
| > > os.unlink(msg)
| > > except:
| > > from Logging.StampedLogger import StampedLogger
| > > l = StampedLogger("queue", "DequeueMessage", immediate=1)
| > > l.write("Cannot remove:\t %s\n" % msg)
| > > l.flush()
| > >
| > > I currently don't know why this error happens; I am still investigating.
| >
| > Oh, I know why this happens. It's related to the duplicate
| > delivery problem. Another process has delivered the queued message out
| > from under the first one. (and deleted the queue file, which is why the
| > first process cannot delete the message. )
|
| Yes, that probably explains it, but I think some of the messages
| affected by this error aren't sent in duplicate.
|
|
| >
| > I've actually fixed this problem, by the simple expedient of
| > making all of the Mailman programs simply queue, and not try to deliver,
| > outgoing messages. Then I have a modified run_queue program that runs
| > continuously, dequeueing and delivering messages. It has worked great
| > so far. If anyone's interested in the diffs, let me know. (I mentioned
| > this when I first did this, about a month ago, but no one seemed
| > interested then.)
|
| Send me this, please, but I am working on another suggestion. I
| would do it like this: when one process begins delivery, it creates
| a lock file mm_l.1 and so on. Locked files are skipped by other
| processes. A successful delivery removes both the lock and data
| files; an unsuccessful one removes only the lock file.
| The danger in my solution is that when Mailman dies, the file stays
| locked forever (though I can probably delete lock files older than
| X from crond).
|
| Some opinion to this?
|
| cheers
| dan
On Sat, 26 Sep 1998, Scott wrote:
> I personally like the idea of locking the files more than the idea of a
> daemon delivering queued mail, because there will always be the worry that
> something accidentally kills the daemon process while no admin is around.
This is why I have the daemon check to see if an instance of
itself is already running before starting. If it's already running, it doesn't start another copy of itself. That way you just run the daemon program from cron periodically and it automatically restarts itself if it gets killed. (I haven't had to worry about it since I installed it a month ago.)
I'm including the daemon version of run_queue attached to this
message. (To use it you just eliminate the delivery attempts, i.e. any calls to OutgoingQueue.processQueue, elsewhere in the Mailman code. My own mods to do this are a bit of a hack; if I have time to fix them up to be presentable I'll post a diff.)
-The Dragon De Monsyne
#! /usr/bin/env python
#
# Copyright (C) 1998 by the Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
import sys, os, stat, string, time

import paths
from Mailman import OutgoingQueue
from Mailman import mm_cfg

PIDFILE = os.path.join(mm_cfg.DATA_DIR, "q.running")
STALL_TIME = (60*15)
SLEEP_TIME = 15
def CheckLock():
    """Makes sure only one of these runs at a time."""
    pid = None
    try:
        pid = string.atoi(string.strip(open(PIDFILE, 'r').read()))
    except (IOError, ValueError):
        pass
    if pid:
        # let's see if it's really out there. -ddm
        try:
            os.kill(pid, 0)
            age = time.time() - os.stat(PIDFILE)[stat.ST_MTIME]
            if age > STALL_TIME:
                # it's hanging... Zap it. -ddm
                os.kill(pid, 9)
            else:
                # it's really running.. -ddm
                return 0
        except os.error:
            # Nope it aint. -ddm
            pass
    return 1

def TouchLock():
    "tweak the pid file"
    open(PIDFILE, 'w').write("%i" % os.getpid())

#def Do_DeQ():
#    print "foo"
#    #OutgoingQueue.processQueue()

def main():
    if not CheckLock():
        # print "already running"
        sys.exit()
    TouchLock()
    while 1:
        q = OutgoingQueue.processQueue()
        if not q:
            time.sleep(SLEEP_TIME)
        TouchLock()

if __name__ == '__main__':
    main()
On 26 Sep 98, at 18:02, Scott wrote:
> I personally like the idea of locking the files more than the idea of a
> daemon delivering queued mail, because there will always be the worry that
> something accidentally kills the daemon process while no admin is around.
>
> Lock timeouts sound like a good solution and are already available in the
> file locking module.
So I have made it as follows. I haven't used the flock module, because I think it is not a good fit for this situation; we could use it once the code in Utils and OutgoingQueue is reorganized, but right now it is too spread out. I didn't make a backup before modifying my copy, so I cannot send diffs. My Mailman is very different from the official one because it contains features like DSN, administrivia filtering, character set conversion, and so on, so I cannot make a diff against a clean distribution.
The modification is very short. First, in Utils.py, the following is added at the end of def TrySMTPDelivery(recipient, sender, text, queue_entry):
    if failure:
        from Logging.StampedLogger import StampedLogger
        l = StampedLogger("smtp-failures", "TrySMTPDelivery", immediate=1)
        l.write("To %s:\n" % recipient)
        l.write("\t %s / %s\n" % (failure[0], failure[1]))
        l.flush()
        import os, re
        lock = re.sub('mm_q\.', 'mm_l.', queue_entry)
        os.unlink(lock)
And in OutgoingQueue
def dequeueMessage(msg):
    import os, re
    lock = re.sub('mm_q\.', 'mm_l.', msg)
    try:
        os.unlink(msg)
    except:
        from Logging.StampedLogger import StampedLogger
        l = StampedLogger("queue", "DequeueMessage", immediate=1)
        l.write("Cannot remove:\t %s\n" % msg)
        l.flush()
    os.unlink(lock)

def processQueue():
    import os, re
    files = os.listdir(mm_cfg.DATA_DIR)
    for file in files:
        if TEMPLATE <> file[:len(TEMPLATE)]:
            continue
        full_fname = os.path.join(mm_cfg.DATA_DIR, file)
        lock = re.sub('mm_q\.', 'mm_l.', full_fname)
        if os.path.exists(lock):
            # another process is already delivering this entry
            continue
        l = open(lock, "a+")
        l.write("%d" % os.getpid())
        l.close()
        f = open(full_fname, "r")
        recip, sender, text = marshal.load(f)
        f.close()
        import Utils
        Utils.TrySMTPDelivery(recip, sender, text, full_fname)

________________________________________
DDDDDD
DD DD Dan Ohnesorg, supervisor on POWER
DD OOOO Dan@feld.cvut.cz
DD OODDOO Dep. of Power Engineering
DDDDDD OO CTU FEL Prague, Bohemia
OO OO work: +420 2 24352785;+420 2 24972109
OOOO home: +420 311 679679;+420 311 679311
________________________________________
Don't put off until tomorrow what others can do today.
participants (3):
- Dan Ohnesorg, admin of POWER
- Scott
- The Dragon De Monsyne