[Python-bugs-list] [ python-Bugs-417845 ] Python 2.1: SocketServer.ThreadingMixIn
noreply@sourceforge.net
Wed, 11 Jul 2001 14:22:35 -0700
Bugs item #417845, was opened at 2001-04-21 08:28
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=417845&group_id=5470
Category: Python Library
Group: None
Status: Closed
Resolution: Fixed
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Guido van Rossum (gvanrossum)
Summary: Python 2.1: SocketServer.ThreadingMixIn
Initial Comment:
SocketServer.ThreadingMixIn does not work properly
since it tries to close the socket of a request two
times.
Workaround for using SocketServer.ThreadingMixIn under
Python 2.1:
class MyThreadingHTTPServer(SocketServer.ThreadingMixIn, MyHTTPServer):
    def close_request(self, request):
        pass
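For readers landing here, the failure mode can be sketched without real sockets. Everything below is a hypothetical model of BaseServer's control flow, not the stdlib code itself:

```python
import threading

class FakeRequest:
    """Illustrative stand-in for a request socket; fd becomes -1 once closed."""
    def __init__(self):
        self.fd = 3

    def close(self):
        self.fd = -1

class RacyServer:
    """Mimics BaseServer.handle_request: dispatch, then close immediately."""
    def __init__(self):
        self.seen_fd = None
        self.closed = threading.Event()

    def handle_request(self, request):
        # ThreadingMixIn.process_request just spins a thread and returns...
        worker = threading.Thread(target=self.finish_request, args=(request,))
        worker.start()
        # ...so the server loop closes the request while the thread still runs.
        request.close()
        self.closed.set()
        worker.join()

    def finish_request(self, request):
        self.closed.wait()         # force the unlucky interleaving
        self.seen_fd = request.fd  # the handler sees an already-closed socket

server = RacyServer()
server.handle_request(FakeRequest())
print(server.seen_fd)  # -1
```

The Event only forces the bad interleaving deterministically; in the real server it is a timing race.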
----------------------------------------------------------------------
>Comment By: Guido van Rossum (gvanrossum)
Date: 2001-07-11 14:22
Message:
Logged In: YES
user_id=6380
Gustavo, yes, it's fixed in CVS (just read on and you'll see
my announcement). We're going to release a bugfix release
for Python 2.1, Python 2.1.1, soon, and it will be in
that release too.
----------------------------------------------------------------------
Comment By: Gustavo Niemeyer (niemeyer)
Date: 2001-07-11 14:19
Message:
Logged In: YES
user_id=7887
Just wondering, is this bug fixed somewhere (CVS, etc.)?
Python 2.1 + SOAP.py seem to have this problem, when using
the ThreadingTCPServer.
----------------------------------------------------------------------
Comment By: Guido van Rossum (gvanrossum)
Date: 2001-07-10 06:59
Message:
Logged In: YES
user_id=6380
Yes, the thread is responsible for calling
self.close_request() in self.finish_request(). (Although if
you just rely on reference counting, it should go away all
by itself too.)
I don't read the newsgroup frequently, but you can try email
if you have more problems.
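A rough sketch of the shape Guido describes, with the close moved into the worker thread. The classes here are illustrative stand-ins, not the actual CVS code:

```python
import threading

class FakeRequest:
    """Illustrative stand-in for a request socket."""
    def __init__(self):
        self.fd = 3
        self.handled_fd = None

    def close(self):
        self.fd = -1

class FixedServer:
    """The worker thread, not the server loop, closes the request."""
    def process_request(self, request):
        worker = threading.Thread(target=self.process_request_thread,
                                  args=(request,))
        worker.start()
        worker.join()  # joined only to keep this sketch deterministic

    def process_request_thread(self, request):
        try:
            self.finish_request(request)
        finally:
            self.close_request(request)  # closed by the thread that used it

    def finish_request(self, request):
        request.handled_fd = request.fd  # the handler sees a live socket

    def close_request(self, request):
        request.close()

req = FakeRequest()
FixedServer().process_request(req)
print(req.handled_fd, req.fd)  # 3 -1
```

The handler runs against an open socket, and the socket is still closed exactly once when the thread finishes.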
----------------------------------------------------------------------
Comment By: Nobody/Anonymous (nobody)
Date: 2001-07-10 05:47
Message:
Logged In: NO
I tried the fix and it seems to work. I'm wondering if the
thread is responsible for calling self.close_request() in
self.finish_request?
Note:
Can we discuss it in news:comp.lang.python? I hate that web
interface with Cookies! That's the reason why I was too lazy
to login when submitting this bug report.
----------------------------------------------------------------------
Comment By: Guido van Rossum (gvanrossum)
Date: 2001-07-10 04:59
Message:
Logged In: YES
user_id=6380
I've developed a proper fix for this, I hope. Check out the
latest SocketServer.py from the CVS tree.
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-07-02 09:05
Message:
Logged In: YES
user_id=198402
There is still a problem with your solution, Mr glchapman:
you make a copy to ensure that the server won't close the
copy of the socket that is used in the child thread. Right.
But WHO will close this very copy then?
The thread still has to close it if one wants to
be "clean". So as long as the child thread must close its
copy in the threading case, why not make that the rule of thumb?
Why should we bother about closing anything in the main
thread? I still think the solution that I provide is best:
make the request handler always responsible for closing the
socket from which the request comes.
----------------------------------------------------------------------
Comment By: Greg Chapman (glchapman)
Date: 2001-06-29 08:49
Message:
Logged In: YES
user_id=86307
Since the request socket object is only a lightweight
wrapper around the real socket, why not simply pass
request.dup() to the new thread? Then the server's call to
close_request affects only its copy of request, not the
copy being used in the thread. For example, the following
change to ThreadingMixIn fixed this bug for me in a (very
simple) test program:
def process_request(self, request, client_address):
    """Start a new thread to process the request."""
    import threading
    t = threading.Thread(target=self.finish_request,
                         args=(request.dup(), client_address))
    t.start()
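The dup() approach works because dup() yields an independent descriptor for the same connection, so closing one copy leaves the other usable. A quick sketch with a local socketpair (not the test program mentioned above):

```python
import socket

# Two connected endpoints; c duplicates a's underlying descriptor.
a, b = socket.socketpair()
c = a.dup()

a.close()           # the server closes its copy of the request...
c.sendall(b"ping")  # ...but the duplicate still reaches the peer
data = b.recv(4)
print(data)         # b'ping'
b.close()
c.close()
```

As noted in the next comment, somebody still has to close the duplicate, or the descriptor leaks.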
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-28 05:20
Message:
Logged In: YES
user_id=198402
I have now started a project here concerning a SOCKS proxy
written in python (PySocks). It is aimed mostly at people
who use a windows box to share their internet connection
and uses the threading server from the SocketServer module.
So it becomes VERY important to me to know what will be
done about this bug in the next release/patch of the Python
library.
Could Mr Guido van Rossum tell us about it? For now I am
forced to ship my patched version of SocketServer.py
with my releases, which is not quite satisfactory.
SocketServer is part of the Python distribution, so I'd
rather say "there is a patch for Python..."
Well... In fact I forgot to include it in my first release, but
I'll correct that this evening :)
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-20 12:06
Message:
Logged In: YES
user_id=198402
Well I was wrong. We do need a "try" block to ensure the
request is always correctly closed:
def finish_request(self, request, client_address):
    """Finish one request by instantiating
    RequestHandlerClass."""
    try:
        self.RequestHandlerClass(request, client_address, self)
    finally:
        self.close_request(request)
This works better.
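The finally clause matters because the request must still be closed when the handler raises. A minimal sketch with a hypothetical request object:

```python
class FakeRequest:
    """Illustrative stand-in for a request socket."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def finish_request(request, handler):
    try:
        handler(request)
    finally:
        request.close()  # runs even when the handler blows up

req = FakeRequest()
try:
    finish_request(req, lambda r: 1 // 0)  # handler raises ZeroDivisionError
except ZeroDivisionError:
    pass
print(req.closed)  # True
```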
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-13 09:09
Message:
Logged In: YES
user_id=198402
I forgot to mention: I cannot test whether it breaks the
forking server. I only have a Windows platform available,
and forking doesn't work in the Python/win32
environment as far as I know.
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-13 08:51
Message:
Logged In: YES
user_id=198402
What I propose can be applied without any compatibility
issue.
I have tried something that seems to work, at least under
windows (but it does need to be more fully tested though).
Only 2 small modifications are required:
-1- In BaseServer, modification of the last line of
handle_request:
<PRE>
    def handle_request(self):
        """Handle one request, possibly blocking."""
        try:
            request, client_address = self.get_request()
        except socket.error:
            return
        if self.verify_request(request, client_address):
            try:
                self.process_request(request, client_address)
            except:
                self.handle_error(request, client_address)
                self.close_request(request)
</PRE>
Note that only the indentation of the last line has been
modified, so that close_request is executed only if an
exception occurs.
Still we need to close the request after it had been
processed, so here comes the second modification:
-2- Still in BaseServer:
<PRE>
    def finish_request(self, request, client_address):
        """Finish one request by instantiating
        RequestHandlerClass."""
        self.RequestHandlerClass(request, client_address, self)
        self.close_request(request)
</PRE>
There is already a try/except block in handle_request, so I
thought it was not mandatory here to ensure the request was
always closed.
Oh... I don't know how <PRE> tags are supported by
the bugtracking system of sourceforge, so the samples I
give may not appear as I want. This is quite a problem with
python :/
----------------------------------------------------------------------
Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-12 15:46
Message:
Logged In: YES
user_id=80200
hi there mr xlagraula,
yes, it would be a lot simpler... _if_ it wasn't
for the fact that this code is likely to already
be quite extensively used. a possible 'upgrade'
path could be done by providing a... RequestHandler2
class, or some-such.
it would be neater to do this, or similar:
for t in self.thread_list:
    t.join(timeout=0.1)
which would join all threads; or, if a thread is
stopped, you close_request. i looked into
threads a bit more: join has a timeout,
and there is a stopped-detection function.
easy :)
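The join-with-timeout idea, sketched with a hypothetical thread registry (is_alive is the modern spelling of the stopped-detection mentioned above):

```python
import threading
import time

def worker(delay):
    time.sleep(delay)

# a hypothetical per-request thread registry kept by the server
thread_list = [threading.Thread(target=worker, args=(d,))
               for d in (0.0, 0.0, 1.0)]
for t in thread_list:
    t.start()

time.sleep(0.2)          # let the quick workers finish
for t in thread_list:
    t.join(timeout=0.1)  # bounded wait: never stalls the accept loop

# prune finished threads; live ones keep their request sockets open
remaining = [t for t in thread_list if t.is_alive()]
print(len(remaining))    # 1
for t in remaining:      # clean up before leaving the sketch
    t.join()
```

A bounded join lets the server loop reap finished handlers periodically without ever blocking on a slow one.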
----------------------------------------------------------------------
Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-12 14:18
Message:
Logged In: YES
user_id=198402
Another solution could be to modify the behaviour of the
server so that it would be the responsibility of
the "child" thread/process to close the socket (except for
the forking/threading error cases). Wouldn't it be simpler
than child process tracking and thread tracking?
----------------------------------------------------------------------
Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-04 04:17
Message:
Logged In: YES
user_id=80200
okay. the forkingmixin code does a fork, records how many
children there are, and waits for one of them to exit,
before
proceeding - in particular, before proceeding to close
the request, etc.
... so why is not something similar done in ThreadingMixIn?
this kinda- tells me that thread-tracking is really needed,
in a similar way to that in forkingmixin.
----------------------------------------------------------------------
Comment By: Gregory P. Smith (greg)
Date: 2001-05-03 16:26
Message:
Logged In: YES
user_id=413
Just a note of another casualty of this bug: I had to add
the mentioned dummy close_request method hack to our own
ThreadingMixIn class in mojo nation (in the sourceforge
mojonation project's evil module, see the
common/MojoNationHTTPServer.py file). Without it, python
2.1 would always raise an exception in the request handler
as soon as it tried to call self.connection.makefile()
because self.connection had apparently already been closed!
(its fd was always -1)
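The "fd was always -1" symptom is easy to reproduce: once a socket is closed, its descriptor reads as -1 and any I/O on it fails, as makefile() did above. A minimal check in modern spelling:

```python
import socket

s = socket.socket()
s.close()
print(s.fileno())  # -1 once closed

try:
    s.recv(1)      # any I/O on the closed socket fails
    raised = False
except OSError:
    raised = True
print(raised)      # True
```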
----------------------------------------------------------------------
Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-02 01:41
Message:
Logged In: YES
user_id=80200
hi there mr jrielh,
thank you very much for the details. what i am having a
little difficulty with is, what's the difference between
this and python 2.0 SocketServer.py?
more specifically, i'm looking at python 2.0 SocketServer.py
and, whilst i'm not a Threads expert, i see a t.start() but
no t.join(). i've been looking at the Queue example code in
the test method of threads.py, and start() is called on
every thread, followed by join() on every thread. join
waits for the thread to finish, yes?
so... if that's the case, then python 2.0 SocketServer.py
should suffer from
exactly the same behaviour, yes? unless python behaves
ever-so-slightly differently (timing issues?) when you have
an extra base class like this,
with the consequence that close_request() is more likely to
be called
before ThreadingMixIn.process_request().
?
----------------------------------------------------------------------
Comment By: Jon Riehl (jriehl)
Date: 2001-05-01 14:20
Message:
Logged In: YES
user_id=22448
This is related to bug #419873.
The problem is not in the ThreadingMixIn
specifically, but in where BaseServer calls close_request()
after calling process_request(). In the threading mixin,
process_request() spins the thread and returns, causing the
request socket to be invalidated while the thread is still
running. The fix given above will keep the socket valid
while the thread is running, but may cause the socket to not
close properly (my threads generally close the socket when
they are done anyway.)
----------------------------------------------------------------------
Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-04-26 05:41
Message:
Logged In: YES
user_id=80200
follow-up. i took a look at the differences between
SocketServer.py
in 2.0 and 2.1. there is one small change by guido to the
ThreadingMixIn.process_request() function that calls
self.server_close() instead of explicitly calling
self.socket.close(),
where TCPServer.server_close() calls self.socket.close().
if mr anonymous (hi!) has over-ridden server_close() and
explicitly closes the *request* socket, then of course the
socket will get closed twice.
the rest of the code-mods is a straightforward code-shuffle
moving code from TCPServer into BaseServer: from
examining the diff, i really don't see how bypassing
close_request(), as shown above with the Workaround
in the original bug-report, will help: that will in fact
cause the request _never_ to be closed!
the rest of this report is part of an email exchange with
guido, quoted here:
"the bug-report doesn't state whether python 2.0 worked and
2.1 didn't:
it also doesn't give enough info.
for all we know, he's calling close_request() himself or
request.close()
directly somewhere in his code, and hasn't told anybody,
which
is why he has to over-ride close_request() and tell it to do
nothing.
or he's closing the socket in the HandlerClass, in finish(),
or
something.
we just don't know.
either that, or his HandlerClass creates a socket once and
only
once, with the result that close_request() closes the one
socket, and he's _completely_ stuffed, then :)"
----------------------------------------------------------------------
Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-04-26 04:11
Message:
Logged In: YES
user_id=80200
hi there,
i'm the person who wrote the BaseServer class. guido
contacted me about it: could you please send me or post here
a working test example that demonstrates the problem.
i assume, but you do not state, that you have tested your
MyHTTPServer with python 2.0, please let us know, here, if
that is a correct assumption.
thanks!
luke
----------------------------------------------------------------------