Python Asynchronous Services

Peter Hansen peter at
Thu Jul 17 15:20:25 CEST 2003

Graeme Matthew wrote:
> while 1:
>     data = recv(1024)
>     if not data: break
> surely in a thread this will block all other threads until all data is
> received.

Why "surely"?  It's actually not the case.  A blocking call such as
socket.recv() blocks only the *calling* thread until the call returns,
not the other threads.  In fact, the other threads are immediately free
to run, because the calling thread releases the Global Interpreter Lock
before it blocks in the OS socket call.
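To see this, here's a minimal sketch (using socket.socketpair() for a
self-contained demo; the names reader/results are just illustrative): one
thread blocks in recv() while the main thread keeps running freely.

```python
import socket
import threading
import time

def reader(sock, results):
    # This recv() blocks only this thread; the GIL is released
    # while waiting in the OS call, so other threads keep running.
    data = sock.recv(1024)
    results.append(data)

# socketpair() returns two already-connected sockets
a, b = socket.socketpair()
results = []
t = threading.Thread(target=reader, args=(a, results))
t.start()

# The main thread is NOT blocked by the reader's recv()
for _ in range(3):
    time.sleep(0.01)  # main thread continues to run freely

b.sendall(b"hello")  # now the blocked recv() returns
t.join()
print(results)  # [b'hello']
```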

> I have also been told that one cannot rely on this mechanism as
> the socket might end up in a continuous loop, i.e. you need to send a header
> included in the data that states the length (bytes) that is being sent, or
> you need some terminator; what happens if the terminator is within the
> request's data and not at the end?

Now you're getting into a higher level.  You certainly *can* "rely" on
sockets, or on receiving data in chunks as above, but it's not the
easiest way to approach things at this point.  You know about
asyncore, and it would work, and Moshe has mentioned Twisted (and I have
to agree that looking there is your best bet); there's no point reinventing
the wheel, and even if you're trying to learn, you'll probably learn a lot
by examining the source of either of those packages.
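On the terminator question: the usual answer is a length prefix, which
sidesteps the "terminator inside the data" problem entirely.  A rough
sketch (the helper names send_msg/recv_msg/recv_exact are mine, not from
any library):

```python
import socket
import struct

def send_msg(sock, payload):
    # Prefix each message with a 4-byte big-endian length header,
    # so the receiver knows exactly how many bytes to expect --
    # no terminator needed, so the payload may contain any bytes.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # Loop until exactly n bytes arrive; a single recv() may return less.
    chunks = []
    while n:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()
send_msg(a, b"any bytes, even a \n 'terminator' inside")
print(recv_msg(b))
```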

> please any help or examples as I have now been on this for 2 days without
> any luck. I have looked at asyncore and asynchat, problem is once the server
> socket input is completed then there will be an area that is CPU bound where
> it will need to call the model, database etc and produce the html request,
> this means that control will only be handed back to IO operations when this
> is finished, effectively blocking all other requests  ....

Knowing how to use a Queue to communicate with a pool of threads, or
a single worker thread, for CPU-bound requests is a good skill...
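The pattern looks roughly like this (a sketch using the queue module's
modern spelling; the sum(range(n)) job is just a stand-in for the real
CPU-bound model/database work):

```python
import queue
import threading

def worker(tasks, results):
    # Each worker pulls CPU-bound jobs off the queue, so the
    # I/O loop never has to block on them itself.
    while True:
        job = tasks.get()
        if job is None:            # sentinel: shut down
            break
        request_id, n = job
        results.put((request_id, sum(range(n))))  # stand-in for real work
        tasks.task_done()

tasks = queue.Queue()
results = queue.Queue()
pool = [threading.Thread(target=worker, args=(tasks, results))
        for _ in range(4)]
for t in pool:
    t.start()

for i in range(8):                 # enqueue eight requests
    tasks.put((i, 1000))

tasks.join()                       # wait until every job is processed
for _ in pool:
    tasks.put(None)                # one sentinel per worker
for t in pool:
    t.join()

out = {}
while not results.empty():
    rid, val = results.get()
    out[rid] = val
print(sorted(out))                 # request ids 0..7, all handled
```

The I/O loop only ever does fast, non-blocking operations (put a job on
`tasks`, poll `results`), so other requests keep being served while the
workers grind.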

> I suppose another question is can python handle large files and requests at
> the same time or is one better off simply forking the process even though
> its costly ?

Python is quite capable of this... and forking is definitely not required.
On the other hand, you're better off in general not worrying about performance
issues until you've figured out how to make something work properly.  Forking
is not the best approach, but mainly because of its relative complexity, not
because of "cost".

