[Python-ideas] Python and Concurrency
jcarlson at uci.edu
Wed Mar 28 03:14:13 CEST 2007
"Richard Oudkerk" <r.m.oudkerk at googlemail.com> wrote:
> On 27/03/07, Josiah Carlson <jcarlson at uci.edu> wrote:
> > Those numbers look pretty good. Would I be correct in assuming that
> > there is a speedup sending blocks directly between processes? (though
> > perhaps not the 6x that integer sending gains)
> Yes, sending blocks directly between processes is over 3 times faster
> for 1k blocks, and twice as fast for 4k blocks, but after that it makes
> little difference. (This is using the 'processing.connection'
> sub-package which is partly written in C.)
I'm surprised that larger blocks see so little gain from eliminating the
pickle encode/decode step and the extra copy it implies.
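To get a feel for the cost being discussed, here is a hypothetical micro-benchmark (not from the original thread) comparing a pickle round-trip on a large block against a plain byte copy; the sizes and loop counts are arbitrary:

```python
import pickle
import time

# Illustrative only: measure the pickle encode/decode step for a
# large binary block versus simply copying the same bytes.
block = b"x" * (1 << 20)  # 1 MB of data

t0 = time.perf_counter()
for _ in range(100):
    restored = pickle.loads(pickle.dumps(block))
pickle_time = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    copied = bytes(block)
copy_time = time.perf_counter() - t0

assert restored == block and copied == block
print(f"pickle round-trip: {pickle_time:.3f}s, raw copy: {copy_time:.3f}s")
```

For large bytes objects the pickle step is mostly an extra memory copy plus a small header, which is one plausible reason the gain shrinks as blocks grow.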
> Of course since these blocks are string data you can avoid the pickle
> translation, which makes things faster still: the peak bandwidth I
> get is 40,000 x 16k blocks / sec = 630 MB/s.
> PS. It would be nice if the standard library had support for sending
> message oriented data over a connection so that you could just do
> 'recv()' and 'send()' without worrying about whether the whole message
> was successfully read/written. You can use 'socket.makefile()' for
> line oriented text messages but not for binary data.
Well, there's also the problem that sockets, files, and pipes behave
differently on Windows.
If one is only concerned with sockets, there are various lightly
specified protocols that can be implemented simply on top of
asyncore/asynchat; among them is sending a 32-bit length field in
network byte order, followed immediately by the data itself.
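A minimal synchronous sketch of that framing scheme, assuming blocking sockets (the function names here are illustrative, not part of any stdlib module):

```python
import socket
import struct

def send_msg(sock: socket.socket, data: bytes) -> None:
    # 32-bit length prefix in network (big-endian) byte order,
    # followed immediately by the payload.
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than requested, so loop
    # until the full n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("connection closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

Unlike iterating over 'socket.makefile()', this works for arbitrary binary data, since the message boundary is carried by the length prefix rather than by newlines.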
Factoring such methods out into a synchronous sockets package
wouldn't be terribly difficult (I've done a variant of this for a
commercial project). Doing it generally may not find support, though:
my idea of sharing encoding/decoding/internal state-transition logic
between sync and async servers was shot down at least a year ago.