[Python-ideas] Python and Concurrency
jcarlson at uci.edu
Tue Mar 27 06:09:04 CEST 2007
"Richard Oudkerk" <r.m.oudkerk at googlemail.com> wrote:
> On 26/03/07, Josiah Carlson <jcarlson at uci.edu> wrote:
> > But really, transferring little bits of data back and forth isn't
> > my main concern in terms of speed. My real concern is transferring
> > nontrivial blocks of data; I usually benchmark blocks of sizes: 1k, 4k,
> > 16k, 64k, 256k, 1M, 4M, 16M, and 64M. Those are usually pretty good to
> > discover the "sweet spot" for a particular implementation, and also
> > allow a person to discover whether or not their system can be used for
> > nontrivial processor loads.
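The block-size sweep described above can be sketched along these lines, here timing round-trips of byte blocks over a `multiprocessing.Pipe` (a hypothetical stand-in for whatever transport is being benchmarked; the sizes and echo-child setup are assumptions, not the original benchmark code):

```python
# Sketch: time round-trips of byte blocks of increasing size over a
# multiprocessing.Pipe, to look for a throughput "sweet spot".
import time
from multiprocessing import Pipe, Process

# The sizes mentioned above: 1k, 4k, 16k, 64k, 256k, 1M, 4M, 16M, 64M.
SIZES = [2**10 * k for k in (1, 4, 16, 64, 256)] + \
        [2**20 * k for k in (1, 4, 16, 64)]

def echo(conn):
    # Child: echo each block back until a zero-length sentinel arrives.
    while True:
        data = conn.recv_bytes()
        if not data:
            break
        conn.send_bytes(data)

def bench():
    parent, child = Pipe()
    p = Process(target=echo, args=(child,))
    p.start()
    for size in SIZES:
        block = b"x" * size
        t0 = time.perf_counter()
        parent.send_bytes(block)
        parent.recv_bytes()
        elapsed = time.perf_counter() - t0
        print("%9d bytes: %.6f s round-trip" % (size, elapsed))
    parent.send_bytes(b"")  # sentinel: tell the child to exit
    p.join()

if __name__ == "__main__":
    bench()
```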
> The "20,000 fetches/sec" was just for retrieving a "small"
> object (an integer), so it only really reflects the server
> overhead. (Sending integer objects directly between processes
> is maybe 6 times faster.)
That's a positive sign.
> Fetching string objects of particular sizes from a shared dict gives
> the following results on the same computer:
Those numbers look pretty good. Would I be correct in assuming that
there is a speedup sending blocks directly between processes (though
perhaps not the 6x gain that integer sending sees)?
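One way to check that assumption would be a comparison along these lines, using the stdlib `multiprocessing` API as a stand-in (a hypothetical sketch, not the library under discussion): fetching a string block from a `Manager`-backed shared dict versus sending the same block directly over a `Pipe`.

```python
# Sketch: per-block cost of a shared-dict fetch (one round-trip to the
# manager process per access) versus a direct send over a pipe.
import time
from multiprocessing import Manager, Pipe, Process

def consumer(conn, n):
    # Child: receive n blocks directly over the pipe.
    for _ in range(n):
        conn.recv_bytes()

def compare(size, n=50):
    block = b"x" * size

    # Shared-dict fetch: each d["blk"] round-trips to the manager process.
    with Manager() as m:
        d = m.dict()
        d["blk"] = block
        t0 = time.perf_counter()
        for _ in range(n):
            d["blk"]
        dict_cost = (time.perf_counter() - t0) / n

    # Direct send: stream the same blocks to a child over a pipe.
    parent, child = Pipe()
    p = Process(target=consumer, args=(child, n))
    p.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send_bytes(block)
    p.join()  # wait until the child has drained every block
    pipe_cost = (time.perf_counter() - t0) / n

    return dict_cost, pipe_cost

if __name__ == "__main__":
    for size in (1024, 64 * 1024, 1024 * 1024):
        dc, pc = compare(size)
        print("%8d bytes: dict %.6f s/fetch, pipe %.6f s/send" % (size, dc, pc))
```

Whether the pipe wins, and by how much, will of course depend on block size and platform; that is exactly what the sweep is meant to reveal.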
I will definitely have to dig deeper; this could be the library that
we've been looking for.