[Python-ideas] Python and Concurrency

Richard Oudkerk r.m.oudkerk at googlemail.com
Tue Mar 27 01:24:37 CEST 2007


On 26/03/07, Josiah Carlson <jcarlson at uci.edu> wrote:
> But really, transferring little bits of data back and forth isn't what
> concerns me in terms of speed.  My real concern is transferring
> nontrivial blocks of data; I usually benchmark blocks of sizes 1k, 4k,
> 16k, 64k, 256k, 1M, 4M, 16M, and 64M.  Those are usually pretty good for
> discovering the "sweet spot" of a particular implementation, and they
> also let a person discover whether or not their system can be used for
> nontrivial processor loads.

The "20,000 fetches/sec" was just for retreving a "small"
object (an integer), so it only really reflects the server
overhead.  (Sending integer objects directly between processes
is maybe 6 times faster.)
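(For anyone who wants to reproduce that kind of comparison, here is a
rough sketch using the standard multiprocessing module as a stand-in
for the shared-dict setup discussed here; the Manager/Pipe details and
the repetition count are my own assumptions, not the code that produced
the figures quoted above.)

import time
from multiprocessing import Manager, Pipe, Process

N = 20000

def pipe_echo(conn, n):
    # Echo back each object received, n times.
    for _ in range(n):
        conn.send(conn.recv())
    conn.close()

if __name__ == '__main__':
    # Path 1: fetch a small integer through a manager (server) process.
    manager = Manager()
    d = manager.dict()
    d['x'] = 42
    start = time.time()
    for _ in range(N):
        d['x']
    elapsed = time.time() - start
    print('manager dict: %.0f fetches/sec' % (N / elapsed))

    # Path 2: round-trip the same integer directly over a pipe.
    parent, child = Pipe()
    p = Process(target=pipe_echo, args=(child, N))
    p.start()
    start = time.time()
    for _ in range(N):
        parent.send(42)
        parent.recv()
    elapsed = time.time() - start
    print('pipe round trip: %.0f exchanges/sec' % (N / elapsed))
    p.join()

The first number measures the per-request overhead of going through a
server process; the second measures a direct connection between two
processes, which is where the "maybe 6 times faster" comparison comes from.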

Fetching string objects of particular sizes from a shared dict gives
the following results on the same computer:

string size  fetches/sec  throughput
-----------  -----------  ----------
  1 KB         15,000       15 MB/s
  4 KB         13,000       52 MB/s
 16 KB          8,500      130 MB/s
 64 KB          1,800      110 MB/s
256 KB            196       49 MB/s
  1 MB             50       50 MB/s
  4 MB             13       52 MB/s
 16 MB              3.2     51 MB/s
 64 MB              0.84    54 MB/s
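(Again only as a sketch: something along these lines, with
multiprocessing.Manager as a stand-in for the shared dict above, would
measure the same size/throughput curve.  The sizes mirror the table,
but the repetition counts are just illustrative, and the numbers you
get will of course differ from those quoted here.)

import time
from multiprocessing import Manager

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    # Sizes in KB, mirroring the table above.
    for size_kb in (1, 4, 16, 64, 256, 1024, 4096, 16384, 65536):
        d['s'] = 'x' * (size_kb * 1024)
        reps = max(1, (16 * 1024) // size_kb)  # fewer repetitions for big strings
        start = time.time()
        for _ in range(reps):
            d['s']
        elapsed = time.time() - start
        rate = reps / elapsed
        print('%6d KB  %10.2f fetches/sec  %8.1f MB/s'
              % (size_kb, rate, rate * size_kb / 1024.0))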


