Shared memory python between two separate shell-launched processes
Charles Fox (Sheffield)
charles.fox at gmail.com
Thu Feb 10 09:30:18 EST 2011
Hi guys,
I'm working on debugging a large python simulation which begins by
preloading a huge cache of data. I want to step through code on many
runs to do the debugging. Problem is that it takes 20 seconds to
load the cache at each launch. (The cache is a dict in a 200 MB
cPickle binary file.)
To speed up the compile-test cycle I'm thinking about running a
completely separate process (not a fork, but a process launched from
a different terminal) that can load the cache once, then dump it in an
area of shared memory. Each time I debug the main program, it can
start up quickly and read from the shared memory instead of loading
the cache itself.
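A minimal sketch of that scheme, assuming Python 3.8+'s multiprocessing.shared_memory module (which attaches segments by a well-known name, no fork required); the segment name "sim_cache" and the toy dict are illustrative stand-ins:

```python
import pickle
from multiprocessing import shared_memory

cache = {"trial_1": [0.5, 0.7], "trial_2": [0.2]}  # stand-in for the real 200 MB dict
payload = pickle.dumps(cache, protocol=pickle.HIGHEST_PROTOCOL)

# Loader process: create the named segment once and copy the pickle in.
shm = shared_memory.SharedMemory(name="sim_cache", create=True, size=len(payload))
shm.buf[: len(payload)] = payload

# Debugged process (launched from another terminal): attach by name and read.
view = shared_memory.SharedMemory(name="sim_cache")
restored = pickle.loads(bytes(view.buf))  # pickle stops at its STOP opcode,
                                          # so any trailing padding is ignored
view.close()

shm.close()
shm.unlink()  # loader removes the segment when debugging is finished
```

Here both sides run in one script for illustration; in practice the loader would sit in its own terminal holding the segment open, and each debug run would only perform the attach-by-name and unpickle steps.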
But when I look at posix_ipc and POSH it looks like you have to fork
the second process from the first one, rather than access the shared
memory through a key ID as in standard C unix shared memory. Am I
missing something? Are there any other ways to do this?
thanks,
Charles
More information about the Python-list mailing list