a.mueller at icrf.icnet.uk
Mon Jun 7 17:33:37 CEST 1999
I've switched from 'threading' to 'fork' (remember the discussion some
days ago) since that gives me real parallel processing. But (of course)
there are other problems when forking a process ;-( :
1. The parent process reads in a dictionary with about 250000 keys;
this is a sort of static database which never changes.
2. A bidirectional pipe is created for each child process.
3. The parent forks.
4. The children calculate some results for which they have to access
the large dictionary.
5. All the children are able to send pickled objects to the parent (via
the pipe).
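The steps above can be sketched roughly as follows (a minimal single-child sketch; the small dictionary and the lookup in `worker` are placeholders for the real 250000-key database and calculation):

```python
import os
import pickle
import struct

def worker(db, conn):
    # Child: do some (placeholder) lookups in the inherited dictionary
    # and ship the pickled result back through the pipe, length-prefixed.
    result = {k: db[k] for k in list(db)[:3]}
    payload = pickle.dumps(result)
    conn.write(struct.pack("!I", len(payload)) + payload)
    conn.flush()

db = {i: i * i for i in range(1000)}  # stands in for the static database

r, w = os.pipe()            # parent reads from r, child writes to w
pid = os.fork()
if pid == 0:                # child process
    os.close(r)
    with os.fdopen(w, "wb") as conn:
        worker(db, conn)
    os._exit(0)
else:                       # parent process
    os.close(w)
    with os.fdopen(r, "rb") as conn:
        (n,) = struct.unpack("!I", conn.read(4))
        result = pickle.loads(conn.read(n))
    os.waitpid(pid, 0)
    print(result)
```

A real setup would keep one pipe pair per child and multiplex the read ends with `select`, as described above.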
All that works fine now (after I spent a long time fiddling with the pipe
and select stuff to synchronize I/O). However, the forking is a waste of
time and memory because all the children get their own copy of the large
dictionary!
Is there a way to use shared memory in Python? I mean, how can different
processes access (read-only) one object without passing the values of
the dictionary object through a pipe? (The children need _quick_ access
to that dictionary.)
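One caveat worth knowing: on Unix, fork already shares the parent's pages copy-on-write, but Python touches every object's reference count, so the dictionary pages get copied anyway. A Python dictionary of objects cannot live directly in shared memory, but flat data can: an anonymous `mmap` created before forking is visible to all children at no extra memory cost. A minimal sketch (the single float is a placeholder for real flat-record data):

```python
import mmap
import os
import struct

# Anonymous shared mapping created BEFORE forking: parent and children
# see the same physical pages (mmap defaults to MAP_SHARED on Unix).
buf = mmap.mmap(-1, 4096)
buf[:8] = struct.pack("!d", 3.14)   # parent writes the "database" once

pid = os.fork()
if pid == 0:
    # Child reads directly from shared memory -- no pipe, no pickling.
    (value,) = struct.unpack("!d", buf[:8])
    os._exit(0 if value == 3.14 else 1)
_, status = os.waitpid(pid, 0)
assert os.WEXITSTATUS(status) == 0
```

For key/value lookups this means serializing the dictionary into some flat, fixed-layout format (e.g. sorted records plus binary search) rather than storing Python objects.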
Thanks a lot (again) for your suggestions,
Biomolecular Modelling Laboratory
Imperial Cancer Research Fund
44 Lincoln's Inn Fields
London WC2A 3PX, U.K.
phone : +44-(0)171 2693405 | Fax : +44-(0)171 269 3258
email : a.mueller at icrf.icnet.uk | http://www.icnet.uk/bmm/