Sharing objects between processes

Aaron Brady castironpi at gmail.com
Mon Mar 9 22:57:48 CET 2009


On Mar 9, 4:21 pm, ET <p... at 2drpg.org> wrote:
> On Mon, 2009-03-09 at 13:58 -0700, Aaron Brady wrote:
> > On Mar 9, 2:17 pm, ET <p... at 2drpg.org> wrote:
> > > On Mon, 2009-03-09 at 11:04 -0700, Aaron Brady wrote:
snip
> > Here's what we have to work with from you:
>
> > > > > > > The problem, as briefly as possible:
> > > > > > > I have three processes which need to safely read and update two objects.
>
> > Can they be subprocesses of each other?  That is, can one master spawn
> > the others as you desire?  Can you have one process running, and
> > connect to it with sockets, pipes, (mailslots,) etc., and just give
> > and get the information to it?  Then, synch. is a lot easier.
>
> > Do you need MROW multiple-reader one-writer synchro., or can they all
> > go one at a time?  Is deadlock a concern?  Can you use OBL(OE) one big
> > lock over everything, or do you need individual locks on elements of
> > the data structure?
>
> > Can you use fire-and-forget access, or do you need return values from
> > your calls?  Do you need to wait for completion of anything?
>
> > 'xmlrpc': remote procedure calls might pertain.
>
> My intention is to have one object which is updated by two threads, and
> read by a third... one updating thread takes user input, and the other
> talks to a remote server.  Which are children of the others really
> doesn't matter to me; if there's a way to handle that to make this work
> more efficiently, excellent.
>
> Yes, I can use a single lock over the entire object; finer control isn't
> necessary.  I also don't need to avoid blocking on reads; reading during
> a write is something I also want to avoid.
>
> I was hoping to end up with something simple and readable (hence trying
> 'with' and then regular locks first), but if pipes are the best way to
> do this, I'll have to start figuring those out.  If I have to resort to
> XMLRPC I'll probably ditch the process idea and leave it at threads,
> though I'd rather avoid this.
>
> Is there, perhaps, a sensible way to apply queues and/or managers to
> this?  Namespaces also seemed promising, but having no experience with
> these things, the doc did not get me far.

Here is some code and the output.  There's a risk that the 'read'
thread drops the last input from the writers, since it can exit
without taking a final snapshot.  It hasn't been tested thoroughly
(only run once).

One thread writes a key in range 1-100, and value in range 'a-j' to a
dictionary, every second.  Another thread waits for user input, and
writes it as a value with successively higher keys, starting at 101,
also to the dictionary.  The reader thread obtains a representation of
the dictionary, and appends it to a file, without sorting, also every
second.

The output is this:

{}
{87: 'd'}
{76: 'd', 87: 'd'}
{80: 'e', 76: 'd', 87: 'd'}
{80: 'e', 12: 'e', 76: 'd', 87: 'd'}
{80: 'e', 12: 'e', 71: 'h', 76: 'd', 87: 'd'}
{99: 'f', 101: 'abcd', 71: 'h', 76: 'd', 12: 'e', 80: 'e', 87: 'd'}
{99: 'f', 101: 'abcd', 102: 'efgh', 71: 'h', 76: 'd', 10: 'd', 12: 'e', 80: 'e', 87: 'd'}
{99: 'f', 101: 'abcd', 102: 'efgh', 71: 'h', 76: 'd', 10: 'd', 11: 'g', 12: 'e', 80: 'e', 87: 'd', 103: 'ijklm'}

Which is highly uninteresting.  You can see my inputs at 101, 102, and
103.  The code is this.

import threading
import time


class Globals:
    cont = True
    content = {}
    lock_content = threading.Lock()

def read():
    # Snapshot the dict under the lock, then write the snapshot to
    # the file outside the lock, once per second.
    with open('temp.txt', 'w') as out:
        while Globals.cont:
            with Globals.lock_content:
                rep = repr(Globals.content)
            out.write(rep + '\n')
            out.flush()
            time.sleep(1)

def write_random():
    # Write a random key in 1-100 with a random letter value from
    # 'a'-'j', once per second.
    import random
    while Globals.cont:
        num = random.randint(1, 100)
        letter = random.choice('abcdefghij')
        with Globals.lock_content:
            Globals.content[num] = letter
        time.sleep(1)

def write_user_inp():
    # Store each line of user input (truncated to 10 chars) under
    # successively higher keys, starting at 101.  An empty line
    # shuts everything down.
    next_key = 101
    while Globals.cont:
        us_in = input('%i--> ' % next_key)
        if not us_in:
            Globals.cont = False
            return
        us_in = us_in[:10]
        print('You entered: %s' % us_in)
        with Globals.lock_content:
            Globals.content[next_key] = us_in
        next_key += 1

read_thr = threading.Thread(target=read)
read_thr.start()
wri_rand_thr = threading.Thread(target=write_random)
wri_rand_thr.start()
wri_user_thr = threading.Thread(target=write_user_inp)
wri_user_thr.start()

read_thr.join()
wri_rand_thr.join()
wri_user_thr.join()

Which is about the complexity of what you asked for.  It wasn't tested
with multiprocessing.  Run on Python 3.0.1.
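Since your subject line says processes: if you do move from threads to
multiprocessing, a Manager can serve the same pattern across process
boundaries -- it hands out a proxy dict and a shared lock that any child
process can use.  A minimal sketch (the writer bodies here are just
stand-ins for your input and server loops, not your actual code):

```python
import multiprocessing

def writer(shared, lock, items):
    # Each update takes the one big lock -- the same discipline as
    # the threaded version, but across process boundaries.
    for key, value in items:
        with lock:
            shared[key] = value

def main():
    manager = multiprocessing.Manager()
    shared = manager.dict()   # proxy dict, usable from any process
    lock = manager.Lock()     # one big lock over everything
    procs = [
        multiprocessing.Process(target=writer,
                                args=(shared, lock, [(1, 'a'), (2, 'b')])),
        multiprocessing.Process(target=writer,
                                args=(shared, lock, [(101, 'user input')])),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with lock:
        print(dict(shared))

if __name__ == '__main__':
    main()
```

A manager.Queue() works the same way if you'd rather funnel all updates
through a single owning process instead of locking a shared dict.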
