[Python-Dev] Active Objects in Python
Michael Sparks (home address)
ms at cerenity.org
Fri Sep 30 23:13:55 CEST 2005
[ I don't post often, but hopefully the following is of interest in this
discussion ]
Bruce Eckel wrote:
> Yes, defining an class as "active" would:
> 1) Install a worker thread and concurrent queue in each object of that
> class.
> 2) Automatically turn method calls into tasks and enqueue them
> 3) Prevent any other interaction other than enqueued messages
Here I think the point is an object that has some form of thread, with
enforced communication that prevents (or limits) shared data. This
essentially describes the project we've been working on with Kamaelia.
Kamaelia is divided into two halves:
* A simple framework (Axon) for building components with the following
characteristics:
* Have a main() method which is a generator.
* Have inbound and outbound named queues implemented using lists.
(akin to stdin/out)
* Access to an environmental system which is a key/value lookup (similar
in purpose to the unix environment). (This could easily grow into a
much more Linda-like system)
* A collection of components which form a library.
We also have a threaded component which uses a thread and queues rather than
a generator & lists. (A toy sketch of the generator style follows below.)
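To give a flavour of what those characteristics look like in practice, here
is a deliberately tiny, non-Kamaelia sketch: a component is an object whose
main() is a generator, with list-based named inbox/outbox queues, and a
trivial round-robin scheduler resumes each generator in turn. All the names
here are invented for illustration; the real Axon API is richer (the MiniAxon
tutorial mentioned later walks through building something similar).

    # Toy sketch only - not the Axon API. Names are invented for illustration.
    class Component(object):
        def __init__(self):
            self.boxes = {"inbox": [], "outbox": []}   # list-based named queues
        def send(self, value, boxname="outbox"):
            self.boxes[boxname].append(value)
        def recv(self, boxname="inbox"):
            return self.boxes[boxname].pop(0)
        def dataReady(self, boxname="inbox"):
            return len(self.boxes[boxname]) > 0

    class Echo(Component):
        def main(self):                                # a generator, as above
            while 1:
                if self.dataReady("inbox"):
                    self.send(self.recv("inbox"), "outbox")
                yield 1                                # hand control back

    def schedule(components, slots=10):
        # Trivial round-robin scheduler: resume each component's generator in turn.
        generators = [c.main() for c in components]
        for _ in range(slots):
            for g in generators:
                next(g)

    echo = Echo()
    echo.boxes["inbox"].append("hello")
    schedule([echo])
    print(echo.boxes["outbox"])                        # ['hello']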
With regard to the criteria you listed in a later post:
1) Has been tested against (so far) 2 novice programmers, who just picked
up the framework and ran with it. (Learnt Python one week, Axon the
next, then implemented interesting things rapidly.)
2) We don't deal with distribution (yet), but this is on the cards.
Probably before EuroOSCON. (I want to have a program with two top
level pygame windows)
3) We use Python generators and have experimented initially with several
tens of thousands of them, which works surprisingly well, even on an
old 500MHz machine.
4) Self-guarding is difficult. However, generators are pretty opaque, and
getting at values inside their stack frame is beyond my Python skills.
5) Deadlock can be forced in all scenarios, but you would actually have
to work at it. In practice, if we yield at all points or use the
threaded components for everything, then we'd simply livelock.
6) Regarding an actor: our pygame sprite component actually is very,
very actor-like, to the extent that we've considered implementing a
script-reading and director component for orchestrating them. (Time
has prevented that though.)
7) Since our target has been novices, this has come by default.
8) We don't do mobility of components/tasks as yet, however a colleague
at work interested in mobility & agent software has proposed looking
at that using Kamaelia for the coming year. We also ran the framework
by people at the WoTUG (*) conference recently, and the general
viewpoint came back that it's tractable from a formal analysis
and mobility perspective (if we change x, y, z).
(*) http://www.wotug.org/
I wrote a white paper based on my Python UK talk, which is here:
* http://www.bbc.co.uk/rd/pubs/whp/whp11.shtml
(BBC R&D white papers are aimed at imparting info in such a way that it's
useful and readable, unlike some white papers :)
So far we have components for building things from network servers through
to audio/video decoders and players, and presentation tools. (Using pygame
and tkinter.) Also we have a graphical editing tool for putting together
systems and introspecting them.
An aim in terms of API was to enable novices to implement a microcosm
version of the system. Specifically we tested this idea on a pre-university
trainee who built a fairly highly parallel system which took video from an
mpg file (simulating a PVR), and sent periodic snapshots to a mobile phone
based client. He had minimal programming experience at the start, and
implemented this over a 3-month period.
We've since repeated that experience with a student who has had 2 years at
university, but no prior concurrency, Python or networking experience. In his
6 weeks with us he was able to do some interesting things. (Such as
implement a system for joining multicast islands and sending content over a
simple reliable multicast protocol).
The tutorial we use to get people up to speed rapidly is here:
* http://kamaelia.sourceforge.net/MiniAxon/
However that doesn't really cover the ideas of how to write components, use
pipelines, graphlines and services.
At the moment we're adding in the concept of a chassis into which you plug
existing components. The latest concept we're bouncing around is this:
# (This was fleshed out in response to a question by a novice on c.l.p -
# periodically send the same message to all connected clients)
# start
from Axon.Component import component
from Kamaelia.Chassis.ConnectedServer import SimpleServer
from Kamaelia.Chassis.Splitter import splitter, publish, Subscriber

class message_source(component):
    [ ... snippety ... ]
    def main(self):
        last = self.scheduler.time
        while 1:
            yield 1
            if self.scheduler.time - last > 1:
                self.send(self.generateMessage(), "outbox")
                last = self.scheduler.time

splitter("client_fanout").activate()
source = message_source().activate()
publish("client_fanout", source)   # perhaps something else?

def client_protocol(): return Subscriber("client_fanout")

SimpleServer(protocol=client_protocol, port=1400).run()
# end
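Conceptually, the splitter/publish/Subscriber trio used above is just a
fan-out: whatever the published source emits gets copied into every
subscriber's inbox. A toy, non-Kamaelia illustration of that idea (all names
invented for illustration):

    # Toy fan-out illustrating the splitter idea; not the Kamaelia API.
    class Fanout(object):
        def __init__(self):
            self.subscribers = []            # one plain-list "inbox" per subscriber
        def subscribe(self):
            inbox = []
            self.subscribers.append(inbox)
            return inbox
        def publish(self, message):
            for inbox in self.subscribers:   # every subscriber gets a copy
                inbox.append(message)

    fanout = Fanout()
    client_a = fanout.subscribe()
    client_b = fanout.subscribe()
    fanout.publish("tick")
    print(client_a, client_b)                # ['tick'] ['tick']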
From the perspective of using multiple CPUs, currently we're thinking
along the lines of this. I'll use existing examples to make things more
concrete.
Currently to build a player to decode and playback dirac [1] encoded video
in python, we're doing this:
[1] http://dirac.sf.net/
pipeline(
    ReadFileAdaptor(file, readmode="bitrate", bitrate=300000*8/5),
    DiracDecoder(),
    RateLimit(framerate),
    VideoOverlay(),
).run()
Sometimes, however, it would be nice to take a YUV file, encode, decode and
view (since it simplifies the process of seeing changes):
pipeline(
    ReadFileAdaptor(FILENAME, readmode="bitrate", bitrate=1000000),
    RawYUVFramer(size=SIZE),
    DiracEncoder(preset=DIRACPRESET, encParams=ENCPARAMS),
    DiracDecoder(),
    VideoOverlay(),
).run()
Clearly on a multiple-CPU machine, it would be useful to have the processing
cross processes, so we're intending to use a process chassis (a chassis
is a thing with holes into which, or onto which, other things can be attached)
to run the components in a different Python process. The likely syntax we'll
be playing with would be:
pipeline(
    ReadFileAdaptor(FILENAME, readmode="bitrate", bitrate=1000000),
    RawYUVFramer(size=SIZE),
    SubProcess(
        DiracEncoder(preset=DIRACPRESET, encParams=ENCPARAMS),
    ),
    SubProcess(DiracDecoder()),
    VideoOverlay(),
).run()
This is intended to run over 3 processes. (One parent, two children)
For the degenerate case:
pipeline(
    SubProcess(ReadFileAdaptor(FILENAME, readmode="bitrate", bitrate=1000000)),
    SubProcess(RawYUVFramer(size=SIZE)),
    SubProcess(
        DiracEncoder(preset=DIRACPRESET, encParams=ENCPARAMS),
    ),
    SubProcess(DiracDecoder()),
    SubProcess(VideoOverlay()),
).run()
We're actively considering the possible sugar:
DistributedProcessPipeline(
... original list of components ...
)
But we're looking to walk before running there. Much like we implemented the
underlying functionality before pipelines, and that before implementing
graphlines.
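To make the intent of the SubProcess chassis concrete, here is a rough,
non-Kamaelia sketch of the plumbing such a chassis has to provide: run the
wrapped component in a child process and shuttle messages between parent and
child over queues. It uses the standard multiprocessing module purely for
illustration; all the names (encoder_stub and so on) are invented, and this
is not how Kamaelia implements it.

    # Rough sketch of the plumbing a "SubProcess" chassis needs; names invented.
    from multiprocessing import Process, Queue

    def encoder_stub(inbox, outbox):
        # Stands in for a wrapped component such as DiracEncoder running in
        # its own process: read from inbox, transform, write to outbox.
        while True:
            frame = inbox.get()
            if frame is None:                  # sentinel: upstream has finished
                outbox.put(None)
                break
            outbox.put("encoded(%s)" % frame)  # pretend to encode the frame

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        child = Process(target=encoder_stub, args=(inbox, outbox))
        child.start()
        for frame in ["frame-1", "frame-2", "frame-3"]:  # stand-in for raw frames
            inbox.put(frame)
        inbox.put(None)
        while True:
            result = outbox.get()
            if result is None:
                break
            print(result)
        child.join()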
One issue you potentially raise in this scenario is how you find existing
components without introducing tight coupling. The way we're
working on this is to have named services which you look up in a Linda-type
way. One nasty thing we found there is that the code can start looking
snarled up. That was until we realised that, because we have a scheduler, we
can implement almost-but-not-quite-greenlet-style switching using
generators.
Specifically, a pygame-based component can do a lookup for an active display
thus:
def main(self):
    yield WaitComplete(
        self.requestDisplay(DISPLAYREQUEST=True,
                            callback=(self, "control"),
                            size=(self.render_area.width,
                                  self.render_area.height),
                            position=self.position)
    )
    display = self.display
Where requestDisplay does this:
def requestDisplay(self, **argd):
    displayservice = PygameDisplay.getDisplayService()
    self.link((self, "signal"), displayservice)
    self.send(argd, "signal")
    for _ in self.waitBox("control"): yield 1
    display = self.recv("control")
    self.display = display
That said, having seen your post further down the thread about a set
of requirements, I think on many fronts we hit many of the aims
you list. This is largely for one simple reason - our primary aim is to make
concurrency __easy__ to deal with, whilst not sacrificing
portable scalability. This is also why we've worked on graphical tools
for putting these pipelines together, as well as for looking inside them.
We have a deliberately naive C++ implementation based on the mini-axon
tutorial which, for example, uses Duff's device to implement generators. I'd
love to see that implementation expand, since I think it'd generate a whole
set of questions that need answering and would probably benefit the Python
implementation. Mind you, hopefully pypy and shedskin will mitigate /our/
need to do that translation manually :-)
If anyone's interested further, I'm happy to talk more (I'm going to Euro
OSCON if anyone wants to discuss this there), but this feels like spamming
the list so I'll leave things at that.
However, so far we're finding the approach is a good one for our needs.
The reason I'm posting about it here is that I think Kamaelia directly
hits every point you made, but mainly in the hope it's of use to others
(either directly or for cherry-picking ideas).
Best Regards,
Michael.
--
Michael Sparks, Senior R&D Engineer, Digital Media Group
Michael.Sparks at rd.bbc.co.uk, http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP
This e-mail may contain personal views which are not the views of the BBC.