Winter Madness - Passing Python objects as Strings
Hendrik van Rooyen
mail at microcorp.co.za
Sat Jun 6 06:47:39 EDT 2009
"Gabriel Genellina" <g--.. at yahoo.com.ar> wrote:
> From your description of the problem, it seems you are acting upon
>messages received from a serial port. You have to process the message
>*before* the next one arrives -- but you gain nothing doing that much
>faster. In other words, even with a blazingly fast processor, you can't do
>things faster than the rate of incoming messages.
This is what I said, but it is only half of the truth. The actual code
is a little bit better. What I have been describing here for
simplicity's sake, is the path taken when there is actually something
in the queue. The non-blocking queue.get() sits in a try/except,
and if the queue is empty, it gathers the inputs and then
examines them for change. If there are changes, it makes up
response messages to the various clients and puts them on the
output queues. If there are no changes, and if it has been a
while since the last responses were sent, it sends responses as
well. Otherwise it sleeps for a very short time and then examines the
incoming queue again.
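The loop described above can be sketched like this (the function and queue names, the idle period, and the nap length are all illustrative stand-ins, not the actual code):

```python
import queue
import time

def service_once(in_q, out_qs, scan_inputs, process, state,
                 idle_period=1.0, nap=0.001):
    """One pass of the hypothetical server loop.

    state is a dict carrying 'previous' (last scanned inputs) and
    'last_sent' (time of last responses) between calls; the real
    loop would just wrap this in `while True`.
    """
    try:
        msg = in_q.get_nowait()              # non-blocking get
    except queue.Empty:
        current = scan_inputs()              # queue empty: scan the inputs
        now = time.monotonic()
        quiet = now - state['last_sent'] > idle_period
        if current != state['previous'] or quiet:
            for q in out_qs:                 # changed, or quiet too long:
                q.put(current)               # reassure the clients
            state['last_sent'] = now
            state['previous'] = current
        else:
            time.sleep(nap)                  # short nap, then look again
    else:
        process(msg)                         # something was queued: handle it
```

This keeps the network traffic down to changes plus an occasional "all is well", while the empty-queue path does the input scanning.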
This arrangement minimises the network usage to what is essential,
while also reassuring the clients that all is well when nothing is
happening. It also uses the time when there is no new output to
scan the inputs, and it is this that I was talking about when I
defined the responsiveness.
>In any case, you have to test-and-branch in the code. Either with
>isinstance(rec,str), or with rec_list=="B", or something. I'm
Yes - but this is the second test, which is only reached in the
rare case that the first one fails, so it does not add to
the normal critical timing path.
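The point about test ordering can be sketched like this (the message shapes are assumed for illustration):

```python
def dispatch(rec):
    """Hypothetical dispatcher: the common string case is tested
    first, so the rare object case costs the fast path nothing."""
    if isinstance(rec, str):        # common case: ordinary message
        return ("msg", rec.split(","))
    # rare case: only reached when the first test fails
    kind, obj = rec                 # e.g. a ("B", object) pair
    return (kind, obj)
```

The common case pays for exactly one isinstance check; only the oddball records fall through to the second branch.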
>unsure if this is the fastest approach - intuition doesn't play well with
>timings in Python, better to actually measure times.
You have never said a truer thing - at one stage I was using a loop
in C to scan the inputs and put them in memory, but I went back to
python because it was faster to use a small time.sleep than the
equivalent usleep in C. I still do not understand how that could be,
but my oscilloscope does not lie.
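For timings that don't need an oscilloscope, the stdlib timeit module does the measuring; a minimal sketch comparing two candidate checks (the candidates themselves are just illustrative):

```python
import timeit

# Measure two ways of recognising a record, rather than guessing
# which is faster - intuition and Python timings rarely agree.
setup = "rec = 'B,1,2,3'"
t_isinstance = timeit.timeit("isinstance(rec, str)",
                             setup=setup, number=100_000)
t_compare = timeit.timeit("rec[0] == 'B'",
                          setup=setup, number=100_000)
print(t_isinstance, t_compare)
```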
>If it works and you feel it's adequate, that's fine. I cannot count how
>many times I've used an integer property to attach a pointer to another
>object (in other languages), but I felt "dirty" doing that. And never did
>something like that in Python...
I think that the dirty feeling is caused by the FUD that has been created
by C's pointer types, and the magic that happens when you add 1 to
a pointer and it actually increments by the size of a whole structure,
invisibly. It makes one forget that the pointer is just a number representing
a memory address. And we have all been bitten in some way or another
by pointer arithmetic. And if you look at this thread's title, it is not
called Winter Madness by accident... :-)
>Ok, what if you split *before* putting in the queue? In the receiving side
>of the queue, just remove the split(",") part (it's already done). This
>does not add anything to the critical path - the split must be done anyway.
This is good thinking, thank you. It cuts the knot.
>Now, instead of building a fake string "B,cannedobject" and uncanning on
>the other side, you can simply put a tuple in the queue ("B", the_object)
>and just get the object on the other side. If I'm not misunderstanding
>your problem, I think this is the cleanest way to do that.
You are spot on, and you are not misunderstanding anything.
This will work, and work well. Thank you again.
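Gabriel's suggestion amounts to something like this (the queue and the payloads are illustrative):

```python
import queue

q = queue.Queue()

# Sending side: split ordinary messages *before* queueing, and put
# arbitrary objects on the queue directly as a ("B", obj) tuple -
# no canning to a string and back.
q.put("A,12,34".split(","))          # a normal message, pre-split
q.put(("B", {"temp": 21.5}))         # an arbitrary object, tagged

# Receiving side: the split is already done, and tagged objects
# come back exactly as they went in.
first = q.get_nowait()
second = q.get_nowait()
```

Since queue.Queue passes object references between threads, the receiver gets the very same object, not a copy.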
So now I have two viable alternatives to the can -
yours, and Scott's.
So it looks like it is time to can the Can.
So I can have a productive day throwing code away.