Is Stackless Python DEAD?
Christian Tismer
tismer at tismer.com
Tue Jan 1 17:21:12 EST 2002
Michael Hudson wrote:
> Christian Tismer <tismer at tismer.com> writes:
>>Well, it is a little late to answer this, but...
> Hey, I don't care, I'm just glad to see you're still paying attention
> :)
Hmmtja, late but yes.
>>Michael Hudson wrote:
>>
> [...]
>
>> >> * Just to implement map, 150 lines of builtin_map had to be
>> >> rewritten into 350 lines (builtin_map, make_stub_code,
>> >> make_map_frame, builtin_map_nr, builtin_map_loop). The author
>> >> indicates that the same procedure still needs to be done for
>> >> apply and filter. Just what is the "same procedure"? Isn't there
>> >> some better way?
>> >>
>> >
>> > This is where implementing stackless in C really, really hurts.
>>
>>
>>[great explanation of stackless techniques skipped.]
>>
>>I agree this is not easy to understand and to implement.
>>I always was thinking of a framework which makes this
>>easier, but I didn't come up with something suitable.
>>
> I've had similar thoughts, but likewise fell short. I think you could
> probably do things with m4 that took something readable and spat out
> stack-neutral C, but it would be a Major Project.
There must be a simple path. The scheme is always the same.
See the split of functions in stackless map. I hope to find
a macro set that can create this mess from a couple of fragments.
[understanding PREPARE macros]
>>This is really just an optimization.
>>The PREPARE macros were used to limit code increase, and to
>>give me some more options to play with.
>>Finally, the PREPARE macros do an optimized opcode prefetch
>>which turns out to be a drastic speedup for the interpreter loop.
>>Standard Python does an increment for every byte code and then
>>one for the optional argument, and the argument is picked bytewise.
>>What I do is a single add to the program counter, depending on the
>>opcode/argument size, which is computed in the PREPARE macro.
>>Then, on Intel machines, I use a short-word access to the argument,
>>which gives considerable savings. (Although this wouldn't be
>>necessary if the compilers weren't so dumb.)
>>
>
> This could/should be split off from stackless, right?
Yes. And if you have a look in the last (dusty) release, you see that
it already is. There is a python script that applies all these
optimizations automagically.
[integration rumor again]
>>I'm at a redesign for Stackless 2.2. I hope to make it simpler,
>>split apart Stackless and optimization,
>>
>
> Ah :)
Jaah :)
>>and continuations are no longer my primary target, but built-in
>>microthreads.
>>
>
> Fair enough. Glad to hear you've found some time for your baby!
Well, thanks. I had a lot of trouble with my living babies, now after
that, the virtual ones get their attention again.
ciao - chris
--
Christian Tismer :^) <mailto:tismer at tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Kaunstr. 26 : *Starship* http://starship.python.net/
14163 Berlin : PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
where do you want to jump today? http://www.stackless.com/