[OT] code is data

Diez B. Roggisch deets at nospam.web.de
Tue Jun 20 13:45:47 CEST 2006

>> While the _result_ of a transformation might be a less efficient piece of
>> code (e.g. introducing a lock around each call to enable concurrent
>> access), the transformation itself is very - if not totally - static -
> really ?

See below.
> Nope, it's run each time the module is loaded (with 'loaded' distinct
> from 'imported') - which can make a real difference in some execution
> models...

I already mentioned that latency. If it really becomes important for
whatever reason, the best approach would be to cache the result of the
transformation - which would, BTW, eliminate any complexity-driven runtime
penalty, regardless of the tool used. So loading time is _not_ an issue.
And I'll spare you the premature-optimization babble... :)
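To make the caching idea concrete, here is a minimal sketch. The names
(`transform`, `load_transformed`) and the string-keyed dict cache are my
own illustration, not anything from the thread; in practice you would cache
to disk the way .pyc files do, keyed by source hash or mtime:

```python
import hashlib

calls = {"n": 0}  # counts how often the expensive step actually runs

def transform(source):
    """Stand-in for an expensive tree transformation of DSL source."""
    calls["n"] += 1
    return compile(source, "<dsl>", "exec")

def load_transformed(source, _cache={}):
    """Run the transformation once per distinct source text.

    Subsequent loads of the same source hit the cache, so the
    transformation cost is paid only on the first load.
    """
    key = hashlib.sha1(source.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = transform(source)
    return _cache[key]

code = load_transformed("x = 1")
code2 = load_transformed("x = 1")  # cached: transform() not called again
```

After both calls, `calls["n"]` is still 1 - the per-load cost collapses to a
dict lookup, which is the point being argued above.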

>> So except from a start up latency, it has no impact.
> Having a high startup latency can be a problem in itself.

See above.

> But the problem may not be restricted to startup latency. If for example
> you use a metaclasse and a function that *dynamically* creates new
> classes using this metaclass, then both the class statement and the
> metaclass code transformation will be executed on each call to this
> function.
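The scenario quoted above can be sketched in a few lines - a metaclass
whose work runs on every dynamic class creation. The names (`Traced`,
`make_class`) are invented for illustration; the list append stands in for
whatever code transformation the metaclass would actually perform:

```python
created = []  # records each time the metaclass "transformation" runs

class Traced(type):
    def __new__(mcs, name, bases, namespace):
        created.append(name)  # stand-in for per-class transformation work
        return super().__new__(mcs, name, bases, namespace)

def make_class(name):
    """Dynamically create a new class; the metaclass runs on each call."""
    return Traced(name, (), {})

a = make_class("A")
b = make_class("B")
```

Here `created` ends up as `["A", "B"]`: the metaclass body executed once per
`make_class` call, which is exactly the repeated cost being described.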

This is an assumption I don't agree with. The whole point of the OP's post
was about creating DSLs or altering the syntax of Python itself - all to
enhance expressiveness.

But we are still talking about CODE here - things that get written by
programmers. Even if that is piped through ever so many stages, it won't
grow beyond what a programmer actually writes.

Runtime feeding (runtime meaning here not in a startup phase, but
constantly/later) of something that generates new code - I wouldn't say
that is unheard of, but I strongly doubt it occurs so often that it rules
out tree transformations that don't try to squeeze the last bit of
performance out themselves. Which, BTW, would rule out Python itself, as
nothing beats runtime assembly generation BY assembly. Don't you think?
> The whole point of a code transformation mechanism like the one Anton is
> talking about is to be dynamic. Else one just needs a preprocessor...

No, it is not the whole point. The point is, in the OP's own words:

The idea is that we now have a fast parser (ElementTree) with a 
reasonable 'API' and a data type (XML or JSON) that can be used as an 
intermediate form to store parsing trees. Especially statically typed 
little languages seem to be very swallow-able. Maybe I will be able to 
reimplement GFABasic (my first love computer language, although not my 
first relationship) someday, just for fun.
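The quoted idea - XML as an intermediate form for parse trees, handled
through ElementTree - can be sketched as follows. The tag names (`expr`,
`num`, `op`) are made up for the example; the OP's actual little-language
grammar is not specified:

```python
import xml.etree.ElementTree as ET

# Build a tiny parse tree for "1 + 2" as XML. The element names are
# illustrative - any statically typed little language could serialize
# its tree this way.
tree = ET.Element("expr")
ET.SubElement(tree, "num").text = "1"
ET.SubElement(tree, "op").text = "+"
ET.SubElement(tree, "num").text = "2"

serialized = ET.tostring(tree)        # store or ship the tree as XML...
reloaded = ET.fromstring(serialized)  # ...and a later stage reads it back

operands = [int(e.text) for e in reloaded.findall("num")]
operator = reloaded.find("op").text
```

The tree survives the round trip intact (`operands` is `[1, 2]`,
`operator` is `"+"`), so a code generator can consume it in a separate
step - no on-the-fly generation required.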

No on-the-fly code generation here. He essentially wants Lisp-style macros
with better parsing. Still a programming language, not a data-monger.
