An idiom for code generation with exec

bruno.desthuilliers at gmail.com
Fri Jun 20 23:39:33 CEST 2008


On 20 June, 21:44, eliben <eli... at gmail.com> wrote:
> On Jun 20, 3:19 pm, George Sakkis <george.sak... at gmail.com> wrote:
>
(snip)

> > It's still not clear why the generic version is so much slower, unless
> > you extract only a few selected fields, not all of them. Can you post a
> > sample of how you used to write it without exec, to clarify where the
> > inefficiency comes from?
>
> > George
>
> The generic version has to make a lot of decisions at runtime, based
> on the format specification.
> Extract the offset from the spec, extract the length.

import operator

transformers = []
transformers.append(operator.itemgetter(
    slice(format.offset, format.offset + format.length)))

> Is it msb-first? Then reverse.

if format.msb_first:
    transformers.append(lambda data: data[::-1])  # reverse the extracted bytes

> Are specific bits required? If so, do bit
> operations.
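By way of illustration, such a bit-operation step could just be another callable appended to the list - the mask value here is invented, and `transformers` is re-created so the snippet runs standalone:

```python
transformers = []  # the list being built above, re-created for a standalone run

BIT_MASK = 0x0F  # hypothetical mask taken from the format spec

def mask_bits(data, mask=BIT_MASK):
    # bit-and every byte of the extracted field with the mask
    return ''.join(chr(ord(c) & mask) for c in data)

transformers.append(mask_bits)
```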

etc... Python functions are objects, you can define your own callable
(ie: function-like) types, you can define anonymous single-expression
functions using lambda, functions are closures too so they can carry
the environment they were defined in, and implementing partial
application (using either closures or callable objects) is trivial
(and is in the stdlib functools module since 2.5 FWIW). Defining a
sequence of transformer functionals is not a problem either, and
applying it to your data bytestring is just trivial:

def apply_transformers(data, transformers):
    for transformer in transformers:
        data = transformer(data)
    return data

... and is not necessarily that bad performance-wise (here you'd have
to benchmark both solutions to know for sure).
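Putting the pieces together, with a made-up `Format` namedtuple standing in for the real spec (all field values invented), and `functools.partial` binding the mask argument in advance as mentioned above:

```python
import operator
from functools import partial
from collections import namedtuple

# hypothetical stand-in for the real format specification
Format = namedtuple('Format', 'offset length msb_first mask')
spec = Format(offset=2, length=4, msb_first=True, mask=0x5F)

def mask_bytes(mask, data):
    # bit-and every byte with the given mask
    return ''.join(chr(ord(c) & mask) for c in data)

# build the transformer pipeline once, from the spec
transformers = [
    operator.itemgetter(slice(spec.offset, spec.offset + spec.length)),
]
if spec.msb_first:
    transformers.append(lambda data: data[::-1])
# partial application binds the mask now; the data comes later
transformers.append(partial(mask_bytes, spec.mask))

def apply_transformers(data, transformers):
    for transformer in transformers:
        data = transformer(data)
    return data
```

With the sample spec above, `apply_transformers('ABCDEFGH', transformers)` slices out `'CDEF'`, reverses it to `'FEDC'`, and the mask leaves uppercase ASCII untouched.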

> A dynamically generated function doesn't have to make any decisions -

No, but neither does a sequence of callable objects. The decisions are
made where you have the necessary context, and applied somewhere
else. Dynamically generating/compiling code is one possible solution,
but not the only one.
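For comparison, a minimal sketch of the generated-code approach - the slice and the reversal below are hard-coded purely to illustrate what the generator would emit for one particular spec:

```python
# the format decisions are baked into generated source, compiled once
src = '''
def extract(data):
    field = data[2:6]    # offset/length decided at generation time
    return field[::-1]   # msb_first decided at generation time too
'''
namespace = {}
exec(src, namespace)
extract = namespace['extract']
```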


> I guess this is not much different from Lisp macros

The main difference is that Lisp macros are not built as raw strings,
but as first-class objects. I've found this approach more flexible
and way easier to maintain, but here again, YMMV.

Anyway, even while (as you may have noticed by now) I'm one of these
"there's-a-better-way-than-eval-exec" people, I think you may
(depending on benchmarks with both solutions and real-life data) have
a valid use case here - and if you encapsulate this part correctly,
you can always start with your current solution (so you make it work),
then eventually switch implementations later if it's worth the extra
effort...


Just my 2 cents. Truth is that as long as it works and is
maintainable, then who cares...


