On Feb 28, 2014, at 19:46, Steven D'Aprano firstname.lastname@example.org wrote:
> Something perhaps like a thunk might be appropriate? We can *almost*
> do that now, since Python has a compile function:
To save Haoyi Li the trouble of saying it: we _can_ do this now, since Python has import hooks. You can use MacroPy to quote an expression, giving you an AST to evaluate later, or to wrap it up in a function automatically (like quote and function, respectively, in Lisp). There's also an in-between option: compiling it to a code object but not wrapping that in a function object.
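All three representations can be approximated at runtime with just the stdlib (no import hooks needed); a minimal sketch, with `src` standing in for the quoted expression:

```python
import ast

src = "a * b + 1"

# Quote it as an AST, to be compiled and evaluated later:
tree = ast.parse(src, mode="eval")

# The in-between option: a code object, not yet wrapped in a function.
code = compile(tree, "<quoted>", "eval")

# Or wrap it up in a function object, like Lisp's function:
func = lambda a, b: a * b + 1

# All three can be forced later:
print(eval(compile(tree, "<quoted>", "eval"), {"a": 2, "b": 3}))  # 7
print(eval(code, {"a": 2, "b": 3}))                               # 7
print(func(2, 3))                                                 # 7
```

The difference is only in how much of the compilation pipeline has already run by the time you force the value.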
There's one other, crazier option for representing quoted expressions/thunks: as lazy futures. This is how Alice does it, and it's also implicitly what dataflow languages like Oz are doing (every variable is a lazy future). In Python, though, this would really just be a wrapper around a function call, so I don't think it would buy anything.
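To show why it's just a wrapper around a function call, here's a toy sketch of a lazy future in Python (the `Lazy` class and its API are hypothetical, not a proposal):

```python
class Lazy:
    """A toy lazy future: wraps a zero-argument function and
    computes its value at most once, on first access."""
    _UNSET = object()

    def __init__(self, func):
        self._func = func
        self._value = self._UNSET

    @property
    def value(self):
        if self._value is self._UNSET:
            self._value = self._func()  # force the computation once
        return self._value

calls = []
fut = Lazy(lambda: calls.append("ran") or 42)
print(fut.value)  # 42, computed now
print(fut.value)  # 42, cached -- the lambda ran only once
```

Strip out the memoization and this is nothing but a deferred function call, which is why it adds little over the plain-function option.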
Which one of those do people want? I'm not sure. Since you did a great job listing the problems with the string-quoting solution, we can compare the options against each one.
> - You have to write the expression as a string, which means you lose
>   any possibility of syntax highlighting.
Obviously not a problem here.
> - The temptation is to pass some arbitrary untrusted string, which
>   leads to serious security implications. The argument here should be
>   limited to an actual expression, not a string containing an
>   expression which might have come from who knows where.
Again, not a problem.
> - It should be as lightweight as possible. The actual compilation of
>   the expression should occur at compile-time, not run-time. That
>   implies some sort of syntax for making thunks, rather than a
>   function call.
Not a problem for the function or code versions.
For the AST version, you're doing part of the compilation at compile-time, and the rest at runtime.
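Concretely, the AST version splits the work into two phases: parsing happens when the thunk is built, and bytecode compilation is deferred until it's forced. A sketch (`AstThunk` is a hypothetical name, not a proposed API):

```python
import ast

class AstThunk:
    """Hypothetical thunk that parses eagerly but compiles lazily."""
    def __init__(self, src):
        # Done at "quote time": source -> AST.
        self.tree = ast.parse(src, mode="eval")
        self._code = None

    def force(self, namespace):
        # Done at "force time": AST -> code object -> value.
        if self._code is None:
            self._code = compile(self.tree, "<thunk>", "eval")
        return eval(self._code, namespace)

t = AstThunk("n * n")
print(t.force({"n": 7}))  # 49
```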
> - Likewise actually evaluating the thunk should be really lightweight,
>   which may rule out a function call to eval.
A function is obviously no better or worse than using lambda today.
A code object has to be evaluated by passing it to eval, or by wrapping it in a function and calling it. I suspect the former may be lighter weight than calling a function, but I really don't know. The latter, on the other hand, is obviously heavier than calling a function, but not by that much.
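Both ways of forcing a code object can be sketched with the stdlib (I'm making no claims about the relative timings, only showing the two mechanisms):

```python
import types

code = compile("x * 2", "<thunk>", "eval")

# Force it with eval, supplying a namespace explicitly:
print(eval(code, {"x": 21}))  # 42

# Or wrap it in a real function object first, then call it:
func = types.FunctionType(code, {"x": 21})
print(func())  # 42
```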
An AST has to be compiled to a code object, after which you do the same as the above. Obviously this is heavier than not having to compile.
> - How should scoping work? I can see use-cases for flat scoping,
>   static scoping, dynamic scoping, and the ability to optionally
>   provide custom globals and locals, but I have no idea how practical
>   any of them would be or what syntax they should use.
This, I think, is the big question.
A function is clearly a normal, static-scoped closure.
With an AST or code object, you can get almost any form of scoping you want _except_ static, either by building a function around it or by calling eval on it. (You can do additional tricks with an AST, but I don't think most programs would want to. For statements, this could be useful for optional hygienic variables, but for expressions that isn't an issue.)
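For example, both flat/custom scoping and dynamic scoping fall out of eval's existing globals/locals parameters:

```python
code = compile("greeting + ', ' + name", "<quoted>", "eval")

# Flat/custom scoping: the caller supplies the entire namespace.
ns = {"greeting": "hello", "name": "world"}
print(eval(code, ns))  # hello, world

# Dynamic-style scoping: evaluate against whatever frame forces it,
# by passing that frame's globals and locals in explicitly.
def caller():
    greeting, name = "hi", "there"
    return eval(code, globals(), locals())

print(caller())  # hi, there
```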
If we want static scoping, we really need functions. At least we need some kind of object that has some form of code plus a closure mechanism--and that's pretty much all functions are. On the other hand, if we want dynamic, flat, or customizable scoping, we need something we can either build a function object out of dynamically, or evaluate dynamically--and that's pretty much what code objects are.
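The contrast is easy to demonstrate today: a lambda captures its defining scope as a closure, while a code object compiled in the same place carries no closure at all and must have every free name supplied at force time:

```python
def make_both():
    secret = "captured"
    # The function version closes over `secret` statically...
    as_func = lambda: secret
    # ...while the code-object version has no closure mechanism.
    as_code = compile("secret", "<quoted>", "eval")
    return as_func, as_code

f, c = make_both()
print(f())                              # "captured" -- the closure works
print(eval(c, {"secret": "supplied"}))  # the name must be supplied by hand
```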