On 03/30/2015 03:03 AM, Andrew Barnert wrote:
Break and continue are harder. Probably give the same error as when they are used outside a loop. A break or return would need to be local to the loop, so you can't have a break in a non_local_block unless the block also contains the loop. That keeps the associated parts local to each other.
That definitely makes things simpler--and, I think, better. It will piss off people who expect (from Lisp or Ruby) to be able to do flow control across the boundaries of a non_local_block, but too bad for them.
Or in my own language, which I wrote to test ideas like this. In it, the keywords are objects, so you can return a statement/keyword from a function and have it execute at that location. For example, a function can increment a counter and return a "nil" keyword (a no-op, like Python's pass keyword), and once a limit is reached, return a "break" keyword, which breaks the loop. The memory model is based on static hash-map lookup: each defined function object gets a reference to its parent's hash-map, so name resolution walks the hash-map tree until it finds the name. There are a number of ways to make that more efficient, but for now it keeps the language simple.
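The parent-linked hash-map lookup described above can be sketched in a few lines of Python. This is only an illustration of the idea; the class and names here are made up:

```python
# Name resolution by walking a chain of parent hash-maps: each scope
# holds its own bindings plus a reference to the enclosing scope.

class Scope:
    def __init__(self, parent=None):
        self.names = {}       # this scope's own bindings
        self.parent = parent  # enclosing scope, or None at the top

    def lookup(self, name):
        scope = self
        while scope is not None:     # walk up toward the top-level scope
            if name in scope.names:
                return scope.names[name]
            scope = scope.parent
        raise NameError(name)

global_scope = Scope()
global_scope.names["x"] = 10

local_scope = Scope(parent=global_scope)
local_scope.names["y"] = 2

print(local_scope.lookup("y"))  # found locally -> 2
print(local_scope.lookup("x"))  # found in the parent scope -> 10
```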
Python, on the other hand, does many more checks at compile time, and pulls the name references into the code object. That allows for smaller and faster bytecode execution. This seems to be where the main issues are.
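You can see that compile-time decision directly with the dis module: locals compile to LOAD_FAST, globals to LOAD_GLOBAL, and closed-over names to LOAD_DEREF. A quick illustration (the function is just a throwaway example):

```python
# Inspect how CPython resolved names at compile time for a small
# function: 'a' and 'b' are locals, 'g' is a global.
import dis

g = 100

def f(a):
    b = a + g
    return b

ops = [ins.opname for ins in dis.get_instructions(f)]
print(ops)
# The list includes LOAD_FAST for the locals and LOAD_GLOBAL for 'g',
# showing the binding was decided before the function ever ran.
```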
But as you noted, many of the pieces are already there, so it shouldn't be that difficult to come up with a working test implementation. For now, experimenting with exec may be the best way to test the idea, and possibly come back with some real code to discuss and see where that goes. I think until some real code is written we will go in circles pointing out things wrong with the idea. So let's wait a bit for some real code and examples.
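As a starting point for that experiment, exec already lets you compile a block once and apply it to whichever namespace you like, which is roughly the "apply code to a namespace" idea. A rough sketch (the block and namespaces are made up for illustration):

```python
# Prototype "applying code to a namespace": compile a statement once,
# then execute it against different mappings.

block = compile("x = x + 1", "<block>", "exec")

ns = {"x": 41}
exec(block, ns)        # reads and writes 'x' in this mapping
print(ns["x"])         # 42

other = {"x": 0}
exec(block, other)     # same code object, different namespace
print(other["x"])      # 1
```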
When people think of macros, they may be thinking of several different concepts.
One is to redefine a statement or expression in a way that makes it easier to use. In that case the expression is transformed into another form at compile time. Lisp macros work in this way.
Another is to in-line a block of code defined in a single location into numerous other locations. Generally this is done with pre-processors.
I think in-lining python functions has been discussed before and that idea would overlap this one.
My interest in functions that can be taken apart and reused with signature/scope objects is a bit different: the idea of mutating the namespace by applying code to it, rather than calling code by applying values to it (a normal function call). This of course is what objects do; they have a state, and methods are used to alter the state. But there are other things in this idea that may be of interest and may relate back to other areas of Python.
(NOTE: Just read Steven's message about Raymond's ChainMap. So I'm going to see if it is useful.)
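For reference, collections.ChainMap already gives a live view over a chain of namespaces: lookups search each mapping in order, and writes go to the first one. A quick illustration of why it might fit here:

```python
# ChainMap searches its mappings left to right on lookup and writes
# only to the first mapping, much like a local/global/builtin chain.
from collections import ChainMap

builtins_ns = {"len": len}
globals_ns = {"x": 1}
locals_ns = {"y": 2}

scope = ChainMap(locals_ns, globals_ns, builtins_ns)
print(scope["x"], scope["y"])   # 1 2  (found in different maps)

scope["x"] = 99                 # the write lands in locals_ns
print(globals_ns)               # {'x': 1} -- unchanged

child = scope.new_child({"x": 7})   # push a nested scope
print(child["x"])                   # 7 shadows the outer x
```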
And the reason for bringing this idea up here was that I think it could be used to implement the lite macro behaviour that was suggested, with a bit of added syntax.
On the other hand, it appears to me that Python is going in the direction of making it easier to compile to C code. More dynamic features may not be helpful in the long run.
It was an incomplete example. It should have been...
add_1_to_x = code(lambda: x + 1)
and then later you could use it in the same way as above.
x = ^^ add_1_to_x
OK, it sounds like what you're really looking for here is that code(spam) returns a function that's just like spam, but all of its variables (although you still have to work out what that means--remember that Python has already decided local vs. cell vs. global at compile time, before you even get to this code function) will use dynamic rather than lexical scoping. All of the other stuff seems to be irrelevant.
In fact, maybe it would be simpler to just do what Lisp does: explicitly define individual variables as dynamically scoped, effectively the same way we can already define variables as global or nonlocal, instead of compiling a function and then trying to turn some of its variables into dynamic variables after the fact.
It's something to try, but if a code block needs boilerplate to work, or the function it's put in needs it, it really isn't going to be very nice.
And the good news is, I'm 99% sure someone already did this and wrote a blog post about it. I don't know where, and it may be a few years and versions out of date, but it would be nice if you could look at what he did, see that you're 90% of the way to what you want, and just have to solve the last 10%.
Plus, you can experiment with this without hacking up anything, with a bit of clumsiness. It's pretty easy to create a class whose instances dynamically scope their attributes with an explicit stack. (If it isn't obvious how, let me know and I'll write it for you.) Then you just instantiate that class (globally, if you want), and have both the caller and the callee use an attribute of that instance instead of a normal variable whenever you want a dynamically-scoped variable, and you're done. You can write nice examples that actually work in Python today to show how this would be useful, and then compare to how much better it would look with real dynamic variable support.
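To make the suggestion above concrete, here is one rough way such a class could look: an object whose attributes are backed by an explicit stack of frames, so the caller can push a binding that the callee sees. The class and names are illustrative, not a finished design:

```python
# Emulate dynamically scoped variables with an explicit frame stack:
# attribute reads search the stack innermost-first, writes go to the
# innermost frame.

class DynamicScope:
    def __init__(self):
        object.__setattr__(self, "_frames", [{}])

    def push(self, **bindings):
        self._frames.append(dict(bindings))

    def pop(self):
        self._frames.pop()

    def __getattr__(self, name):
        for frame in reversed(self._frames):   # innermost frame first
            if name in frame:
                return frame[name]
        raise AttributeError(name)

    def __setattr__(self, name, value):
        self._frames[-1][name] = value

dyn = DynamicScope()
dyn.x = 1            # bound in the outermost frame

def callee():
    return dyn.x     # sees whichever binding is innermost at call time

def caller():
    dyn.push(x=42)   # caller rebinds x for the duration of the call
    try:
        return callee()
    finally:
        dyn.pop()

print(callee())  # 1
print(caller())  # 42 -- the caller's binding is visible in the callee
print(callee())  # 1 again once the frame is popped
```

Wrapping push/pop in a context manager would make the caller side tidier, but this is enough to write working examples today.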
I'm going to play with the idea a bit over the next few days. :-)
This is just an example to show how the first option above connects to the examples below, with "^^: x + 1" being equivalent to "code(lambda: x + 1)".
Which would also be equivalent to ...
@code
def add_1_to_x():
    return x + 1
x = ^^ add_1_to_x
>>> And a bit of sugar to shorten the common uses if needed.
>>>
>>> spam(x + 1, code(lambda: x + 1))
>>>
>>> spam(x + 1, ^^: x + 1)