On 06/22/2013 08:26 PM, Nick Coghlan wrote:
This is potentially worth pursuing, as Python currently only supports "operator.itemgetter" and comprehensions/generator expressions as mechanisms for retrieving multiple items from a container in a single expression. A helper function in collections, or a new classmethod on collections.Mapping, isn't outside the realm of possibility.
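For comparison, the existing mechanisms look roughly like this (values invented purely for illustration):

    >>> import operator
    >>> d = {'a': 1, 'b': 2, 'c': 3}
    >>> operator.itemgetter('a', 'b')(d)
    (1, 2)
    >>> [d[k] for k in ('a', 'b')]
    [1, 2]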
For the question of "How do I enlist the compiler's help in ensuring a string is an identifier?", you can actually already do that with a simple helper class:
>>> class Identifiers:
...     def __getattr__(self, attr):
...         return attr
...
>>> ident = Identifiers()
>>> ident.a
'a'
>>> ident.b
'b'
>>> ident.c
'c'
Combining the two lets you write things like:
>>> submap(locals(), ident.a, ident.b, ident.c)
{'a': 1, 'b': 2, 'c': 3}
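(submap isn't an existing function; a minimal sketch of the helper assumed above, with the name and call pattern taken from the example, could be:)

    def submap(mapping, *keys):
        # Build a new dict holding just the requested keys.
        return {k: mapping[k] for k in keys}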
That's interesting. I noticed there isn't a way to easily split a dictionary with a list of keys, other than using a loop. Any syntax that reduces name=name to a single name would look to me like pass-by-reference notation, and I don't think that would be good. What I see in examples like the one above is the amount of additional work the CPU needs to do; I think that is more of an issue than name=name.
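To illustrate the splitting case: pulling a dict apart by a list of keys currently takes something like this (a rough sketch; the helper name is made up):

    def split_dict(d, keys):
        # Partition d into (items whose key is in keys, everything else).
        wanted = set(keys)
        picked = {k: v for k, v in d.items() if k in wanted}
        rest = {k: v for k, v in d.items() if k not in wanted}
        return picked, rest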
Personally, I'd like to see more exploration of what the language *already supports* in this area, before we start talking about adding dedicated syntax.
It may be easier to experiment with stuff like this if we could make the following possible. (Not taking into account some things, like closures, etc., to keep the idea clear.) Where:

    result = f(...)    # normal function call

is the same as:

    args, kwds = func_parse_signature(f, ...)
    name_space = func_make_name_space(f, args, kwds)
    result = func_call_with_args(f, args, kwds)    # call with the parsed args/kwds
    result = func_call_code(f, name_space)         # or run the code with the built name space

These functions would create a way to reuse a function's parts in new ways. They don't go the full step of being able to take functions apart and reassemble them; that's harder to do while keeping everything working. Some of this is currently doable, but it involves hacking a function object/type or using exec.

For example, the signature could be parsed by a decorator, with added features. The decorator could then skip the function's normal signature parsing step and call the function with the args and kwds instead. And possibly a name space could be reused with a function, which would have the effect of giving it static variables, or a class with attributes could be used for that purpose. (Yes, care would be needed.)

These functions might also be usable as stacked decorators. (It'll need some experimenting to make this work.)

    @func_call_code
    @func_make_name_space
    @func_parse_signature
    def foo(...):
        ...

That would just break up calling a function into sub-steps. Additional decorators could be stuck between those to check or alter the intermediate results. Something like that might be useful for some types of testing: it's probably easier to check the values in the created name space before it's used than it is to check the arguments before they are parsed against the signature.

If these could be C functions with their own byte codes, they might enable some interesting internal optimisations.

Just a few initial thoughts to follow up on,
Ron
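P.S. Just to give a feel for the decorator idea, here is a rough sketch of what the first stage could look like today with inspect.signature (the func_parse_signature name is reused from above; the behaviour is only an approximation of the real hooks):

    import functools
    import inspect

    def func_parse_signature(f):
        # Sketch only: bind the call's arguments to f's parameters ahead of
        # the actual call, so the would-be local name space can be checked
        # (or altered) before the function body runs.
        sig = inspect.signature(f)

        @functools.wraps(f)
        def wrapper(*args, **kwds):
            bound = sig.bind(*args, **kwds)
            bound.apply_defaults()
            name_space = dict(bound.arguments)   # what foo()'s locals start as
            # ... a checking/altering stage could inspect name_space here ...
            return f(*bound.args, **bound.kwargs)
        return wrapper

    @func_parse_signature
    def foo(a, b, c=3):
        return a + b + c

    print(foo(1, 2))    # binds {'a': 1, 'b': 2, 'c': 3}, then returns 6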