
Jeremy Hylton <jeremy@alum.mit.edu> writes:
[chomp]
I'd be interested in looking at it.
Random idea that occurred while answering a post on comp.lang.python: how about dumping the CALL_FUNCTION* opcodes and replacing them with two argument-less opcodes, called for the sake of argument NCALL_FUNC and NCALL_FUNC_KW.

NCALL_FUNC would pop a function object and a tuple off the stack and apply the function to the tuple. NCALL_FUNC_KW would do the same, then pop a dictionary and do the moral equivalent of f(*args, **kw).

As a preliminary it would be sensible to rework BUILD_MAP so that it built dictionaries off the stack (a bit like BUILD_LIST, and like CALL_FUNCTION now does with keyword arguments), and to extend the compiler to use this for literal dictionaries.

This would add an opcode or so per function call, but would probably make life much simpler. No time for implementation tonight, but I could probably knock something up tomorrow (depending on how hard it turns out to be).

Thoughts? Is that like what you did, Marc?

M.

--
Those who have deviant punctuation desires should take care of their own perverted needs. -- Erik Naggum, comp.lang.lisp
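[A rough sketch, in Python rather than C, of the stack discipline the two proposed opcodes would impose. The names NCALL_FUNC/NCALL_FUNC_KW and the exact stack layout are assumptions taken from the description above, not a real implementation.]

    # Illustrative model only: the real opcodes would live in ceval.c.
    def ncall_func(stack):
        # stack (top last): ..., function, args_tuple
        args = stack.pop()          # tuple built by e.g. BUILD_TUPLE
        func = stack.pop()
        stack.append(func(*args))

    def ncall_func_kw(stack):
        # stack (top last): ..., function, args_tuple, kwargs_dict
        kwargs = stack.pop()        # dict built by the reworked BUILD_MAP
        args = stack.pop()
        func = stack.pop()
        stack.append(func(*args, **kwargs))

    # Example: evaluating pow(2, 10) under this model
    stack = [pow, (2, 10)]
    ncall_func(stack)
    print(stack)                    # [1024]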

My first impression is that this sounds like a nice simplification. One question is how expensive this is for the common case. Right now arguments are pushed onto the interpreter stack before CALL_FUNCTION is executed, which is just a pointer assignment. The pointers on the stack are then assigned into the fast locals of the function after the call. Your scheme sounds like it would add the cost of a tuple allocation to every function call.

It certainly wouldn't hurt to implement this, as it would provide some practical implementation experience that would inform a PEP on the subject.

On a related note, I have proposed a PEP to add nested lexical scopes for Python 2.1. Barry's away for the moment, so it hasn't been assigned a number yet. It's just a proposal, and I'm not sure what Guido will say in the end, but it also involves revising the function call architecture. I'll send a copy of the current draft (just notes) under a separate subject.

Jeremy
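[For reference, disassembling a plain call shows the callable and its arguments being pushed one by one before a single call opcode consumes them straight off the stack. The exact opcode names vary by interpreter version -- CALL_FUNCTION in older CPythons, CALL in recent ones.]

    import dis

    def example():
        return pow(2, 10)

    # No intermediate argument tuple is built for this common case; the
    # call opcode reads the arguments directly from the value stack.
    dis.dis(example)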

If it solves the mess with supporting extended call syntax, adding these opcodes might be a good idea. But as I said, for the normal (not extended) case, the existing CALL_FUNCTION opcode is the right thing to use unless you want things to slow down significantly. --Guido van Rossum (home page: http://www.python.org/~guido/)
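[For context, the "extended call syntax" in question is forwarding a sequence and a mapping as arguments at the call site, e.g.:]

    def f(a, b, c=0):
        return a + b + c

    args = (1, 2)
    kw = {"c": 3}

    # The interpreter has to unpack a tuple and a dict at call time;
    # this is the case the proposed opcodes are meant to simplify.
    print(f(*args, **kw))   # prints 6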

Michael Hudson wrote:
No, I just cleaned up the intertwined calling scheme currently implemented in ceval.c. This allows a few improvements, one of them being the possibility of inlining C function calls in the main loop (has anyone ever traced the path Python takes when calling a builtin function or method? You'd be surprised).

About your idea with the new opcodes: you could be touching a performance-relevant section there -- a ceval round trip may cost more than the added if()s in the CALL_FUNCTION opcode.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:       http://www.lemburg.com/
Python Pages:   http://www.lemburg.com/python/
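[A quick, unscientific way to get a feel for the call overhead being discussed here; the numbers depend entirely on the interpreter build, so treat this purely as an illustration.]

    import timeit

    # Compare the per-call overhead of a builtin against a trivial
    # Python-level wrapper; the difference is dominated by the call
    # machinery, not by the work done inside the callee.
    builtin = timeit.timeit("len(s)", setup="s = 'abc'", number=1000000)
    wrapper = timeit.timeit("f(s)",
                            setup="s = 'abc'\ndef f(x): return len(x)",
                            number=1000000)
    print("builtin call:", builtin)
    print("wrapped call:", wrapper)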

No, this is a bad idea. Long, long ago, all calls required building a tuple for the arguments first. This tuple creation turned out to be a major bottleneck. That's why the current call opcode exists. --Guido van Rossum (home page: http://www.python.org/~guido/)

participants (4): Guido van Rossum, Jeremy Hylton, M.-A. Lemburg, Michael Hudson