
hello pypy,

with the builtinrefactor branch i am at the point where i want to eliminate the 'appfile' concept altogether in favour of intermingling app-level and interp-level code directly. And before i start to fix the stdobjspace (which still isn't working) i'd like to get rid of 'opcode_app.py' so that the interpreter is 'appfile'-free.

It's easier to do this when the implementation of the opcodes (interp- or app-level) lives in methods of a to-be-created "Opcode" class. That's because a class can naturally be instantiated (with a space argument) and all app-level functions can be processed so they end up being an 'InterpretedFunction' instance which you can seamlessly/natively invoke at interp-level. e.g.

    def BUILD_CLASS(self, f):
        w_meths = f.valuestack.pop()
        w_bases = f.valuestack.pop()
        w_name = f.valuestack.pop()
        w_newclass = self.build_class(w_name, w_bases, w_meths, f.w_globals)
        f.valuestack.push(w_newclass)

    # not callable as is, takes no self-argument
    def app_build_class(name, bases, meths, globals):
        ...
        return metaclass(name, bases, namespace)

I also think that our internal 'InterpretedFunction' or 'Code' object should be responsible for delivering an 'Opcode' class specific to its code and space. This would also allow using a different opcode set (and implementation) for certain functions. Eventually it might be possible to implement or invent bytecodes at app-level. The key task here is to have a compiler package that produces code objects with a specific 'Opcode' class indication. However, is anybody against putting the opcodes/helpers in a class?

Note that the 'visibility' feature that Armin mentioned earlier is currently not done. But there already is a concept called 'AppVisibleModule', also living in the gateway module. It wraps itself into an app-visible module. E.g. the 'builtin' and 'sys' modules are classes that inherit from 'AppVisibleModule'. At wrapping time all 'w_*' attributes are made visible on the corresponding wrapped module instance.
Thus you can access the same (wrapped) object from app-level or interp-level. There is some wrapping taking place, but just look at the docstring :-) cheers, holger
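The Opcode proposal above can be sketched in a few lines. This is a hypothetical illustration, not the branch's actual code: at instantiation time the class collects its 'app_*' helpers and exposes each as an interp-level callable, with a toy 'InterpretedFunction' standing in for the real interpreting machinery:

```python
class InterpretedFunction:
    """Toy stand-in: the real one would interpret app-level bytecode
    in the given object space instead of calling straight through."""
    def __init__(self, space, appfunc):
        self.space = space
        self.appfunc = appfunc

    def __call__(self, *args):
        return self.appfunc(*args)


class Opcode:
    def __init__(self, space):
        self.space = space
        # expose every app_* helper as a natively callable attribute
        for name in dir(self):
            if name.startswith('app_'):
                plain = getattr(type(self), name)   # the plain function
                setattr(self, name[len('app_'):],
                        InterpretedFunction(space, plain))

    def BUILD_CLASS(self, f):
        w_meths = f.valuestack.pop()
        w_bases = f.valuestack.pop()
        w_name = f.valuestack.pop()
        w_newclass = self.build_class(w_name, w_bases, w_meths, f.w_globals)
        f.valuestack.push(w_newclass)

    # app-level helper: no 'self' argument, sees only app objects
    def app_build_class(name, bases, meths, globals):
        metaclass = type
        return metaclass(name, bases, dict(meths))


ops = Opcode(space=None)
C = ops.build_class('C', (object,), {'x': 1}, {})
```

The point of the sketch is only the wiring: BUILD_CLASS (interp-level, has 'self') and app_build_class (app-level, no 'self') live side by side in one class, and the instance ends up with a 'build_class' attribute that interp-level code can call natively.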

Hello Holger, On Mon, Jul 28, 2003 at 02:26:18PM +0200, holger krekel wrote:
(...) However, is anybody against putting the opcodes/helpers in a class?
As we discussed the other day on #pypy, there are in my opinion two different issues in your mail: how to get rid of the *_app.py files (including how to expose interpreter structures to app-level), and whether the opcodes should be moved into a class. Currently the second issue is (also) motivated by the fact that the code in the builtinrefactor branch can only mix in app-level code as methods of an instance that has a 'space' attribute -- hence the push to make opcodes into a class. (Let me discuss the *_app issue in the next mail.)

Independently of this problem I guess it is a good idea to make a class with the opcodes, instead of just a module like it is now. I'd even say that it should simply be a subclass of PyFrame, with PyFrame being an abstract class that has the logic to load and dispatch opcodes from a bytecode string but not the actual implementation of individual opcodes.

In my point of view *code objects* are interpreter-level classes which are not tied to a particular object space (they are "lexical objects", i.e. essentially the same as the source code they represent); then a *function object* is a code object bound to a particular environment, i.e. with an object space, with default arguments, maybe with a closure for nested scopes, and so on. The function object, when called, creates a *frame object* and executes it.

In PyPy there is only one function type, but several code object types: standard Python code object (app-level source code), built-in (i.e. implemented at interpreter-level), and possibly others like "using my own special opcodes". In this point of view, you can only call a function object (not a code object), but the function object needs to call a method build_frame() on the code object to create the appropriate kind of frame object. The resulting frame object should then have an eval() method to actually be run.
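The model described above can be made concrete with a small sketch. Only build_frame() and eval() are named in the mail; every other class and method name here is an assumption for illustration: one Function type, several Code types, each Code building its own kind of Frame.

```python
class Code:
    """Abstract 'lexical object'; not tied to any object space."""
    def build_frame(self, space, w_globals):
        raise NotImplementedError


class PyCode(Code):
    """Standard app-level code (a source string stands in for bytecode)."""
    def __init__(self, source):
        self.source = source

    def build_frame(self, space, w_globals):
        return PyFrame(space, self, w_globals)


class BuiltinCode(Code):
    """Implemented directly at interpreter level."""
    def __init__(self, func):
        self.func = func

    def build_frame(self, space, w_globals):
        return BuiltinFrame(space, self)


class PyFrame:
    def __init__(self, space, code, w_globals):
        self.space, self.code, self.w_globals = space, code, w_globals

    def eval(self):
        # a real PyFrame would dispatch bytecodes here
        return eval(self.code.source, dict(self.w_globals))


class BuiltinFrame:
    def __init__(self, space, code):
        self.space, self.code = space, code

    def eval(self):
        return self.code.func()


class Function:
    """The single function type: a code object bound to an environment."""
    def __init__(self, space, code, w_globals=None):
        self.space = space
        self.code = code
        self.w_globals = w_globals or {}

    def __call__(self):
        # the function doesn't know what kind of code it holds;
        # it just asks the code object for the right kind of frame
        frame = self.code.build_frame(self.space, self.w_globals)
        return frame.eval()


space = object()
f1 = Function(space, PyCode("6 * 7"))      # app-level implementation
f2 = Function(space, BuiltinCode(lambda: 42))  # interp-level implementation
```

Both f1() and f2() go through the same Function.__call__; the polymorphism lives entirely in the code/frame pair.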
I'm not sure where exactly the ExecutionContext comes into play; its role is to store the frames in the frame stack, but in Python built-in functions do not register frames there. Maybe it should be the role of the specific PyFrame class (implementing an app-level frame) to register and unregister itself in the frame stack. And what about generators? I guess it would be nice if built-in functions could act as generators as well. A bientot, Armin.

Hi Armin, [Armin Rigo Mon, Aug 04, 2003 at 10:48:27AM +0200]
Please let me clarify the concepts behind the current builtinrefactor branch. The main point of the refactoring was to channel all calls to app-level code mainly through two classes:

ScopedCode: a code object with a global and closure scope. You can execute it a la

    c = ScopedCode(space, cpycodeobj, w_globals, closure_w)
    c.eval_frame(w_locals)

this is currently needed in many places. Maybe it should be renamed to 'InterpretedCode' or so.

InterpretedFunction: a function which will be interpreted at app-level. It derives from ScopedCode and you can use it a la

    f = InterpretedFunction(space, cpyfunc, w_globals, closure_w)
    f.create_frame(w_args, w_kwargs)
    executioncontext.eval_frame(f)

In fact there is a third, in-between one, InterpretedFunctionFromCode, which provides initialization from a cpy-code object rather than from a cpy-function object.

Therefore, the current builtinrefactor branch does not fundamentally require a 'space' attribute, and the 'app2interp' definition is trivial to change to your suggestion (passing a space explicitly into app2interp-converted functions):

    class app2interp:
        def __init__(self, appfunc):
            self.appfunc = appfunc

        def __call__(self, space, w_args, w_kwargs):
            f = InterpretedFunction(space, self.appfunc)
            return f.eval_frame(w_args, w_kwargs)

with which you have, from

    def app_f(x):
        return x + 1
    f = app2interp(app_f)

the desired explicit f(space, w_args, w_kwargs) interface. IOW, app2interp is just a thin wrapper around the lower-level machinery which was the real target of my refactoring efforts and reduced the LOCs of the interpreter directory by 10% (more to follow).
Yes, something like this. Also see the other compiler-discussion, lately. (i am currently a bit in a rush ...). cheers, holger

Hello Holger, On Mon, Aug 04, 2003 at 12:08:02PM +0200, holger krekel wrote:
I still find this confusing. I believe that the distinction between code objects and functions is pretty fundamental in Python. A code object has no closure whatsoever (no references to globals, to default args, ...). It is a lexical object. It can be stored in .pyc files or created by compile() out of pure text.

In some other programming languages, Python's code objects are blocks of code (maybe whole function bodies, maybe smaller), and Python's function objects are what they call "closures": as with Python's "def" statement, they are created by capturing a block of code and some environment. (Python's "closure" attribute only means the tuple of nested-scope variables going from the parent to the child function; in its broader sense "closure" refers to the whole captured environment, which in Python includes the globals and the default arguments.) Armin
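CPython itself can illustrate the distinction being drawn here. In today's spelling the attributes are `__code__` and `__closure__` (in 2003 they were `func_code` and `func_closure`): compile() yields a purely lexical code object, while a def statement captures an environment.

```python
# A code object from compile(): pure text, no globals, no defaults.
code = compile("x + 1", "<demo>", "eval")

def make_adder(n):
    def add(x):
        return x + n      # 'n' comes from the enclosing scope
    return add

add3 = make_adder(3)

# The function object bundles a code object with a captured environment:
assert add3.__code__.co_freevars == ('n',)          # lexical part
assert add3.__closure__[0].cell_contents == 3       # captured part

# The bare code object only gets an environment at evaluation time:
assert eval(code, {"x": 41}) == 42
```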

[Armin Rigo Mon, Aug 04, 2003 at 03:18:44PM +0200]
right before you interpret it you must place it into a context of global(, closure) and local scopes.
It is a lexical object.
'ScopedCode' was not meant to be such a plain code object. It rather wraps a plain lexical code object (the cpy one) and ties it into a context of scopes. Maybe the name 'ScopedCode' is not perfect, but without this concept you have a lot of boilerplate at various places dealing with code objects and scopes. To me it seemed that the steps

1. tying a code object to certain scopes
2. creating the frame / possibly parsing arguments into locals
3. executing the frame

are a useful separation of concerns. cheers, holger

Hello Holger, On Mon, Aug 04, 2003 at 07:47:27PM +0200, holger krekel wrote:
'ScopedCode' was not meant to be such a plain code object. It's rather wrapping a plain lexical code object (the cpy one) and tying it into a context of scopes.
Then it is a function object...
Indeed. As far as I understand it, (1.) was the reason why Python has separate notions of code objects and function objects. It is even clearer in CPython, where function objects are not built-in functions but really nothing more than a code object tied to a certain environment.

It is essentially just a question of terminology so far, but if 'ScopedCode' is actually a function object (which I guess you knew from the start, but I never actually understood until now), you should consider that we thought about defining only a single function type in PyPy and varying just the code object types to reflect the different kinds of implementation.

In gateway.py, on the other hand, you have created a whole hierarchy of subclasses of 'ScopedCode' (some with 'Function' in their name -- I really should have understood it earlier). All these classes' instances have a 'cpycode' attribute, but some subclasses expect it to be a code object with interpretable bytecode. (BTW these ones have the 'cpycode' attribute repeated under another name, 'func_code'.)

When we talked about having only one function type, the goal was to have a hierarchy similar to the one in gateway.py but with code objects instead. The (single) function class would only handle the issues that do not depend on the kind of code object used for the implementation, mainly argument parsing. A bientot, Armin.

[Armin Rigo Tue, Aug 05, 2003 at 12:00:30AM -0700]
really only if you pass arguments. There are code objects that are simply executed in a context of scopes - no argument parsing stuff whatsoever.
Hmmm. i think we have two cases for interpretation:

1. code objects, tied to a scope, executed (modules, exec statements, ...)
2. function objects, tied to a scope, parsing arguments, executed/interpreted

the design in gateway.py was meant to fit this model, and because they share functionality 'InterpretedFunction' derives from 'ScopedCode'. Of course we could declare 1. as being a degenerate function (with no argument parsing) but somehow i felt that 'scoped code' objects are the real "one" building block. I wouldn't think of 'module code' execution as executing a 'function'. However, Armin, please feel free to just modify the branch so that it better suits your view. I don't think that we are far away from each other [*].

The real challenge probably is how to handle globals: how do app-level functions/data see each other? I would try to approach this in a static way (rather than the very dynamic way i understood from our IRC discussions), e.g. construct a w_globals for every module and put all app name-bindings in there once and for all. For builtin modules the globals should be on the class instance. Again, feel free to design this differently. cheers, holger

[*] please note that 'InterpretedMethod' is just a hack for unittest_w, because all the app-level unit tests still have this (currently) useless 'self' argument. I didn't want to change all the tests unless there is an app/global/visibility concept.

Hello Holger, Ok, I am working on refactoring things to our "almost-common" view :-) On Tue, Aug 05, 2003 at 11:08:13AM +0200, holger krekel wrote:
Yes, you are right here. I am now putting argument parsing code in a new Function class, where it seems to belong. Statements like 'exec code' should not create a function object at all, nor involve argument parsing. We have a potential difference with CPython here, however, because argument parsing is always done in CPython: e.g.
In the model corresponding to the refactoring, the 'exec' statement would work slightly differently. All frame objects now have getxxx() and setxxx() accessors to read or write data in various ways, including setlocaldict(d) and setlocalvar(index, value). The former sets the locals dictionary (and calls locals2fast if needed, to keep the fast variables in sync), while the latter directly sets the index-th local variable. The argument parsing code in Function calls setlocalvar() to initialize the frame, while the 'exec' statement calls setlocaldict(). It means that the following would work in PyPy but not in CPython:
I cannot see a real problem in allowing this to work in PyPy, as it is somehow a more regular behavior; in CPython it explicitly doesn't work because of the fast-locals optimization. Note that by extension I expect the following behavior in PyPy:
Note that the error only shows up on the 'print x' statement. Does anyone have a reason to forbid this? The 'exec' statement seems quite low-level anyway, and I don't expect it to be normally used for code objects from 'func_code' attributes. Armin
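The inline examples in this mail did not survive the archive. The CPython side of the contrast can still be demonstrated today (modern spelling: `__code__` and `exec()` instead of 2.x's `func_code` and the `exec` statement); this illustrates the fast-locals behavior under discussion, it is not the original snippet:

```python
# A function's code object is compiled with fast (optimized) locals:
def f():
    x = x + 1          # 'x' is a fast local; read before any store
    return x

# exec() accepts the code object, but the supplied locals mapping is
# not copied into the fast-local slots, so 'x' is unbound when read:
try:
    exec(f.__code__, {}, {'x': 41})
    outcome = 'ran'
except NameError:      # UnboundLocalError is a subclass of NameError
    outcome = 'unbound'

# Module-level code, by contrast, uses name lookups and sees the dict:
ns = {'x': 41}
exec("x = x + 1", {}, ns)
assert ns['x'] == 42
```

In the proposed PyPy model, the exec path would go through setlocaldict(), which syncs the dictionary into the fast locals, so the first case would work there as well.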

Armin Rigo wrote: ... [stuff agreed]
Maybe I've read not enough of context, but if I only can call a function object, how are then classes and complete modules executed? In standard Python, a frame is created for a code object, together with parameters and additional info, which may, but needn't come from a function object (speaking of closures). Why not stick with this? ciao - chris -- Christian Tismer :^) <mailto:tismer@tismer.com> Mission Impossible 5oftware : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/

Hello Christian, On Tue, Aug 05, 2003 at 02:14:09AM +0200, Christian Tismer wrote:
Point taken. In Python a code object can be executed either with PyEval_EvalCode(), which takes few arguments, or PyEval_EvalCodeEx(), which has tons of arguments.
I suggest then the following interface:

1. code objects have a create_frame(w_globals) that creates a new empty frame;
2. function objects have a parse_args(frame, w_args, w_kwargs) that does the argument-decoding dance and fills the given frame;
3. frame objects can be executed with eval().

So if we have no particular arguments to give to the code object, as in the case of class bodies and modules, we only do steps 1 and 3. Step 2 replaces the PyEval_EvalCodeEx() functionality by avoiding the need for a routine with hundreds of arguments (these arguments are exactly the fields of the function object). A bientot, Armin.
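A minimal sketch of this three-step interface (the method names create_frame, parse_args, and eval come from the mail; everything else here is invented for illustration):

```python
class Code:
    def __init__(self, body):
        self.body = body                      # stand-in for bytecode

    def create_frame(self, w_globals):        # step 1: new empty frame
        return Frame(self, w_globals)


class Frame:
    def __init__(self, code, w_globals):
        self.code = code
        self.w_globals = w_globals
        self.w_locals = {}

    def eval(self):                           # step 3: run it
        return self.code.body(self.w_locals, self.w_globals)


class Function:
    def __init__(self, code, w_globals, defaults=()):
        self.code = code
        self.w_globals = w_globals
        self.defaults = defaults

    def parse_args(self, frame, w_args, w_kwargs):  # step 2: fill the frame
        frame.w_locals.update(w_kwargs)
        # real argument decoding (positionals, defaults, * / **) goes here


code = Code(lambda loc, glob: loc.get('x', 0) + glob.get('y', 0))

# class bodies / modules: steps 1 and 3 only, no function object involved
frame = code.create_frame({'y': 40})
assert frame.eval() == 40

# function calls: steps 1, 2, 3
func = Function(code, {'y': 40})
frame = code.create_frame(func.w_globals)
func.parse_args(frame, (), {'x': 2})
assert frame.eval() == 42
```

Splitting step 2 out of the evaluation entry point is exactly what lets the no-arguments case (modules, class bodies, exec) skip argument parsing entirely.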

Hello again, On Mon, Jul 28, 2003 at 02:26:18PM +0200, holger krekel wrote:
Some issues we discussed (but couldn't conclude) over IRC with Holger, about the details of:

    def app_f(x):
        return x + 1   # to be run at app-level, i.e. interpreted by PyPy
    f = app2interp(app_f)

** Currently, this only works if app_f and f are methods of a class whose instances must have a 'space' attribute. You can then call 'self.f(w_x)', where w_x is a wrapped object, and it automatically interprets app_f at app-level in the object space given by 'self.space'. My objection to this (besides the fact that it doesn't work for global functions) is that you must tie 'self' to a particular object space in the first place, which is not conceptually required for all interpreter-level objects. For example, I tend to think about code objects as "literal objects" representing some source code independently of any particular object space. So if we later want to run several object spaces in a single PyPy instance it will probably get in the way at some point. The alternative is to require that an interpreter-level call should *always* specify 'space' explicitly (but still there is no 'space' argument in 'app_f', because it is implicit for all app-level code). So in the above example we would not change the definition of 'app_f(x)' or 'f', but call it with 'self.f(space, w_x)'.

** Visibility issues. Should anything that starts with 'app_' be visible at app-level? It means that even "internal" helpers would become visible to user code (at least if the user can obtain a reference to the internal object). I tend to think that it is not a problem; in Python itself everything is visible by default (who uses the trick to make attributes and methods private to a class?). If we really want to hide an app-level helper, we can use naming tricks:

    def hidden_app_f(x):
        return x + 1
    f = app2interp(hidden_app_f)

This is invisible because it doesn't begin with 'app_', but it is still app-level code because of the app2interp() wrapper.

** app2interp() wrappers vs.
attribute magic: it would be easy (with a __getattr__ or with metaclasses) to avoid the use of app2interp altogether: for any attribute 'app_foo' there would automatically be a corresponding interpreter-level attribute 'foo = app2interp(app_foo)'. Against this idea are arguments ranging from "this is magic" to "explicit is better than implicit". Also, an explicit call to app2interp() is the perfect place to give additional information if needed, e.g. "the argument 'x' must be an integer". This is something that we will need anyway, at least for the converse wrapper interp2app():

    class X:
        def f(self, space, w_obj, i):
            ...
        app_f = interp2app(f, typeof_i=int)

(just tentative syntax) publishing an app-level method 'f' that can be called with two arguments (three including 'self'), and saying that the interp-level implementation expects the last argument to be an unwrapped integer (thus the app-level call 'self.f("foo", "bar")' would give a TypeError). A bientot, Armin.
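Only the `interp2app(f, typeof_i=int)` spelling comes from the mail; the checking machinery below is a guess at what such a wrapper could do, with plain Python values standing in for wrapped objects:

```python
def interp2app(func, **argtypes):
    """Hypothetical sketch: publish an interp-level method at app-level,
    type-checking any argument declared via a typeof_<name> keyword."""
    def app_version(self, space, *w_args):
        # argument names of func, skipping 'self' and 'space'
        names = func.__code__.co_varnames[2:2 + len(w_args)]
        for name, w_arg in zip(names, w_args):
            expected = argtypes.get('typeof_' + name)
            if expected is not None and not isinstance(w_arg, expected):
                raise TypeError("argument %r must be %s, got %s"
                                % (name, expected.__name__,
                                   type(w_arg).__name__))
        return func(self, space, *w_args)
    return app_version


class X:
    def f(self, space, w_obj, i):
        return (w_obj, i * 2)
    app_f = interp2app(f, typeof_i=int)


space = object()
x = X()
assert x.app_f(space, "foo", 21) == ("foo", 42)

ok = False
try:
    x.app_f(space, "foo", "bar")   # 'i' is declared as int
except TypeError:
    ok = True
assert ok
```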

participants (3)
- Armin Rigo
- Christian Tismer
- holger krekel