To -I(nclude) or not to -I(nclude), that is the question...

In installing mxBase 2.0.4 on my MacOS 10.2.1 system I get warnings like the following:

    gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -Wno-long-double
        -no-cpp-precomp -Imx/Queue/mxQueue
        -I/Users/skip/local/include/python2.3 -I/usr/include
        -I/usr/local/include -c mx/Queue/mxQueue/mxQueue.c
        -o build/temp.darwin-6.1-Power Macintosh-2.3/mx/Queue/mxQueue/mxQueue/mxQueue.o
    cc1: warning: changing search order for system directory "/usr/local/include"
    cc1: warning:   as it has already been specified as a non-system directory
    cc1: warning: changing search order for system directory "/usr/include"
    cc1: warning:   as it has already been specified as a non-system directory

This warning bothers me a bit, as it suggests I'm screwing up the compiler's notions about header file search order. Has anyone else seen this and investigated how to get rid of this problem? This is related to bug http://python.org/sf/589427 (which was assigned to me). It's due to the gen_preprocess_options() function in distutils/ccompiler.py. This warning seems to be related to gcc version >= 3.1.

I have a quick hack in my local copy of distutils/ccompiler.py. At the bottom of gen_preprocess_options() I replaced

    for dir in include_dirs:
        pp_opts.append("-I%s" % dir)

with

    pp_opts.extend(gen_preprocess_includes(include_dirs))

and added these two functions to the file:

    def gen_preprocess_includes_macosx_gcc(dirs):
        """GCC on MacOSX complains if /usr/include or /usr/local/include
        are mentioned in -I.
        """
        pp_opts = []
        for dir in dirs:
            if dir not in ("/usr/include", "/usr/local/include"):
                pp_opts.append("-I%s" % dir)
        return pp_opts

    def gen_preprocess_includes(dirs):
        """Generate the -I flags for a compile command."""
        if sys.platform == "darwin":
            return gen_preprocess_includes_macosx_gcc(dirs)
        pp_opts = []
        for dir in dirs:
            pp_opts.append("-I%s" % dir)
        return pp_opts

This is an obscure solution, at best. I'd prefer to at least test the compiler and version. How can I tell what compiler and version will be used to compile files, or can't I at this level? (Seems to me that gen_preprocess_options begs to be a method of the CCompiler class.)

Thx,

Skip
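
One partial answer to Skip's question: distutils.sysconfig exposes the CC make variable recorded when Python itself was built. Here is a minimal, illustrative sketch that probes the version by running "$CC --version" (a gcc-style flag is assumed; other compilers would need different handling, and this helper is not part of distutils):

    # Ask distutils which compiler it will use, then probe its version.
    # Assumes a gcc-style --version flag; purely illustrative.
    import os
    from distutils import sysconfig

    def compiler_version():
        cc = sysconfig.get_config_var("CC") or "cc"
        pipe = os.popen("%s --version" % cc)
        version = pipe.read().strip()
        pipe.close()
        return cc, version

A real fix inside gen_preprocess_options() would presumably cache this result rather than spawn a process per compile.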

"SM" == Skip Montanaro <skip@pobox.com> writes:
SM> This warning bothers me a bit, as it suggests I'm screwing up
SM> the compiler's notions about header file search order.  Has
SM> anyone else seen this and investigated how to get rid of this
SM> problem?  This is related to bug http://python.org/sf/589427
SM> (which was assigned to me).  It's due to the
SM> gen_preprocess_options() function in distutils/ccompiler.py.
SM> This warning seems to be related to gcc version >= 3.1.

I seem to remember that Martin said the gcc folks agreed that this behavior was a bug on their part. I tried building Python on Linux with gcc 3.1 several months ago and had the same problems. I dropped back to 2.96 and haven't looked back. :)

Search around in python-dev for a discussion on this, but unfortunately I don't remember the exact time frame.

-Barry

1. Is there any reason why builtin methods cannot be proxied?

2. It would be handy for my application if a callback could be triggered when an object has no more weak references attached to it.

It seems like my application could be a fairly common one:

    # C library pseudocode
    def c_library_func():           # C code
        while 1:
            o = create_complex_object()
            user_call_back(o)
            del o

    # Python bindings pseudocode
    def python_bindings_user_call_back(o):   # C code
        py_o = create_python_wrapper_object(o)
        proxy = PyWeakref_NewProxy(py_o)
        python_function(py_o)
        Py_DECREF(proxy)
        Py_DECREF(py_o)             # This will kill the proxy

    # Evil Python user code
    evil = None
    def python_function(o):
        global evil
        o.foo()
        evil = o

    start(python_function)
    evil.foo()      # Nice exception because evil is a dead proxy

    # More evil Python user code
    more_evil = None
    def python_function(o):
        global more_evil
        o.foo()
        more_evil = o.foo

    start(python_function)
    more_evil()     # Crash because the underlying data structures that
                    # the Python wrapper object depends on are dead

My current solution to this problem is to create my own callable object type that supports weakrefs. That object is then used to wrap the real bound method object, e.g.

    def getattr(self, name):        # This is C code
        callable = MyCallAble_New(Py_FindMethod(...))
        objects_to_kill_after_py_func_call.add(callable)
        return PyWeakref_NewProxy(callable)

Avoiding this hassle is the reason for my question. Pruning callable objects that the user is done with is the reason for my request.

Cheers,
Brian

> 1. Is there any reason why builtin methods cannot be proxied?
The instances of a type need to have a pointer added if the type is to support weak refs to its instances. We've only done this for the obvious candidates like user-defined class instances and a few others. You must weigh the cost of the extra pointer vs. the utility of being able to use weak refs for a particular type.
I'm sorry, but I don't understand why you're using weak references here at all. Is it really the proxy functionality that you're after?

If you're worried about the complex object 'o' being referenced after the C library has killed it (a legitimate concern), while the Python wrapper can be kept alive using the scenario you show, then the Python wrapper py_o should set its pointer to 'o' to NULL when o's lifetime ends (or a little before), and make sure that methods on the wrapper raise an exception when the reference is NULL. This is how all well-behaved wrapper objects behave; e.g. file and socket objects in Python check whether the file is closed in each method implementation.

--Guido van Rossum (home page: http://www.python.org/~guido/)
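
To make the pattern Guido describes concrete, here is a minimal Python-level sketch; the real code would live in C, and the names (Wrapper, DeadObjectError, _invalidate) are illustrative only:

    class DeadObjectError(Exception):
        pass

    class Wrapper:
        def __init__(self, handle):
            self._handle = handle      # stand-in for the C pointer

        def _invalidate(self):         # bindings call this when 'o' dies
            self._handle = None

        def _check(self):
            if self._handle is None:
                raise DeadObjectError("underlying object is gone")

        def foo(self):
            self._check()              # every method starts with this
            return self._handle.foo()

Stored references to the wrapper then raise DeadObjectError instead of touching freed memory.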

Guido van Rossum wrote:
Fair enough.
> I'm sorry, but I don't understand why you're using weak references
> here at all. Is it really the proxy functionality that you're after?
I don't understand the question.
That is definitely one possible way to do it. However, I am wrapping a complete DOM, with dozens of objects containing, collectively, hundreds of methods. Adding an explicit check to each method seemed like a lot more pain than using proxies. Avoiding explicit checks also offers a potential performance advantage because sometimes the objects are owned and no checking is required. In that case I simply return the object directly without using a proxy.
They contain far fewer methods. Also, there is more centralization in file and socket objects, i.e. they know themselves whether they are in a valid state or not; this is not true of my DOM objects.

Cheers,
Brian

Brian Quinlan <brian@sweetapp.com> writes:
You don't have to add it to every method. You can perform the check in tp_getattro before performing the method lookup.

Alternatively, you can change the ob_type of the object to simply drop the methods that are not available anymore.

Regards,
Martin

Martin wrote:
> You don't have to add it to every method. You can perform the check
> in tp_getattro before performing the method lookup.
That would be dangerous! See my original "more evil" example.
> Alternatively, you can change the ob_type of the object to simply
> drop the methods that are not available anymore.
I like this strategy! But I still think that this is more painful/less elegant than using proxies.

Cheers,
Brian

Brian Quinlan <brian@sweetapp.com> writes:
> Yeah, doesn't work for that case.
I don't see how that works either. If you have two objects of the same type, they may die at different times; if the type drops its methods, all the objects become disabled.

Furthermore, I think it still doesn't help with "more evil", since nobody's touching the type at that point - the method has already been looked up and kept alive by binding it to the underlying object.

--
David Abrahams
dave@boost-consulting.com * http://www.boost-consulting.com

Building C/C++ Extensions for Python: Dec 9-11, Austin, TX
http://www.enthought.com/training/building_extensions.html

Brian Quinlan wrote:
>> Alternatively, you can change the ob_type of the object to simply
>> drop the methods that are not available anymore.
> I like this strategy! But I still think that this is more painful/less
> elegant than using proxies.
I'm an idiot. This won't work either, for the same reason that I gave before.

Cheers,
Brian

[Brian]
[Martin]
No, his second example defeated that:

    def evil_python_callback(o):
        global evil_method
        evil_method = o.meth

    def called_later():
        evil_method()   # SegFault

This does the getattr when it's allowed, but makes the call later.

To Brian: I really don't want to make bound methods proxyable. (And it wouldn't help you anyway until Python 2.3 is released.) If you really don't want to modify each method, you can write your own callable type and use that to wrap the methods; this can then check whether the object is still alive before passing on the call. If you think this is too expensive, go back to checking in each method -- since a check is needed anyway, you can't get cheaper than that. (A weakref proxy isn't free either.) To signal invalidity, the Python wrapper would have to set the 'o' pointer to NULL.

I also agree with Martin's observation that weakref proxies aren't intended to make the original object undiscoverable.

--Guido van Rossum (home page: http://www.python.org/~guido/)
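
A sketch of the callable wrapper type Guido suggests, written in Python for brevity (the real one would be a C extension type); the shared one-element list standing in for a liveness flag, and all names, are illustrative:

    class GuardedMethod:
        def __init__(self, method, alive):
            self._method = method
            self._alive = alive        # shared flag, e.g. a one-element list

        def __call__(self, *args, **kwds):
            if not self._alive[0]:
                raise ReferenceError("underlying object is dead")
            return self._method(*args, **kwds)

The getattr hook hands out GuardedMethod(obj.meth, alive_flag) instead of the bare bound method; when the C object dies, the bindings set alive_flag[0] = 0, and a stored "method" raises instead of crashing.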

Brian Quinlan <brian@sweetapp.com> writes:
What's the point of the proxy? You never use it for anything. I guess you must've meant:

    python_function(proxy)

above.
Hum. In Boost.Python the default is to copy the C++ object when passing it to a Python callback, though it's possible for the user to explicitly say "just build a Python object around a reference to the C++ object -- Python code beware of lifetime issues". I guess I like your idea, though, as a safer alternative for non-copyable or for very expensive-to-copy C++ objects.
Hum.
How does this differ from what's happening for you?
> Pruning callable objects that the user is done with is the reason
> for my request.
Hmm, it kinda seems like you want to prune some of them before the user's done with them. Don't users expect objects to stay alive when they hold references?

--
David Abrahams
dave@boost-consulting.com * http://www.boost-consulting.com

Building C/C++ Extensions for Python: Dec 9-11, Austin, TX
http://www.enthought.com/training/building_extensions.html

David Abrahams:
> What's the point of the proxy? You never use it for anything. I guess
> you must've meant:
>
>     python_function(proxy)
>
> above.
<blush> Nice catch.
DOMs, the data structures in question, can be obscenely expensive to copy. And I firmly believe that Python code should never be able to cause a crash. [nice demonstration deleted]
> How does this differ from what's happening for you?
When you created the bound method p.append, you caused the reference count of p to be increased. In my case, since Python is not capable of altering the lifetime of p (it's dead when the C library says that it is dead), this is a bad thing. So I proxy the bound methods and kill them when p dies.
> Hmm, it kinda seems like you want to prune some of them before the
> user's done with them.
I actually presented two different desires at once, and now I'm paying the price. There are two reasons that I might want to delete a proxied object:

1. Its lifetime is over and I don't want anyone to access it
2. There are no proxy objects that refer to it, making it inaccessible (since I hold the only actual reference)

Reason #1 is why I would like builtin methods to support weakrefs. Reason #2 is why I would like a callback when the last weakref dies.
> Don't users expect objects to stay alive when they hold references?
Probably. But that is not always an option.

Cheers,
Brian
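
Brian's reason #2 - a callback when the last weak reference dies - can be approximated with today's machinery by handing out small weakref-able wrapper objects and attaching a weakref callback to each one; when the count of live wrappers reaches zero, fire the notification. A rough sketch, all names invented:

    import weakref

    class _Handle:
        """What user code actually holds; forwards to the real object."""
        def __init__(self, target):
            self._target = target
        def __getattr__(self, name):
            return getattr(self._target, name)

    class HandleTracker:
        def __init__(self, target, when_unreferenced):
            self._target = target
            self._when_unreferenced = when_unreferenced
            self._refs = []     # keep the weakrefs alive so callbacks fire

        def new_handle(self):
            handle = _Handle(self._target)
            self._refs.append(weakref.ref(handle, self._gone))
            return handle

        def _gone(self, ref):
            self._refs.remove(ref)
            if not self._refs:
                self._when_unreferenced(self._target)

This only notifies when the last *handle* dies, not the last weakref in general, but for bindings that control every object they hand out it amounts to the same thing.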

Brian Quinlan <brian@sweetapp.com> writes:
This may not be possible for you, but in C++ that would generally be a "handle" class with reference semantics (using a reference-counted smart pointer, for example), so that "copying" the DOM really just amounts to bumping a reference count somewhere.
> And I firmly believe that Python code should never be able to cause
> a crash.
Me too, but some of my users insist on being able to trade safety for performance.
I think I caused the reference count of l to be increased. AFAIK, p isn't keeping the list alive.
Yeah, I see that now.
> So I proxy the bound methods and kill them when p dies.
I don't quite understand what "proxy the bound methods" means.
OK.
So you want a weakrefref. Well, I think this problem can be solved by applying the Fundamental Theorem of Software Engineering: apply an extra level of indirection.

But Guido's approach of having the wrapper methods check for a null DOM* seems like a reasonable one to me. Incidentally, that's what you automatically get from Boost.Python when your object is held by std::auto_ptr<T>, which is a smart pointer class that can release ownership. I think I want to do something similar for boost::weak_ptr, but I haven't gotten around to it yet.
>> Don't users expect objects to stay alive when they hold references?
>
> Probably. But that is not always an option.
That's clear to me now. Very interesting thread; thanks for posting it here!

--
David Abrahams
dave@boost-consulting.com * http://www.boost-consulting.com

Building C/C++ Extensions for Python: Dec 9-11, Austin, TX
http://www.enthought.com/training/building_extensions.html
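
The extra level of indirection David mentions might look like this in Python: every handed-out method goes through a shared cell, and clearing that one cell invalidates everything taken through it at once. Purely illustrative names:

    class Cell:
        def __init__(self, target):
            self.target = target    # bindings set this to None at death

    class CellMethod:
        def __init__(self, cell, name):
            self._cell = cell
            self._name = name

        def __call__(self, *args, **kwds):
            target = self._cell.target
            if target is None:
                raise ReferenceError("underlying object is dead")
            return getattr(target, self._name)(*args, **kwds)

User code receives CellMethod(cell, "foo") instead of obj.foo; the single assignment cell.target = None then turns every outstanding method into a safe exception-raiser.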

David Abrahams:
>> So I proxy the bound methods and kill them when p dies.
>
> I don't quite understand what "proxy the bound methods" means.
Most extension types have a function like this:

    PyObject *
    MyObject_getattr(MyObject *self, char *name)
    {
        /* check for attributes */
        ...
        /* ok, not an attribute, now check the methods */
        return Py_FindMethod(MyObject_methods, (PyObject *) self, name);
    }

Py_FindMethod() returns a PyCFunctionObject. The PyCFunctionObject will own a reference to the MyObject "self". Since "self" has a limited lifetime, this would be bad. So we could do this:

    PyObject *
    MyObject_getattr(MyObject *self, char *name)
    {
        PyObject *method;

        /* check for attributes */
        ...
        /* ok, not an attribute, now check the methods */
        method = Py_FindMethod(MyObject_methods, (PyObject *) self, name);
        if (method != NULL) {
            add_to_list_of_objects_to_kill(method);
            return PyWeakref_NewProxy(method);
        }
        return NULL;
    }

But we can't quite do this, because builtin functions are not proxyable.
That's what I do. I was just wondering if there is any reason not to make C functions weakref/proxyable.
It's not unreasonable, and it is a bit simpler. But it is more work, and it reduces performance slightly in the case where no check need be performed, i.e. when the object is owned by Python.
> That's clear to me now. Very interesting thread; thanks for posting
> it here!
Yeah, my first non-stupid post to python-dev.

Cheers,
Brian

Brian Quinlan <brian@sweetapp.com> writes:
I notice that you don't need the weakref property of those proxies at all: you know precisely the set of all objects to consider when the underlying C object has gone away. So you merely want a proxy, not a weak proxy - both for the entire object, and for the methods.

So for this code, you can save yourself the proxy and return your callable object directly. Make the null pointer check in its tp_call slot, and don't kill it after the Python function call, but merely clear the pointer.

I also notice that you rely on the fact that Python code has no way to find out the underlying object of a weak proxy. I think this is a weak assumption - there is no guarantee that this is not possible, or might not be possible in the future.

Regards,
Martin
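
Martin's point, sketched in Python: since the bindings know every wrapper they ever handed out for a given C object, a plain registry plus an explicit invalidation step replaces weak references entirely. Names are invented for illustration:

    class MethodStub:
        def __init__(self, method):
            self._method = method       # cleared, not destroyed, at death

        def __call__(self, *args, **kwds):
            if self._method is None:    # the tp_call-slot check
                raise ReferenceError("underlying object is dead")
            return self._method(*args, **kwds)

    class Registry:
        def __init__(self):
            self._stubs = []

        def wrap(self, method):
            stub = MethodStub(method)
            self._stubs.append(stub)    # we know every stub handed out
            return stub

        def invalidate(self):           # run when the C object dies
            for stub in self._stubs:
                stub._method = None
            del self._stubs[:]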

Martin wrote:
As you say, I need a proxy for my methods and a proxy for my objects. Creating my own proxy type for methods is easy, since the only interesting thing that must be proxied is __call__. Creating my own proxy type for arbitrary objects is harder, because I must create a proxy for every slot that I use. That requires duplicating a lot of the work that already lives in weakrefobject.c.

Also, for both method and object proxies, I have to invent a mechanism to signal that the original object is dead. weakrefobject.c already defines a nice mechanism.
This is a concern. If I am mistaken in my assumption, please let me know and I will either invent my own proxy type or use the check-on-every-method-call technique.

Cheers,
Brian

Brian Quinlan <brian@sweetapp.com> writes:
I'm not exactly sure what your assumption is. If it is "in Python 2.3, there is no way to unwrap a proxy except by writing an extension module", then your assumption is correct. If your assumption is "there is a guarantee that there will never be such a mechanism", then your assumption is incorrect.

Regards,
Martin

Skip Montanaro wrote:
I've had a few bug reports related to this, but all of them were from Solaris users. The -I/usr/include causes the GCC compiler to pick up a system stdarg.h header file, which causes a compile error (GCC ships with its own stdarg.h files). The report for MacOS is new, though.

Perhaps this is a generic GCC problem? (#include <stdarg.h> should look in the compiler dirs first and only then scan the additional -I paths.)
--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
_______________________________________________________________________
eGenix.com -- Makers of the Python mx Extensions: mxDateTime, mxODBC, ...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/

"SM" == Skip Montanaro <skip@pobox.com> writes:
SM> This warning bothers me a bit, as it suggests I'm screwing up SM> the compiler's notions about header file search order. Has SM> anyone else seen this and investigated how to get rid of this SM> problem? This is related to bug http://python.org/sf/589427 SM> (which was assigned to me). It's due to the SM> gen_preprocess_options() function in distutils/ccompiler.py. SM> This warning seems to be related to gcc version >= 3.1. I seem to remember that Martin said the gcc folks agreed that this behavior was a bug on their part. I tried building Python on Linux with gcc 3.1 several months ago and had the same problems. I dropped back to 2.96 and haven't looked back. :) Search around in python-dev for a discussion on this, but unfortunately I don't remember the exact time frame. -Barry

1. Is there any reason why builtin methods cannot be proxied? 2. It would be handy for my application if a callback could be triggered when an object has no more weak references attached to it. It seems like my application could be a fairly common one: # C library pseudocode def c_library_func(): # C code while 1: o = create_complex_object() user_call_back(o) del o # Python bindings pseudocode def python_bindings_user_call_back (o): # C code py_o = create_python_wrapper_object(o) proxy = PyWeakref_NewProxy(py_o) python_function(py_o) Py_DECREF(proxy) Py_DECREF(py_o) # This will kill the proxy # Evil Python user code evil = None def python_function(o): global evil o.foo() evil = o start(python_function) evil.foo() # Nice exception because evil is a dead proxy # More evil Python user code more_evil = None def python_function(o): global more_evil o.foo() more_evil = o.foo start(python_function) more_evil() # Crash because the underlying data structures that # the Python wrapper object depends on are dead My current solution to this problem is to create my own callable object type that supports weakrefs. That object is then used to wrap the real bound method object e.g. def getattr(self, name): # This is C code callable = MyCallAble_New(Py_FindMethod(...); objects_to_kill_after_py_func_call.add(callable); return PyWeakref_NewProxy(callable); Avoiding this hassle is the reason for my question. Pruning callable objects that the user is done with is the reason for my request. Cheers, Brian

1. Is there any reason why builtin methods cannot be proxied?
The instances of a type need to have a pointer added if the type is to support weak refs to its instances. We've only done this for the obvious candidates like user-defined class instances and a few others. You must weigh the cost of the extra pointer vs. the utility of being able to use weak refs for a particular type.
I'm sorry, but I don't understand why you're using weak references here at all. Is it really the proxy function that you're after? If you're worried about the complex object 'o' being referenced after the C library has killed it (a ligitimate concern), while the Python wrapper can be kept alive using the scenario you show, the Python wrapper py_o should set its pointer to 'o' to NULL when o's lifetime ends (or a little before) and make sure that methods on the wrapper raise an exception when the reference is NULL. This is how all well-behaved wrapper objects behave; e.g. file and socket objects in Python check if the file is closed in each method implementation. --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum wrote:
Fair enough.
I'm sorry, but I don't understand why you're using weak references here at all. Is it really the proxy function that you're after?
I don't understand the question.
That is definitely one possible way to do it. However, I am wrapping a complete DOM, with dozens of objects containing, collectively, hundreds of methods. Adding an explicit check to each method seemed like a lot more pain than using proxies. Avoiding explicit checks also offers a potential performance advantage because sometimes the objects are owned and no checking is required. In that case I simply return the object directly without using a proxy.
They contain far fewer methods. Also, there is more centralization is file and socket objects i.e. they know themselves if they are in a valid state or not, this is not true of my DOM objects. Cheers, Brian

Brian Quinlan <brian@sweetapp.com> writes:
You don't have to add it to every method. You can perform the check in tp_getattro before performing the method lookup. Alternatively, you can change the ob_type of the object to simply drop the methods that are not available anymore. Regards, Martin

Martin wrote:
You don't have to add it to every method. You can perform the check in tp_getattro before performing the method lookup.
That would be dangerous! See my original "more evil" example.
Alternatively, you can change the ob_type of the object to simply drop
the methods that are not available anymore.
I like this strategy! But I still think that this is more painful/less elegant than using proxies. Cheers, Brian

Brian Quinlan <brian@sweetapp.com> writes:
Yeah, doesn't work for that case.
I don't see how that works either. If you have two objects of the same type, they may die at different times. If the type drops its methods all the objects become disabled. Furthermore, I think it still doesn't help with "more evil", since nobody's touching the type at that point - the method has already been looked up and kept alive by binding it to the underlying object. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Building C/C++ Extensions for Python: Dec 9-11, Austin, TX http://www.enthought.com/training/building_extensions.html

Brian Quinlan wrote:
Alternatively, you can change the ob_type of the object to simply drop the methods that are not available anymore.
I like this strategy! But I still think that this is more painful/less elegant than using proxies.
I'm an idiot. This won't work either for the same reason that I gave before. Cheers, Brian

[Brian]
[Martin]
No, his second example defeated that: def evil_python_callback(o): global evil_method evil_method = o.meth def called_later(): evil_method() # SegFault This does the getattr when it's allowed, but makes the call later. To Brian: I really don't want to make bound methods proxyable. (And it wouldn't help you anyway until Python 2.3 is released.) If you really don't want to modify each method, you can write your own callable type and use that to wrap the methods; this can then check whether the object is still alive before passing on the call. If you think this is too expensive, go back to checking in each method -- since a check is needed anyway, you can't get cheaper than that. (A weakref proxy isn't free either.) To signal invalidity, the Python wrapper would have to set the 'o' pointer to NULL. I also agree with Martin's observation that weakref proxies aren't intended to make the original object undiscoverable. --Guido van Rossum (home page: http://www.python.org/~guido/)

Brian Quinlan <brian@sweetapp.com> writes:
What's the point of the proxy? You never use it for anything. I guess you must've meant: python_function(proxy) above.
Hum. In Boost.Python the default is to copy the C++ object when passing it to a Python callback, though it's possible for the user to explicitly say "just build a Python object around a reference to the C++ object -- Python code beware of lifetime issues". I guess I like your idea, though, as a safer alternative for non-copyable or for very expensive-to-copy C++ objects.
Hum.
How does this differ from what's happening for you?
Pruning callable objects that the user is done with is the reason for my request.
Hmm, it kinda seems like you want to prune some of them before the user's done with them. Don't users expect objects to stay alive when they hold references? -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Building C/C++ Extensions for Python: Dec 9-11, Austin, TX http://www.enthought.com/training/building_extensions.html

David Abrahams:
What's the point of the proxy? You never use it for anything. I guess you must've meant:
python_function(proxy)
above.
<blush> Nice catch.
DOMs, the data structures in question, can be obscenely expensive to copy. And I firmly believe that Python code should never be able to cause a crash. [nice demonstration deleted]
How does this differ from what's happening for you?
When you created the bound method p.append, you caused the reference count of p to be increased. In my case, since Python is not capable of altering the lifetime of p (it's dead with the C library says that it is dead), this is a bad thing. So I proxy the bound methods and kill them when p dies.
Hmm, it kinda seems like you want to prune some of them before the user's done with them.
I actually presented two different desires at once and know I'm paying the price. There are two reasons that I might want to delete a proxied object: 1. Its lifetime is over and I don't want anyone to access it 2. There are no proxy objects that refer to it, making it inaccessible (since I hold the only actual reference) Reason #1 is why I could like builtin methods to support weakrefs. Reason #2 is why I would like a callback when the last weakref dies.
Don't users expect objects to stay alive when they hold references?
Probably. But that is not always an option. Cheers, Brian

Brian Quinlan <brian@sweetapp.com> writes:
This may not be possible for you, but in C++ That would generally be a "handle" class with reference semantics (using a reference-counted smart pointer, for example) so that "copying" the DOM really just amounted to bumping a reference count somewhere.
And I firmly believe that Python code should never be able to cause a crash.
Me too, but some of my users insist on being able to trade safety for performance.
I think I caused the reference count of l to be increased. AFAIK, p isn't keeping the list alive.
Yeah, I see that now.
So I proxy the bound methods and kill them when p dies.
I don't quite understand what "proxy the bound methods" means.
OK.
So you want a weakrefref. Well, I think this problem can be solved by applying the Fundamental Theorem of Software Engineering: apply an extra level of indirection. But Guido's approach of having the wrapper methods check for a null DOM* seems like a reasonable one to me. Incidentally, that's what you automatically get from Boost.Python when your object is held by std::auto_ptr<T>, which is a smart pointer class that can release ownership. I think I want to do something similar for boost::weak_ptr, but I haven't gotten around to it yet.
Don't users expect objects to stay alive when they hold references?
Probably. But that is not always an option.
That's clear to me now. Very interesting thread; thanks for posting it here! -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Building C/C++ Extensions for Python: Dec 9-11, Austin, TX http://www.enthought.com/training/building_extensions.html

David Abrahams:
So I proxy the bound methods and kill them when p dies.
I don't quite understand what "proxy the bound methods" means.
Most extension types have a function like this: PyObject * MyObject_getattr(MyObject* self, char * name) { /* check for attributes */ ... /* ok, not an attribute, now check the methods return Py_FindMethod( MyObject_methods, (PyObject *) self, name); } Py_FindMethod() returns a PyCFunctionObject. The PyCFunctionObject will own a reference to the MyObject "self". Since "self" has a limited lifetime this would be bad. So we could do this: PyObject * MyObject_getattr(MyObject* self, char * name) { /* check for attributes */ ... /* ok, not an attribute, now check the methods method = Py_FindMethod( MyObject_methods, (PyObject *) self, name); if (method != NULL) { add_to_list_of_objects_to_kill(method) return PyWeakref_NewProxy(method); } return NULL; } But we can't quite do this because builtin functions are not proxyable.
That's what I do. I was just wondering if there is any reason not to make C functions weakref/proxyable.
It's not unreasonable and it is a bit simpler. But it is more work and reduces performance slightly in the case where no check need be performed i.e. the object is owned by Python.
That's clear to me now. Very interesting thread; thanks for posting it here!
Yeah, my first non-stupid post to python-dev. Cheers, Brian

Brian Quinlan <brian@sweetapp.com> writes:
I notice that you don't need the Weakref property of those proxies at all: You know precisely the set of all objects to consider when the underlying C object has gone away. So you merely want a proxy, not a weak proxy: both for the entire object, and for the methods. So for this code, you can save the proxy, and return your callable object. Make the null pointer check in its tp_call slot, and don't kill it after py func call, but merely clear the pointer. I also notice that you rely on the fact that Python code has no way to find out the underlying object of a weak proxy. I think this is a weak assumption - there is no guarantee that this is not possible, or might not be possible in the future. Regards, Martin

Martin wrote:
As you say, I need a proxy for my methods and a proxy for my objects. Creating my own proxy type for methods is easy, since the only interesting thing that must be proxied is __call__. Creating my own proxy type for arbitrary objects is harder because I must create a proxy for every slot that I use. That requires duplication a lot of the work that already lives in weakrefobject.c Also, for both method and object proxies, I have to invent a mechanism to signal that the original object is dead. weakrefobject.c already defines a nice mechanism.
This is a concern. If I am mistaken in my assumption, please let me know and I will either invent my own proxy type or use the check-on-every-method-call technique. Cheers, Brian

Brian Quinlan <brian@sweetapp.com> writes:
I'm not exactly sure what your assumption is. If it is "in Python 2.3, there is no way to unwrap a proxy except by writing an extension module", then your assumption is correct. If your assumption is "there is a guarantee that there never will be such mechanism", your assumption is incorrect. Regards, Martin

Skip Montanaro wrote:
I've had a few bug reports related to this, but all of them were from Solaris users. The -I/usr/include causes the GCC compiler to pick up a system stdarg.h header file which causes a compile error (GCC ships with its own stdarg.h files). The report for MacOS is new, though. Perhaps this is a generic GCC problem ? (#include <stdarg.h> should look in the compiler dirs first and only then scan the additional -I paths)
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/
participants (7):

- barry@python.org
- Brian Quinlan
- David Abrahams
- Guido van Rossum
- M.-A. Lemburg
- martin@v.loewis.de
- Skip Montanaro