Guido van Rossum wrote:
Summary: Christian is right after all. instancemethod_getattro should always prefer bound method attributes over function attributes.
Guido, I'm very happy with your decision, which is most probably a wise one (quite independent of my involvement).
The point is that I didn't know what's right or wrong, so basically I was asking for advice on a thing I felt unhappy with. So I asked you to re-think whether the behavior is really what you intended, or whether you just stopped early.
Thanks a lot!
That's the summary and all there is to it; you can skip the rest if you like.
Sure, I'm fiddling internally, but simply by installing some __reduce__ methods and hoping that they work.
OK, so you *could* just make the change you want, but you are asking why it isn't like that in the first place. Good idea...
I actually hacked a special case for __reduce__ to see whether it works at all, but then asked, of course. Most of my pickling stuff might be of general interest, and changing semantics is by no means something I would ever want to do without following the main path.
I added __reduce__ to the PyMethod type and tried to figure out why it didn't take it.
OK. Stating that upfront would have helped...
Sorry about that. I worked too long on these issues already and had the perception that everybody knows that I'm patching __reduce__ into many objects like a bozo :-)
In other words, shouldn't things that are only useful as bound things, always be bound?
This question doesn't address the real issue, which is the attribute delegation to the underlying function object.
Correct, I misspelled things. Of course there is binding, but the chain back to the instance is lost.
The *intention* was for the 2.2 version to have the same behavior: only im_func, im_self and im_class would be handled by the bound method, other attributes would be handled by the function object.
Ooh, I begin to understand!
This is what the IsData test is attempting to do -- the im_* attributes are represented by data descriptors now. The __class__ attribute is also a data descriptor, so that C().x.__class__ gives us <type 'instancemethod'> rather than <type 'function'>.
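The effect is easy to verify. A small sketch in modern Python (where im_func and im_self survive as __func__ and __self__; the data-descriptor behavior Guido describes is unchanged):

```python
class C:
    def x(self):
        pass

m = C().x

# __class__ is a data descriptor on the method type, so attribute
# lookup answers for the bound method itself, not the function:
print(type(m).__name__)              # the method type, not 'function'

# The im_* attributes are likewise served by the method object:
print(m.__func__ is C.__dict__['x'])
print(m.__self__.__class__ is C)
```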
IsData also tests for having a setter, so we get the side effect here that im_* works like I expect, simply because those attributes happen to be writable? Well, I didn't look into 2.3 for this, but in 2.2 I get
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: __class__ must be set to new-style class, not 'int' object
[9511 refs]
which says for sure that this is a writable property, while
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: readonly attribute
[9511 refs]
seems to be handled differently.
I only thought of IsData in terms of accessing the getter/setter wrappers.
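The split Christian observes is still visible today. A sketch in Python 3 spelling (__func__ for im_func); the two writes fail through different paths:

```python
class C:
    def meth(self):
        pass

m = C().meth

# __class__ is a settable data descriptor: the setter runs and then
# rejects the incompatible value, hence a TypeError:
try:
    m.__class__ = int
except TypeError as e:
    print("TypeError:", e)

# __func__ is exposed through a read-only member descriptor, which
# refuses the write up front with an AttributeError:
try:
    m.__func__ = None
except AttributeError as e:
    print("AttributeError:", e)
```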
But for anything else, including the various methods that all objects inherit from 'object' unless they override them, the choice was made to let the function attribute win.
That's most probably right to do, since most defaults from object are probably just surrogates.
But when we look at the attributes where both function and bound method provide a value, it seems that the bound method's offering is always more useful! You've already established this for __reduce__; the same is true for __call__ and __str__, and there I stopped. (Actually, I also looked at __setattr__, where delegation to the function also seems a mistake: C().x.foo = 42 is refused, but C().x.__setattr__('foo', 42) sets the attribute on the function, because this returns the (bound) method __setattr__ on functions.)
Your examples are much better than mine.
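Modern Python 3 ships with exactly the resolution Guido proposes here, so the examples can be replayed against the fixed semantics. A sketch:

```python
class C:
    def x(self):
        return "hi"

m = C().x

# __reduce__ resolves on the bound method, so the method itself is
# the thing being reduced, not the underlying function:
func, args = m.__reduce__()
print(func.__name__, args)        # a getattr-style reduction of the method

# And the __setattr__ trap is gone: the call is answered by the method
# type and refused, leaving the function untouched:
try:
    m.__setattr__('foo', 42)
except AttributeError:
    pass
print(hasattr(C.x, 'foo'))        # the function did not grow a 'foo'
```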
The pickling machinery gives me a __reduce__ interface, and I'm expecting that this is able to pickle everything.
I don't think you'd have a chance of pickling classes if you relied on __reduce__ alone. Fortunately there are other mechanisms. :-)
I don't need to pickle classes; this works fine in most cases, and the behavior can be modified by users. They can use copy_reg themselves, and that's one of my reasons to avoid copy_reg in the core: I want to have the basics built in, without having to import a Python module.
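Indeed, classes never go through __reduce__: the pickle machinery stores them by reference (module and name) and re-imports them on load. A minimal check:

```python
import pickle

class C:
    pass

# Classes are pickled by reference: the pickle records where to find
# the class, and unpickling looks it up again:
data = pickle.dumps(C)
print(pickle.loads(data) is C)
```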
(I wonder if the pickling code shouldn't try to call x.__class__.__reduce__(x) rather than x.__reduce__() -- then none of these problems would have occurred... :-)
That sounds reasonable. Explicit would have been better than implicit (by hoping for the expected bound chain).
__reduce__ as a class method would allow me to explicitly spell that I want to reduce the instance x of class C:

x.__class__.__reduce__(x)        # would be the same as C.__reduce__(x)

While, in contrast,

x.__class__.__reduce__(x.thing)  # would be the same as C.__reduce__(x.thing)

would spell that I want to reduce the "thing" property of the x instance of C, and

x.__class__.__reduce__(C.thing)  # would be the same as C.__reduce__(C.thing)

would reduce the class method "thing" of C, or the class property of C, or whatsoever of class C.
I could envision a small extension to the __reduce__ protocol: an optional parameter, which would open these new ways, and probably all pickling questions could be solved. This works because we can find out whether __reduce__ is a class method or not. If it is just an instance method (implicitly bound), it behaves as today. If it is a class method, it takes a parameter, and then it can find out whether to pickle a class, an instance, a class property or an instance property.
Well, I hope. The above was said while being in bed with 39° Celsius, so don't put my words on the assay-balance.
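Christian's idea could be mocked up roughly like this (entirely hypothetical: the name __reduce_target__, the dispatch, and the return tuples are all invented for illustration; no such protocol exists in Python):

```python
class Reducible:
    @classmethod
    def __reduce_target__(cls, target=None):
        # Hypothetical protocol: one class-side entry point that can
        # reduce the class itself, an instance, or a class attribute.
        if target is None:
            return ('class', cls.__name__)
        if isinstance(target, cls):
            return ('instance', cls.__name__, sorted(vars(target)))
        name = getattr(target, '__name__', repr(target))
        return ('member', cls.__name__, name)

class C(Reducible):
    def thing(self):
        pass

x = C()
print(C.__reduce_target__())            # reduce the class C itself
print(C.__reduce_target__(x))           # reduce the instance x
print(C.__reduce_target__(C.thing))     # reduce the class attribute
```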
[trying to use __reduce__, only]
Or you could change the pickling system. Your choice of what to change and what not to change seems a bit arbitrary. :-)
Not really. I found __reduce__ very elegant. It gave me the chance to have almost all patches in a single file, since I didn't need to patch most of the implementation files. Just adding something to the type objects was sufficient, and this keeps my workload small when migrating to the next Python. Until now, I only had to change traceback.c and iterator.c, since these don't export enough of their structures to patch things from outside. If at some point somebody decides that some of this support code makes sense for the main distribution, things should of course move to where they belong.
Adding to copy_reg: well, I don't like modifying Python modules from C so much, and even less do I like adding extra Python files to Stackless if I can do without them.
Changing the pickling engine: well, I'm hesitant, since it has developed so much between 2.2 and 2.3, and I haven't got my head into that machinery yet. What I want to do at some point is change cPickle to use a non-recursive implementation. (Ironically, the Python pickle engine *is* non-recursive if it is run under Stackless.) So if I were to hack at cPickle at all, I would probably do the big, big change, and that would be too much to get done in a reasonable time. That's why I decided to stay small and just chime a few __reduce__ thingies in for the time being. Maybe this was not the best way, I don't know.
OK, so you *are* messing with internals after all (== changing C code), right? Or else how do you accomplish this?
Yes sir, I'm augmenting all things-to-be-pickled with __reduce__ methods. And this is the first time that it doesn't work.
But not necessarily the last time. :-)
Right. Probably I will get into trouble with pickling unbound class methods; maybe I can just ignore them. Bound class methods do appear in my tasklet system and need to get pickled. Unbound methods are much easier to avoid and probably not worth the effort. (Yes, tomorrow I will be told that it *is* :-)
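(As it turned out, later Pythons solved precisely this case: a classmethod fetched from the class is a method bound to the class, and since Python 3.5 pickle reduces it by reference. A sketch, assuming Python 3.5+:)

```python
import pickle

class C:
    @classmethod
    def cm(cls):
        return cls.__name__

bound = C.cm                        # a method bound to the class itself
restored = pickle.loads(pickle.dumps(bound))
print(restored())                   # calls through to C again
```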
I agree. The bound method's attributes should always win, since bound methods only have a small, fixed number of attributes, and they are all special for bound methods.
This *is* a change in functionality, even though there appear to be no unit tests for it, so I'm reluctant to fix it in 2.3. But I think in 2.4 it should definitely change.
That means that, for Py 2.2 and 2.3, my current special case for __reduce__ is exactly the way to go, since it doesn't change any semantics except for __reduce__, and in 2.4 I just drop those three lines? Perfect!
sincerely - chris