On Fri, Jun 24, 2022 at 10:05 PM Chris Angelico <rosuav@gmail.com> wrote:

> Hmmm, I think possibly you're misunderstanding the nature of class
> slots, then. The most important part is that they are looked up on the
> *class*, not the instance; but there are some other quirks too:

Sorry, no. I know how those work. 

> >>> class Meta(type):
> ...     def __getattribute__(self, attr):
> ...             print("Fetching %s from the metaclass" % attr)
> ...             return super().__getattribute__(attr)
> ...
> >>> class Demo(metaclass=Meta):
> ...     def __getattribute__(self, attr):
> ...             print("Fetching %s from the class" % attr)
> ...             return super().__getattribute__(attr)
> ...
> >>> x = Demo()
> >>> x * 2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: unsupported operand type(s) for *: 'Demo' and 'int'
> Neither the metaclass nor the class itself had __getattribute__

Yes - if you go back to my first e-mail on the thread, and the example code:
that is why I have been saying all along that the proxy has to explicitly
define all possible dunder methods.

I've repeatedly written that all _other_ method and attribute accesses
go through __getattribute__.
> called, because __mul__ goes into the corresponding slot. HOWEVER:
> >>> Demo().__mul__
> Fetching __mul__ from the class
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 4, in __getattribute__
> Fetching __dict__ from the class
> Fetching __class__ from the class
> Fetching __dict__ from the metaclass
> Fetching __bases__ from the metaclass
> AttributeError: 'Demo' object has no attribute '__mul__'. Did you
> mean: '__module__'?
> If you explicitly ask for the dunder method, it does go through
> __getattribute__.

I know that. The thing is that if I define both
"__add__" and "__getattribute__" in a class, I will cover both
"instance + 0" and "instance.__add__(0)".

> That's a consequence of it being a proxy, though. You're assuming that
> a proxy is the only option. Proxies are never fully transparent, and
> that's a fundamental difficulty with working with them; you can't
> treat them like the underlying object, you have to think of them as
> proxies forever.

No, not in Python.
Once you have a proxy class that covers all the dunder methods to operate on
and return the proxied object, whoever makes use of that proxy will have it
working transparently - in any code that doesn't attempt direct memory access
to the proxied object's data.
"Lelo" objects from that link can be freely passed around and used -
at some point, if the object is not dropped, code has to go through one of the
dunder methods; there is no way Python code can do any calculation with, or
output, the proxied object without doing so.
And that experiment worked fantastically well, better than I thought it would,
and that is the only thing I am trying to say.
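A rough sketch of the idea (not the actual "lelo" code - names like "Lazy" and
"_resolve" are invented here, and a real proxy would spell out *every* dunder,
not just the two shown):

```python
class Lazy:
    def __init__(self, func):
        object.__setattr__(self, "_func", func)
        object.__setattr__(self, "_cache", [])  # holds the result once computed

    def _resolve(self):
        cache = object.__getattribute__(self, "_cache")
        if not cache:
            # First use anywhere: run the deferred computation, memoize it.
            cache.append(object.__getattribute__(self, "_func")())
        return cache[0]

    def __getattribute__(self, name):
        if name == "_resolve":
            return object.__getattribute__(self, name)
        # Every other attribute access resolves and forwards.
        return getattr(object.__getattribute__(self, "_resolve")(), name)

    # Operators bypass __getattribute__, so they must be on the class:
    def __add__(self, other):
        return object.__getattribute__(self, "_resolve")() + other

    def __str__(self):
        return str(object.__getattribute__(self, "_resolve")())

lazy = Lazy(lambda: 20 + 22)  # nothing computed yet
print(lazy + 0)               # first dunder call forces the computation -> 42
print(str(lazy))              # -> 42, from the cached value
```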

> The original proposal, if I'm not mistaken, was that the "deferred
> thing" really truly would become the resulting object. That requires
> compiler support, but it makes everything behave sanely: basic
> identity checks function as you'd expect, there are no bizarre traps
> with weak references, C-implemented functions don't have to be
> rewritten to cope with them, etc, etc, etc.

So, as I've written from the first message, this would require deep support in
the language, which the proxy approach does not.

> > A similar proxy that is used in day to day coding is a super() instance, and I
> > never saw one needing  `super(cls, instance) is instance` to be true.
> That's partly because super() deliberately does NOT return a
> transparent, or even nearly-transparent, proxy. The point of it is to
> have different behaviour from the underlying instance. So, obviously,
> the super object itself has to be a distinct thing.
> Usually, a proxy offers some kind of special value that makes it
> distinct from the original object (otherwise why have it?), so it'll
> often have some special attributes that tie in with that (for
> instance, a proxy for objects stored in a database might have an
> "is_unsaved" attribute/method to show whether it's been assigned a
> unique ID yet). This is one of very few places where there's no value
> whatsoever in keeping the proxy around; you just want to go straight
> to the real object with minimal fuss.

And that is achieved when all the dunder methods transparently work on the
real object: minimal fuss. "inst + other" works, str(inst) works, and
print(inst) works, because the latter calls str(inst) further down.

> > > Then you are not talking about the same thing at all. You're talking
> > > about a completely different concept, and you *are* the "you" from my
> > > last paragraphs.
> >
> > I see.
> > I've stepped in because that approach worked _really_ well, and I don't think
> > it is _all_ that different from the proposal on the thread, and is instead a
> > middleground not involving "inplace object mutation", that could make something very
> > close to that proposal feasible.
> This seems to be a bit of a theme: a proposal is made, someone else
> says "but you could do it in this completely different way", and
> because code is so flexible, that's always technically true. But it's
> not the same proposal, and when you describe it as a different
> implementation of the same proposal, you confuse the issue quite a
> bit.
> Your proposal is basically just a memoized lambda function with
> proxying capabilities. The OP in this thread was talking about
> deferred expressions. And my proposal was about a different way to do
> argument defaults. All of these are *different* proposals, they are
> not just implementations of each other. Trying to force one proposal
> to be another just doesn't work.

I am not trying to force anything. My idea was to put on the table
a way to achieve most of the effects of the initial proposal with
less effort. I guess this is "python-ideas" for a reason. 

One of the things we collectively try to achieve here is, IMHO, to think about
innovative proposals while avoiding bloating the language with things that
could easily be implemented as third-party packages. In this case, I'd like to
'save the core from further complexities' if the aspects deemed
interesting in the proposal presented can be implemented
with the existing mechanisms.

And "inplace object mutation" is a thing that, to me, looks
especially scary in terms of runtime complexity.
We recently had another lengthy thread - about splitting
class declaration with a "forward class declaration" - that after
some lengthy discussion was dismissed because it would
also require this.

> > Maybe I'd be more happy to see a generic way to implement "super proxys"
> > like these in a less hacky way, and then those could be used to build the deferred objects
> > as in this proposal, than this specific implementation. In the example project itself, Lelo,  the
> > proxys are used to calculate the object in a subprocess, rather than just delaying their
> > resolve in-thread.
> IMO that's a terrible idea. 

Go back to e-mail one. I only used the thing as a toy and proof of concept,
and presented it as such. It is just that the proxying concept it uses works.
A conventionally wrapped "Future" is much more useful for actual work.

> A proxy usually has some other purpose for
> existing; purely transparent proxies are usually useless. Making it
> easier to make transparent proxies in a generic way isn't going to be
> any value to anything that doesn't want to be fully transparent.

In my reading of the problem at hand, a "purely transparent proxy" is
a nice approach. What one does not want is to compute the
expression eagerly. And it does not even need to be "purely transparent":
a __getattribute__ implementation allows for a small reserved attribute
namespace through which one can query or otherwise message the underlying object.
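For instance, a sketch of such a reserved namespace - "is_resolved" here is a
hypothetical name I'm choosing for illustration, not from any real library:

```python
class DeferredProxy:
    def __init__(self, func):
        object.__setattr__(self, "_func", func)
        object.__setattr__(self, "_state", {})  # filled in on first real use

    def __getattribute__(self, name):
        state = object.__getattribute__(self, "_state")
        if name == "is_resolved":          # reserved, proxy-level query
            return "value" in state
        if "value" not in state:           # any other access resolves lazily
            state["value"] = object.__getattribute__(self, "_func")()
        return getattr(state["value"], name)

d = DeferredProxy(lambda: "hello")
print(d.is_resolved)   # False: nothing computed yet
print(d.upper())       # forces resolution -> HELLO
print(d.is_resolved)   # True
```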

> Calculating in a subprocess means that everything needed for that
> calculation has to be able to be serialized (probably pickled) and
> sent to the subprocess, and the result likewise. That's very limiting,
> and where you're okay with that, you probably _aren't_ okay with that
> sort of thing magically happening with all attribute lookups.

Please - I know you have some message to convey, but you don't have
to reply to every sentence I write.
In the case presented, of course, after the first lookup was required, the
result was waited for and then held as an ordinary internal attribute
of the proxy, in the same process. Even for a relatively careless toy
implementation, that is the obvious thing to do.
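Roughly what that caching looks like - a sketch with invented names; the toy
used a subprocess, but a thread-backed Future behaves the same for this purpose:

```python
from concurrent.futures import ThreadPoolExecutor

class FutureProxy:
    def __init__(self, future):
        object.__setattr__(self, "_future", future)
        object.__setattr__(self, "_cache", [])

    def _value(self):
        cache = object.__getattribute__(self, "_cache")
        if not cache:
            # First lookup: block on the Future once, then hold the result
            # as an ordinary in-process attribute from here on.
            cache.append(object.__getattribute__(self, "_future").result())
        return cache[0]

    def __getattribute__(self, name):
        if name == "_value":
            return object.__getattribute__(self, name)
        return getattr(object.__getattribute__(self, "_value")(), name)

with ThreadPoolExecutor() as pool:
    p = FutureProxy(pool.submit(sum, range(1000)))
    print(p.bit_length())   # waits once; later accesses use the cached int
```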

> > > > I just wrote because it is something I made work before - and if there are indeed
> > > > uses for it, the language might not even need changes to support it
> > > > beyond an operator keyword.
> > > >
> > >
> > > Yes, you've done something that is broadly similar to this proposal,
> > > but like every idea, has its own set of limitations. It's easy to say
> > > "I did something different from what you did, and it doesn't require
> > > language support", but your version of the proposal introduces new
> > > problems, which is why I responded to them.
> >
> > Alright - but the only outstanding problem is the "is" and "id"comparison -
> > I am replying still because I have the impression you had not grokked
> > the main point: at some point, sooner or later, for any object in Python,
> > one of the dunder methods _will_ be called (except for identity comparison,
> > if one has it as an "end in itself"). Be it for printing, serializing, or being the target of a unary or
> > binary operator. This path can be hooked to trigger the deferred resolve
> > in the proposal in this thread.
> As shown above, not true; dunder methods are not always called if they
> don't exist, so __getattribute__ cannot always proxy them.

Not sure which part of this mechanism you think I don't understand.
For the second time in this message: dunder attributes are not looked up
through __getattribute__, I know that, and that is exactly why they have to be
present in the proxy class.

There is, of course, _another_ problem: there are code paths, and not a few,
that assume that when a dunder attribute _is_ present, the object has that
capability. So having the proxy raise TypeError or return NotImplemented
when the proxied object doesn't have a certain dunder is not enough.
(That is, one could check for "__len__" and treat the deferred object
as a sequence - "__len__" would be in the proxy, but not on
the resolved object.) And yes, that is a problem for this approach.
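That trap is easy to demonstrate - a proxy class that defines "__len__"
looks sized to isinstance-style checks even when the wrapped object is not
(class names here are illustrative):

```python
from collections.abc import Sized

class Proxy:
    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __len__(self):
        # Deferring to the target doesn't help: the isinstance
        # check below only looks at the class, and never calls this.
        return len(object.__getattribute__(self, "_target"))

    def __getattribute__(self, name):
        return getattr(object.__getattribute__(self, "_target"), name)

p = Proxy(42)                  # an int has no __len__
print(isinstance(p, Sized))    # True -- the proxy class "lies"
print(hasattr(42, "__len__"))  # False -- the real object disagrees
```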

> > That said, I am not super in favor of it being in the language, and I will leave
> > that for other people to discuss.
> >
> > So, thank you for your time. Really!
> >
> No probs. Always happy to discuss ideas; there's nothing wrong with
> throwing thoughts out there, as long as you don't mind people
> disagreeing with you :)


> ChrisA