I'm also keen to understand this, and I think I can elucidate a little from my study of the code. The perspective of someone who *didn't* write it might help, but I defer to the authors' version when it appears.
Thank you! I'd be happy to know the authors' opinion as well.
A crucial observation is that there is only one nb_add slot in a type definition. Consider adding int(1) + float(2). The call first lands in long_add (v->ob_type->tp_as_number->nb_add), which returns NotImplemented because it does not understand the float right-hand argument; but float_add (w->ob_type->tp_as_number->nb_add) is able to give an answer, since it can convert the int to a float, behaving as float.__radd__(f, i). So a single slot has to implement both __add__ and __radd__.
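This asymmetry is visible from plain Python, since the wrapper descriptors on int and float expose the same C slot functions (nothing CPython-internal is assumed here beyond the documented NotImplemented protocol):

```python
# int.__add__ is the Python-side wrapper over long_add: it declines a float
# right-hand operand, while float.__radd__ (over float_add) accepts an int.
print((1).__add__(2.0))   # NotImplemented
print((2.0).__radd__(1))  # 3.0
print(1 + 2.0)            # 3.0: the abstract machinery tried both slots for us
```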
I agree. And by default, this is done by expanding the macro "SLOT1BIN(slot_nb_add, nb_add, "__add__", "__radd__")".
The logic is the same as binary_op1, but the function has to deal with the possibility that one or other type may already provide a special wrapper function in its nb_add slot, which is what the test tp_as_number->SLOTNAME == TESTFUNC is about. TESTFUNC is nearly always the same as FUNCNAME. It then also deals with the quite complex decision in method_is_overloaded().
I had overlooked this test, and now I'm confused by it. In the definition of slot_nb_add, it seems to check that tp_as_number->nb_add == slot_nb_add. Is it (a convoluted way) to check that tp_as_number->nb_add != NULL? Otherwise, I guess this always holds for standard library modules, but could someone replace tp_as_number->nb_add with another function?
The partial repetition of the logic, which I think is now nested (because binary_op1() may have called slot_nb_add), is necessary to insert the more complex version into the decision tree. But this is roughly where my ability to visualise the paths runs out.
I have a hypothesis: when binary_op1(v, w, ...) is called, 1) either slotv (v->ob_type->tp_as_number->nb_add) is defined. This calls v's slot_nb_add, which may then call either v's "__add__" or w's "__radd__" using vectorcall_maybe. 2) Or slotv is not defined. In that case the whole process starts with w's slot_nb_add, which should be stored in w->ob_type->tp_as_number->nb_add.
In particular, does this mean that if slotv is defined, the if (slotw) at https://github.com/python/cpython/blob/3.8/Objects/abstract.c#L813 will never be reached (meaning this if could be changed to an else if)? Leaving aside for now the case where w is a subtype of v, this would mean that some tests may be performed twice, but only one slot_nb_add will be called? (Of course, that slot_nb_add will be able to call either its __add__ or the __radd__ of the other.)
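If my reading is right, the control flow of binary_op1 amounts to the following Python paraphrase. This is a sketch, not CPython's code: slot_of stands in for the t->tp_as_number->nb_add lookup, and the toy int_add/float_add slots are mine:

```python
def binary_op1(v, w, slot_of):
    """Rough Python paraphrase of binary_op1 in Objects/abstract.c.

    slot_of(t) stands in for t->tp_as_number->nb_add; None means the
    slot is empty. Names and the helper are illustrative, not CPython's.
    """
    slotv = slot_of(type(v))
    slotw = slot_of(type(w))
    if slotw is slotv:
        slotw = None                 # same function: no point calling it twice
    if slotv is not None:
        if slotw is not None and issubclass(type(w), type(v)):
            # w's type is a subclass of v's type: its slot gets first try
            r = slotw(v, w)
            if r is not NotImplemented:
                return r
            slotw = None
        r = slotv(v, w)
        if r is not NotImplemented:
            return r
    if slotw is not None:            # reached when slotv was absent *or* declined
        r = slotw(v, w)
        if r is not NotImplemented:
            return r
    return NotImplemented            # binary_op turns this into a TypeError


# Toy slots mimicking long_add and float_add:
def int_add(a, b):
    return a + b if type(b) is int else NotImplemented

def float_add(a, b):
    return float(a) + float(b) if type(b) in (int, float) else NotImplemented

slots = {int: int_add, float: float_add}
print(binary_op1(1, 2.0, slots.get))  # 3.0: int_add declines, float_add answers
```

If this paraphrase is faithful, the final if (slotw) is reached not only when slotv is absent but also when slotv returned NotImplemented, so it could not simply become an else if.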
I would not say they are "defined" by the slot_nb_add function. Rather, if one or other has been defined (in Python), v.__add__ or w.__radd__ is called by the single function slot_nb_add. They cannot be called directly from C, but call_maybe() supplies the mechanism.
A confusing factor is that for types defined in C and filling the nb_add slot, the slot function (float_add, or whatever) has to be wrapped by two descriptors that can then sit in the type's dictionary as "__add__" and "__radd__". In that case these *are* defined by a wrapped C function, but that function is the function in the implementation of the type (float_add, say), not slot_nb_add. There are two kinds of wrapper: one used "Python-side out", so Python-calling "__add__" leads to nb_add's behaviour, and one "C-side out" so C-calling via nb_add leads to "__add__".
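The two directions can be told apart from Python: for a C type the dictionary entry is a wrapper_descriptor over the slot function (float_add), while for a Python class the dictionary holds the plain function and it is the C-side slot_nb_add that does the wrapping. A quick check:

```python
class P:
    def __add__(self, other):
        return "P.__add__ called"

# C-implemented type: the dictionary entry wraps float_add for Python callers.
print(type(float.__add__).__name__)   # wrapper_descriptor
# Python-defined type: the dictionary holds the function itself; the type's
# nb_add slot (slot_nb_add) wraps it for C callers.
print(type(P.__add__).__name__)       # function
print(P() + 0)                        # P.__add__ called
```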
This makes sense. Thank you!
I *think* it is changed to contain slot_nb_add as defined by the macro.
And then slot_nb_add is able to dispatch to __add__ or __radd__ with vectorcall_maybe. Thanks!
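That dispatch is easy to exercise: with an int on the left, long_add declines, the machinery falls through to the Python class's nb_add slot (slot_nb_add), and since the class defines no __add__, its __radd__ is called:

```python
class R:
    def __radd__(self, other):
        return ("radd", other)

# 1 + R(): int's nb_add (long_add) returns NotImplemented for R, so the
# machinery calls R's nb_add, which is slot_nb_add; it finds no __add__
# on R and dispatches to __radd__ instead.
print(1 + R())   # ('radd', 1)
```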
When this is all clearer (and hopefully accurate), I'll wrap it up in a blog post or something.