Neither of those approaches is terrible, but in more complex expressions, where the dot product is only one piece, `A @ B` does indeed read better.
And yes, expressions on NumPy arrays will often mix `@` with a number of those arithmetic operators I learned in grade school. But generally, the mathematics expressed in NumPy code is irreducibly complex. It's not necessarily easy to parse visually, but it *IS* the underlying mathematics.
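To make that concrete, here's a small sketch (the array names are made up) of the same expression written with the older `dot` spellings and with `@`:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
v = rng.standard_normal(3)

# Pre-@ spellings: the matrix products get buried in the surrounding arithmetic.
y1 = np.dot(A, np.dot(B, C)) + 2 * A - np.dot(C, v)[:, None]
y2 = A.dot(B.dot(C)) + 2 * A - C.dot(v)[:, None]

# With the @ operator, the structure of the mathematics is easier to see.
y3 = A @ B @ C + 2 * A - (C @ v)[:, None]

assert np.allclose(y1, y3) and np.allclose(y2, y3)
```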
> When I write an expression like 'a - b * c / d**e + f' that also has a bunch of symbols. But they are symbols that:
>
> - look strongly distinct
> - have meanings familiar from childhood
> - have strongly different meanings (albeit all related to arithmetic)
> The double asterisk wasn't one that I used in my childhood, yet in
> programming, I simply learned it and started using it. What happens is
> that known concepts are made use of to teach others.
I didn't learn the double asterisk in school either; that I had to learn from programming languages. I actually prefer those languages that use `^` for exponentiation (in that one respect only, not overall more than Python), because it's more reminiscent of a superscript.
> Is that simply because you already are familiar with those operators,
> or is there something inherently different about them? Would it really
> be any different?
It's a mixture of familiarity and genuine visual distinctness. `/` and `+` really do just *look different*. In contrast, `:=` and `=` really do look similar.
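A trivial pair of lines (purely illustrative) shows how small the visual difference is:

```python
data = list(range(20))

# Plain assignment statement:
n = len(data)

# Assignment expression inside a condition; one extra colon and a pair of
# parentheses are nearly the only visual cues that this is a different construct:
if (n := len(data)) > 10:
    print(n)
```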
> Then don't use it in a signature. That's fine. Personally, I've never
> used the "def f(x=_sentinel:=object())" trick, because it has very
> little value
I agree with you here. I am pretty sure I've never used it either. But most of the code I read isn't code I wrote myself.
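For anyone who hasn't seen it, the trick looks roughly like this (just a sketch; `_sentinel`, `_MISSING`, and `compute_default` are made-up names, and I've written the assignment expression with explicit parentheses):

```python
def compute_default():
    # Stand-in for whatever late-bound default you actually want.
    return []

# The walrus-in-signature trick: the sentinel is created, and bound to an
# enclosing-scope name, inside the function header itself.
def f(x=(_sentinel := object())):
    if x is _sentinel:
        x = compute_default()
    return x

# The conventional spelling: one line longer, and the header no longer
# advertises a name that the caller can't make use of anyway.
_MISSING = object()

def g(x=_MISSING):
    if x is _MISSING:
        x = compute_default()
    return x
```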
In the case of the walrus, I'm not even saying that I think it should have been prohibited in that context, just discouraged in style guides. While I understand the handful of cases where walrus-in-signature has a certain utility, I would be happy enough to forgo those. But my concern has more to do with not limiting expressions/symbols/keywords to special one-off contexts. That's a relative thing, obviously: for example, `@deco` can really only appear in one specific place, and I like decorators quite a lot. But where possible, symbols or words that can occur in expressions should be available in all kinds of program contexts, and have *pretty much* the same meaning in all of them. And yes, you can find other exceptions to this principle in Python.
This actually circles back to why I would greatly prefer `def myfunc(a=later some_expression())` as a way to express late binding of a default argument. Even though you don't like a more generalized deferred-computation feature, and a version of PEP 671 that used a soft keyword would not automatically create such broader use, in my mind that approach leaves open the option of later, more general use.
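Since `later` is only hypothetical syntax, here's the distinction it would express, shown with today's semantics (a minimal sketch):

```python
# Early binding (Python's current behavior): the default is evaluated once,
# at function definition time, and the same object is reused on every call.
def append_early(item, dest=[]):
    dest.append(item)
    return dest

print(append_early(1))  # [1]
print(append_early(2))  # [1, 2] -- the one list object has accumulated both

# Late binding via the usual None idiom: the default expression is
# re-evaluated on each call that omits the argument.
def append_late(item, dest=None):
    if dest is None:
        dest = []
    dest.append(item)
    return dest

print(append_late(1))  # [1]
print(append_late(2))  # [2]
```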
> - it makes the function header carry information that
> actually isn't part of the function signature (that the object is also
> in the surrounding context as "_sentinel" - the function's caller
> can't use that information), and doesn't have any real advantages over
> just putting it a line above.
I do think some of this comes down to something I find somewhat mythical. 99%+ of the time that I want to use a sentinel, `None` is a great one. Yes, I understand that a different one is occasionally required. But basically, `arg=None` means "late binding" in almost all cases. So that information is ALREADY in the header of almost all the functions I deal with.
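And for the occasional case where `None` is itself a meaningful value and so can't be the sentinel, the usual workaround is only slightly longer (a sketch; the names are made up):

```python
_MISSING = object()  # a unique object no caller could pass by accident

def connect(timeout=_MISSING):
    # None is a legitimate argument here ("no timeout at all"), so it can't
    # double as the "not passed" marker; the anonymous object() fills that role.
    if timeout is _MISSING:
        timeout = 30.0  # the late-bound default
    return timeout

print(connect())      # 30.0 -- argument omitted
print(connect(None))  # None -- caller explicitly asked for no timeout
print(connect(5.0))   # 5.0
```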