On Thu, Aug 20, 2020 at 12:54 PM Jonathan Fine <jfine2358@gmail.com> wrote:
Todd wrote:

It has the same capabilities; the question is whether it has any additional abilities that would justify the added complexity.

The most obvious additional ability is that an indexing expression with keyword arguments, such as
    >>> d[1, 2, x=3, y=4]
is always equivalent to
    >>> d[key]
for a suitable key.

This is a capability that we already have, which would sometimes be lost under the scheme you support.  Also lost would be the equivalence between
   >>> val = d[key]
   >>> getter = operator.itemgetter(key)
   >>> val = getter(d)
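The itemgetter equivalence Jonathan refers to already holds today; a minimal runnable illustration (the dict contents are invented for the example):

```python
import operator

# A plain dict keyed by a tuple; the key and value are arbitrary.
d = {('space', 'time'): 42}
key = ('space', 'time')

# Direct subscription.
val1 = d[key]

# itemgetter(key) builds a callable equivalent to "lambda obj: obj[key]".
getter = operator.itemgetter(key)
val2 = getter(d)

assert val1 == val2 == 42
```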

Classes that want this could always support a tuple including a dict.  For example,

    >>> d[(1, 2, {'space': 0, 'time': 2})]

So this doesn't really help much, saving a few characters at most.  
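The tuple-plus-trailing-dict workaround described above can be sketched as follows; the class name `Grid` and its internal storage are invented purely for illustration:

```python
class Grid:
    """Toy container whose __getitem__ accepts a trailing dict of named axes."""

    def __init__(self, data):
        # Maps (positional-tuple, frozenset-of-named-items) -> value.
        self.data = data

    def __getitem__(self, key):
        # Split off a trailing dict, if present, to emulate keyword indices.
        if isinstance(key, tuple) and key and isinstance(key[-1], dict):
            positional, named = key[:-1], key[-1]
        else:
            positional, named = (key,), {}
        return self.data[(positional, frozenset(named.items()))]


g = Grid({((1, 2), frozenset({'space': 0, 'time': 2}.items())): 'hit'})
assert g[1, 2, {'space': 0, 'time': 2}] == 'hit'
```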
More exactly, sometimes it wouldn't be possible to find and use a key.

What do you mean by this?

This would be the case either way.  If itemgetter is made to support keyword arguments, its docs would need to change.  Or are you suggesting that itemgetter be made to support only the "o" class, but not keyword arguments directly?  That would need to be documented too.
As I understand it, xarray uses dimension names to slice data.  Here's an example:
    >>> da[dict(space=0, time=slice(None, 2))]

Presumably, this would be replaced by something like
    >>> da[space=0, time=:2]

Now, the commands
   >>> da[space=0, time=:2]
   >>> da[space=0, time=:2] = True
   >>> del da[space=0, time=:2]
would, at the beginning of the call, presumably do the same processing on the keyword arguments. (Let this stand for a wide range of examples.)

It is arguable that making it easier for the implementer of type(da) to do all that processing in the same place would be a REDUCTION of complexity.  Allowing the processing to produce an intermediate object, say
    >>> key = dict(space=0, time=slice(None, 2))
 would help here.

I don't see how.  kwargs could be packed into a dict, which could then be processed identically to a dict passed directly.  In your approach, by contrast, there would need to be a test for a new class, and then an additional step to separate out its parts.
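Assuming keyword subscripts reach the dunder methods as ordinary **kwargs (the protocol Todd describes; the real syntax and protocol are still hypothetical, so ordinary method names stand in for the dunders here), the shared processing can live in one helper, and a dict key takes the identical path:

```python
class DataArray:
    """Toy stand-in for an xarray-like object; names and storage are invented."""

    def __init__(self, values):
        # Maps frozenset of (dimension, index) pairs -> value.
        self.values = values

    @staticmethod
    def _canonical(key, kwargs):
        # A dict key and keyword arguments are normalized identically,
        # so both spellings share the same code path afterwards.
        if isinstance(key, dict):
            kwargs = {**key, **kwargs}
        return frozenset(kwargs.items())

    def getitem(self, key=None, **kwargs):   # stands in for __getitem__
        return self.values[self._canonical(key, kwargs)]

    def delitem(self, key=None, **kwargs):   # stands in for __delitem__
        del self.values[self._canonical(key, kwargs)]


da = DataArray({frozenset({'space': 0, 'time': 2}.items()): True})
# Both spellings hit the same code path:
assert da.getitem({'space': 0, 'time': 2}) is True
assert da.getitem(space=0, time=2) is True
```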
We have a perfectly good way of handling keywords, so it is up to you to explain why we shouldn't use it.

The scheme you support does not distinguish
    >>> d[1, 2, x=3, y=4]
    >>> d[(1, 2), x=3, y=4]
I don't regard that as being perfectly good.

I think for backwards-compatibility it would have to be.  Why should adding keyword arguments radically change the meaning of the positional arguments?  That seems like an enormous trap.  
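The backwards-compatibility point rests on behaviour that is easy to check today: a comma-separated subscript already arrives as a single tuple, so `d[1, 2]` and `d[(1, 2)]` are indistinguishable to `__getitem__` (the `Spy` class is a throwaway probe for the example):

```python
class Spy:
    """Probe that simply returns whatever key __getitem__ receives."""

    def __getitem__(self, key):
        return key


s = Spy()
assert s[1, 2] == s[(1, 2)] == (1, 2)      # same key object either way
assert s[(1, 2),] == ((1, 2),)             # only an explicit trailing comma differs
```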
In addition, I would like
    >>> d = dict()
    >>> d[x=1, y=2] = 5
to work. It works out-of-the-box for my scheme. It can be made to work with a subclass of dict for the D'Aprano scheme.

First, if it were desired it could be made to work with the normal dict.  The dict class would just need to be modified to handle it.  I think it is highly debatable whether it should, but there is no reason it couldn't.  That is a separate discussion.
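Since the proposed syntax does not exist yet, the dict-subclass route can only be approximated with explicit method calls; everything below (the class name `KwDict`, the frozenset canonicalization) is an invented sketch, not the D'Aprano proposal itself:

```python
class KwDict(dict):
    """dict subclass sketching how  d[x=1, y=2] = 5  might be supported:
    keyword indices are canonicalized to a hashable frozenset key."""

    def set_kw(self, value, **kwargs):   # stands in for  d[x=1, y=2] = value
        self[frozenset(kwargs.items())] = value

    def get_kw(self, **kwargs):          # stands in for  d[x=1, y=2]
        return self[frozenset(kwargs.items())]


d = KwDict()
d.set_kw(5, x=1, y=2)
assert d.get_kw(x=1, y=2) == 5
assert d.get_kw(y=2, x=1) == 5  # keyword order does not matter
```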

Having it work out-of-the-box like that in your case is actually a downside, in my opinion.  It could lead to unexpected situations where classes SEEM to support keyword indices but really don't.  For example, someone could use it with an old version of xarray that doesn't support them; it would seem to work, because the class accepts any hashable, but it would silently do something completely different.  So I think having classes explicitly handle keyword indices in a way appropriate for each class is a benefit, not a downside.
I'd prefer to discuss this further by writing Python modules that contain code that can be tested. The testing should cover both the technical correctness and the user experience. To support this I intend now to focus on the next version of kwkey.

I still don't see how testing it will help anything at this point.  The behavior is easy to explain, so examples could be provided without actually needing to run anything.  So please let's discuss this and work through our thinking with examples before spending a lot of time writing code.