Re the benchmarks:
yes, they're micro-benchmarks; the intent was to show that the performance impact can be negligible
no, that doesn't invalidate them (it just scopes their usefulness; my sales pitch at the end was slightly over-egged, but reasonable, imo),
yes, I ignored directly quadratic behaviour in indexing, as I would never propose that as a goal
adding iter()-based comparisons would be interesting; however, that doesn't invalidate the list() option, as it is very often used as a solution to this problem (the sort of comparison I mean is sketched below).
It's true, benchmarks that don't match your incentives and opinions always lie.
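To be concrete, something along these lines is the kind of iter()-based comparison I have in mind (the dict size and index are arbitrary, and I'm not claiming any particular numbers here, just showing the two spellings being timed):
>>> import timeit
>>> from itertools import islice
>>> d = {i: str(i) for i in range(10_000)}
>>> # the common workaround: materialise the keys, then index
>>> t_list = timeit.timeit(lambda: list(d)[5_000], number=1_000)
>>> # an iterator-based alternative: skip ahead without building a list
>>> t_iter = timeit.timeit(lambda: next(islice(iter(d), 5_000, None)), number=1_000)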
As for use-cases, I'll admit that I see this as a fairly minor quality-of-life issue. Finding use-cases is a bit tricky: defined ordering for dictionaries is a recent feature, and I know I am (and I'm sure many other people are) still adapting to take advantage of this new functionality. There's also the fact that in Python < 3, dict.keys(), values() and items() returned sequences (plain lists), so the impact of that change may still be being felt (yes, even decades later, the majority of the Python I've written to deal with 'messy' data involving lots of dictionaries has been in Python 2).
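For anyone who didn't live through it, this is the difference I mean (the exact Python 3 error wording varies a little between versions):
>>> d = {'a': 1, 'b': 2}
>>> d.keys()[0]  # in Python 2, keys() returned a list, so this gave 'a'
Traceback (most recent call last):
  ...
TypeError: 'dict_keys' object is not subscriptable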
However, I've put together a set of cases that I personally would like to work as written (several of these are paraphrases of production code I've worked with):
--
>>> import random
>>> random.choice({'a': 1, 'b': 2}.keys())
'a'
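Today, as far as I know, that raises TypeError because a dict_keys view isn't a sequence, so the usual spelling is:
>>> random.choice(list({'a': 1, 'b': 2}.keys()))
'a'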
--
>>> import numpy as np
>>> mapping_table = np.array(BIG_LOOKUP_DICT.items())
>>> mapping_table
array([[ 1, 99],
       [ 2, 23],
       ...])
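At the moment (as far as I can tell) np.array() doesn't treat the items view as a sequence and gives a 0-d object array instead, so you have to spell it:
>>> np.array(list(BIG_LOOKUP_DICT.items()))
array([[ 1, 99],
       [ 2, 23],
       ...])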
--
>>> import sqlite3
>>> conn = sqlite3.connect(":memory:")
>>> params = {'a': 1, 'b': 2}
>>> placeholders = ', '.join(f':{p}' for p in params)
>>> statement = f"select {placeholders}"
>>> print(f"Running: {statement}")
Running: select :a, :b
>>> cur = conn.execute(statement, params.values())
>>> cur.fetchall()
[(1, 2)]
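The qmark-style variant of the same code, which genuinely requires a sequence, currently has to go through list():
>>> placeholders = ', '.join('?' for _ in params)
>>> statement = f"select {placeholders}"
>>> cur = conn.execute(statement, list(params.values()))
>>> cur.fetchall()
[(1, 2)]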
--
# This currently works, but is deprecated in 3.9
>>> import random
>>> dict(random.sample({'a': 1, 'b': 2}.items(), 2))
{'b': 2, 'a': 1}
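and once that deprecation takes effect, the replacement spelling is (as far as I know):
>>> dict(random.sample(list({'a': 1, 'b': 2}.items()), 2))
{'b': 2, 'a': 1}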
--
>>> def min_max_keys(d):
...     min_key, min_val = d.items()[0]
...     max_key, max_val = min_key, min_val
...     for key, value in d.items():
...         if value < min_val:
...             min_key = key
...             min_val = value
...         if value > max_val:
...             max_key = key
...             max_val = value
...     return min_key, max_key
...
>>> min_max_keys({'a': 1, 'b': 2, 'c': -9999})
('c', 'b')
>>> min_max_keys({'a': 'x', 'b': 'y', 'c': 'z'})
('a', 'c')
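The current spelling of that first line is, to my eye, noticeably less obvious:
>>> d = {'a': 1, 'b': 2, 'c': -9999}
>>> next(iter(d.items()))    # what d.items()[0] has to look like today
('a', 1)
>>> list(d.items())[0]       # or this, which copies every item first
('a', 1)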
--
>>> import os
>>> users = {'cups': 209, 'service': 991}
>>> os.setgroups(users.values())
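which currently (as far as I can tell, since setgroups() insists on a real sequence) needs to be:
>>> os.setgroups(list(users.values()))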
--
Obviously, Python is a general-purpose, Turing-complete language, so each of these cases can be written in other ways. But it would be nice if the simple, readable versions also worked :D
The idea that there are future, unspecified changes to dicts that may or may not be hampered by allowing indexing sounds like FUD to me, unless there are concrete references?
Steve