
Guido said he has muted this discussion, so it's probably not reaching him. It took one thousand fewer messages for him to stop following this than with PEP 572, for some reason :-).
But before putting it on auto-archive, the BDFL said (1) NO GO on getting a new builtin; (2) NO OBJECTION to putting it in itertools.
My problem with the second idea is that *I* find it very wrong to have something in itertools that does not return an iterator. It wrecks the combinatorial algebra of the module.
That said, it's easy to fix... and the fix is, I believe, independently useful. Just make grouping() a generator function rather than a plain function. This gives us an incremental grouping of an iterable, which is useful when the iterable is slow or infinite but the partial groupings are already valuable in themselves.
Python 3.7.0 (default, Jun 28 2018, 07:39:16)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from grouping import grouping
>>> grouped = grouping('AbBa', key=str.casefold)
>>> for dct in grouped:
...     print(dct)
...
{'a': ['A']}
{'a': ['A'], 'b': ['b']}
{'a': ['A'], 'b': ['b', 'B']}
{'a': ['A', 'a'], 'b': ['b', 'B']}
This isn't so useful for a concrete sequence like the string above, but for something like this it would be great:
for grouped in grouping(data_over_wire()):
    process_partial_groups(grouped)
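
To make that pattern concrete, here's a rough, runnable sketch. The names fake_data_over_wire() and report_progress() are just placeholders I'm making up for data_over_wire() and process_partial_groups(); it uses the grouping() generator from the session above.

import time
from grouping import grouping  # the generator version, defined below

def fake_data_over_wire():
    # Stand-in for a slow feed: records trickle in one at a time.
    for record in ['GET /a', 'POST /b', 'GET /c', 'POST /d']:
        time.sleep(0.1)
        yield record

def report_progress(groups):
    # Stand-in consumer: how many records of each kind have we seen so far?
    print({verb: len(items) for verb, items in groups.items()})

for grouped in grouping(fake_data_over_wire(), key=lambda rec: rec.split()[0]):
    report_progress(grouped)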
The implementation need not and should not rely on "pre-grouping" with itertools.groupby:
def grouping(iterable, key=None):
    groups = {}
    key = key or (lambda x: x)
    for item in iterable:
        groups.setdefault(key(item), []).append(item)
        yield groups
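
And if all you want is the final, complete grouping (the eager behavior of the plain function), it's trivial to get that back by running the generator to exhaustion. A tiny sketch, with group_all as a name I'm inventing on the spot:

from collections import deque

def group_all(iterable, key=None):
    # Exhaust the grouping() generator and keep only its final snapshot,
    # which is the complete grouping; an empty iterable gives {}.
    last = deque(grouping(iterable, key=key), maxlen=1)
    return last[0] if last else {}

group_all('AbBa', key=str.casefold)
# -> {'a': ['A', 'a'], 'b': ['b', 'B']}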