On Sat, Oct 02, 2021 at 12:45:33PM +0200, Laurent Lyaudet wrote:
> > > def enumerate_with_rest(my_list):
> > >     for i, item in enumerate(my_list):
> > >         yield i, item, my_list[:i] + my_list[i + 1:]
> > 
> > So if you called it on the list ['a', 'b', 'c', 'd'] it would yield:
> > 
> >     0, 'a', ['b', 'c', 'd']
> >     1, 'b', ['a', 'c', 'd']
> >     2, 'c', ['a', 'b', 'd']
> >     3, 'd', ['a', 'b', 'c']
> > 
> > Is that correct? When would you use this? It seems very specialised.
> > The only algorithm I've seen that uses something similar is generating
> > permutations and combinations.
> I am not the only one who had the same need:
> https://stackoverflow.com/questions/56966429/getting-pairs-of-one-item-and-t...
> 
> It would be nice to have an optimized C function for this.
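For what it's worth, the behaviour quoted above is straightforward to confirm with the proposed pure-Python generator (a minimal sketch; the function name is the one from the proposal):

```python
def enumerate_with_rest(my_list):
    # For each index, yield the index, the item, and a new list
    # containing every other element (the "rest").
    for i, item in enumerate(my_list):
        yield i, item, my_list[:i] + my_list[i + 1:]

for i, item, rest in enumerate_with_rest(['a', 'b', 'c', 'd']):
    print(i, item, rest)
# 0 a ['b', 'c', 'd']
# 1 b ['a', 'c', 'd']
# 2 c ['a', 'b', 'd']
# 3 d ['a', 'b', 'c']
```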
Do you have reason to think that implementing this in C would be
significantly faster than doing it in Python? I don't think the overhead
of doing it in Python will be much, compared to the cost of iteration
and slicing. Nearly all the work is already done in C: the slicing and
adding of lists.

You might be able to make things a bit faster like this:

    def enumerate_with_rest(my_list):
        for i, item in enumerate(my_list):
            a = [None] * (len(my_list) - 1)
            a[:i] = my_list[:i]
            a[i:] = my_list[i + 1:]
            yield i, item, a

but I haven't tested it, it could be slower too.

Even if it would be a little faster in C, I'm afraid it is not the
responsibility of the stdlib to have an accelerated, optimized
implementation of every single obscure and specialised function just
because one or two people have a use for it. But if there are lots of
uses for this, that might be a different story.

-- 
Steve
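A rough way to compare the two pure-Python versions is with timeit (a sketch only: the function names, list size, and repeat count here are arbitrary choices, and actual timings will vary by machine and input size):

```python
import timeit

def with_slices(my_list):
    # Original version: build the rest by concatenating two slices.
    for i, item in enumerate(my_list):
        yield i, item, my_list[:i] + my_list[i + 1:]

def with_preallocation(my_list):
    # Alternative version: preallocate the result list, then fill it
    # with slice assignment instead of concatenation.
    n = len(my_list)
    for i, item in enumerate(my_list):
        a = [None] * (n - 1)
        a[:i] = my_list[:i]      # elements before index i
        a[i:] = my_list[i + 1:]  # elements after index i
        yield i, item, a

data = list(range(500))
# Sanity check: both versions must agree before timing them.
assert list(with_slices(data)) == list(with_preallocation(data))

for fn in (with_slices, with_preallocation):
    t = timeit.timeit(lambda: list(fn(data)), number=100)
    print(f"{fn.__name__}: {t:.3f}s")
```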