str_iterator, bytes_iterator, range_iterator, list_iterator, and tuple_iterator (and probably others) should have a method capable of efficiently advancing the iterator, instead of requiring repeated calls to next.
I suggest adding an itertools.advance function that dispatches to a dunder __advance__ method (if one exists) or, as a fallback, calls next repeatedly. The iterators mentioned above (and any others capable of doing so) would then implement __advance__ to manipulate their internal index directly, "jumping" the desired number of elements in constant time rather than linear time.
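A rough sketch of what the pure-Python fallback could look like (the __advance__ dunder is part of this proposal and does not exist today, so the dispatch shown here is hypothetical):

from itertools import islice

def advance(iterator, n):
    """Advance *iterator* by *n* steps."""
    # Dunder methods are looked up on the type, not the instance.
    advance_method = getattr(type(iterator), '__advance__', None)
    if advance_method is not None:
        # Proposed fast path: O(1) for index-based iterators.
        advance_method(iterator, n)
    else:
        # Fallback: consume n items at C speed (the same trick the
        # consume() recipe in the itertools docs uses).
        next(islice(iterator, n, n), None)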
For example, if you have a large list and want to iterate over it, but skip the first 50000 elements, you should be able to do something like:
it = iter(mylist)
itertools.advance(it, 50000)
Note that you can technically do this with itertools.islice by supplying a start value, but islice just calls next repeatedly on your behalf, so if you're skipping a lot of elements, it's still unnecessarily slow.
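Concretely, the islice version of the example above still does linear work (standard-library calls only; mylist is just a stand-in for a large list):

from itertools import islice

mylist = list(range(1_000_000))
it = islice(iter(mylist), 50000, None)  # start=50000: items are consumed one by one
print(next(it))  # 50000, but only after 50000 internal next() calls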
Perhaps you can suggest an improvement to the `consume` recipe in the itertools documentation.
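For reference, the consume recipe in the itertools docs reads roughly as follows; note that both branches still consume items one at a time, so the proposed __advance__ dispatch is what would make it constant-time where possible:

import collections
from itertools import islice

def consume(iterator, n=None):
    "Advance the iterator n-steps ahead. If n is None, consume entirely."
    # Use functions that consume iterators at C speed.
    if n is None:
        collections.deque(iterator, maxlen=0)
    else:
        next(islice(iterator, n, n), None)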
As a side note, I noticed that list_iterator has a __setstate__ method (a CPython pickling detail) that can be used to more or less accomplish this, but that seems very hacky. It also sets the index absolutely rather than advancing it, so it would be awkward to use if the iterator is already partially exhausted (see the follow-up example after the snippet).
it = iter(mylist)
it.__setstate__(50000)
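To make the absolute-index behavior concrete (this relies on undocumented CPython behavior):

mylist = list(range(100_000))
it = iter(mylist)
next(it)                # internal index is now 1
it.__setstate__(50000)  # sets the index to 50000 absolutely, not 1 + 50000
print(next(it))         # 50000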
I confess I don't get it. If the object is a list, then slicing from index n is already efficient. If the object is a more general iterable, you need next to actually be called, because there may be side effects. Can you illuminate how your idea is better than list slicing in the first case, or how it accounts for side effects in the second?