March 20, 2012
12:43 a.m.
On Tue, Mar 20, 2012 at 8:34 AM, Guido van Rossum <guido@python.org> wrote:
> Anyway, I also tried to imply that it matters if the number of list items would ever be huge. It seems that is indeed possible (even if not likely) so I think iterators are useful.
But according to Nick's post, there's a uniquification step involved, and the algorithm currently used computes the whole list anyway. I suppose one could do the uniquification lazily, or find some other way to avoid that computation. Is it worth optimizing an unlikely case?
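For what it's worth, here is a minimal sketch of what "lazy uniquification" could look like (essentially the standard unique_everseen recipe). The actual algorithm Nick described may differ; this only illustrates that deduplication need not force the whole list into memory up front.

```python
from itertools import islice

def unique_everseen(iterable):
    # Yield items in order, skipping any already seen.
    # Deduplication happens incrementally, so nothing beyond the
    # items actually consumed (plus the "seen" set) is computed.
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item

# Usage: consumers only pay for as many unique items as they pull.
first_three = list(islice(unique_everseen(["a", "b", "a", "c", "b", "d"]), 3))
# first_three == ["a", "b", "c"]
```

The trade-off is that the seen set still grows with the number of distinct items, so laziness mainly helps when consumers stop early, which is exactly the "huge list" case being discussed.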