removing duplication from a huge list.

Tim Chase python.list at tim.thechases.com
Fri Feb 27 12:30:45 EST 2009


>> How big of a list are we talking about? If the list is so big that
>> the entire list cannot fit in memory at the same time, this approach
>> won't work, e.g. removing duplicate lines from a very large file.
> 
> We were told in the original question: more than 15 million records,
> and it won't all fit into memory. So your observation is pertinent.

Assuming the working set of unique items will still fit in 
memory, it can be done with the following regardless of the 
input file's size:

   def deduplicator(iterable):
       seen = set()
       for item in iterable:
           if item not in seen:   # first occurrence, so keep it
               seen.add(item)
               yield item

   s = [7, 6, 5, 4, 3, 6, 9, 5, 4, 3, 2, 5, 4, 3, 2, 1]
   print list(deduplicator(s))    # [7, 6, 5, 4, 3, 9, 2, 1]

   for line in deduplicator(open('huge_test.txt')):
       print line,    # trailing comma -- each line already ends in '\n'


It preserves the original order, emitting each item only the first 
time it's encountered.
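
If the items are long lines, the seen set can be slimmed down by 
storing a fixed-size digest of each item instead of the item itself. 
This isn't from the original thread, just a sketch of one common 
variant (the name deduplicator_digest is mine); it trades a 
vanishingly small chance of a hash collision for a fixed 16 bytes of 
set storage per unique item:

   import hashlib

   def deduplicator_digest(iterable):
       seen = set()
       for item in iterable:
           # a 16-byte MD5 digest stands in for the full item
           key = hashlib.md5(item).digest()
           if key not in seen:
               seen.add(key)
               yield item

   for line in deduplicator_digest(open('huge_test.txt')):
       print line,

And if even the digests won't fit, the usual fallback is to partition 
the input into temporary bucket files by hash and dedupe each bucket 
separately with an in-memory set, at the cost of losing the original 
order.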

-tkc