Duplicates in lists

Quinn Dunkan quinn at necro.ugcs.caltech.edu
Fri Mar 10 09:30:01 CET 2000

On Thu, 09 Mar 2000 13:39:30 -0800, Timothy Grant <tjg at avalongroup.net> wrote:
>Michael Husmann wrote:
>> is there someone who has an efficient function that finds
>> all duplicates in a list?
>> I used a hash which works quite reasonable but maybe there
>> is a better way.
>This topic was discussed very recently, and a number of good posts should
>have been archived, so I suggest searching Deja. In the meantime, here is
>my version, which I thought was as short as possible but which was later
>bested.
>def Unique(theList):
>    uniqueList = []
>    for value in theList:
>        if value in uniqueList:
>            continue
>        uniqueList.append(value)
>    return uniqueList

I think his dict approach was very reasonable.  Unique() above gets
increasingly less efficient as the list grows, since the `value in uniqueList'
test rescans the accumulated list for every element, which is quadratic
overall.  Hashing an element is fast and you only do it once per element:

def uniq(a):
    # use dict keys as a set: inserting each element is constant time
    # on average, so the whole pass is linear (order is not preserved)
    r = {}
    for i in a:
        r[i] = None
    return r.keys()
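For what it's worth, Michael asked for the duplicates themselves, not the
unique elements.  The same dict trick handles that too; a sketch (the name
`duplicates' is mine) that counts occurrences in one pass and then keeps
the values seen more than once:

```python
def duplicates(a):
    # count occurrences of each element in a single pass
    counts = {}
    for i in a:
        counts[i] = counts.get(i, 0) + 1
    # collect the values that appeared more than once
    dups = []
    for k in counts.keys():
        if counts[k] > 1:
            dups.append(k)
    return dups
```

Like uniq(), this is linear in the length of the list, and the order of the
result is whatever order the dict hands back its keys in.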

More information about the Python-list mailing list