Removing duplicates from a list

Christian Stapfer nil at dev.nul
Wed Sep 14 16:10:57 CEST 2005


<martijn at gamecreators.nl> wrote in message 
news:1126706415.609072.92570 at o13g2000cwo.googlegroups.com...
>I do this:
>
> def unique(keys):
>    unique = []
>    for i in keys:
>        if i not in unique:unique.append(i)
>    return unique
>
> I don't know what is faster at the moment.

This is quadratic, O(n^2), in the length n of the list
if all keys are unique, because each `not in` test scans
the growing result list. Conversion to a set should do
much better: CPython sets are hash-based, so building
one is O(n) on average, and duplicates are discarded as
a side effect. If conversion to a set should turn out
to be slower than O(n*log(n)) [depending on the
implementation], then you are well advised to sort the
list first (n*log(n)) and then throw out the duplicate
keys (which, after sorting, are positioned next to each
other) with a single walk over the list. That way you
at least know what to expect for large n...
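A sketch of the three approaches discussed above (the function
names are mine, not from the original post; the keys are assumed
hashable, and orderable for the sort-based version):

```python
def unique_quadratic(keys):
    # O(n^2): each membership test scans the growing result list.
    # Preserves the order of first occurrence.
    result = []
    for k in keys:
        if k not in result:
            result.append(k)
    return result

def unique_set(keys):
    # O(n) on average in CPython, since sets are hash-based.
    # Input order is not preserved.
    return list(set(keys))

def unique_sorted(keys):
    # O(n*log(n)) guaranteed: sort, then a single walk keeps the
    # first of each run of equal keys, since duplicates are
    # adjacent after sorting.
    result = []
    for k in sorted(keys):
        if not result or result[-1] != k:
            result.append(k)
    return result

print(unique_quadratic([3, 1, 3, 2, 1]))          # [3, 1, 2]
print(sorted(unique_set([3, 1, 3, 2, 1])))        # [1, 2, 3]
print(unique_sorted([3, 1, 3, 2, 1]))             # [1, 2, 3]
```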

Regards,
Christian
