Removing dupes from a list

Alexander Skwar lists.ASkwar at DigitalProjects.com
Fri Apr 26 16:39:46 EDT 2002


Hi!

What's the fastest way to make sure that a list only contains unique
entries?

Currently, I get all the elements from the list and store them as keys
in a dictionary.  I then use the dictionary's keys as the new contents
of the list, like so:

def de_dupe(liste):
    # Use the elements as dictionary keys; a key can only occur once,
    # so duplicates collapse automatically.
    d = {}
    for element in liste:
        d[element] = None
    # The keys of the dictionary are the unique elements.
    return list(d)

But this will obviously only work if all the elements of the list are
hashable, i.e. usable as dictionary keys.
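
For example, the duplicates are gone, but the order of the result is
whatever the dictionary happens to produce, not the order of the input:

print de_dupe([3, 1, 3, 2, 1])    # duplicates removed, order arbitrary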

Next, I came up with this class:

class de_duper:
    def __init__(self, liste):
        # Store the last compared element
        self.__last = None
        # Work on a sorted copy, so that duplicates end up next to each other
        check_list = liste[:]
        check_list.sort()
        # filter() keeps an element only if __check() returns a true value,
        # i.e. only if it differs from its predecessor
        self.clean = filter(self.__check, check_list)

    def __check(self, current_element):
        check = self.__last
        self.__last = current_element
        # cmp() returns 0 (false) for equal elements, so duplicates are dropped
        return cmp(check, current_element)
                
Here, I use filter and cmp.  The reason I use a class is that cmp
requires two arguments, so I need somewhere to store the last checked
value.  Further, this approach requires that the list be sorted first.
Because of the sort, the order of the elements is all mixed up compared
to the original list.
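
Used like this, it gives the unique elements, but in sorted order rather
than in the original order:

d = de_duper([3, 1, 3, 2, 1])
print d.clean    # [1, 2, 3] - unique, but sorted, not in the input order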

What I'm now looking for is an easy and fast way to remove duplicates
from a list without destroying the list order - something like PHP's
array_unique().  The closest I've come up with myself is the sketch
below, but I'm hoping there is something faster or built in.  Is there
something like this in Python?
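
Rough sketch of what I mean (again, this assumes the elements are
hashable; the name unique_in_order is just something I made up):

def unique_in_order(liste):
    # Remember the elements already seen in a dictionary and copy each
    # element to the result only the first time it occurs.
    seen = {}
    result = []
    for element in liste:
        if element not in seen:
            seen[element] = None
            result.append(element)
    return result

print unique_in_order([3, 1, 3, 2, 1])    # [3, 1, 2] - order preserved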

Alexander Skwar
-- 
How to quote:	http://learn.to/quote (german) http://quote.6x.to (english)
Homepage:	http://www.iso-top.de      |    Jabber: askwar at a-message.de
   iso-top.de - The inexpensive way to get Linux distributions
                       Uptime: 2 days 13 hours 6 minutes




