[Python-checkins] CVS: python/nondist/peps pep-0270.txt,NONE,1.1

Barry Warsaw bwarsaw@users.sourceforge.net
Fri, 07 Sep 2001 15:40:40 -0700

Update of /cvsroot/python/python/nondist/peps
In directory usw-pr-cvs1:/tmp/cvs-serv32151

Added Files:
	pep-0270.txt
Log Message:
PEP 270, uniq method for list objects, Jason Petrone

--- NEW FILE: pep-0270.txt ---
PEP: 270
Title: uniq method for list objects
Version: $Revision: 1.1 $
Last-Modified: $Date: 2001/09/07 22:40:38 $
Author: jp@demonseed.net (Jason Petrone)
Status: Draft
Type: Standards Track
Created: 21-Aug-2001
Python-Version: 2.2


Abstract

    This PEP proposes adding a method for removing duplicate elements to
    the list object.
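As a sketch of the proposed behaviour, a pure-Python equivalent might
look like the following.  (The name and semantics here, in particular
in-place removal keeping first occurrences, are illustrative
assumptions; the PEP does not pin them down.)

```python
def uniq(lst):
    # Pure-Python sketch of the proposed list.uniq() method:
    # remove duplicates in place, keeping the first occurrence
    # of each element.  Hypothetical semantics, not the PEP's
    # C implementation.
    seen = []
    i = 0
    while i < len(lst):
        if lst[i] in seen:
            del lst[i]
        else:
            seen.append(lst[i])
            i += 1

L = [2, 1, 2, 3, 1]
uniq(L)
print(L)  # prints [2, 1, 3]
```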


Rationale

    Removing duplicates from a list is a common task.  I think it is
    useful and general enough to belong as a method in list objects.
    It also has the potential for faster execution when implemented in
    C, especially if optimizations using hashing or sorting cannot be
    used.

    On comp.lang.python there are many, many posts [1] asking about
    the best way to do this task.  It's a little tricky to implement
    optimally, and it would be nice to save people the trouble of
    figuring it out themselves.


Considerations

    Tim Peters suggests trying to use a hash table, then trying to
    sort, and finally falling back on brute force [2].  Should uniq
    maintain list order at the expense of speed?

    Is it spelled 'uniq' or 'unique'?
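The three-step fallback strategy can be sketched in pure Python as
follows.  (The function name and details are illustrative, not part of
the PEP; the proposal is for a C implementation in listobject.c.)

```python
def unique(seq):
    # Sketch of the strategy Tim Peters describes: try a hash
    # table, fall back to sorting, and finally to brute force.

    # 1. Hashing: O(n), preserves order, needs hashable elements.
    try:
        seen = {}
        result = []
        for x in seq:
            if x not in seen:      # raises TypeError if unhashable
                seen[x] = True
                result.append(x)
        return result
    except TypeError:
        pass

    # 2. Sorting: O(n log n), needs orderable elements; note that
    # the original order is NOT preserved -- one of the open
    # questions above.
    try:
        srt = sorted(seq)
    except TypeError:
        pass
    else:
        result = []
        for x in srt:
            if not result or result[-1] != x:
                result.append(x)
        return result

    # 3. Brute force: O(n**2), always works, preserves order.
    result = []
    for x in seq:
        if x not in result:
            result.append(x)
    return result
```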

Reference Implementation

    I've written the brute force version.  It's about 20 lines of code
    in listobject.c.  Adding support for hash-table and sort-based
    duplicate removal would only take another hour or so.


References

    [1] http://groups.google.com/groups?as_q=duplicates&as_ugroup=comp.lang.python

    [2] Tim Peters unique() entry in the Python cookbook:


Copyright

    This document has been placed in the public domain.

Local Variables:
mode: indented-text
indent-tabs-mode: nil
fill-column: 70
End: