Sat, 20 Jul 2002 07:11:36 -0400 (EDT)
On Sat, 20 Jul 2002, Tim Peters wrote:
> After reading all those papers, I couldn't resist taking a crack at a new
> algorithm that might be practical, and have something you might call a
> non-recursive adaptive stable natural mergesort / binary insertion sort
Great work, Tim! I've got several Python implementations of stable sorts
that I can now retire.
> If it weren't for the ~sort column, I'd seriously suggest replacing the
> samplesort with this.
If duplicate keys cannot be handled more efficiently, why not add a
list.stable_sort() method? That way the user gets to decide whether they
want to pay the ~sort tax. If that case is fixed later, there is little
harm in having list.sort == list.stable_sort.
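For readers following along, a quick sketch of what stability buys: elements
that compare equal keep their original relative order. (This uses the key=
argument for brevity, which is a convenience not assumed by the thread; the
data here is invented purely for illustration.)

```python
# Records sharing the same sort key (the count field).
records = [("apple", 2), ("banana", 1), ("cherry", 2), ("date", 1)]

# Sort by count only; the names are never compared.
records.sort(key=lambda r: r[1])

# A stable sort guarantees ties preserve input order:
# the two 1-count records stay as banana-then-date, the
# two 2-count records stay as apple-then-cherry.
assert records == [("banana", 1), ("date", 1),
                   ("apple", 2), ("cherry", 2)]
```

An unstable sort (like the samplesort being discussed) is free to emit the
tied records in either order, which is exactly what a list.stable_sort()
method would rule out.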
> 2*N extra bytes isn't as bad as it might sound, given
> that, in the absence of massive object duplication, each list element
> consumes at least 12 bytes (type pointer, refcount and value) + 4 bytes for
> the list pointer. Add 'em all up and that's a 13% worst-case temp memory
It doesn't bother me in the slightest (and I tend to sort big things).
13% is a reasonable trade-off for stability.
The OPAL Group - Enterprise Systems Architect
Voice: (216) 986-0710 x 19 E-mail: email@example.com
Fax: (216) 986-0714 WWW: http://www.theopalgroup.com