[Python-ideas] Contiguous-array-based ordering for OrderedDict

Andrew Barnert abarnert at yahoo.com
Sat Dec 12 19:06:56 EST 2015


On Dec 12, 2015, at 15:20, Serhiy Storchaka <storchaka at gmail.com> wrote:
> 
>> On 13.12.15 00:11, Andrew Barnert via Python-ideas wrote:
>>> On Dec 12, 2015, at 04:19, Serhiy Storchaka <storchaka at gmail.com> wrote:
>>> 
>>>>> On 12.12.15 13:06, Andrew Barnert via Python-ideas wrote:
>>>>> On Dec 12, 2015, at 01:27, Franklin? Lee <leewangzhong+python at gmail.com> wrote:
>>>>> 2. You don't have to shift for each deletion. You can wait until some
>>>>> threshold is reached before shifting, and maybe that will spread out
>>>>> the cost of shifting among deletions enough to make an impact.
>>>> 
>>>> I don't think this would help nearly as much as you think. Keeping up to half the array for deleted slots also makes things more complex and doubles the extra storage. But, worst of all, whatever savings you get in the shift time (minus the increased time for the extra logic) are going to be lost in the increased search time: if the array is twice as big, the Kth real element is at index 2K. So, instead of K + (N-K), you have 2K + x(N-K), and no matter how good that x speedup is, the 2K part is going to kill you.
>>>> 
>>>> Plus, an API that spreads out about the same work but does it in large batches is less usable. Imagine that you had two functions, one which always takes 1.1ms, one which usually takes 100us but every 1000 times it takes 1s (and freezes up the interpreter while doing so). Which one would you choose?
>>> 
>>> All this is true for an ordinary dict too. A hashtable needs extra storage, and iterating needs to skip unused entries. From time to time you need to resize the storage and fill the new storage with O(N) complexity. Due to the unpredictability of hashes of strings and pointers, the number of collisions and resizings is not predictable, and the time of work can vary significantly from run to run.
>> 
>> That's not the same at all. The benefit of dict isn't that it's amortized, it's that it's amortized _constant_ instead of linear, which is a huge improvement, worth a bit of chunkiness. Going from linear to amortized linear, as the OP proposes, doesn't get you anything good for the cost.
>> 
>> I think you may be forgetting that dict doesn't rehash "from time to time" as this proposal does, it only does it when you grow. And it expands exponentially rather than linearly, so even in the special case where 100% of your operations are inserts, it's still only going to happen a few times.
> 
> Either you or I misunderstood the OP's proposition. An array of indices needs to be "compacted" (which costs O(n)) only after at least O(n) addition/deletion/moving operations. Therefore the amortized cost is constant even in the worst case.

You've got two halves of a process that both take N/2 time. If you turn one of those halves into amortized constant time, your total time is still linear. (Maybe in some cases you've made it twice as fast, but that's still linear. And in this case he hasn't really made it twice as fast anyway, because the first half, the linear search, now has twice as much to search.) So he's adding linear chunkiness without reducing the total time.

That isn't the same as a normal dict, which adds only logarithmic chunkiness (it does its extra O(N) work log N times, instead of N/C times), and it does so in service of reducing the total time from linear to amortized constant.
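
To put rough numbers on "only a few times": here's a quick sketch counting how many O(N) rebuilds an exponentially growing table does over N inserts (the 2/3 load threshold and 4x growth factor below are illustrative assumptions, not CPython's exact policy):

    def resizes_for(n_inserts, initial=8, growth=4):
        # Count the O(capacity) rebuilds an exponentially growing table
        # performs while absorbing n_inserts insertions.
        capacity, used, resizes = initial, 0, 0
        for _ in range(n_inserts):
            used += 1
            if used * 3 >= capacity * 2:   # resize at ~2/3 load
                capacity *= growth
                resizes += 1
        return resizes

    for n in (1000, 10000, 100000):
        print(n, resizes_for(n))   # 4, 6, 8 rebuilds -- growth is logarithmic

Because the capacity grows geometrically, the total rebuild work over N inserts is O(N), i.e. amortized constant per insert. Batching the proposed compaction the same way only amortizes the compaction itself; the linear search per operation is still there, which is the point above.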

It's possible that there's some other optimization here that the OP didn't include (or that I didn't notice and reply to here) that you've intuitively assumed, which can get us back to constant time. If so, that would be great. (But if there were a way to get constant-time inserts and deletes at arbitrary locations within an array, I suspect nobody would have invented linked lists; am I missing something?)

> All this looks similar to Raymond's proof-of-concept for a compact dictionary (ordering is a side effect). [1]

But his design only preserves ordering if you never delete (or move, but he doesn't have an API for that). For example:

    >>> d = Dict(((1,1), (2,2), (3,3), (4,4)))
    >>> list(d)
    [1, 2, 3, 4]
    >>> del d[1]
    >>> list(d)
    [4, 2, 3]
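
Here's a minimal sketch of the kind of delete that produces that reordering: to keep the entries array dense, the last entry gets moved into the freed slot (this is simplified, not Raymond's actual recipe code):

    class CompactMapping:
        # A toy "compact dict": a real dict maps keys to positions in a
        # dense entries list. Deleting fills the hole with the last entry,
        # which is O(1) but changes the iteration order.
        def __init__(self, items=()):
            self._index = {}        # key -> position in self._entries
            self._entries = []      # (key, value) pairs in slot order
            for k, v in items:
                self[k] = v

        def __setitem__(self, key, value):
            if key in self._index:
                self._entries[self._index[key]] = (key, value)
            else:
                self._index[key] = len(self._entries)
                self._entries.append((key, value))

        def __delitem__(self, key):
            pos = self._index.pop(key)
            last = self._entries.pop()       # take the last entry...
            if pos < len(self._entries):
                self._entries[pos] = last    # ...and drop it into the hole
                self._index[last[0]] = pos

        def __iter__(self):
            return (k for k, _ in self._entries)

    d = CompactMapping(((1, 1), (2, 2), (3, 3), (4, 4)))
    del d[1]
    print(list(d))   # [4, 2, 3] -- constant-time delete, order not preserved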

And constant-time deletes (or moves) without destroying order is exactly the problem that the hash table of linked-list nodes in the current design is there to solve. If you don't care about that problem, you don't need that design; but if you're building a general replacement for OrderedDict, you need it (or something to replace it).
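
For contrast, here's a bare-bones sketch of the dict-of-linked-list-nodes idea (simplified; not the actual OrderedDict implementation): the dict gives O(1) access to a key's node, and unlinking that node is O(1) and leaves the order of the surviving keys untouched:

    class _Node:
        __slots__ = ("key", "value", "prev", "next")

    class LinkedOrderedMapping:
        def __init__(self):
            self._nodes = {}                  # key -> _Node
            self._head = self._tail = None

        def __setitem__(self, key, value):
            node = self._nodes.get(key)
            if node is not None:
                node.value = value
                return
            node = _Node()
            node.key, node.value = key, value
            node.prev, node.next = self._tail, None
            if self._tail is None:
                self._head = node             # first entry
            else:
                self._tail.next = node
            self._tail = node
            self._nodes[key] = node

        def __delitem__(self, key):
            node = self._nodes.pop(key)       # O(1) lookup of the node...
            if node.prev is None:
                self._head = node.next
            else:
                node.prev.next = node.next    # ...and O(1) unlink
            if node.next is None:
                self._tail = node.prev
            else:
                node.next.prev = node.prev

        def __iter__(self):
            node = self._head
            while node is not None:
                yield node.key
                node = node.next

    d = LinkedOrderedMapping()
    for k in (1, 2, 3, 4):
        d[k] = k
    del d[1]
    print(list(d))   # [2, 3, 4] -- order of the remaining keys is preserved

That's the trade the current design makes: an extra node (and hash entry) per key buys constant-time deletes that keep the order.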

Of course an ordered mapping that doesn't support deletions (or moves) could still be useful for lots of things (e.g., most of the time, when people say they want ordering for **kw or a class dict, they don't need to delete or move anything).

> [1] https://code.activestate.com/recipes/578375-proof-of-concept-for-a-more-space-efficient-faster/

