From https://github.com/python/cpython/blob/master/Objects/odictobject.c:

"""
In order to preserve O(1) performance for node removal (finding nodes), we
must do better than just looping through the linked-list.  Here are options
we've considered:
1. use a second dict to map keys to nodes (a la the pure Python version).
2. keep a simple hash table mirroring the order of dict's, mapping each key
   to the corresponding node in the linked-list.
3. use a version of shared keys (split dict) that allows non-unicode keys.
4. have the value stored for each key be a (value, node) pair, and adjust
   __getitem__(), get(), etc. accordingly.
The approach with the least performance impact (time and space) is #2,
mirroring the key order of dict's dk_entries with an array of node pointers.
While lookdict() and friends (dk_lookup) don't give us the index into the
array, we make use of pointer arithmetic to get that index.  **An alternative
would be to refactor lookdict() to provide the index, explicitly exposing
the implementation detail.**  We could even just use a custom lookup function
for OrderedDict that facilitates our need.  However, both approaches are
significantly more complicated than just using pointer arithmetic.
The catch with mirroring the hash table ordering is that we have to keep
the ordering in sync through any dict resizes.  However, that order only
matters during node lookup.  We can simply defer any potential resizing
until we need to do a lookup.
"""

On Sat, Aug 1, 2020 at 10:06 PM Wes Turner <wes.turner@gmail.com> wrote:
Is there any reason that these features couldn't be added to OrderedDict (which is a linked list)?
https://github.com/python/cpython/blob/master/Objects/odictobject.c

On Sat, Aug 1, 2020, 9:13 PM Inada Naoki <songofacandy@gmail.com> wrote:
On Sun, Aug 2, 2020 at 2:34 AM Christopher Barker <pythonchb@gmail.com> wrote:
>
> On Sat, Aug 1, 2020 at 2:28 AM Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
>>
>> On Sat, 1 Aug 2020 at 03:00, Inada Naoki <songofacandy@gmail.com> wrote:
>>>
>>> Please teach me if you know any algorithm which has no hole, O(1)
>>> deletion, preserving insertion order, and efficient and fast as array.
>
>
> I would think the goal here would be to re-order once in a while to remove the holes. But that would take time, of course, so you wouldn't want to do it on every deletion. But when?
>
> One option, maybe too specialized: re-pack the array when an indexing operation is made -- since that operation is O(N) anyway. That would also address the performance of multiple indexing operations -- if you made a bunch of indexing operations in a row without deleting (which would be the case if this is an alternative to copying into a Sequence first), then the first one would repack the internal array (presumably faster than making a copy) and the rest would have O(1) access (a rough sketch of this idea follows below).
>
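Here is a toy model of the lazy-repacking idea Christopher describes (purely illustrative -- this is not how CPython's dict stores its entries, and the names are made up): deletions leave holes, and the holes are squeezed out only when a positional lookup is requested, so a run of indexing operations pays the O(n) cost once.

```python
class LazyCompactingEntries:
    """Toy model of 'repack on first index': deletions leave holes,
    and the holes are squeezed out only when positional access is needed."""

    _HOLE = object()

    def __init__(self):
        self._entries = []   # (key, value) pairs, or _HOLE markers
        self._index = {}     # key -> position in _entries
        self._holes = 0

    def __setitem__(self, key, value):
        pos = self._index.get(key)
        if pos is None:
            self._index[key] = len(self._entries)
            self._entries.append((key, value))
        else:
            self._entries[pos] = (key, value)

    def __delitem__(self, key):
        pos = self._index.pop(key)
        self._entries[pos] = self._HOLE   # leave a hole; O(1)
        self._holes += 1

    def _compact(self):
        # The O(n) repack described above: done at most once per run of
        # indexing operations, not on every deletion.
        if self._holes:
            self._entries = [e for e in self._entries if e is not self._HOLE]
            self._index = {k: i for i, (k, v) in enumerate(self._entries)}
            self._holes = 0

    def nth_item(self, n):
        self._compact()               # first call after deletions is O(n)...
        return self._entries[n]       # ...subsequent calls are O(1)
```

The catch, as Inada notes next, is that the compaction step is a real mutation even though the caller only appears to be reading.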

Repacking is mutation, and mutating a dict while iterating over it breaks the iterator.
But `d.items()[42]` doesn't look like mutation.
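For context, this is the guard already present in today's CPython: a dict iterator refuses to continue once the dict has been mutated under it.

```python
# Current CPython behaviour: mutating a dict invalidates live iterators.
d = {"a": 1, "b": 2, "c": 3}
it = iter(d)
next(it)                     # iteration has started
del d["b"]                   # mutate the dict while the iterator is alive
try:
    next(it)
except RuntimeError as exc:
    print(exc)               # dictionary changed size during iteration
```

A repack hidden inside a hypothetical `d.items()[42]` would rearrange the very entries an active iterator is walking over, even though the call reads like a pure lookup.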

> Given that this use case doesn't appear to be very important, I doubt it's worth it, but it seems it would be possible.
>
> Another thought -- could the re-packing happen whenever the entire dict is iterated through? Though maybe there's no way to know when that's going to happen -- all you get are the individual calls for the next one, yes?
>

You are right; it couldn't.


--
Inada Naoki  <songofacandy@gmail.com>