Is 100,000 entries big for a dictionary?

rturpin at my-deja.com
Sun Dec 31 23:26:20 CET 2000


In article <mailman.978298994.20463.python-list at python.org>,
  "Tim Peters" <tim.one at home.com> wrote:
> Provided you have sufficient RAM, the theoretical performance
> (for lookup, insert and delete) remains O(1) expected case,
> O(N) worst case, regardless of dict size.  This is also what
> I've seen in practice:  Python ensures that at least 1/3 of
> the internal hash table entries are unused (it uses an
> extreme form of open addressing ..

Thank you for the comments. I assume it uses the usual
extensible hashing mechanism, extending the hash value
by one bit and doubling the table when it is time to
grow? I'm glad to read that it uses open addressing. I
think that makes more sense for tables that reside in
memory and that have no strong granularity constraints.
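To make the growth-and-probing behavior concrete, here is a toy
open-addressing table in Python. It is only a sketch of the idea
described above: the linear probe sequence and the exact growth
trigger are illustrative assumptions, not CPython's actual
implementation (CPython uses a more elaborate perturbed probe).

```python
class ToyDict:
    """Toy open-addressing hash table (illustrative, not CPython's)."""

    def __init__(self):
        self.size = 8                  # power-of-two table size
        self.slots = [None] * self.size
        self.used = 0

    def _probe(self, key):
        # Linear probing for simplicity: start at hash(key) mod size,
        # step forward until we find the key or an empty slot.
        i = hash(key) % self.size
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.size
        return i

    def __setitem__(self, key, value):
        i = self._probe(key)
        if self.slots[i] is None:
            self.used += 1
        self.slots[i] = (key, value)
        # Grow when more than 2/3 full, so at least 1/3 of the
        # slots always stay unused (per the figure quoted above).
        if self.used * 3 > self.size * 2:
            self._resize()

    def _resize(self):
        old = [e for e in self.slots if e is not None]
        self.size *= 2                 # double: hash gains one bit
        self.slots = [None] * self.size
        self.used = 0
        for k, v in old:
            self[k] = v                # rehash into the larger table

    def __getitem__(self, key):
        entry = self.slots[self._probe(key)]
        if entry is None:
            raise KeyError(key)
        return entry[1]
```

With short probe chains and a table kept mostly empty, each lookup
touches O(1) slots on average, which is where the expected-case
constant time comes from.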

> if-you-do-have-frequent-collisions-i'll-consider-it-a-bug-ly y'rs

What hash function does it use?
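For what it's worth, the hash values themselves are observable from
the interpreter via the built-in hash(); the algorithms behind it
are CPython implementation details, so the comments below are only
what one can see empirically, not a specification.

```python
# In CPython, small integers hash to themselves.
print(hash(42))

# Strings and tuples also hash to plain machine integers; the exact
# values depend on the CPython version.
print(hash("spam"))
print(hash((1, 2)))
```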

Russell





