DBM scalability

George Sakkis gsakkis at rutgers.edu
Sat Oct 22 04:59:49 CEST 2005

"Paul Rubin" <http://phr.cx@NOSPAM.invalid> wrote:

> "George Sakkis" <gsakkis at rutgers.edu> writes:
> > I'm trying to create a dbm database with around 4.5 million entries
> > but the existing dbm modules (dbhash, gdbm) don't seem to cut
> > it. What happens is that the more entries are added, the more time
> > per new entry is required, so the complexity seems to be much worse
> > than linear. Is this to be expected
> No, not expected.  See if you're using something like db.keys() which
> tries to read all the keys from the db into memory, or anything like that.

It turns out it has nothing to do with Python or the dbm modules. The same program runs linearly on a
different box and platform, so I guess it has to do with the OS and/or the hard disk.
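For anyone who wants to check this on their own system, a minimal benchmark sketch along these lines can show whether per-entry insertion cost stays flat or grows with database size. This uses the Python 3 `dbm` module (the thread above used the older dbhash/gdbm modules); the batch sizes, key/value shapes, and the `time_batches` helper are illustrative choices, not anything from the original program.

```python
import dbm
import os
import tempfile
import time

def time_batches(path, n_batches=5, batch_size=10_000):
    """Insert batches of entries and return the wall time of each batch.

    If per-entry cost is roughly constant, the batch times should stay
    roughly flat; a steadily growing curve suggests worse-than-linear
    behavior (whether from the dbm backend, the OS, or the disk).
    """
    times = []
    db = dbm.open(path, "n")  # "n": always create a new, empty database
    try:
        key = 0
        for _ in range(n_batches):
            start = time.perf_counter()
            for _ in range(batch_size):
                db[str(key).encode()] = b"x" * 32
                key += 1
            times.append(time.perf_counter() - start)
    finally:
        db.close()
    return times

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        for i, t in enumerate(time_batches(os.path.join(d, "bench"))):
            print(f"batch {i}: {t:.3f}s")
```

Comparing the printed batch times on the slow and fast machines would localize the problem without waiting for all 4.5 million inserts.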


More information about the Python-list mailing list