Most efficient way to build very large dictionaries

Roger Binns rogerb at rogerbinns.com
Wed Dec 24 11:44:59 CET 2008



Martin wrote:
> I'd think he's talking about in memory SQLite Databases, this way you
> should be quite fast _and_ could dump all that to a persistent
> storage...

I was just talking about regular on-disk SQLite databases.  In terms of
priming the pump, you can open, read, and close the whole database file,
which pulls it into the operating system's cache buffers, or just let
SQLite do its thing.
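A minimal sketch of that open/read/close "priming" step, assuming a
hypothetical database file (a tiny throwaway one is created here just so
there is something to read):

```python
import os
import sqlite3
import tempfile

# Create a small throwaway database to stand in for the real file.
# The name example.db is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "example.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE example(k INTEGER, v TEXT)")
con.executemany("INSERT INTO example VALUES (?, ?)",
                [(i, str(i)) for i in range(1000)])
con.commit()
con.close()

# Open/read/close the whole file, discarding the data; the side
# effect is that the OS now has the file's pages cached.
bytes_read = 0
with open(path, "rb") as f:
    while True:
        chunk = f.read(1024 * 1024)  # 1 MiB at a time
        if not chunk:
            break
        bytes_read += len(chunk)
```

Subsequent SQLite reads of the same file are then served from the OS
cache rather than the disk.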

For anyone who is curious about what Martin is referring to: SQLite also
supports keeping a database entirely in memory (there is no file at that
point).  You refer to it with the special filename :memory:.  As an
example, you can copy the table 'example' into memory using:

  ATTACH ':memory:' AS memdb;
  CREATE TABLE memdb.memexample AS SELECT * FROM example;
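The same thing from Python, via the standard sqlite3 module.  The
on-disk database and its table 'example' are built here as stand-ins;
the filename is hypothetical:

```python
import os
import sqlite3
import tempfile

# Stand-in on-disk database with a table 'example' to copy from.
path = os.path.join(tempfile.mkdtemp(), "main.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE example(k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO example VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(5)])
con.commit()

# Attach an in-memory database and copy the table into it.
con.execute("ATTACH ':memory:' AS memdb")
con.execute("CREATE TABLE memdb.memexample AS SELECT * FROM example")

count = con.execute("SELECT COUNT(*) FROM memdb.memexample").fetchone()[0]
```

Queries against memdb.memexample now never touch the disk; the attached
database vanishes when the connection is closed.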

As with Python, all this stuff is premature optimization.  Get it
working right first and then try tweaks to improve performance.

Roger
More information about the Python-list mailing list