Kosovo database; Python speed

aaron_watters at my-dejanews.com
Wed Apr 21 15:58:51 EDT 1999


In article <371DB466.32097FE5 at pop.vet.uu.nl>,
  M.Faassen at vet.uu.nl wrote:
> Richard van de Stadt wrote:
> >
> > Suppose we were going to make a database to help Kosovars locate
> > their family members. This would probably result in hundreds of
> > thousands of records (say 1 record (file) per person).
> >
> > Would Python be fast enough to manage this data, make queries on
> > the data, or should compiled programs be used?
>
> Depends on what queries you make, but if used smartly, Python can
> probably be fast enough. From what I've heard, Gadfly (a database
> implemented in Python) is fast enough.

It also depends on what you expect the queries to be.  For this
kind of problem "grep" might work pretty well, actually.
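
For concreteness, here's a minimal sketch of that grep-style approach
in Python, assuming one record per line in a flat text file.  The file
name and the surname searched for are just placeholders.

# Scan a flat file of person records for a name fragment (grep-style).
import sys

def search(path, fragment):
    """Print every record line containing the fragment, ignoring case."""
    fragment = fragment.lower()
    with open(path, encoding="utf-8") as records:
        for line in records:
            if fragment in line.lower():
                sys.stdout.write(line)

if __name__ == "__main__":
    search("persons.txt", "gashi")   # placeholder file and name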

Gadfly is currently best when you are doing a lot of exact matches,
so if you were matching on last/first name by exact spelling, I'd
expect Gadfly to be okay on a sufficiently large machine.  For
inexact matches, however, I'd recommend other methods, such as grep.
Generally, if all you have is one big table, something like Gadfly
is less compelling than if you have many interrelated structures to
manage and query.  Also look at dbm, gdbm, bplustree, and similar.

  http://www.chordate.com/gadfly.html
  http://starship.skyport.net/crew/aaron_watters/bplustree/
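
Along the same lines, here's a minimal sketch of an exact-match lookup
using Python's standard dbm module; the "last,first" key scheme and the
sample record are invented for illustration.  A Gadfly query over the
same data would be subject to the same exact-spelling limitation.

# Build an index keyed on exact spelling, then look a person up.
import dbm

db = dbm.open("persons.db", "c")          # create/open the index file
db["Gashi,Adem"] = "last seen: Prizren"   # one entry per person
db.close()

db = dbm.open("persons.db", "r")
record = db.get("Gashi,Adem")             # misspell the key and you get None
if record is not None:
    print(record.decode())
db.close()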

   -- Aaron Watters

===
% ping elvis
elvis is alive
% _
