simultaneous multiple requests to very simple database

Thomas Bartkus tom at dtsam.com
Wed Jan 19 00:33:14 CET 2005


"Eric S. Johansson" <esj at harvee.org> wrote in message
news:mailman.871.1106087623.22381.python-list at python.org...
<snip>
> 99.9 percent of what I do (and I suspect this could be true for others)
> could be satisfied by a slightly enhanced super dictionary with a record
> level locking.

BUT - Did you not also mention:
    > Estimated number of records will be in the ballpark of 50,000 to
    > 100,000 in his early phase and 10 times that in the future.  Each
    > record will run about 100 to 150 bytes.
And
    > The very large dictionary must be accessed from
    > multiple processes simultaneously

And
   > I need to be able to lock records
   > within the very large dictionary when records are written to

And
   > although I must complete processing in less than 90 seconds.

And - the hole in the bottom of the hull -
   all of the above using "a slightly enhanced super dictionary".

*Super* dictionary??? *Slightly* enhanced???
Have you attempted any feasibility tests?  Are you running a Cray?

There are many database systems available, and Python (probably) has free
bindings to every one of them.  Whichever one you might choose, it would add
simplicity, not complexity, to what you are attempting.  The problems you
mention are precisely those that databases are meant to solve.  The only
tough (impossible?) requirement you have is that you don't want to use one.
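To make the point concrete: here is a minimal sketch of a dictionary-style
wrapper over a database, using the sqlite3 module from modern Python's
standard library (pysqlite, at the time of this post).  SQLite handles the
file locking and per-write transactions itself, so multiple processes can
share the same database file.  The class and table names here are my own
illustration, not anything from the original post.

```python
import sqlite3
from collections.abc import MutableMapping

class SQLiteDict(MutableMapping):
    """A dict-like key/value store backed by an SQLite table.

    SQLite serializes writers via its own file locking, so several
    processes may open the same database file concurrently; each
    write below runs in its own transaction.
    """

    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)"
        )
        self.conn.commit()

    def __getitem__(self, key):
        row = self.conn.execute(
            "SELECT value FROM kv WHERE key = ?", (key,)
        ).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def __setitem__(self, key, value):
        # "with self.conn" commits on success, rolls back on error
        with self.conn:
            self.conn.execute(
                "INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)",
                (key, value),
            )

    def __delitem__(self, key):
        with self.conn:
            cur = self.conn.execute("DELETE FROM kv WHERE key = ?", (key,))
            if cur.rowcount == 0:
                raise KeyError(key)

    def __iter__(self):
        for (key,) in self.conn.execute("SELECT key FROM kv"):
            yield key

    def __len__(self):
        return self.conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
```

For 100,000 records of 150 bytes this is trivial work for any real database;
the point is that the locking and concurrency you get "for free" here is
exactly what a hand-rolled super dictionary would have to reinvent.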

When you write that "super dictionary", be sure to post code!
I could use one of those myself.
Thomas Bartkus
