Is it more CPU-efficient to read/write config file or read/write sqlite database?

Tim Chase python.list at
Mon Dec 16 01:07:54 CET 2013

On 2013-12-16 10:12, Cameron Simpson wrote:
> On 14Dec2013 10:15, Tim Chase <python.list at> wrote:
> Annoyingly, sqlite:
>   + only lets one process access the db at a time, taking you back
> to a similar situation as with config files

Is this a Python limitation?  According to the docs[1], it's not a
sqlite limitation (except, as noted, on non-locking filesystems like
NFS).
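A minimal sketch of what the docs describe: two independent connections to the same database file (standing in for two processes) can both read and write, with sqlite's own file locking serializing the writes.  The path and table here are illustrative, and the `timeout` argument just tells the driver to wait on a briefly locked database rather than fail immediately.

```python
import os
import sqlite3
import tempfile

# A throwaway database file that two connections will share,
# the way two separate processes would.
path = os.path.join(tempfile.mkdtemp(), "shared.db")

conn_a = sqlite3.connect(path, timeout=5.0)  # wait up to 5s if locked
conn_b = sqlite3.connect(path, timeout=5.0)

conn_a.execute(
    "CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)")
conn_a.commit()

# Both connections can write; each write transaction briefly
# excludes the other, but neither is shut out of the database.
conn_a.execute("INSERT OR REPLACE INTO config VALUES ('colour', 'blue')")
conn_a.commit()

conn_b.execute("INSERT OR REPLACE INTO config VALUES ('size', 'large')")
conn_b.commit()

rows = sorted(conn_b.execute("SELECT key, value FROM config"))
print(rows)  # [('colour', 'blue'), ('size', 'large')]
```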

>   + only lets you access the db from the same thread in which it
>     was opened, outstandingly annoying; I've had to gratuitously
>     refactor code because of this

I believe that limitation does hold, though it depends on the
build options with which sqlite was compiled [2].  It might also
play out somewhat differently from within Python, where the GIL
could possibly prevent actual OS-level-thread issues.
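For what it's worth, Python's sqlite3 module enforces this restriction itself by default: a connection used from a thread other than the one that opened it raises `sqlite3.ProgrammingError` unless you pass `check_same_thread=False`.  A small sketch:

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:")  # check_same_thread defaults to True

errors = []

def use_from_other_thread():
    # With the default check, this raises rather than touching
    # the connection from the wrong thread.
    try:
        conn.execute("SELECT 1")
    except sqlite3.ProgrammingError as exc:
        errors.append(exc)

t = threading.Thread(target=use_from_other_thread)
t.start()
t.join()

print(len(errors))  # 1

# Passing check_same_thread=False lifts the module's check; the
# caller then owns the job of serializing access (e.g. with a lock).
relaxed = sqlite3.connect(":memory:", check_same_thread=False)
```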

>   + traditionally, sqlite is extreme fsync() happy; forces a disc
>     level flush on each commit - extremely slow on busy databases,
>     not to mention hard on drives

I'd say this is the right thing for a DB to do.  If it comes back
from a commit() call, the data had better be on that disk, barring a
failure of the physical hardware.  If it comes back from a commit()
and data gets lost because of a power failure, something is wrong.
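The usual way to keep durability while dodging the per-commit flush cost is to batch work into fewer transactions, so one fsync covers many rows.  A rough sketch (paths and row counts are illustrative; the timing numbers will vary by machine, so only the row count is checked):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (n INTEGER)")

# One flush-backed commit per row -- this is where the fsync cost bites.
t0 = time.perf_counter()
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    conn.commit()
per_row = time.perf_counter() - t0

# One commit for the whole batch -- a single flush covers 100 rows.
t0 = time.perf_counter()
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.commit()
batched = time.perf_counter() - t0

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 200
```

(sqlite also offers `PRAGMA synchronous=OFF` to skip the flush entirely, but then a power cut can corrupt the database, which is exactly the failure mode being argued against above.)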

> > * well, except on NFS shares and other places where file-locking
> > is unreliable
> Backing off to config files, making a lock directory is NFS safe.
> So is opening a lock file for write with zero permissions (low level
> open with mode=0).
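The lock-directory trick works because `mkdir` is an atomic create-or-fail operation, even over NFS.  A minimal sketch of that approach (the lock path is illustrative):

```python
import os
import tempfile

def acquire_lock(lockdir):
    """Try to take the lock; mkdir either creates the directory
    atomically or fails because it already exists."""
    try:
        os.mkdir(lockdir)
        return True
    except FileExistsError:
        return False

def release_lock(lockdir):
    os.rmdir(lockdir)

lock = os.path.join(tempfile.mkdtemp(), "config.lock")

first = acquire_lock(lock)    # True: we created the directory
second = acquire_lock(lock)   # False: the lock is already held
release_lock(lock)
third = acquire_lock(lock)    # True again after release
print(first, second, third)   # True False True
```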

Interesting.  I haven't used NFS in a long time for anything other
than quick experiments, so it's nice to file this away.  Do you have
a link to some official docs corroborating what you state?


