On Oct 29, 2003, at 10:45 AM, Brad Knowles wrote:
That said, storing meta-data in a real database and then using external filesystem techniques for actually accessing the data should give you the best of both worlds -- the speed of access of the database, and the reliability and well-understood access and backup mechanisms of filesystems.
Hint: look at what INN did when they implemented cycbufs.
Effectively, you create 1 to N files up front, or create them as needed. Each file is N bytes long, pre-allocated at creation time. When you store messages, they're written into the file sequentially (or any other way you want; if you want to get into best-fit allocation and turn this into a malloc()-style heap, be my guest).
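The write side is only a few lines. Here's a rough sketch in C (the 64 MB box size and the databox_* names are just made up for illustration, and a real version would want better error handling):

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

#define BOX_SIZE (64 * 1024 * 1024)     /* fixed size of each "data box" */

/* Create a box file and reserve its full size up front.  ftruncate()
   just sets the length; a real version might write zeros or call
   posix_fallocate() so the blocks really are allocated on creation. */
int databox_create(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, BOX_SIZE) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Append a message at the next free offset and return that offset,
   or -1 if the box is full (the caller then opens a new box). */
off_t databox_append(int fd, off_t *next_free, const char *msg, size_t len)
{
    off_t off = *next_free;
    if (off + (off_t)len > BOX_SIZE)
        return -1;
    if (pwrite(fd, msg, len, off) != (ssize_t)len)
        return -1;
    *next_free += len;
    return off;     /* store (filename, off, len) as the metadata */
}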
Metadata to access the info is then a filename, an lseek() offset into the file, and the number of bytes to read, plus your normal identifying info. It's fast, it makes efficient use of file pointers, it avoids the worst aspects of the Unix filesystem, and I'm amazed nobody ever thinks to use it for other purposes (or that it took Usenet people that long to discover it; I suggested a simpler variant back in the '80s and was told inodes are our friends...)
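The read path is the same idea in reverse: the database hands you (filename, offset, length) and you pread() exactly those bytes. Again just a sketch, with databox_read() as a made-up name:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Fetch one message given the metadata row from the database.
   Returns a NUL-terminated buffer the caller must free(), or NULL. */
char *databox_read(const char *path, off_t offset, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    char *buf = malloc(len + 1);
    if (buf == NULL) {
        close(fd);
        return NULL;
    }

    if (pread(fd, buf, len, offset) != (ssize_t)len) {
        free(buf);
        close(fd);
        return NULL;
    }
    buf[len] = '\0';

    close(fd);
    return buf;
}

One open descriptor per box is all you ever need, which is the whole point: the kernel only ever sees a handful of big files instead of millions of little ones.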
You can even do expiration/purges/etc. if you want, by moving data around and updating the pointers.
I've even thought of using it as the backing store for a picture library. With a nice relational database and a series of these "data boxes", I think you can store data in the best and fastest possible way...