On Mon, Dec 1, 2014 at 3:13 PM, Chris Angelico <rosuav@gmail.com> wrote:
On Tue, Dec 2, 2014 at 2:03 AM, David Wilson <dw+python-ideas@hmmz.org> wrote:
>> before it would be useful to include in the stdlib, along with some
>> motivating use cases
>
> One example would be a continuous auction, like a stock exchange order
> book. In that case, efficient enumeration is desirable by all of
> account ID, order ID, or (price, time).

> For small numbers of entries, it'd be simpler to just sort and filter
> on demand; for large numbers of entries, you should probably be using
> a database, which will have these sorts of facilities. Is there a
> mid-range where it's better to keep it all in memory, but it's too
> slow to sort on demand?

My current work project keeps ~8 GB of data in RAM (and we're looking at 64-128 GB servers to get us through the next 3 years). Sorting on demand would be way too slow, but it doesn't need to be in a database either: it can be reconstructed from an event stream, and running a DB server is extra ops. Even an in-memory DB like SQLite would be an unnecessary extra layer.
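
For what it's worth, the rebuild step is nothing exotic; a minimal sketch of replaying an event stream into a plain dict might look something like this (the event shapes and field names here are made up for illustration, not taken from the actual project):

    # Hypothetical sketch: rebuild in-memory state by replaying events.
    # Event shapes and field names are invented for illustration.
    from collections import namedtuple

    Order = namedtuple("Order", "order_id account_id price qty")

    def replay(events):
        """Replay ("add", Order) / ("cancel", order_id) events into a dict."""
        orders = {}  # order_id -> Order
        for kind, payload in events:
            if kind == "add":
                orders[payload.order_id] = payload
            elif kind == "cancel":
                orders.pop(payload, None)
        return orders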

Currently it just has maps as needed to speed up queries, and I don't think it fits the use case for a multi-index (MI) container, but other projects might. With the amount of RAM in modern machines, "keep it all in memory" is viable for lots of use cases with otherwise large values of N.
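
To put David's order-book example next to the "maps as needed" approach: each extra index is just another dict (or sorted list) over the same records, kept in step by hand. A rough, stdlib-only sketch (field names are illustrative, not from the real project):

    import bisect

    class OrderBook:
        """Toy multi-index store: the same orders reachable by several keys."""

        def __init__(self):
            self.by_order_id = {}    # order_id -> order dict
            self.by_account = {}     # account_id -> list of order dicts
            self.by_price_time = []  # ((price, timestamp), order_id), kept sorted

        def add(self, order):
            self.by_order_id[order["order_id"]] = order
            self.by_account.setdefault(order["account_id"], []).append(order)
            bisect.insort(self.by_price_time,
                          ((order["price"], order["timestamp"]), order["order_id"]))

The fiddly part is keeping all the indexes consistent on removal and update, which is roughly the argument for a ready-made MI container.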

- Jeff