On Tue, Dec 2, 2014 at 1:20 PM, Antoine Pitrou email@example.com wrote:
collections.OrderedDict uses its own slow linked list. I suppose lru_cache is micro-optimized; also, it's thread-safe.
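To illustrate the OrderedDict side of that comparison, here is a minimal LRU cache sketch built on `OrderedDict.move_to_end`/`popitem` (a well-known recipe, not the C-optimized path that functools uses):

```python
from collections import OrderedDict

class LRU:
    """Minimal LRU cache on top of OrderedDict (illustrative sketch only)."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        # Mark the key as most recently used.
        self._data.move_to_end(key)
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            # Evict the least recently used entry (front of the dict).
            self._data.popitem(last=False)

cache = LRU(maxsize=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # "a" is now most recently used
cache.put("c", 3)   # evicts "b"
```

Unlike `functools.lru_cache`, this sketch does no locking, which is part of why the two implementations aren't interchangeable as-is.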
One of the proposals is to expose these implementations in a way that:
1. The implementation used for functools.lru_cache can be chosen/switched.
2. The current implementation in lru_cache can be used for other purposes.
Grako parsers, for example, don't use lru_cache for memoization because they need finer control over what gets cached.
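As a sketch of what "finer control" could mean (this is hypothetical, not Grako's actual mechanism), a memoizing decorator can take a predicate that decides which results are worth caching, something lru_cache doesn't allow:

```python
import functools

def memoize(should_cache):
    """Hypothetical memoizer: a predicate decides which results get cached."""
    def deco(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(*args):
            if args in cache:
                return cache[args]
            result = fn(*args)
            # Only cache results the caller deems cacheable.
            if should_cache(args, result):
                cache[args] = result
            return result

        wrapper._cache = cache  # exposed for inspection/clearing
        return wrapper
    return deco

# Example: skip caching failed parses (results of None).
@memoize(should_cache=lambda args, result: result is not None)
def parse(text):
    return text.upper() if text else None
```

With lru_cache, every call's result goes into the cache; here, `parse("")` returns None without polluting the cache.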