keep=True defeats the purpose of a caching strategy.  A re.compile call within some code somewhere is typically not in a position to know whether it is going to be called a lot.

I think the code, as things are now, with dynamic construction at runtime based on a simple test is the best of both worlds: it avoids the more complicated cost of calling re.compile and going through its cache logic.  If the caching is ever improved in the future to be faster, the code can arguably be simplified to use re.match directly and rely solely on the caching.

i.e.: don't change anything.

On Sat, Mar 23, 2013 at 4:03 PM, Bruce Leban <> wrote:
To summarize:

- compiling regexes is slow, so applications frequently compile them once and save them
- compiling all the regexes at startup slows down startup, even for regexes that may never be used
- a common pattern is to compile once at time of use, and it would be nice to optimize this pattern
- the regex library has a cache feature, which means this will frequently be optimized automatically
- however, there's no guarantee that the regex you care about won't fall out of the cache.
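For reference, the cache feature mentioned above is observable from the public API: the module-level functions (re.match, re.search, etc.) compile and cache patterns internally, and re.purge() empties that cache, forcing recompilation on the next use:

```python
import re

# Module-level functions consult an internal compiled-pattern cache,
# so repeated calls with the same pattern string avoid recompiling.
m1 = re.match(r'\d+', '123')

# re.purge() clears the internal cache; the next call must recompile.
# This is the eviction risk the summary refers to -- the cache is
# bounded, and entries can also be dropped when it fills up.
re.purge()
m2 = re.match(r'\d+', '456')
```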

I think this addresses all the issues better than compute_lazy:

re.compile(r'...', keep=True)

When keep=True is specified, the regex library keeps the cached value for the lifetime of the process. The regex is computed only once on first use and you don't need to create a place to store it. Furthermore, if you use the same regex in more than one place, once with keep=True, the other uses will automatically be optimized.
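Note that keep=True is a proposal, not an existing re.compile parameter. As a hedged sketch of the intended semantics, the same behavior can be approximated today with a user-level wrapper (compile_keep and _kept are hypothetical names I've chosen for illustration):

```python
import re

_kept = {}  # process-lifetime cache; entries are never evicted


def compile_keep(pattern, flags=0):
    # Approximation of the proposed keep=True: compile on first use,
    # then retain the compiled object for the life of the process.
    key = (pattern, flags)
    if key not in _kept:
        _kept[key] = re.compile(pattern, flags)
    return _kept[key]
```

Because every call site with the same pattern and flags gets the same retained object, repeated uses are optimized automatically, which is the behavior the proposal describes.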

--- Bruce
Latest blog post: Alice's Puzzle Page
Learn how hackers think:

Python-ideas mailing list