Adding a "call_once" decorator to functools
Hello,

After a great discussion in python-ideas[1][2] it was suggested that I cross-post this proposal to python-dev to gather more comments from those who don't follow python-ideas.

The proposal is to add a "call_once" decorator to the functools module that, as the name suggests, calls a wrapped function once, caching the result and returning it on subsequent invocations. The rationale behind this proposal is that:

1. Developers are using "lru_cache" to achieve this right now, which is less efficient than it could be
2. Special-casing "lru_cache" to account for zero-arity functions isn't trivial, and we shouldn't endorse lru_cache as a way of achieving "call_once" semantics
3. Implementing a thread-safe (or even non-thread-safe) "call_once" decorator is non-trivial
4. It complements the lru_cache and cached_property functions currently present in functools.

The specifics of the decorator would be:

1. The wrapped function is guaranteed to be called only once when first invoked, even by concurrent threads
2. Only functions with no arguments can be wrapped; otherwise an exception is thrown
3. There is a C implementation to keep speed parity with lru_cache

I've included a naive implementation below (that doesn't meet any of the specifics listed above) to illustrate the general idea of the proposal:

```
import functools

def call_once(func):
    sentinel = object()  # in case the wrapped function returns None
    obj = sentinel

    @functools.wraps(func)
    def inner():
        nonlocal obj
        if obj is sentinel:
            obj = func()
        return obj

    return inner
```

I'd welcome any feedback on this proposal, and if the response is favourable I'd love to attempt to implement it.

1. https://mail.python.org/archives/list/python-ideas@python.org/thread/5OR3LJO...
2. https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-...
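To illustrate the intended semantics, here is the naive decorator above in a self-contained, runnable form (the `connection` function is a made-up example, and this sketch is still not thread-safe):

```python
import functools

def call_once(func):
    sentinel = object()  # distinguishes "not yet called" from a cached None
    obj = sentinel

    @functools.wraps(func)
    def inner():
        nonlocal obj
        if obj is sentinel:
            obj = func()
        return obj

    return inner

calls = []

@call_once
def connection():
    calls.append(1)       # stands in for some expensive one-time setup
    return {"connected": True}

a = connection()
b = connection()
assert a is b             # the same cached object is returned every time
assert len(calls) == 1    # the wrapped function body ran exactly once
```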
On 27Apr2020 2237, tom@tomforb.es wrote:
2. Special casing "lru_cache" to account for zero arity methods isn't trivial and we shouldn't endorse lru_cache as a way of achieving "call_once" semantics
Why not? It's a decorator, isn't it? Just make it check for number of arguments at decoration time and return a different object. That way, people can decorate their functions now and get correct behaviour (I assume?) on 3.8 and earlier, and also a performance improvement on 3.9, without having to do any version checking. This part could even be written in Python.
3. Implementing a thread-safe (or even non-thread safe) "call_once" method is non-trivial
Agree that this is certainly true. But presumably we should be making lru_cache thread safe if it isn't.
4. It complements the lru_cache and cached_property methods currently present in functools.
It's unfortunate that cached_property doesn't work at module level (as was pointed out on the other threads - thanks for linking those, BTW). Cheers, Steve
Why not? It's a decorator, isn't it? Just make it check for number of arguments at decoration time and return a different object.
It’s not that it’s impossible, but I don’t think the current implementation makes it easy (https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Lib/functools.py#L771). You’d ideally want to skip creating all these objects and special-case `user_function` having no parameters, but then you have an issue with `cache_info()` being passed `cache_len()`. So maybe it’s simplest to use the `cache` dictionary with a single static key, but then you’re not really helping much, or to avoid this method altogether, which seemed pretty messy.

The C implementation seemed easier to adapt - you could re-use the `cache` member (https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Modules/_functoolsmodule.c#L1192) and store the result of the function call, but that also seemed sub-optimal as the `root` member doesn’t make much sense to be there.

At least, that was my line of thought. It basically seemed that it would be more trouble than it was potentially worth, and it might be better to spend my time on `call_once` than special-casing `lru_cache`.
But presumably we should be making lru_cache thread safe if it isn’t.
lru_cache is indeed thread-safe but it doesn’t guarantee that the wrapped function is only called _once_ per unique set of arguments. It only ensures that the internal state of the cache is not corrupted by concurrent accesses.
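This can be demonstrated deterministically. In this sketch, a barrier holds both threads inside the wrapped function body, so neither can return and populate the cache before the other has already entered:

```python
import threading
from functools import lru_cache

calls = []
barrier = threading.Barrier(2)

@lru_cache(maxsize=None)
def setup():
    calls.append(1)
    barrier.wait()  # neither thread can return (and fill the cache)
                    # until both have entered the function body
    return object()

threads = [threading.Thread(target=setup) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(calls) == 2  # the cache stayed consistent, but the body ran twice
```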
It's unfortunate that cached_property doesn't work at module level
It is indeed, but a solution that works generally in any function defined at the module level or not would be good to have.
On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number of arguments at decoration time and return a different object.
It’s not that it’s impossible, but I don’t think the current implementation makes it easy
This is the line I'd change: https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323...

At this point, you could inspect the user_function object and choose a different wrapper than _lru_cache_wrapper if it takes zero arguments. Though you'd likely still end up with a lot of the code being replicated.

You're probably right to go for the C implementation. If the Python implementation is correct, then best to leave the inefficiencies there and improve the already-fast version.

Looking at https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c it seems the fast path for no arguments could be slightly improved, but it doesn't look like it'd be much. (I'm deliberately not saying how I'd improve it in case you want to do it anyway as a learning exercise, and because I could be wrong :) )

Equally hard to say how much more efficient a new API would be, so unless it's written already and you have benchmarks, that's probably not the line of reasoning to use. An argument that people regularly get this wrong and can't easily get it right with what's already there is most compelling - see the recent removeprefix/removesuffix discussions if you haven't.

Cheers,
Steve
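The decoration-time dispatch Steve describes could be sketched in pure Python roughly like this (the `cached` name is hypothetical, and a real version would also need to replicate `cache_info()`/`cache_clear()` on the zero-argument path):

```python
import functools
import inspect

def cached(func):
    """Hypothetical decorator: picks a cheap zero-argument wrapper
    when possible, and falls back to lru_cache otherwise."""
    if not inspect.signature(func).parameters:
        sentinel = object()
        result = sentinel

        @functools.wraps(func)
        def wrapper():
            nonlocal result
            if result is sentinel:
                result = func()
            return result

        return wrapper
    # functions with arguments get the normal cache
    return functools.lru_cache(maxsize=None)(func)

@cached
def constant():
    return [1, 2, 3]

@cached
def double(x):
    return 2 * x

assert constant() is constant()  # zero-arity path caches one object
assert double(21) == 42          # lru_cache path still works
```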
On 2020-04-28 00:26, Steve Dower wrote:
On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number of arguments at decoration time and return a different object.
It’s not that it’s impossible, but I don’t think the current implementation makes it easy
This is the line I'd change: https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323...
At this point, you could inspect the user_function object and choose a different wrapper than _lru_cache_wrapper if it takes zero arguments. Though you'd likely still end up with a lot of the code being replicated.
Making a stdlib function completely change behavior based on a function signature feels a bit too magic to me. I know lots of libraries do this, but I always thought of it as a cool little hack, good for debugging and APIs that lean toward being simple to use rather than robust. The explicit `call_once` feels more like API that needs to be supported for decades.
On 28Apr2020 1243, Petr Viktorin wrote:
Making a stdlib function completely change behavior based on a function signature feels a bit too magic to me. I know lots of libraries do this, but I always thought of it as a cool little hack, good for debugging and APIs that lean toward being simple to use rather than robust. The explicit `call_once` feels more like API that needs to be supported for decades.
I've been trying to clarify whether call_once is intended to be the functional equivalent of lru_cache (without the stats-only mode). If that's not the behaviour, then I agree, magically switching to it is no good.

But if it's meant to be the same but just more efficient, then we already do that kind of thing all over the place (free lists, strings, empty tuple singleton, etc.). And I'd argue that it's our responsibility to select the best implementation automatically, as it saves libraries from having to pull the same tricks.

Cheers,
Steve
Hi,

A pattern that I have used multiple times is to compute an object attribute only once and cache the result on the object. Dummy example:

---
class X:
    def __init__(self, name):
        self.name = name
        self._cached_upper = None

    def _get(self):
        if self._cached_upper is None:
            print("compute once")
            self._cached_upper = self.name.upper()
        return self._cached_upper

    upper = property(_get)

obj = X("victor")
print(obj.upper)
print(obj.upper)  # use cached value
---

It would be interesting to be able to replace the obj.upper property with a plain attribute (to reduce the performance overhead of calling the _get() method), but "obj.upper = value" raises an error since the property prevents setting the attribute. I understood that the proposed @call_once would store the cached value in the function namespace.

Victor
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-leave@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/5CFUCM4W...
Code of Conduct: http://python.org/psf/codeofconduct/
-- Night gathers, and now my watch begins. It shall not end until my death.
Victor Stinner wrote:
Hi, A pattern that I used multiple times is to compute an object attribute only once and cache the result into the object. Dummy example:
How is that different from https://docs.python.org/3/library/functools.html?highlight=cached_property#f... -Brett
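For reference, here is the same dummy example rewritten with functools.cached_property (available since Python 3.8). It stores the computed value in the instance's `__dict__`, so later lookups bypass the descriptor entirely, which is exactly the "replace the property with an attribute" behaviour Victor describes:

```python
from functools import cached_property

calls = []

class X:
    def __init__(self, name):
        self.name = name

    @cached_property
    def upper(self):
        calls.append(1)  # "compute once"
        return self.name.upper()

obj = X("victor")
assert obj.upper == "VICTOR"
assert obj.upper == "VICTOR"    # second access hits the cached attribute
assert len(calls) == 1          # the method body ran only once
assert "upper" in obj.__dict__  # the value now lives on the instance
```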
Oh, I didn't know about this Python 3.8 feature (@functools.cached_property). It does exactly what I needed, cool!

Victor
Would either of the existing solutions work for you?

```
from functools import cached_property, lru_cache

class X:
    def __init__(self, name):
        self.name = name

    @cached_property
    def title(self):
        print("compute title once")
        return self.name.title()

    @property
    @lru_cache
    def upper(self):
        print("compute upper once")
        return self.name.upper()

obj = X("victor")
print(obj.title)
print(obj.title)
print(obj.upper)
print(obj.upper)
```
On 4/30/20 4:47 PM, raymond.hettinger@gmail.com wrote:
Would either of the existing solutions work for you?
```
class X:
    def __init__(self, name):
        self.name = name

    @cached_property
    def title(self):
        print("compute title once")
        return self.name.title()

    @property
    @lru_cache
    def upper(self):
        print("compute upper once")
        return self.name.upper()
```
The second one seems a bit dangerous in that it will erroneously keep objects alive until they are either ejected from the cache or until the class itself is collected (plus only 128 objects would be in the cache at one time): https://bugs.python.org/issue19859

Thanks for the concrete example. AFAICT, it doesn't require (and probably shouldn't have) a lock to be held for the duration of the call. Would it be fair to say that 100% of your needs would be met if we just added this to the functools module?

call_once = lru_cache(maxsize=None)

I am -0 on adding `call_once = lru_cache(maxsize=None)` here. I feel like it could be misleading in that people might think that it ensures that the function is called exactly once (it reminds me of the FnOnce trait (https://doc.rust-lang.org/std/ops/trait.FnOnce.html) in Rust), and all it buys us is a nice way to advertise "here's a use case for lru_cache".

That said, in any of the times I've had one of these "call exactly one time" situations, the biggest constraint I've had is that I always wanted the return value to be the same object so that `f(x) is f(x)`, but I've never had a situation where it was *required* that the function be called exactly once, so I rarely if ever have bothered to get that property.

I suppose I could imagine a situation where calling the function mutates or consumes an object as part of the call, like:

```
class LazyList:
    def __init__(self, some_iterator):
        self._iter = some_iterator
        self._list = None

    @call_once
    def as_list(self):
        self._list = list(self._iter)
        return self._list
```

But I think it's just speculation to imagine anyone needs that or would find it useful, so I'm in favor of waiting for someone to chime in with a concrete use case where this property would be valuable.

Best,
Paul
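The keep-alive problem Paul mentions can be shown directly. In this small sketch, the lru_cache on the unbound function is shared by all instances and keyed on `self`, so it holds a strong reference to every instance it has seen:

```python
import gc
import weakref
from functools import lru_cache

class A:
    @property
    @lru_cache(maxsize=None)
    def value(self):
        return 42

a = A()
assert a.value == 42      # populates the cache with key (a,)

ref = weakref.ref(a)
del a
gc.collect()
assert ref() is not None  # the instance is kept alive by lru_cache's cache
```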
01.05.20 01:23, Paul Ganssle пише:
```
class LazyList:
    def __init__(self, some_iterator):
        self._iter = some_iterator
        self._list = None

    @call_once
    def as_list(self):
        self._list = list(self._iter)
        return self._list
```
call_once is not applicable here, because it is only for functions which do not have arguments, but as_list() takes the self argument.
tom@tomforb.es wrote:
I would like to suggest adding a simple “once” method to functools. As the name suggests, this would be a decorator that would call the decorated function, cache the result and return it with subsequent calls.
It seems like you would get just about everything you want with one line:

```
call_once = lru_cache(maxsize=None)
```

which would be used like this:

```
@call_once
def welcome():
    len('hello')
```
Using lru_cache like this works but it’s not as efficient as it could be - in every case you’re adding lru_cache overhead despite not requiring it.
You're likely imagining more overhead than there actually is. Used as shown above, the lru_cache() is astonishingly small and efficient. Access time is slightly cheaper than writing d[()] where d={(): some_constant}.

The infinite_lru_cache_wrapper() just makes a single dict lookup and returns the value.¹ The lru_cache_make_key() function just increments the refcount of the empty args tuple and returns it.² And because it is a C object, calling it will be faster than for a Python function that just returns a constant, "lambda: some_constant()". This is very, very fast.

Raymond

¹ https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L87...
² https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L80...
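Raymond's one-liner in action (the `settings` function is a made-up example; note this shows the single-threaded behaviour only):

```python
from functools import lru_cache

call_once = lru_cache(maxsize=None)

calls = []

@call_once
def settings():
    calls.append(1)              # stands in for expensive initialisation
    return {"debug": False}

assert settings() is settings()  # the same dict object comes back each time
assert len(calls) == 1           # single-threaded: the body ran once
```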
Hey Raymond,

Thanks for your input here! A new method wouldn’t be worth adding purely for performance reasons then, but there is still an issue around semantics and locking.

Should we encourage/document `lru_cache` as the way to do `call_once`? If so, then I guess that’s suitable, but people have brought up that it might be hard to discover and that it doesn’t actually ensure the function is called once.

The reason I bring this up is that I’ve seen several ad-hoc `call_once` implementations recently, and creating one is surprisingly complex for someone who’s not that experienced with Python. So I think there’s room to improve the discoverability of lru_cache as an “almost” `call_once` alternative, or room for a dedicated method that might re-use bits of the `lru_cache` implementation.

Tom
On 4/29/2020 3:55 AM, Tom Forbes wrote:
Hey Raymond, Thanks for your input here! A new method wouldn’t be worth adding purely for performance reasons then, but there is still an issue around semantics and locking.
One thing I don't understand about the proposed @call_once (or whatever it's called): why is locking a concern here any more than it's a concern for @lru_cache? Is there something special about it? Or, if locking is a requirement for @call_once (maybe optionally), then wouldn't adding the same support to @lru_cache make sense? Eric
On Apr 29, 2020, at 12:55 AM, Tom Forbes <tom@tomforb.es> wrote:
Hey Raymond, Thanks for your input here! A new method wouldn’t be worth adding purely for performance reasons then, but there is still an issue around semantics and locking.
Right.
it doesn’t actually ensure the function is called once.
Let's be precise about this. The lru_cache() logic is:

1) if the function has already been called and the result is known, return the prior result :-)
2) call the underlying function
3) add the question/answer pair to the cache dict

You are correct that an lru_cache() wrapped function can be called more than once if, before step three happens, the wrapped function is called again, either by another thread or by a reentrant call. This is by design and means that lru_cache() can be wrapped around almost anything, reentrant or not. Also, calls to lru_cache() don't block across the function call, nor do they fail because another call is in progress. This makes lru_cache() easy to use and reliable, but it does allow the possibility that the function is called more than once.

The call_once() decorator would need different logic:

1) if the function has already been called and the result is known, return the prior result :-)
2) if the function has already been called, but the result is not yet known, either block or fail :-(
3) call the function, this cannot be reentrant :-(
4) record the result for future calls

The good news is that call_once() can guarantee the function will not be called more than once. The bad news is that task switches during step three will either get blocked for the duration of the function call or they will need to raise an exception. Likewise, it would be a mistake to use call_once() when reentrancy is possible.
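Raymond's reentrancy point is easy to demonstrate: because lru_cache() holds no lock across the call, a wrapped function can safely call itself before its own result has been cached. A minimal sketch:

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    # Reentrant: the recursive calls run before the outer call's
    # result has been added to the cache. lru_cache handles this fine.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

A call_once() that held a plain lock for the duration of the call would deadlock on the first recursive invocation here.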
The reason I bring this up is that I’ve seen several ad-hoc `call_once` implementations recently, and creating one is surprisingly complex for someone who’s not that experienced with Python.
Would it be fair to describe call_once() like this?

call_once() is just like lru_cache() but:

1) guarantees that a function never gets called more than once
2) will block or fail if a thread-switch happens during a call
3) only works for functions that take zero arguments
4) only works for functions that can never be reentrant
5) cannot make the one call guarantee across multiple processes
6) does not have instrumentation for number of hits
7) does not have a clearing or reset mechanism

Raymond
On Wed, 29 Apr 2020 12:01:24 -0700 Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
The call_once() decorator would need different logic:
1) if the function has already been called and result is known, return the prior result :-)
2) if function has already been called, but the result is not yet known, either block or fail :-(
It definitely needs to block.
3) call the function, this cannot be reentrant :-(
Right. The typical use for such a function is lazy initialization of some resource, not recursive computation.
4) record the result for future calls.
[...]
Would it be fair to describe call_once() like this?
call_once() is just like lru_cache() but:
1) guarantees that a function never gets called more than once
2) will block or fail if a thread-switch happens during a call
Definitely block.
3) only works for functions that take zero arguments
4) only works for functions that can never be reentrant
5) cannot make the one call guarantee across multiple processes
6) does not have instrumentation for number of hits
7) does not have a clearing or reset mechanism
Clearly, instrumentation and a clearing mechanism are not necessary. They might be "nice to have", but needn't hinder initial adoption of the API. Regards Antoine.
On Apr 29, 2020, at 4:20 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
On Wed, 29 Apr 2020 12:01:24 -0700 Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
The call_once() decorator would need different logic:
1) if the function has already been called and result is known, return the prior result :-)
2) if function has already been called, but the result is not yet known, either block or fail :-(
It definitely needs to block.
Do you think it is safe to hold a non-reentrant lock across an arbitrary user function? Traditionally, the best practice for locks was to acquire, briefly access a shared resource, and release promptly.
3) call the function, this cannot be reentrant :-(
Right. The typical use for such a function is lazy initialization of some resource, not recursive computation.
Do you have some concrete examples we could look at? I'm having trouble visualizing any real use cases and none have been presented so far. Presumably, the initialization function would have to take zero arguments, have a useful return value, must be called only once, not be idempotent, wouldn't fail if called in two different processes, can be called from multiple places, and can guarantee that a decref, gc, __del__, or weakref callback would never trigger a reentrant call. Also, if you know of a real world use case, what solution is currently being used. I'm not sure what alternative call_once() is competing against.
6) does not have instrumentation for number of hits
7) does not have a clearing or reset mechanism
Clearly, instrumentation and a clearing mechanism are not necessary. They might be "nice to have", but needn't hinder initial adoption of the API.
Agreed. It is inevitable that those will be requested, but they are incidental to the core functionality. Do you have any thoughts on what the semantics should be if the inner function raises an exception? Would a retry be allowed? Or does call_once() literally mean "can never be called again"? Raymond
On Thu, 30 Apr 2020 at 00:37, Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
On Apr 29, 2020, at 4:20 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
On Wed, 29 Apr 2020 12:01:24 -0700 Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
Also, if you know of a real world use case, what solution is currently being used. I'm not sure what alternative call_once() is competing against.
Of course this is meant to be something simple - so there are no "real world use cases" that are "wow, it could not have been done without it". I was one of the first to reply to this on "python-ideas", as I often need the pattern, but seldom worry about reentrancy or parallel calling. Most of the uses are just that: initialize a resource lazily, and just "lru_cache" could work. My first thought was for something more light-weight than lru_cache (and a friendlier name). So, one of the points where I'd likely have used this is here: https://github.com/jsbueno/terminedia/blob/d97976fb11ac54b527db4183497730883...
On Apr 30, 2020, at 6:32 AM, Joao S. O. Bueno <jsbueno@python.org.br> wrote:
Of course this is meant to be something simple - so there are no "real world use cases" that are "wow, it could not have been done without it".
The proposed implementation does something risky: it holds a non-reentrant lock across a call to an arbitrary user-defined function. The only reason to do so is to absolutely guarantee the function will never be called twice. We really should look for some concrete examples that require that guarantee, and it would be nice to see how that guarantee is being implemented currently (it isn't obvious to me). Also, most initialization functions I've encountered take at least one argument, so the proposed call_once() implementation wouldn't be usable at all.
I was one of the first to reply to this on "python-ideas", as I often need the pattern, but seldon worrying about rentrancy, or parallel calling. Most of the uses are just that: initalize a resource lazily, and just "lru_cache" could work. My first thought was for something more light-weight than lru_cache (and a friendlier name).
Right. Those cases could be solved trivially if we added:

    call_once = lru_cache(maxsize=None)

which is lightweight, very fast, and has a clear name. Further, it would work with multiple arguments and would not fail if the underlying function turned out to be reentrant. AFAICT, the *only* reason to not use the lru_cache() implementation is that in multithreaded code, it can't guarantee that the underlying function doesn't get called a second time while still executing the first time. If those are things you don't care about, then you don't need the proposed implementation; we can give you what you want by adding a single line to functools.
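Raymond's one-line alternative can be tried today; this sketch binds `call_once` as a local name (it is not an actual functools attribute), with a hypothetical `get_resource` initializer:

```python
import functools

call_once = functools.lru_cache(maxsize=None)  # the suggested one-liner

calls = 0

@call_once
def get_resource():
    # Stand-in for an expensive zero-argument initializer.
    global calls
    calls += 1
    return {"client": "expensive-to-build"}

assert get_resource() is get_resource()  # cached: same object returned
assert calls == 1                        # the underlying function ran once
```

For what it's worth, Python 3.9 later added functools.cache, which is exactly this alias.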
So, one of the points I'd likely have used this is here:
https://github.com/jsbueno/terminedia/blob/d97976fb11ac54b527db4183497730883...
Thanks — this is a nice example. Here's what it tells us:

1) There exists at least one use case for a zero argument initialization function
2) Your current solution is trivially easy, clear, and fast. "if CHAR_BASE: return".
3) This function returns None, so efforts by call_once() to block and await a result are wasted.
4) It would be inconsequential if this function were called twice.
5) A more common way to do this is to move the test into the lookup() function -- see below.

Raymond

-------------------------

```
CHAR_BASE = {}

def _init_chars():
    for code in range(0, 0x10ffff):
        char = chr(code)
        values = {}
        attrs = "name category east_asian_width"
        for attr in attrs.split():
            try:
                values[attr] = getattr(unicodedata, attr)(char)
            except ValueError:
                values[attr] = "undefined"
        CHAR_BASE[code] = Character(char, code, values["name"],
                                    values["category"],
                                    values["east_asian_width"])

def lookup(name_part, chars_only=False):
    if not CHAR_BASE:
        _init_chars()
    results = [char for char in CHAR_BASE.values()
               if re.search(name_part, char.name, re.IGNORECASE)]
    if not chars_only:
        return results
    return [char.char for char in results]
```
On Wed, Apr 29, 2020 at 9:36 PM Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
Do you have some concrete examples we could look at? I'm having trouble visualizing any real use cases and none have been presented so far.
This pattern occurs not infrequently in our Django server codebase at Instagram. A typical case would be that we need a client object to make queries to some external service, queries using the client can be made from various locations in the codebase (and new ones could be added any time), but there is noticeable overhead to the creation of the client (e.g. perhaps it does network work at creation to figure out which remote host can service the needed functionality) and so having multiple client objects for the same remote service existing in the same process is waste. Or another similar case might be creation of a "client" object for querying a large on-disk data set.
Presumably, the initialization function would have to take zero arguments,
Right, typically for a globally useful client object there are no arguments needed, any required configuration is also already available globally.
have a useful return value,
Yup, the object which will be used by other code to make network requests or query the on-disk data set.
must be called only once,
In our use cases it's more a SHOULD than a MUST. Typically if it were called two or three times in the process due to some race condition that would hardly matter. However if it were called anew for every usage that would be catastrophically inefficient.
not be idempotent,
Any function like the ones I'm describing can be trivially made idempotent by initializing a global variable and short-circuit returning that global if already set. But that's precisely the boilerplate this utility seeks to replace.
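The boilerplate Carl describes looks something like this (a sketch; the names `_client`, `_make_client`, and `get_client` are hypothetical):

```python
_client = None

def _make_client():
    # Stand-in for expensive client construction (e.g. network setup).
    return {"host": "example.invalid"}

def get_client():
    # Manual idempotent initialization: populate a module-level global
    # once, short-circuit on every later call. Not thread-safe by itself,
    # but at worst a race builds the client twice.
    global _client
    if _client is None:
        _client = _make_client()
    return _client
```

This is exactly the pattern a `call_once` decorator would collapse to one line.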
wouldn't fail if called in two different processes,
Separate processes would each need their own and that's fine.
can be called from multiple places,
Yes, that's typical for the uses I'm describing.
and can guarantee that a decref, gc, __del__, or weakref callback would never trigger a reentrant call.
"Guarantee" is too strong, but at least in our codebase use of Python finalizers is considered poor practice and they are rarely used, and in any case it would be extraordinarily strange for a finalizer to make use of an object like this that queries an external resource. So this is not a practical concern. Similarly it would be very strange for creation of an instance of a class to call a free function whose entire purpose is to create and return an instance of that very class, so reentrancy is also not a practical concern.
Also, if you know of a real world use case, what solution is currently being used. I'm not sure what alternative call_once() is competing against.
Currently we typically would use either `lru_cache` or the manual "cache" using a global variable. I don't think that practically `call_once` would be a massive improvement over either of those, but it would be slightly clearer and more discoverable for the use case.
Do you have any thoughts on what the semantics should be if the inner function raises an exception? Would a retry be allowed? Or does call_once() literally mean "can never be called again"?
For the use cases I'm describing, if the method raises an exception the cache should be left unpopulated and a future call should try again. Arguably a better solution for these cases is to push the laziness internal to the class in question, so it doesn't do expensive or dangerous work on instantiation but delays it until first use. If that is done, then a simple module-level instantiation suffices to replace the `call_once` pattern. Unfortunately in practice we are often dealing with existing widely-used APIs that weren't designed that way and would be expensive to refactor, so the pattern continues to be necessary. (Doing expensive or dangerous work at import time is a major problem that we must avoid, since it causes every user of the system to pay that startup cost in time and risk of failure, even if for their use the object would never be used.) Carl
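The retry semantics Carl describes are already what lru_cache() provides, since it does not cache exceptions; a failed call stores nothing and the next call tries again. A sketch with a hypothetical `flaky_init`:

```python
import functools

attempts = 0

@functools.lru_cache(maxsize=None)
def flaky_init():
    global attempts
    attempts += 1
    if attempts == 1:
        raise ConnectionError("transient failure")
    return "ready"

try:
    flaky_init()
except ConnectionError:
    pass  # the failure is not cached

assert flaky_init() == "ready"  # the retry runs the function again
assert attempts == 2            # one failed attempt, one success
```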
On Apr 30, 2020, at 10:44 AM, Carl Meyer <carl@oddbird.net> wrote:
On Wed, Apr 29, 2020 at 9:36 PM Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
Do you have some concrete examples we could look at? I'm having trouble visualizing any real use cases and none have been presented so far.
This pattern occurs not infrequently in our Django server codebase at Instagram. A typical case would be that we need a client object to make queries to some external service, queries using the client can be made from various locations in the codebase (and new ones could be added any time), but there is noticeable overhead to the creation of the client (e.g. perhaps it does network work at creation to figure out which remote host can service the needed functionality) and so having multiple client objects for the same remote service existing in the same process is waste.
Or another similar case might be creation of a "client" object for querying a large on-disk data set.
Thanks for the concrete example. AFAICT, it doesn't require (and probably shouldn't have) a lock to be held for the duration of the call. Would it be fair to say that 100% of your needs would be met if we just added this to the functools module?

    call_once = lru_cache(maxsize=None)

That's discoverable, already works, has no risk of deadlock, would work with multiple argument functions, has instrumentation, and has the ability to clear or reset. I'm still looking for an example that actually requires a lock to be held for a long duration.

Raymond
On Thu, Apr 30, 2020 at 3:12 PM Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
Thanks for the concrete example. AFAICT, it doesn't require (and probably shouldn't have) a lock to be held for the duration of the call. Would it be fair to say that 100% of your needs would be met if we just added this to the functools module?
call_once = lru_cache(maxsize=None)
That's discoverable, already works, has no risk of deadlock, would work with multiple argument functions, has instrumentation, and has the ability to clear or reset.
Yep, I think that's fair. We've never AFAIK had a problem with `lru_cache` races, and if we did, in most cases we'd be fine with having it called twice. I can _imagine_ a case where the call loads some massive dataset directly into memory and we really couldn't afford it being loaded twice under any circumstance, but even if we have a case like that, we don't do enough threading for it ever to have been an actual problem that I'm aware of.
I'm still looking for an example that actually requires a lock to be held for a long duration.
Don't think I can provide a real-world one from my own experience! Thanks, Carl
participants (15)

- Antoine Pitrou
- Brett Cannon
- Carl Meyer
- Eric V. Smith
- Greg Ewing
- Joao S. O. Bueno
- Paul Ganssle
- Petr Viktorin
- Raymond Hettinger
- raymond.hettinger@gmail.com
- Serhiy Storchaka
- Steve Dower
- Tom Forbes
- tom@tomforb.es
- Victor Stinner