
On 3 June 2017 at 20:59, Daniel Bershatsky <bepshatsky@yandex.ru> wrote:
Thank you for taking the time to write this up!
There is, thanks to context managers:

```
import contextlib

def foo(i):
    print(i)

def bar():
    with contextlib.ExitStack() as stack:
        stack.callback(foo, 42)
        print(3.14)

bar()
```

Now, defer is certainly pithier, but thanks to contextlib2, the above code can be used all the way back to Python 2.6, rather than being limited to running on 3.7+.

I was also motivated enough to *write* ExitStack() to solve this problem, but even I don't use it often enough to consider it worthy of being a builtin, let alone syntax. So while I'm definitely sympathetic to the use case (otherwise ExitStack wouldn't have a callback() method), "this would be useful" isn't a sufficient argument in this particular case - what's needed is a justification that this pattern of resource management is common enough to justify giving functions an optional implicit ExitStack instance and assigning a dedicated keyword for adding entries to it.

Alternatively, the case could be made that there's a discoverability problem, where folks aren't necessarily being pointed towards ExitStack as a dynamic resource management tool when that's what they need, and to consider what could be done to help resolve that (with adding a new kind of statement being just one of the options evaluated).

Cheers, Nick.

P.S. Nikolaus Rath has a more in-depth write-up of the utility of ExitStack here: https://www.rath.org/on-the-beauty-of-pythons-exitstack.html

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
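[Editor's note: one detail worth spelling out is that callbacks registered on an ExitStack run in last-in, first-out order when the stack unwinds, mirroring nested with statements. A minimal sketch (not from the thread; `release` and `work` are illustrative names):]

```python
import contextlib

def release(name):
    # Stand-in cleanup action; in real code this might close a file or socket.
    print(f"releasing {name}")

def work():
    with contextlib.ExitStack() as stack:
        stack.callback(release, "first")
        stack.callback(release, "second")
        print("working")
        # On exit, callbacks run in LIFO order: "second" before "first".

work()
```

Running this prints "working", then "releasing second", then "releasing first".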

It's also worth mentioning that the `defer` statement has come up in other contexts, and `defer` is already often used as an identifier (see https://mail.python.org/pipermail/python-ideas/2017-February/044682.html), so there are a lot of practical considerations for this to overcome even if it's deemed necessary (which I think Nick shows it probably shouldn't be).

--Josh

On Sat, Jun 3, 2017 at 8:24 AM Nick Coghlan <ncoghlan@gmail.com> wrote:

On 3 June 2017 at 22:24, Nick Coghlan <ncoghlan@gmail.com> wrote:
It occurred to me that I should elaborate a bit further here, and point out explicitly that one of the main benefits of ExitStack (and, indeed, the main reason it exists) is that it allows resource lifecycle management to be deterministic, *without* necessarily tying it to function calls or with statements.

The behave BDD test framework, for example, defines hooks that run before and after each feature and scenario, as well as before and after the entire test run. I use those to set up "scenario_cleanup", "_feature_cleanup" and "_global_cleanup" stacks as part of the testing context: https://github.com/leapp-to/prototype/blob/master/integration-tests/features...

If a test step implementor allocates a resource that needs to be cleaned up, they register it with "context.scenario_cleanup", and then the "after scenario" hook takes care of closing the ExitStack instance and cleaning everything up appropriately.

For me, that kind of situation is when I'm most inclined to reach for ExitStack, whereas when the cleanup needs align with the function call stack, I'm more likely to reach for contextlib.contextmanager or an explicit try/finally.

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
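[Editor's note: the hook arrangement Nick describes can be sketched as follows. The hook bodies here are hypothetical, but `before_scenario`/`after_scenario` are behave's standard hook names, and behave supplies the `context` object:]

```python
import contextlib

def before_scenario(context, scenario):
    # Give each scenario its own cleanup stack.
    context.scenario_cleanup = contextlib.ExitStack()

def after_scenario(context, scenario):
    # Closing the stack runs every registered cleanup, newest first.
    context.scenario_cleanup.close()

def step_allocate_tempdir(context):
    # A step implementation allocates a resource and registers its
    # cleanup, without needing to know when cleanup will happen.
    import shutil
    import tempfile
    path = tempfile.mkdtemp()
    context.scenario_cleanup.callback(shutil.rmtree, path)
    return path
```

The point of the pattern is exactly the decoupling mentioned above: the step that allocates a resource registers its cleanup, while the hook that actually runs the cleanups lives elsewhere.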

I agree that the stated use cases are better handled with ExitStack. One area where `defer` might be useful is in lazily evaluating global constants. For example, in a genomics library used at my work, one module involves compiling a large number of regular expressions and setting them as global constants in the module, like:

```
FOO1_re = re.compile(r'...')
FOO_TO_BAR_re = {foo: complicated_computation_of_regex(foo)
                 for foo in LONG_LIST_OF_THINGS}
...
```

This utility module is imported in a lot of places in the codebase, which meant that importing almost anything from our codebase involved precompiling all these regular expressions, which added around 500 ms to running anything (the test runner, manually testing code in the shell, etc.). It would be ideal to only do these computations if/when they are needed.

This is a more general issue than this specific example, e.g. for libraries which parse large data sources like pycountry (see my PR <https://bitbucket.org/flyingcircus/pycountry/pull-requests/10/improve-module...> for one possible ugly solution using proxy objects; the author instead went with the simpler, less general solution of manually deciding when the data is needed). See also django.utils.functional.lazy <https://docs.djangoproject.com/en/1.11/_modules/django/utils/functional/>, which is used extensively in the framework.

A statement like:

```
defer FOO = lengthy_computation_of_foo()
```

which deferred the lengthy computation until the value is actually used would make it easy to fix these issues without writing ugly hacks like proxy objects, or refactoring code into high-overhead cached properties or the like.

Alternatively, if there are less ugly or bespoke ways to handle this kind of issue, I'd be interested in hearing them.

Best, Lucas

On Sat, Jun 3, 2017 at 11:38 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
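[Editor's note: without new syntax, the cheapest fix for the regex example above is to move the compilation behind a memoised function, so import stays fast and the cost is paid on first use. A minimal sketch with a hypothetical pattern and function name:]

```python
import functools
import re

@functools.lru_cache(maxsize=None)
def foo1_re():
    # Compiled on the first call only; lru_cache returns the same
    # compiled pattern object on every later call.
    return re.compile(r"foo\d+")
```

Call sites then write `foo1_re().match(...)` instead of `FOO1_re.match(...)`. The trade-off is exactly the one Nathaniel raises later in the thread: the function-call spelling makes the possible expense visible to the caller.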

On Sun, Jun 4, 2017 at 12:23 AM, Lucas Wiman <lucas.wiman@gmail.com> wrote:
I think in general I'd recommend making the API for accessing these things a function call interface, so that it's obvious to the caller that some expensive computation might be going on. But if you're stuck with an attribute-lookup based interface, then you can use a __getattr__ hook to compute them the first time they're accessed:

```
class LazyConstants:
    def __getattr__(self, name):
        value = compute_value_for(name)
        setattr(self, name, value)
        return value
```

__getattr__ is only called as a fallback, so by setting the computed value on the object we make any future attribute lookups just as cheap as they would be otherwise.

You can get this behavior onto a module object by doing "sys.modules[__name__] = LazyConstants()" inside the module body, or by using a <del>hack</del> elegant bit of code like https://github.com/njsmith/metamodule/ (mostly the latter would only be preferred if you have a bunch of other attributes exported from this same module and moving all of them onto the LazyConstants object would be difficult).

-n -- Nathaniel J. Smith -- https://vorpus.org
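[Editor's note: a self-contained sketch of the trick above, using a throwaway module name and a hypothetical `compute_value_for`; a real module would assign over `sys.modules[__name__]` instead. The dunder guard is a common defensive addition so tooling that probes attributes like `__spec__` doesn't trigger spurious computations:]

```python
import sys

def compute_value_for(name):
    # Hypothetical stand-in for an expensive computation.
    print(f"computing {name}")
    return len(name)

class LazyConstants:
    def __getattr__(self, name):
        # Don't fabricate dunder attributes (__spec__, __path__, ...)
        # that the import machinery or tooling may probe for.
        if name.startswith("__"):
            raise AttributeError(name)
        # __getattr__ only runs when normal lookup fails, so the
        # setattr below makes every later lookup a plain dict hit.
        value = compute_value_for(name)
        setattr(self, name, value)
        return value

# A throwaway name keeps this sketch self-contained; inside a real
# module body you would write sys.modules[__name__] = LazyConstants().
sys.modules["lazy_constants_demo"] = LazyConstants()
```

After this runs, the first access to `sys.modules["lazy_constants_demo"].FOO` prints "computing FOO"; repeated accesses find the cached attribute and print nothing.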

On 4 June 2017 at 17:37, Nathaniel Smith <njs@pobox.com> wrote:
This reminds me: we could really use some documentation help in relation to https://bugs.python.org/issue22986, which made module __class__ attributes mutable in Python 3.5+.

At the moment, that is just reported in Misc/NEWS as "Issue #22986: Allow changing an object's __class__ between a dynamic type and static type in some cases.", which doesn't do anything to convey the significant *implications* of now being able to define module level properties as follows:

```
>>> x = 10
>>> x
10
>>> import __main__ as main
>>> main.x
10
>>> from types import ModuleType
>>> class SpecialMod(ModuleType):
...     @property
...     def x(self):
...         return 42
...
>>> main.__class__ = SpecialMod
>>> x
10
>>> main.x
42
```

(I know that's what metamodule does under the hood, but if the 3.5+ only limitation is acceptable, then it's likely to be clearer to just do this inline rather than hiding it behind a 3rd party API.)

One potentially good option would be a HOWTO guide on "Lazy attribute initialization" in https://docs.python.org/3/howto/index.html that walked through from the basics of using read-only properties with double-underscore prefixed result caching, through helper functions & methods decorated with lru_cache, all the way up to using __class__ assignment to enable the definition of module level properties.

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
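[Editor's note: the early steps of the HOWTO progression Nick sketches might look like this (hypothetical names, not from the thread): a read-only property with double-underscore prefixed result caching, plus the lru_cache equivalent for a module-level helper:]

```python
import functools

class Config:
    @property
    def table(self):
        # Double-underscore prefixed result caching: compute once,
        # stash the result on the instance, reuse it afterwards.
        try:
            return self.__table
        except AttributeError:
            self.__table = self._build_table()  # name-mangled cache slot
            return self.__table

    def _build_table(self):
        # Hypothetical expensive computation.
        return {i: i * i for i in range(10)}

# The next step in the progression: lru_cache memoises a module-level
# helper, so the computation runs at most once per process.
@functools.lru_cache(maxsize=None)
def shared_table():
    return {i: i * i for i in range(10)}
```

The final step, module-level properties via `__class__` assignment, is exactly the REPL session quoted above.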

Hello, 2017-06-04 Nathaniel Smith <njs@pobox.com> dixit:
Another solution is to use a Pyramid's-@reify-like decorator to make a caching non-data descriptor (i.e., the kind of descriptor that can be shadowed with an ordinary instance attribute):

```
class LazyConstants:

    @reify
    def FOO(self):
        return <complicated computation...>

    @reify
    def BAR(self):
        return <another complicated computation...>
```

In my work we just use @pyramid.decorator.reify [1], but the mechanism is so simple that you can always implement it by yourself [2], though providing some features related to __doc__/introspection-ability etc. may need some additional deliberation to do right... That may mean it would be worth adding to the standard library. Wouldn't it be?

Cheers. *j

[1] See: http://docs.pylonsproject.org/projects/pyramid/en/latest/api/decorator.html#...

[2] The gist of the implementation is just:

```
class lazyproperty(object):

    def __init__(self, maker):
        self.maker = maker

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = self.maker(instance)
        setattr(instance, self.maker.__name__, value)
        return value
```
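[Editor's note: for illustration, the gist above can be exercised as follows; the descriptor is reproduced verbatim so the sketch is self-contained, and the print call stands in for a hypothetical expensive computation:]

```python
class lazyproperty(object):
    # The non-data descriptor from the message above (it defines only
    # __get__, so an instance attribute of the same name shadows it).
    def __init__(self, maker):
        self.maker = maker

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = self.maker(instance)
        # Shadow the descriptor with a plain instance attribute, so
        # the maker never runs again for this instance.
        setattr(instance, self.maker.__name__, value)
        return value

class LazyConstants:
    @lazyproperty
    def FOO(self):
        print("computing FOO")  # stand-in for expensive work
        return 42
```

The first `LazyConstants().FOO` access runs the maker and prints; every later access on the same instance is an ordinary attribute lookup.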

On Sat, Jun 3, 2017 at 6:59 AM, Daniel Bershatsky <bepshatsky@yandex.ru> wrote:

> Or with usage of the defer keyword
IMHO, a block in which the intention of a `finally:` is not well understood needs refactoring. Some *old* code is like that, but it doesn't mean it's *bad*.

Then, as a matter of personal preference, I'm not comfortable with the way the *defer* idiom talks first about things that should be done last. It's a debt acquired without enough syntactic evidence ("Oh! Mi gosh! There were those defers at the start of the function I just changed").
From `import this`:

> Explicit is better than implicit.

-- Juancarlo *Añez*

Daniel Bershatsky <bepshatsky@yandex.ru> writes:
Related: "Python equivalent of golang's defer statement" https://stackoverflow.com/questions/34625089/python-equivalent-of-golangs-de...

participants (8)

- Akira Li
- Daniel Bershatsky
- Jan Kaliszewski
- Joshua Morton
- Juancarlo Añez
- Lucas Wiman
- Nathaniel Smith
- Nick Coghlan