On Jan 2, 2016, at 10:14, u8y7541 The Awesome Person firstname.lastname@example.org wrote:
In most decorator tutorials, it's taught using functions inside functions. Isn't this inefficient because every time the decorator is called, you're redefining the function, which takes up much more memory?
First, most decorators are only called once. For example:
    @lru_cache(maxsize=None)
    def fib(n):
        if n < 2:
            return n
        return fib(n-1) + fib(n-2)
The lru_cache function gets called once, and creates and returns a decorator function. That decorator function is passed fib, and creates and returns a wrapper function. That wrapper function is what gets stored in the globals as fib. It may then get called a zillion times, but it doesn't create or call any new function.
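To make that concrete, here is a minimal sketch of the same three-stage pattern (factory, decorator, wrapper) with a hypothetical simplified memoizer standing in for lru_cache; the crude cache-clearing eviction is an assumption for brevity, not how lru_cache actually works:

    from functools import wraps

    def memoize(maxsize=None):
        # Stage 1: the factory. Called exactly once, at decoration time.
        def decorator(func):
            # Stage 2: the decorator. Also called exactly once, with fib.
            cache = {}
            @wraps(func)
            def wrapper(n):
                # Stage 3: the wrapper. Called a zillion times, but it
                # defines no new functions on any of those calls.
                if n not in cache:
                    if maxsize is not None and len(cache) >= maxsize:
                        cache.clear()  # crude eviction, not real LRU
                    cache[n] = func(n)
                return cache[n]
            return wrapper
        return decorator

    @memoize()       # memoize() runs once and returns decorator
    def fib(n):      # decorator(fib) runs once and returns wrapper
        if n < 2:
            return n
        return fib(n-1) + fib(n-2)

After decoration, the name fib in the globals is bound to wrapper, and every recursive call goes through it.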
So, why should you care whether lru_cache is implemented with a function or a class? You're talking about a difference of a few dozen bytes, once in the entire lifetime of your program.
Plus, where do you get the idea that a function object is "much larger"? Each new function that gets built uses the same code object, globals dict, etc., so you're only paying for the cost of a function object header, plus a tuple of cell objects (pointers) for any state variables. Your alternative is to create a class instance header (maybe a little smaller than a function object header), and store all those state variables in a dict (33-50% bigger even with the new split-instance-dict optimizations).
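You can check this yourself with sys.getsizeof. The exact numbers vary by Python version and implementation, so the sketch below (with hypothetical names make_adder and Adder) just measures rather than asserting a winner; note that getsizeof doesn't count referenced objects, so the instance's __dict__ has to be added in by hand:

    import sys

    def make_adder(n):
        # Closure-based: the state n lives in a cell object
        # referenced from the function's __closure__ tuple.
        def add(x):
            return x + n
        return add

    class Adder:
        # Class-based: the state n lives in the instance dict.
        def __init__(self, n):
            self.n = n
        def __call__(self, x):
            return x + self.n

    f = make_adder(5)
    a = Adder(5)

    func_size = sys.getsizeof(f) + sum(
        sys.getsizeof(c) for c in f.__closure__)
    inst_size = sys.getsizeof(a) + sys.getsizeof(a.__dict__)
    print(func_size, inst_size)

Either way, both totals are a few hundred bytes at most, paid once per decoration.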
Anyway, I'm willing to bet that in this case, the function is ~256 bytes while the class is ~1024, so you're actually wasting rather than saving memory. But either way, it's far too little memory to care.
I prefer defining decorators as classes with __call__ overridden. Is there a reason why decorators are taught with functions inside functions?
Because most decorators are more concise, more readable, and easier to understand that way. And that's far more important than a micro-optimization which may actually be a pessimization but which even more likely isn't going to matter at all.
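Here's the same toy decorator written both ways, so you can compare for yourself; Trace and trace are hypothetical names, and the call-counting behavior is just an example:

    from functools import wraps

    # Class-based: works, but needs __init__ and __call__ boilerplate,
    # and you'd still have to copy __name__/__doc__ over by hand.
    class Trace:
        def __init__(self, func):
            self.func = func
            self.calls = 0
        def __call__(self, *args, **kwargs):
            self.calls += 1
            return self.func(*args, **kwargs)

    # Closure-based equivalent: shorter, and @wraps handles metadata.
    def trace(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            wrapper.calls += 1
            return func(*args, **kwargs)
        wrapper.calls = 0
        return wrapper

    @Trace
    def double(x):
        return 2 * x

    @trace
    def triple(x):
        return 3 * x

Both work, but most people find the nested-function version easier to read once they're used to closures, which is why tutorials teach it first.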