If you don't understand this, it may help to profile [fib(i) for i in range(10000)]. You'll see that the wrapper function gets called a ton of times, the wrapped function gets called 10000 times, and the factory function (which created the wrapper function) gets called 0 times.
Ah, I see now. Thank you. 

On Sat, Jan 2, 2016 at 7:50 PM, Andrew Barnert <abarnert@yahoo.com> wrote:
On Jan 2, 2016, at 19:00, u8y7541 The Awesome Person <surya.subbarao1@gmail.com> wrote:

The wrapper functions themselves, though, exist in a one-to-one
correspondence with the functions they're applied to - when you apply
functools.lru_cache to a function, the transient decorator produced by
the decorator factory only lasts as long as the execution of the
function definition, but the wrapper function lasts for as long as the
wrapped function does, and gets invoked every time that function is
called (and if a function is performance critical enough for the
results to be worth caching, then it's likely performance critical
enough to be thinking about micro-optimisations). (Nick Coghlan)
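For concreteness, the lifecycle Nick describes can be sketched like this (hypothetical factory/decorator/wrapper names, not lru_cache's actual internals; the call counter is just there to make the wrapper's persistence visible):

```python
import functools

def factory(maxsize):                  # decorator factory: runs once per use
    def decorator(func):               # transient: only lives during the
        calls = {"count": 0}           # execution of the function definition
        @functools.wraps(func)
        def wrapper(*args, **kwargs):  # lives as long as the wrapped function
            calls["count"] += 1
            return func(*args, **kwargs)
        wrapper.calls = calls
        return wrapper
    return decorator

@factory(maxsize=128)
def square(x):
    return x * x

square(3)
square(4)
print(square.calls["count"])  # → 2
```

The factory and the transient decorator each run exactly once, at definition time; only the wrapper is invoked on every call.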
 
Yes, that is what I was thinking of. Just like Quake's fast inverse square root. Even though it is a micro-optimization, it greatly affects how fast the game runs.

Of course micro-optimizations _can_ matter--when you're optimizing the work done in the inner loop of a program that's CPU-bound, even a few percent can make a difference.

But that doesn't mean they _always_ matter. Saving 50ns in some code that runs thousands of times per frame makes a difference; saving 50ns in some code that happens once at startup does not. That's why we have profiling tools: so you can find the bit of your program where you're spending 99% of your time doing something a billion times, and optimize that part.

And it also doesn't mean that everything that sounds like it should be lighter is worth doing. You have to actually test it and see. In the typical case where you're replacing one function object with one class object and one instance object, that's actually taking more space, not less.
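You can eyeball that comparison yourself; note that sys.getsizeof only counts the top-level object (not everything it references) and the numbers are CPython-implementation-specific, so this is a first approximation at best:

```python
import sys

def make_closure(n):
    def adder(x):          # closure form: one function object
        return x + n
    return adder

class Adder:               # class form: a class object plus an instance
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

f = make_closure(5)
g = Adder(5)
print(sys.getsizeof(f))                      # the one function object
print(sys.getsizeof(g), sys.getsizeof(Adder))  # instance + class
```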

But, as I explained, the function will _not_ be redefined and trashed every frame; it will be created one time. (Andrew Barnert)
 
Hmm... Nick says different...

No, Nick doesn't say different. Read it again. The wrapper function lives as long as the wrapped function lives. It doesn't get created anew each time you call it.

If you don't understand this, it may help to profile [fib(i) for i in range(10000)]. You'll see that the wrapper function gets called a ton of times, the wrapped function gets called 10000 times, and the factory function (which created the wrapper function) gets called 0 times.

This all suggests that if your application is severely memory
constrained (e.g. it's running on an embedded interpreter like
MicroPython), then it *might* make sense to incur the extra complexity
of using classes with a custom __call__ method to define wrapper
functions, over just using a nested function. (Nick Coghlan)

Yes, I was thinking of that when I started this thread, but this thread is just from my speculation. 
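For reference, the class-with-__call__ pattern Nick alludes to might look like this (a plain-CPython sketch of a generic memoizer; whether it actually saves memory over a closure on MicroPython is something you'd have to measure there):

```python
class Memoized:
    """Class-based wrapper: one instance takes the place of the
    nested wrapper function a closure-based decorator would create."""
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]

@Memoized
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))  # → 6765
```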

Nick is saying that there may be some cases where it might make sense to use a class. That doesn't at all support your idea that tutorials should teach using classes instead of functions. In general, using functions will be faster; in the most common case, using functions will use less memory; most importantly, in the vast majority of cases, it won't matter anyway.  Maybe a MicroPython tutorial should have a section on how running on a machine with only 4KB changes a lot of the usual tradeoffs, using a decorator as an example. But a tutorial on decorators should show using a function, because it's the simplest, most readable way to do it.
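That simplest, most readable form is just the nested-function decorator (a generic memoize sketch for illustration, not lru_cache's actual implementation):

```python
import functools

def memoize(func):
    """The plain closure-based decorator a tutorial would show."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040
```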



--
-Surya Subbarao