
On Tue, Oct 27, 2015 at 12:51 PM, Mark Lawrence <breamoreboy@yahoo.co.uk> wrote:
On 27/10/2015 17:55, Yury Selivanov wrote:
Serhiy,
On 2015-10-27 1:45 PM, Serhiy Storchaka wrote:
There is a known trick to optimize a function:
def foo(x, y=0, len=len, pack=struct.pack, maxsize=1<<BPF): ...
It has a side effect: it changes the function's signature. It would be nice to have a way to set a function's local variables at creation time without affecting its signature.
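For illustration, here is a minimal sketch of how the optimization arguments leak into the public signature (BPF is a stand-in value here, since it isn't defined in the snippet above):

    import inspect
    import struct

    BPF = 53  # stand-in value for the original's BPF

    def foo(x, y=0, len=len, pack=struct.pack, maxsize=1 << BPF):
        ...

    print(inspect.signature(foo))
    # (x, y=0, len=<built-in function len>, pack=<built-in function pack>,
    #  maxsize=9007199254740992)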
I see this a lot in all kinds of code. In my experience it doesn't actually speed things up in a measurable way.
Is the below code really much slower?
def foo(x, y=0):
    pack = struct.pack
    maxsize = 1 << BPF
    #CODE
If the #CODE is a tight, long-running loop, then no, because the loop will probably run much longer than the extra attribute lookup plus the extra bit shift on each "foo()" call. And if there is no tight loop, then you probably won't notice those optimizations anyway.
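A rough way to check that claim is to time both variants; a sketch along these lines (BPF and the pack call are stand-ins, since the original #CODE is elided):

    import struct
    import timeit

    BPF = 53  # stand-in constant

    def foo_plain(x, y=0):
        pack = struct.pack       # looked up on every call
        maxsize = 1 << BPF       # computed on every call
        return pack("<Q", x & (maxsize - 1))

    def foo_opt(x, y=0, pack=struct.pack, maxsize=1 << BPF):
        return pack("<Q", x & (maxsize - 1))

    print(timeit.timeit("foo_plain(12345)", globals=globals()))
    print(timeit.timeit("foo_opt(12345)", globals=globals()))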
I think that adding a "const" statement deserves some discussion, but not from the standpoint of micro-optimizations.
Thanks, Yury
From my very naive perspective I'd have thought that the only real difference between the two implementations is that Yury's has the optimization hard-coded within the function body, while Serhiy's allows you to override the hard-coded defaults at run time. Am I hot, warm, tepid, cold or approaching 0 Kelvin?
Yury's code has to look up the global value of struct, and then get its pack attribute, every time the function is called. Serhiy's code only does this once, when the function is created, and when it is called the local value is loaded directly from the function defaults, which amounts to a single tuple lookup rather than two sequential dict lookups.
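The difference shows up directly in the bytecode; a small sketch using dis (the exact opcodes vary a bit between CPython versions):

    import dis
    import struct

    def foo_global(x):
        return struct.pack("<I", x)   # LOAD_GLOBAL struct, then LOAD_ATTR pack, on every call

    def foo_default(x, pack=struct.pack):
        return pack("<I", x)          # LOAD_FAST pack: bound once, at definition time

    dis.dis(foo_global)   # shows the LOAD_GLOBAL / LOAD_ATTR pair
    dis.dis(foo_default)  # shows a single LOAD_FAST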