Hello,
On Mon, 14 Dec 2020 09:37:42 +1100
Chris Angelico wrote:

>   2           8 LOAD_NAME                0 (obj)
>              10 LOAD_METHOD              1 (meth)
>              ...
>
>   3          16 LOAD_NAME                0 (obj)
>              18 LOAD_ATTR                1 (meth)
>              ...
>
> Creating bound method objects can be expensive. Python has a history of
> noticing ways to improve performance without changing semantics, and
> implementing them. Details here:
>
> https://docs.python.org/3/library/dis.html#opcode-LOAD_METHOD
Thanks for the response. And I know all that. LOAD_METHOD/CALL_METHOD has been in MicroPython right from the start; the very first commit to the project, in 2013, already had it: https://github.com/micropython/micropython/commit/429d71943d6b94c7dc3c40a39f...
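For anyone following along, the difference the quoted disassembly shows is easy to reproduce with CPython's own dis module (my own quick illustration, not from the original mail; exact opcode names vary by version, with LOAD_METHOD present in 3.7 through 3.11):

```python
import dis

# obj.meth() -- lookup and call are fused: on CPython 3.7-3.11 this
# compiles to LOAD_METHOD/CALL_METHOD, skipping the bound method
# object on the fast path.
dis.dis(compile("obj.meth()", "<expr>", "eval"))

# t = obj.meth; t() -- hoisting the lookup into a variable forces a
# plain LOAD_ATTR, which must materialize a real bound method object.
dis.dis(compile("t = obj.meth\nt()", "<stmt>", "exec"))
```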
> If you force the bound method object to be created (by putting it in a
> variable),
But that's exactly what the question was about, and why there was the intro! Let's please go over it again. Do you agree with the following:

    a + (b + c)  <=>  t = b + c; a + t

where "<=>" is the equivalence operator? I do hope you agree, because it's the basis both for implementing evaluation and for refactoring rules. The latter is especially important for a line-oriented language like Python, where wrapping an expression across lines requires explicit syntactic markers, which some people consider ugly, so there should be clear rules for splitting long expressions which don't affect their semantics.

So ok, if you agree with the above, do you agree with the below:

    (a.b)()  <=>  t = a.b; t()

? And I really wonder what depth of non-monotonic logic we can reach in trying to disagree with the above ;-).

Python does have cases where such syntactic refactoring is not possible. The most infamous example is super(). (Which reminds me: when its args were made optional, it would have been much better to make it just "super." - there would be much less desire to "refactor" it.) But the more such places a language has, the less regular it is, the harder it is to learn, reason about, and optimize, and the more poorly designed, too. So any language that aspires not to be called names should avoid such cases. And then again, what can we tell about:

    (a.b)()  <=>  t = a.b; t()
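To make the claimed equivalence concrete (my own minimal example, not from the thread):

```python
class A:
    def meth(self):
        return "hello"

obj = A()

# Fused form: attribute lookup and call in one expression.
r1 = obj.meth()

# Refactored form: the lookup is hoisted into a temporary, which
# forces a bound method object to be created and stored.
t = obj.meth
r2 = t()

# Under standard Python semantics the two forms are observably
# equivalent; only the implementation cost differs.
assert r1 == r2 == "hello"
```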
> This is why lots of us are unimpressed by your strict mode - CPython is
> perfectly capable of optimizing the common cases without changing the
> semantics, so why change the semantics? :)
But please remember that you're talking with someone who has taken LOAD_METHOD for granted since 2013. And who has taken inline caches for granted since 2015. So, what would be the reason to take all that for granted and still proceed with strict mode? Oh, the reasons are obvious: a) it's the natural extension of the above; b) it allows reaching much deeper (straight to the machine code, again), and by much cheaper means (the machine code for a call will contain the same as in C, not 10x more code in guards).

For comparison, CPython added LOAD_METHOD in 2016. And lookup caching only started to be added in 2019. And it took almost 1.5 years to extend caching from a single opcode to a 2nd one. 1.5 years, Chris!

    commit 91234a16367b56ca03ee289f7c03a34d4cfec4c8
    Date:   Mon Jun 3 21:30:58 2019 +0900

        bpo-26219: per opcode cache for LOAD_GLOBAL (GH-12884)

    commit 109826c8508dd02e06ae0f1784f1d202495a8680
    Date:   Tue Oct 20 06:22:44 2020 +0100

        bpo-42093: Add opcode cache for LOAD_ATTR (GH-22803)

And the 3rd one, LOAD_NAME, isn't covered, and it's easy to see why: instead of using best-practice uniform inline caches, the desired-to-be-better Python semantics spawned the following monsters:

    co->co_opcache_map = (unsigned char *)PyMem_Calloc(co_size, 1);

    typedef struct {
        PyObject *ptr;          /* Cached pointer (borrowed reference) */
        uint64_t globals_ver;   /* ma_version of global dict */
        uint64_t builtins_ver;  /* ma_version of builtin dict */
    } _PyOpcache_LoadGlobal;

All that stuff sits in your L1 cache, thrashes something else in and out all the time, and makes it all still slow, slow, slow. "Perfectly capable", you say? Heh.


-- 
Best regards,
 Paul                          mailto:pmiscml@gmail.com
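P.S. For anyone unfamiliar with the guard scheme discussed above, here is a toy Python sketch of the version-tag idea behind _PyOpcache_LoadGlobal (my own illustration, not CPython code; CPython does this in C with its internal ma_version_tag counter):

```python
class VersionedDict:
    """Toy dict that bumps a version counter on every mutation,
    standing in for CPython's internal per-dict version tag."""
    def __init__(self):
        self._d = {}
        self.version = 0

    def __setitem__(self, key, value):
        self._d[key] = value
        self.version += 1

    def __getitem__(self, key):
        return self._d[key]


class LoadGlobalCache:
    """One cache entry: a cached value plus the dict version
    observed when the cache was filled."""
    def __init__(self):
        self.ptr = None          # cached value
        self.globals_ver = -1    # dict version at fill time

    def load(self, globals_, name):
        if self.globals_ver == globals_.version:
            return self.ptr      # cache hit: no dict lookup at all
        value = globals_[name]   # cache miss: full lookup
        self.ptr = value
        self.globals_ver = globals_.version
        return value


g = VersionedDict()
g["x"] = 1
cache = LoadGlobalCache()
assert cache.load(g, "x") == 1   # miss, fills the cache
assert cache.load(g, "x") == 1   # hit, skips the lookup
g["x"] = 2                       # bumps version, invalidates cache
assert cache.load(g, "x") == 2   # miss again, sees the new value
```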