
Since it's a quiet time on python-dev at the moment <grin>, I thought I'd just toss this bedraggled parrot in... Every now and then, the comment arises that "this <enhancement X> should only incur a small performance hit". I just ran one of my apps under 1.5.2+ and 2.1b2, and the little hits along the way have added up to a 26% slowdown.

Around the time 2.0 was released, there was a brief thread along the lines of "let's get 2.0 out the door, and tune it up in 2.1 - there's some low-hanging fruit about". Any chance we could get someone like Christian and Tim wound up on looking at performance issues, however briefly <wink>? (I know, they don't have time - I just remembered the old days on c.l.py when they'd try to outdo each other with weird and wonderful optimizations.)

This is not a flame at 2.x, BTW - 2.x is a good thing! (BTW, gc.disable() improved things by 3%.)

--
Mark Favas - m.favas@per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA
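A minimal sketch of the kind of before/after timing behind the gc.disable() figure; the workload below is a stand-in, since the actual app isn't part of the thread:

    import gc
    import time

    def run_workload():
        # Stand-in for the real application's work.
        data = [{"key": i, "value": str(i)} for i in range(200000)]
        data.sort(key=lambda d: d["value"])
        return data

    # Run once with cyclic gc enabled and once with it disabled to see how
    # much of the total runtime the collector accounts for.
    for label, toggle in (("gc enabled ", gc.enable), ("gc disabled", gc.disable)):
        toggle()
        start = time.time()
        run_workload()
        print("%s: %.3f seconds" % (label, time.time() - start))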

[Mark Favas]
> I just ran one of my apps under 1.5.2+ and 2.1b2, and the little hits along
> the way have added up to a 26% slowdown.

How do you know it is in fact "the little bits" and not one specific bit? For example, I recall that line-at-a-time input was dozens of times slower (relatively speaking) on your box than on anyone else's box. Not enough info here, and especially not when you say (emphasis added) "I just ran ONE of my apps ...". Perhaps that app does something unique? Or perhaps that app does something common that's uniquely slow on your box? Or ...
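A rough way to test the line-at-a-time theory on a particular box is to time the same read loop under both interpreters; the file name below is only a placeholder:

    import time

    def time_line_input(path):
        # Time a plain readline() loop over a large local text file.
        f = open(path)
        start = time.time()
        count = 0
        while 1:
            line = f.readline()
            if not line:
                break
            count = count + 1
        elapsed = time.time() - start
        f.close()
        return count, elapsed

    lines, seconds = time_line_input("big_input.txt")  # placeholder file name
    print("%d lines in %.3f seconds" % (lines, seconds))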
> Around the time 2.0 was released, there was a brief thread along the lines
> of "let's get 2.0 out the door, and tune it up in 2.1 - there's some
> low-hanging fruit about".

Heh heh. I remember that too. Good followup <wink>.
> Any chance we could get someone like Christian and Tim wound up on looking
> at performance issues, however briefly <wink>?

No chance for Tim. I have no spare work time or spare spare time left. And AFAIK, PythonLabs has no plans to do any performance tuning. If you identify a specific new choke point, though, then repairing it should be a focused, low-effort job. I doubt you're seeing an accumulation of small slowdowns adding up to 26% anyway -- there's really nothing we've done that should have a ubiquitous effect other than adding cyclic gc (but you said later that gc only accounted for 3% in your app). Hmm. One other thought: the new comparison code is both very complex and very cleanly written. As a result, I've worn my finger numb stepping through it in a debugger: if your app is doing oodles of comparisons, I wouldn't be surprised to see it losing 20% to layers and layers of function calls trying to figure out *how* to compare things.
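One way to find such a choke point is to profile the app under each interpreter and compare where the time goes; the comparison-heavy workload below is only a stand-in for the app's real entry point:

    import profile
    import pstats

    def main():
        # Stand-in workload heavy on comparisons; replace with the app's
        # real entry point when hunting the actual choke point.
        items = [(i % 7, str(i)) for i in range(50000)]
        items.sort()

    profile.run("main()", "app.prof")
    stats = pstats.Stats("app.prof")
    stats.sort_stats("cumulative").print_stats(10)  # ten costliest call chains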
> (I know, they don't have time - I just remembered the old days on c.l.py
> when they'd try to outdo each other with weird and wonderful optimizations.)

Recall that none of those got into the distribution, though. Guido doesn't like weird and wonderful optimizations in the Python source code. And, indeed, many of those eventually succumbed to the *obvious* ways to write them in C (e.g., converting an MD5 digest to a string of hex digits -- 2.0 added a hexdigest() method to md5 objects to solve that directly, and binascii.hexlify() for the general case).
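For concreteness, a small sketch of the two 2.0-era spellings referred to above, written against the old md5 module (current Python spells this hashlib.md5):

    import binascii
    import md5   # Python 2's module; hashlib.md5 in current Python

    digest = md5.new("some data").digest()      # 16 raw digest bytes
    print(md5.new("some data").hexdigest())     # direct hex form, new in 2.0
    print(binascii.hexlify(digest))             # general bytes-to-hex helper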
> This is not a flame at 2.x, BTW - 2.x is a good thing!
You're not fooling me, Mark. I've known from the start that this is just another thinly veiled attack on 2.1's __future__ statement <wink>. first-find-out-where-it's-slower-ly y'rs - tim

participants (3)
- Mark Favas
- Neil Schemenauer
- Tim Peters