Neil Toronto wrote:
Guido van Rossum wrote:
Hm.
On my Linux box, in the trunk:
Before the patch:

Pystone(1.1) time for 50000 passes = 1.16
This machine benchmarks at 43103.4 pystones/second

After the patch:

Pystone(1.1) time for 50000 passes = 1.14
This machine benchmarks at 43859.6 pystones/second
That's only about 1.75% faster. But pystone is a lousy benchmark.
I'm not aware of any benchmark that isn't. :)
Can you humor me and change the PY_LONG_LONG to Py_ssize_t in both PyDictObject and PyFastGlobalsObject and see if that helps? It does on one of my test machines.
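For context, a rough sketch of what that type swap amounts to; the struct and field names below are illustrative guesses about the patch under discussion, not the actual PyDictObject or PyFastGlobalsObject definitions:

    /* Sketch only: dict_sketch and ma_version are made-up names, not
     * CPython headers.  The field is bumped on every entry-invalidating
     * operation (new key, delete, resize); the open question is how
     * wide it needs to be. */
    #include <stddef.h>

    struct dict_sketch {
        /* ... hash table fields elided ... */

    #ifdef USE_64BIT_VERSION
        long long ma_version;   /* PY_LONG_LONG: cannot realistically wrap */
    #else
        ptrdiff_t ma_version;   /* Py_ssize_t: machine word, cheaper to
                                   increment and compare on 32-bit builds */
    #endif
    };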
Speaking of which, here's a question for everybody. I was wondering whether 64 bits is necessary. It takes an hour of concerted effort - nothing but "module.d = 1; del module.d" for an hour straight - to overflow a 32-bit version number. Is anybody going to actually get close to doing that in a global namespace?
Of course not. And 640k is as much memory as anyone could reasonably need ...
I don't think a malicious user could exploit it. The most they could do is trigger a segfault by performing exactly 2**32 entry-invalidating operations and then one get or set. They've got better things to do if they're running code on your machine.
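For a back-of-the-envelope check on the "hour of concerted effort" figure above (illustrative arithmetic, not from the original message):

    /* Wrapping a 32-bit counter in an hour means sustaining roughly
     * 1.19 million entry-invalidating operations per second for the
     * whole hour. */
    #include <stdio.h>

    int main(void)
    {
        double wraps_at = 4294967296.0;   /* 2**32 increments */
        double one_hour = 3600.0;         /* seconds */
        printf("required rate: %.0f ops/sec\n", wraps_at / one_hour);
        return 0;
    }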
FWIW - and I wouldn't bother with this if I weren't mucking about with dict internals - with a 32-bit version number, I've verified that gcc emits only one extra instruction in dict functions that increment it. It's two for a 64-bit number. The version test in LOAD_GLOBAL does take a bit more time with 64 bits, though.
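A sketch of where those extra instructions come from, with hypothetical names (dict_stub, cache_lookup) rather than the patch's actual code: on a 32-bit x86 build, incrementing a 64-bit counter presumably compiles to an add/adc pair and the guard to a two-word compare, whereas the 32-bit versions are a single add and a single cmp.

    /* Illustrative only: these names are invented for the example and
     * are not the patch's API. */
    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t version_t;   /* the 64-bit variant; switch to
                                     uint32_t to see the cheaper code */

    struct dict_stub {
        version_t version;
        /* ... hash table fields elided ... */
    };

    struct global_cache_entry {
        version_t cached_version;  /* snapshot taken when the slot was cached */
        void *cached_value;
    };

    /* Mutation path: one extra increment per entry-invalidating
     * operation. */
    static void dict_invalidate(struct dict_stub *d)
    {
        d->version++;
    }

    /* LOAD_GLOBAL-style fast path: the guard is one compare for a
     * machine-word counter, a wider compare for 64 bits. */
    static void *cache_lookup(const struct dict_stub *d,
                              const struct global_cache_entry *e)
    {
        if (e->cached_version == d->version)
            return e->cached_value;   /* fast hit */
        return NULL;                  /* fall back to a real dict lookup */
    }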
That's a good local optimization for today's conditions, probably. Who knows whether it will survive the next ten years? And decisions like that have a weird habit of running into pathological cases whose authors curse you when they find out where the problem arose.

regards
Steve
--
Steve Holden        +1 571 484 6266   +1 800 494 3119
Holden Web LLC      http://www.holdenweb.com/