On 12/09/2018 09:16, Neil Schemenauer wrote:
I was curious about how much slower CPython would be now that I'm using functions for Py_TYPE(), Py_INCREF(), etc. A quick and dirty benchmark seems to show it is not so bad, maybe about 10% slower. Note that I updated the code to use C99 inline functions. They are neat.
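[Editor's note: for readers who have not seen this kind of change, here is a minimal, self-contained sketch of the macro-versus-inline-function idea. The struct is a stand-in for CPython's PyObject with only the fields used here; the real definitions live in CPython's object.h, and the bodies in Neil's branch would additionally have to handle the tagged-pointer case.]

#include <stddef.h>

typedef struct _typeobject PyTypeObject;   /* forward-declaration stand-in */

typedef struct _object {
    size_t ob_refcnt;        /* reference count */
    PyTypeObject *ob_type;   /* the object's type */
} PyObject;

/* Old style: plain macros.  No argument type checking. */
#define PY_TYPE_MACRO(ob)   (((PyObject *)(ob))->ob_type)
#define PY_INCREF_MACRO(ob) (((PyObject *)(ob))->ob_refcnt++)

/* New style: C99 static inline functions.  The compiler can still inline
   them, but the argument is type-checked and evaluated exactly once. */
static inline PyTypeObject *
py_type_func(PyObject *ob)
{
    return ob->ob_type;
}

static inline void
py_incref_func(PyObject *ob)
{
    ob->ob_refcnt++;
}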
After doing the quick and dirty benchmark on the costs, I got curious about what might be the gain of using tagged pointers for small ints. The natural thing to do is to provide a fast-path in the ceval loop (ignoring Victor's warnings). BTW, the discussion in
https://bugs.python.org/issue21955
is quite interesting if you enjoy an epic saga of micro-optimization. I tried implementing a fast-path just for BINARY_ADD of two fixedint numbers.
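[Editor's note: to illustrate the idea, here is a toy, self-contained sketch, not Neil's actual patch. All fixedint_* names and the tag layout are invented for illustration, and overflow back into a boxed int is ignored. The point is that a tagged small int carries its value directly in the pointer word, marked by a low tag bit, so adding two of them touches no heap object and no reference count; in CPython such a check would sit at the top of the BINARY_ADD case in Python/ceval.c.]

#include <stdint.h>
#include <stdio.h>

typedef void PyObjectLike;            /* stand-in for PyObject* */

#define FIXEDINT_TAG 1                /* low bit set marks a tagged small int */

static inline int fixedint_check(PyObjectLike *p) {
    return ((uintptr_t)p & FIXEDINT_TAG) != 0;
}

static inline PyObjectLike *fixedint_tag(intptr_t v) {
    /* shift left to free the low bit, then set the tag */
    return (PyObjectLike *)(((uintptr_t)v << 1) | FIXEDINT_TAG);
}

static inline intptr_t fixedint_untag(PyObjectLike *p) {
    return (intptr_t)(uintptr_t)p >> 1;   /* arithmetic shift restores the value */
}

/* The shape of a BINARY_ADD fast path: if both operands are tagged,
   add the raw values; otherwise return NULL and let the caller fall
   back to the generic PyNumber_Add() path. */
static PyObjectLike *binary_add_fast(PyObjectLike *left, PyObjectLike *right) {
    if (fixedint_check(left) && fixedint_check(right)) {
        return fixedint_tag(fixedint_untag(left) + fixedint_untag(right));
    }
    return NULL;
}

int main(void) {
    PyObjectLike *x = fixedint_tag(10000);
    PyObjectLike *y = fixedint_tag(2);
    PyObjectLike *sum = binary_add_fast(x, y);
    printf("%ld\n", (long)fixedint_untag(sum));   /* prints 10002 */
    return 0;
}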
The result looks promising:
./python -m perf timeit --name='x+y' -s 'x=10000; y=2' 'x+y' --dup 1000 -v -o int.json
./python -m perf timeit --name='x+y' -s 'x=fixedint(10000); y=fixedint(2)' 'x+y' --dup 1000 -v -o fixedint.json
./python -m perf compare_to int.json fixedint.json
Mean +- std dev: [int] 32.3 ns +- 1.0 ns -> [fixedint] 10.8 ns +- 0.3 ns: 3.00x faster (-67%)
Hmm... so you get a 10% global slowdown, plus a 3x speedup on a silly microbenchmark, and you call that promising? ;-)
Regards
Antoine.