Would an e.g. bm_dict.py in [1] be a good place for a few benchmarks of dict; or is there a more appropriate project for authoritatively measuring performance regressions and optimizations of core {cpython,} data structures?

[1] https://github.com/python/pyperformance/tree/master/pyperformance/benchmarks

(pytest-benchmark looks neat as well; an example of how to use pytest.mark.parametrize to capture multiple metrics might be helpful:
https://github.com/ionelmc/pytest-benchmark )

It's easy to imagine a bot that runs some or all of the performance benchmarks on a PR when requested in a PR comment; there's probably already a good way to do this?
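As a sketch of what that bot could look like, a GitHub Actions workflow can trigger on PR comments and run pyperformance. (Everything here is illustrative: the `/benchmark` trigger phrase and the choice of the `2to3` benchmark are made up; action versions and benchmark selection would need checking.)

```yaml
# .github/workflows/benchmark.yml (hypothetical sketch)
name: on-demand-benchmarks
on:
  issue_comment:
    types: [created]
jobs:
  bench:
    # issue_comment also fires for PR comments; only run when the comment
    # is on a PR and contains the (made-up) trigger phrase
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - run: pip install pyperformance
      - run: pyperformance run -b 2to3 -o results.json
```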

On Wed, Sep 16, 2020, 10:44 PM Wes Turner <wes.turner@gmail.com> wrote:
That sounds like a worthwhile optimization. FWIW, is this a bit simpler but sufficient?:

python -m timeit -n 2000 --setup "from uuid import uuid4; \
    o = {uuid4().hex: i for i in range(10000)}" \
    "dict(**o)"

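The same measurement can also be scripted with the stdlib timeit module, which makes it easy to compare several statements side by side (a sketch; the sizes, loop counts, and choice of statements are arbitrary):

```python
import timeit

# same setup as the shell one-liner above
SETUP = "from uuid import uuid4; o = {uuid4().hex: i for i in range(10000)}"

for stmt in ("dict(o)", "dict(**o)", "o.copy()"):
    # repeat() returns one total time per repeat; report the best per-loop time
    times = timeit.repeat(stmt, setup=SETUP, number=200, repeat=3)
    best_us = min(times) / 200 * 1e6
    print(f"{stmt:10s} {best_us:8.1f} µs per loop")
```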
Is there a preferred tool to comprehensively measure the performance impact of a PR (with e.g. multiple contrived and average-case key/value sets)?

On Wed, Sep 16, 2020, 7:07 PM Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
Well, it seems ok now:

I've done a quick speed test, and the speedup is quite high for creation
using keywords or a dict with "holes": about 30%:

python -m timeit -n 2000 --setup "from uuid import uuid4; \
    o = {str(uuid4()).replace('-', ''): str(uuid4()).replace('-', '') for i in range(10000)}" \
    "dict(**o)"

python -m timeit -n 10000 --setup "from uuid import uuid4; \
    o = {str(uuid4()).replace('-', ''): str(uuid4()).replace('-', '') for i in range(10000)}; \
    it = iter(o); key0 = next(it); o.pop(key0)" \
    "dict(o)"
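To sanity-check the "holes" case from Python rather than the shell, something like this (a sketch; dict size and loop counts are arbitrary) compares copying a compact dict against one with a deleted key:

```python
import timeit

# a compact dict, and one with a "hole" left by deleting the first key
SETUP_COMPACT = "o = {str(i): i for i in range(10000)}"
SETUP_HOLE = SETUP_COMPACT + "; del o[next(iter(o))]"

for label, setup in (("compact", SETUP_COMPACT), ("with hole", SETUP_HOLE)):
    times = timeit.repeat("dict(o)", setup=setup, number=500, repeat=3)
    print(f"{label:10s} {min(times) / 500 * 1e6:8.1f} µs per loop")
```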

Can I do a PR?
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/QWXD2D4SC6XHZLV3QA4TMGMI7Z7SAJ2R/
Code of Conduct: http://python.org/psf/codeofconduct/