[Python-Dev] PEP 414 - some numbers from the Django port

Vinay Sajip vinay_sajip at yahoo.co.uk
Wed Mar 7 23:36:43 CET 2012


Armin Ronacher <armin.ronacher <at> active-4.com> writes:

> What are you trying to argue?  That the overall Django testsuite does
> not do a lot of string processing, less processing with native strings?
> 
> I'm surprised you see a difference at all over the whole Django
> testsuite and I wonder why you get a slowdown at all for the ported
> Django on 2.7.

The point of the figures is to show that there is *no* difference (statistically
speaking) between the three sets of samples. Of course, any individual run or
set of runs could be higher or lower due to other things happening on the
machine (not that I was running any background tasks), so the idea of the simple
statistical analysis is to determine whether the samples could all have come
from the same population. According to ministat, they could have (at the 95%
confidence level).
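
For anyone without ministat to hand: ministat does a Student's t-test, and the
same kind of check can be sketched in Python with scipy (the timing lists below
are made-up placeholders, not my actual figures):

    from scipy import stats

    # Placeholder test-run timings in seconds for two variants -- not real data.
    stock_27  = [412.1, 410.8, 413.5, 411.9, 412.7]
    ported_27 = [413.0, 411.5, 412.2, 414.1, 411.8]

    t, p = stats.ttest_ind(stock_27, ported_27)
    # A p-value above 0.05 means we cannot reject the hypothesis that both
    # sets of runs come from the same population, i.e. no difference at the
    # 95% confidence level.
    print("t = %.3f, p = %.3f" % (t, p))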

The Django test suite is pretty comprehensive, so it would presumably exercise
every part of Django, including the parts that process requests and produce
responses. I can't confirm this, not having done a coverage analysis of Django,
but it seems a more representative workload than any microbenchmark that just
measures a single operation, such as the overhead of a wrapper. So my argument
was that the microbenchmark numbers didn't give a meaningful indication of
actual performance in a real scenario, and should be taken in that light.
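
To be concrete about the kind of microbenchmark I mean, something like the
following (a sketch only; u() here is a stand-in no-op wrapper, not any
project's actual compatibility shim):

    import timeit

    def u(s):
        # Stand-in for a unicode-literal compatibility wrapper; a real
        # shim on 2.x would typically decode an escaped byte string.
        return s

    wrapped = timeit.timeit("u('text')", setup="from __main__ import u")
    native = timeit.timeit("'text'")
    print("wrapper: %.3fs, native literal: %.3fs" % (wrapped, native))

That sort of measurement shows the per-call cost of the wrapper in isolation,
but says nothing about how often such calls actually occur in a real
request/response cycle.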

No doubt there are other, better (more useful) tests that could be performed
(e.g. running ab against all three variants and comparing the requests/sec
figures), but I had the Django test run figures to hand (they're a byproduct of
the porting work), and so presented them in my post. Anyway, it doesn't really
matter now, since the latest version of the PEP no longer mentions those
figures.
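
For completeness, the ab comparison I have in mind would be along these lines,
run once per variant against the same view (the URL and request counts are
placeholders):

    ab -n 1000 -c 10 http://127.0.0.1:8000/

and then comparing the "Requests per second" figures that ab reports for each
variant.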

Regards,

Vinay Sajip


