On 5/23/14 1:22 PM, Guido van Rossum wrote:
On Fri, May 23, 2014 at 10:17 AM, Eric Snow <ericsnowcurrently@gmail.com> wrote:
On Fri, May 23, 2014 at 10:49 AM, Guido van Rossum <guido@python.org> wrote:
> Looking at my own (frequent) use of coverage.py, I would be totally fine if
> disabling peephole optimization only affected my app's code, and kept using
> the precompiled stdlib. (How exactly this would work is left as an exercise
> for the reader.)

Would it be a problem if .pyc files weren't generated or used (a la -B
or PYTHONDONTWRITEBYTECODE) when you ran coverage?

To a first approximation that would probably be okay, although it would make coverage even slower. I was envisioning something that would still use, but not write, .pyc files for the stdlib and site-packages, because the code whose coverage I am interested in is puny compared to the stdlib code it imports.
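For reference, the "use but not write" behavior described here already has a runtime hook: `sys.dont_write_bytecode`, the in-process equivalent of -B / PYTHONDONTWRITEBYTECODE. A minimal sketch (the `json` import is just an arbitrary stdlib example); note the flag only suppresses *writing* new .pyc files, while previously compiled ones are still read:

```python
import sys

# Runtime equivalent of -B / PYTHONDONTWRITEBYTECODE: stop *writing*
# .pyc files from here on.  Already-compiled stdlib and site-packages
# bytecode caches are still *read*, so those imports stay fast.
sys.dont_write_bytecode = True

import json  # loaded from its existing cached bytecode as usual
```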

I was concerned about losing any time in test suites that are already considered too slow.  But I tried to do some controlled measurements of these scenarios, and found the worst case (no .pyc files available, and none written) was only 2.8% slower than having full .pyc files available.  When I tried to measure with stdlib .pyc files available but none for my own code, the results were actually very slightly faster than the typical case.  I think this points to the difficulty of controlling all the variables!
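Rough numbers like these can be gathered with a small wall-clock benchmark. This is only a minimal sketch, not the measurement methodology used above; the imported module, run count, and use of the median are all arbitrary choices:

```python
import statistics
import subprocess
import sys
import time

def startup_time(extra_args, runs=5):
    """Median wall-clock time to start Python and import a stdlib module."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(
            [sys.executable, *extra_args, "-c", "import json"],
            check=True,
        )
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

default = startup_time([])       # read and write .pyc files as usual
no_write = startup_time(["-B"])  # read existing .pyc files, write none
print(f"default: {default:.4f}s  -B: {no_write:.4f}s")
```

Differences this small are easily swamped by noise, which is consistent with the surprising "slightly faster" result reported above.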

In any case, it seems that the penalty for avoiding the .pyc files is not burdensome.

--
--Guido van Rossum (python.org/~guido)


_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/