[Python-ideas] Changing optimisation level from a script

Nick Coghlan ncoghlan at gmail.com
Sat Sep 10 07:53:11 EDT 2016


On 10 September 2016 at 03:20, Brett Cannon <brett at python.org> wrote:
> I don't know if it's been discussed, but I have thought about it in context
> of PEP 511. The problem with swapping optimization levels post-start is that
> you end up with inconsistencies, e.g. asserts that depend on other
> asserts/__debug__ to function properly. If you let people jump around you
> potentially will break code in odd ways. Now obviously that's not
> necessarily a reason to not allow it, but it is something to consider.
>
> Where this does become a potential issue in the future is if we ever start
> to have optimizations that span modules, e.g. function inlining and the
> such. We don't have support for this now, but if we ever make it easier to
> do such things then the ability to change the optimization level
> mid-execution would break assumptions, or force us to flat-out ban
> cross-module optimizations for fear that too much code would break.
>
> So I'm not flat-out saying no to this idea, but there are some things to
> consider first.

We technically already have to deal with this problem, since folks can
run compile() themselves with "optimize" set to something other than
-1.
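For example, the per-call override is already observable today using
only the documented compile() signature:

```python
# The "optimize" argument to compile() overrides the interpreter-wide
# level for that one compilation; -1 (the default) means "use
# sys.flags.optimize".
src = "assert False, 'stripped under optimization'"

code_debug = compile(src, "<demo>", "exec", optimize=0)  # keep asserts
code_opt = compile(src, "<demo>", "exec", optimize=1)    # strip asserts

try:
    exec(code_debug)
    assert_fired = False
except AssertionError:
    assert_fired = True  # assert survived compilation at level 0

exec(code_opt)  # no error: the assert was removed at compile time
```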

"sys.flags.optimize" then gives the default setting used for
"optimize" by the import system, eval, exec, etc.
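(For reference: sys.flags.optimize is 0 when the interpreter was
started normally, 1 under -O, and 2 under -OO, and __debug__ tracks
the same default.)

```python
import sys

# sys.flags.optimize records the -O level the interpreter started
# with: 0 (no flag), 1 (-O), or 2 (-OO). compile()'s optimize=-1
# defers to this value.
level = sys.flags.optimize

# __debug__ is the compile-time view of the same default: it is True
# only when the level is 0.
consistent = (__debug__ == (level == 0))
```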

So if we did make this configurable, I'd suggest something along the
lines of the other "buyer beware" settings in sys that can easily
break the world, like setcheckinterval, setrecursionlimit,
setswitchinterval, settrace, setprofile, and (our first PEP 8
compliant addition) set_coroutine_wrapper.

Given sys.flags.optimize already exists to read the current setting
from Python, we'd just need "sys.set_default_optimize()" to configure
it.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
