If people are curious as to where import makes its decision as to what
bytecode to load based on the optimization level, see
https://github.com/python/cpython/blob/ad3b9aeaab5122b22445f9120a6ccdc1987c1...
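(A quick way to see the effect from Python itself -- this snippet is my
illustration, not part of the message above: importlib.util.cache_from_source
is the stdlib helper that maps a source file plus an optimization level to
the tagged .pyc path defined by PEP 488, which is what import uses to pick
the right bytecode.)

    import importlib.util

    # Each optimization level gets its own cached bytecode file, so the
    # import system can load the one matching the current level:
    print(importlib.util.cache_from_source("spam.py"))
    # e.g. __pycache__/spam.cpython-36.pyc (level 0, the default)
    print(importlib.util.cache_from_source("spam.py", optimization=1))
    # e.g. __pycache__/spam.cpython-36.opt-1.pyc (-O: asserts stripped)
    print(importlib.util.cache_from_source("spam.py", optimization=2))
    # e.g. __pycache__/spam.cpython-36.opt-2.pyc (-OO: docstrings dropped too)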
On Sat, 10 Sep 2016 at 04:53 Nick Coghlan wrote:

On 10 September 2016 at 03:20, Brett Cannon wrote:

I don't know if it's been discussed, but I have thought about it in the
context of PEP 511. The problem with swapping optimization levels post-start
is that you end up with inconsistencies, e.g. asserts that depend on other
asserts/__debug__ to function properly. If you let people jump around, you
could potentially break code in odd ways. Now obviously that's not
necessarily a reason to disallow it, but it is something to consider.

Where this does become a potential issue in the future is if we ever start
to have optimizations that span modules, e.g. function inlining and the
like. We don't have support for this now, but if we ever make such things
easier to do, the ability to change the optimization level mid-execution
would either break assumptions or force us to flat-out ban cross-module
optimizations for fear that too much code would break.

So I'm not flat-out saying no to this idea, but there are some things to
consider first.
We technically already have to deal with this problem, since folks can run compile() themselves with "optimize" set to something other than -1.
"sys.flags.optimize" then gives the default setting used for "optimize" by the import system, eval, exec, etc.
So if we did make this configurable, I'd suggest something along the lines of the other "buyer beware" settings in sys that can easily break the world, like setcheckinterval, setrecursionlimit, setswitchinterval, settrace, setprofile, and (our first PEP 8 compliant addition) set_coroutine_wrapper.
Given sys.flags.optimize already exists to read the current setting from Python, we'd just need "sys.set_default_optimize()" to configure it.
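(Sketching the shape of that suggestion -- note that sys.set_default_optimize()
does not exist; it is the proposed addition, shown here only to illustrate
how it would pair with the existing sys.flags.optimize.)

    import sys

    # Exists today: read the default optimization level used by
    # import, compile(), exec(), etc. when optimize=-1.
    print(sys.flags.optimize)

    # Hypothetical, per the proposal above: change that default for code
    # compiled from this point on. Modules already imported keep the
    # bytecode they were loaded with -- hence "buyer beware".
    sys.set_default_optimize(2)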
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia