Changing optimisation level from a script

Hi all,

When starting CPython from the command line you can pass the -O option to enable optimisations (eg `assert 0` won't raise an exception when -O is passed). But, AFAIK, there is no way to change the optimisation level after the interpreter has started up, ie there is no Python function call or variable that can change the optimisation.

In MicroPython we want to be able to change the optimisation level from within a script because (on bare metal at least) there is no analog of passing options like -O.

My idea would be to have a function like `sys.optimise(value)` that sets the optimisation level for all subsequent runs of the parser/compiler. For example:

    import sys

    import mymodule       # no optimisations
    exec('assert 0')      # will raise an exception

    sys.optimise(1)       # enable optimisations

    import myothermodule  # optimisations are enabled for this
                          # (unless it's already imported by mymodule)
    exec('assert 0')      # will not raise an exception

What do you think? Sorry if this has been discussed before!

Cheers,
Damien.

On Thu, 8 Sep 2016 at 21:36 Damien George <damien.p.george@gmail.com> wrote:
Hi all,
When starting CPython from the command line you can pass the -O option to enable optimisations (eg `assert 0` won't raise an exception when -O is passed). But, AFAIK, there is no way to change the optimisation level after the interpreter has started up, ie there is no Python function call or variable that can change the optimisation.
In MicroPython we want to be able to change the optimisation level from within a script because (on bare metal at least) there is no analog of passing options like -O.
My idea would be to have a function like `sys.optimise(value)` that sets the optimisation level for all subsequent runs of the parser/compiler. For example:
    import sys

    import mymodule       # no optimisations
    exec('assert 0')      # will raise an exception

    sys.optimise(1)       # enable optimisations

    import myothermodule  # optimisations are enabled for this
                          # (unless it's already imported by mymodule)
    exec('assert 0')      # will not raise an exception
What do you think? Sorry if this has been discussed before!
I don't know if it's been discussed, but I have thought about it in the context of PEP 511. The problem with swapping optimization levels post-start is that you end up with inconsistencies, e.g. asserts that depend on other asserts/__debug__ to function properly. If you let people jump around you potentially will break code in odd ways. Now obviously that's not necessarily a reason to not allow it, but it is something to consider.

Where this does become a potential issue in the future is if we ever start to have optimizations that span modules, e.g. function inlining and such. We don't have support for this now, but if we ever make it easier to do such things then the ability to change the optimization level mid-execution would break assumptions, or force us to flat-out ban cross-module optimizations for fear that too much code would break.

So I'm not flat-out saying no to this idea, but there are some things to consider first.
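A contrived sketch of the kind of inconsistency Brett describes (hypothetical modules, and an assert with a side effect is bad style, which is rather the point):

    # helpers.py -- imported while optimizations are ON, so this assert
    # (including its side effect) is stripped at compile time:
    checks = []
    assert checks.append("ran") is None

    # main.py -- compiled later with optimizations OFF:
    import helpers
    assert helpers.checks, "relies on helpers' assert having run"  # fails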

I very much doubt that one assert might depend on another, and if they do, we can just tell people not to change the debug level. The API to change this should set __debug__ appropriately.

On Fri, Sep 9, 2016 at 10:20 AM, Brett Cannon <brett@python.org> wrote:
On Thu, 8 Sep 2016 at 21:36 Damien George <damien.p.george@gmail.com> wrote:
Hi all,
When starting CPython from the command line you can pass the -O option to enable optimisations (eg `assert 0` won't raise an exception when -O is passed). But, AFAIK, there is no way to change the optimisation level after the interpreter has started up, ie there is no Python function call or variable that can change the optimisation.
In MicroPython we want to be able to change the optimisation level from within a script because (on bare metal at least) there is no analog of passing options like -O.
My idea would be to have a function like `sys.optimise(value)` that sets the optimisation level for all subsequent runs of the parser/compiler. For example:
    import sys

    import mymodule       # no optimisations
    exec('assert 0')      # will raise an exception

    sys.optimise(1)       # enable optimisations

    import myothermodule  # optimisations are enabled for this
                          # (unless it's already imported by mymodule)
    exec('assert 0')      # will not raise an exception
What do you think? Sorry if this has been discussed before!
I don't know if it's been discussed, but I have thought about it in the context of PEP 511. The problem with swapping optimization levels post-start is that you end up with inconsistencies, e.g. asserts that depend on other asserts/__debug__ to function properly. If you let people jump around you potentially will break code in odd ways. Now obviously that's not necessarily a reason to not allow it, but it is something to consider.
Where this does become a potential issue in the future is if we ever start to have optimizations that span modules, e.g. function inlining and such. We don't have support for this now, but if we ever make it easier to do such things then the ability to change the optimization level mid-execution would break assumptions, or force us to flat-out ban cross-module optimizations for fear that too much code would break.
So I'm not flat-out saying no to this idea, but there are some things to consider first.
--
--Guido van Rossum (python.org/~guido)

On 10 September 2016 at 03:20, Brett Cannon <brett@python.org> wrote:
I don't know if it's been discussed, but I have thought about it in the context of PEP 511. The problem with swapping optimization levels post-start is that you end up with inconsistencies, e.g. asserts that depend on other asserts/__debug__ to function properly. If you let people jump around you potentially will break code in odd ways. Now obviously that's not necessarily a reason to not allow it, but it is something to consider.
Where this does become a potential issue in the future is if we ever start to have optimizations that span modules, e.g. function inlining and such. We don't have support for this now, but if we ever make it easier to do such things then the ability to change the optimization level mid-execution would break assumptions, or force us to flat-out ban cross-module optimizations for fear that too much code would break.
So I'm not flat-out saying no to this idea, but there are some things to consider first.
We technically already have to deal with this problem, since folks can run compile() themselves with "optimize" set to something other than -1. "sys.flags.optimize" then gives the default setting used for "optimize" by the import system, eval, exec, etc.

So if we did make this configurable, I'd suggest something along the lines of the other "buyer beware" settings in sys that can easily break the world, like setcheckinterval, setrecursionlimit, setswitchinterval, settrace, setprofile, and (our first PEP 8 compliant addition), set_coroutine_wrapper.

Given sys.flags.optimize already exists to read the current setting from Python, we'd just need "sys.set_default_optimize()" to configure it.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
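For concreteness, compile() has accepted a per-call "optimize" argument since Python 3.2, so the per-compile control Nick mentions can be tried directly today; a minimal illustration:

    import sys

    print(sys.flags.optimize)  # the interpreter-wide default; 0 without -O

    # Compile the same source at two explicit optimization levels:
    code_debug = compile("assert 0", "<test>", "exec", optimize=0)
    code_opt = compile("assert 0", "<test>", "exec", optimize=1)

    exec(code_opt)    # the assert was stripped at compile time: no error
    exec(code_debug)  # raises AssertionError regardless of sys.flags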

If people are curious as to where import makes its decision as to what bytecode to load based on the optimization level, see https://github.com/python/cpython/blob/ad3b9aeaab5122b22445f9120a6ccdc1987c1...

On Sat, 10 Sep 2016 at 04:53 Nick Coghlan <ncoghlan@gmail.com> wrote:
On 10 September 2016 at 03:20, Brett Cannon <brett@python.org> wrote:
I don't know if it's been discussed, but I have thought about it in the context of PEP 511. The problem with swapping optimization levels post-start is that you end up with inconsistencies, e.g. asserts that depend on other asserts/__debug__ to function properly. If you let people jump around you potentially will break code in odd ways. Now obviously that's not necessarily a reason to not allow it, but it is something to consider.
Where this does become a potential issue in the future is if we ever start to have optimizations that span modules, e.g. function inlining and such. We don't have support for this now, but if we ever make it easier to do such things then the ability to change the optimization level mid-execution would break assumptions, or force us to flat-out ban cross-module optimizations for fear that too much code would break.
So I'm not flat-out saying no to this idea, but there are some things to consider first.
We technically already have to deal with this problem, since folks can run compile() themselves with "optimize" set to something other than -1.
"sys.flags.optimize" then gives the default setting used for "optimize" by the import system, eval, exec, etc.
So if we did make this configurable, I'd suggest something along the lines of the other "buyer beware" settings in sys that can easily break the world, like setcheckinterval, setrecursionlimit, setswitchinterval, settrace, setprofile, and (our first PEP 8 compliant addition), set_coroutine_wrapper.
Given sys.flags.optimize already exists to read the current setting from Python, we'd just need "sys.set_default_optimize()" to configure it.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

What is to stop you from adding this to micropython as a library extension?

I've never felt the urge to do this and I don't think I've ever heard it requested before. If you think it should be in the stdlib, given the timing of the 3.6 feature freeze the earliest time it could land would be Python 3.7.

Finally, let's not define APIs with British spelling. :-)

On Thu, Sep 8, 2016 at 9:35 PM, Damien George <damien.p.george@gmail.com> wrote:
Hi all,
When starting CPython from the command line you can pass the -O option to enable optimisations (eg `assert 0` won't raise an exception when -O is passed). But, AFAIK, there is no way to change the optimisation level after the interpreter has started up, ie there is no Python function call or variable that can change the optimisation.
In MicroPython we want to be able to change the optimisation level from within a script because (on bare metal at least) there is no analog of passing options like -O.
My idea would be to have a function like `sys.optimise(value)` that sets the optimisation level for all subsequent runs of the parser/compiler. For example:
    import sys

    import mymodule       # no optimisations
    exec('assert 0')      # will raise an exception

    sys.optimise(1)       # enable optimisations

    import myothermodule  # optimisations are enabled for this
                          # (unless it's already imported by mymodule)
    exec('assert 0')      # will not raise an exception
What do you think? Sorry if this has been discussed before!
Cheers,
Damien.
--
--Guido van Rossum (python.org/~guido)

What is to stop you from adding this to micropython as a library extension?
That's what I would like to do, and usually we do just go ahead and add our own extensions, trying to be as Pythonic as possible :)

But we do that a lot and sometimes I think it would be good to discuss with upstream (ie python-dev/python-ideas) about adding new functions/classes so that MicroPython doesn't diverge too much (eg Paul Sokolovsky had a discussion about time.sleep_ms, time.sleep_us etc, which was fruitful). sys.optimize(value) is a pretty simple addition so I thought it would be a straightforward discussion to have here.

I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like? I don't want to push CPython to actually add such a thing if it's not seen as a useful addition. Instead I want to see how others would implement it if they needed to.

A nice outcome would be that MicroPython adds the function now (eg sys.optimize), and in the future if CPython finds that it needs it, it also adds the same function with the same name/module/signature. Hence why I would like to add it correctly into MicroPython from the very beginning.

Alternatively we can just use the micropython module and add micropython.optimize(value), and then if CPython does add it later on we'll have to change how we do it.
I've never felt the urge to do this and I don't think I've ever heard it requested before. If you think it should be in the stdlib, given the timing of the 3.6 feature freeze the earliest time it could land would be Python 3.7.
No, I definitely don't want to make a rush for 3.6. I guess the timing of this post was not the best considering the 3.6 release :)
Finally, let's not define APIs with British spelling. :-)
Haha, the spelling never even crossed my mind! Well, sys.opt() is definitely too short and confusing, so "optimize" would be the choice.

Cheers,
Damien.

On 2016-09-10 01:04, Damien George wrote:
What is to stop you from adding this to micropython as a library extension?
That's what I would like to do, and usually we do just go ahead and add our own extensions, trying to be as Pythonic as possible :)
But we do that a lot and sometimes I think it would be good to discuss with upstream (ie python-dev/python-ideas) about adding new functions/classes so that MicroPython doesn't diverge too much (eg Paul Sokolovsky had a discussion about time.sleep_ms, time.sleep_us etc, which was fruitful). sys.optimize(value) is a pretty simple addition so I thought it would be a straightforward discussion to have here.
I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like? I don't want to push CPython to actually add such a thing if it's not seen as a useful addition. Instead I want to see how others would implement it if they needed to.
A nice outcome would be that MicroPython adds the function now (eg sys.optimize), and in the future if CPython finds that it needs it, it also adds the same function with the same name/module/signature. Hence why I would like to add it correctly into MicroPython from the very beginning.
Alternatively we can just use the micropython module and add micropython.optimize(value), and then if CPython does add it later on we'll have to change how we do it.
If I saw a function called 'optimize' which took an argument, I might initially think that it returns an optimised form of the argument, except that it's being called like a procedure, so it must be setting something. Then there's the question of how to get the current optimisation level.
I've never felt the urge to do this and I don't think I've ever heard it requested before. If you think it should be in the stdlib, given the timing of the 3.6 feature freeze the earliest time it could land would be Python 3.7.
No, I definitely don't want to make a rush for 3.6. I guess the timing of this post was not the best considering the 3.6 release :)
Finally, let's not define APIs with British spelling. :-)
Haha, the spelling never even crossed my mind! Well, sys.opt() is definitely too short and confusing, so "optimize" would be the choice.
Perhaps:

    sys.opt_level = 1
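Even from pure Python, a writable module attribute like that can be wired up by swapping the module's class; a sketch (works on CPython 3.5+; the attribute name and the idea of it driving the compiler are hypothetical):

    import sys
    import types

    class _OptLevelModule(types.ModuleType):
        _opt_level = 0

        @property
        def opt_level(self):
            return self._opt_level

        @opt_level.setter
        def opt_level(self, value):
            # A real implementation would reconfigure the compiler here.
            self._opt_level = value

    # Intercept attribute get/set on this module:
    sys.modules[__name__].__class__ = _OptLevelModule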

On Sat, Sep 10, 2016 at 10:04:46AM +1000, Damien George wrote:
I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like?
I don't think it would look like sys.optimize(flag). At the very least, it would have to take *at least* three values:

- no optimizations (the default)
- remove asserts (-O)
- remove asserts and docstrings (-OO)

but even that is very limiting. Why can't I keep asserts, which may be useful and important in running code, but remove docstrings, which mostly are used interactively? I think the idea of "optimization levels" has reached its use-by date and we should start considering separate flags for each individual optimization.

In particular, I expect that some time soon somebody will propose at least one more optimization:

- remove annotations

and I seem to recall somebody requesting the ability to turn off constant folding, although I don't remember why. I can easily imagine Python implementations offering flags to control more complex/risky optimizations:

- resolve built-in names at compile-time[1]
- inline small functions
- unroll loops

etc. Victor Stinner is (I think) working on bringing some of these to CPython, and it seems to me that if they are to be under any user-control at all, a simple optimization level counter is not going to be sufficient. We're going to need a set of flags.

But it seems to me that the ability to change optimizations at runtime would be rather confusing. Shouldn't optimizations, or at least some of them, apply on a per-module basis rather than globally? I don't even know how to reason about code optimizations if any function I run might have changed the optimization level without my knowledge. So I think this is a very dubious idea.

[1] Yes, I know that will change the semantics of Python code.

--
Steve
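A set of flags could be spelled in several ways; one hypothetical sketch using enum.Flag (new in Python 3.6; every name here is invented for illustration):

    import enum

    class Optimize(enum.Flag):
        NONE = 0
        STRIP_ASSERTS = enum.auto()
        STRIP_DOCSTRINGS = enum.auto()
        STRIP_ANNOTATIONS = enum.auto()
        CONSTANT_FOLDING = enum.auto()

    # Keep asserts but drop docstrings -- a combination -O/-OO can't express:
    flags = Optimize.STRIP_DOCSTRINGS | Optimize.CONSTANT_FOLDING
    print(Optimize.STRIP_ASSERTS & flags)  # Optimize.NONE, i.e. asserts stay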

There is also the peephole optimizer. Disabling it can prevent false positives in coverage tools: http://bugs.python.org/issue2506

On Sat, Sep 10, 2016 at 4:01 AM Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Sep 10, 2016 at 10:04:46AM +1000, Damien George wrote:
I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like?
I don't think it would look like sys.optimize(flag). At the very least, it would have to take *at least* three values:
- no optimizations (the default)
- remove asserts (-O)
- remove asserts and docstrings (-OO)
but even that is very limiting. Why can't I keep asserts, which may be useful and important in running code, but remove docstrings, which mostly are used interactively? I think the idea of "optimization levels" has reached its use-by date and we should start considering separate flags for each individual optimization.
In particular, I expect that some time soon somebody will propose at least one more optimization:
- remove annotations
and I seem to recall somebody requesting the ability to turn off constant folding, although I don't remember why. I can easily imagine Python implementations offering flags to control more complex/risky optimizations:
- resolve built-in names at compile-time[1]
- inline small functions
- unroll loops
etc. Victor Stinner is (I think) working on bringing some of these to CPython, and it seems to me that if they are to be under any user-control at all, a simple optimization level counter is not going to be sufficient. We're going to need a set of flags.
But it seems to me that the ability to change optimizations at runtime would be rather confusing. Shouldn't optimizations, or at least some of them, apply on a per-module basis rather than globally?
I don't even know how to reason about code optimizations if any function I run might have changed the optimization level without my knowledge. So I think this is a very dubious idea.
[1] Yes, I know that will change the semantics of Python code.
--
Steve
-- Thanks, Andrew Svetlov

2016-09-10 2:56 GMT-04:00 Andrew Svetlov <andrew.svetlov@gmail.com>:
There is also the peephole optimizer. Disabling it can prevent false positives in coverage tools: http://bugs.python.org/issue2506
My PEP 511 adds "-o noopt" to disable the peephole optimizer ;-)

"PEP 511 -- API for code transformers"
www.python.org/dev/peps/pep-0511/

Victor
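For context, PEP 511 (as proposed; it has not been accepted into CPython) registers named transformer objects, so disabling the peephole pass amounts to installing a transformer list without it. A sketch of an AST transformer in the PEP's style:

    import ast

    class AssertStripper(ast.NodeTransformer):
        # PEP 511 transformers carry a name, used to tag optimized .pyc files
        name = "strip_asserts"

        def visit_Assert(self, node):
            return None  # returning None drops the whole assert statement

        def ast_transformer(self, tree, context):
            return self.visit(tree)

    # Proposed registration API from PEP 511 (not in stock CPython):
    # sys.set_code_transformers([AssertStripper()])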

On 10 September 2016 at 03:01, Steven D'Aprano <steve@pearwood.info> wrote:
In particular, I expect that some time soon somebody will propose at least one more optimization:
- remove annotations
In the process of developing the implementation of variable annotations this idea already came up (I even did some benchmarks), but it was decided to postpone it. Maybe we can return to it when we have more time.

--
Ivan

On 09/10/2016 02:04 AM, Damien George wrote:
What is to stop you from adding this to micropython as a library extension?
That's what I would like to do, and usually we do just go ahead and add our own extensions, trying to be as Pythonic as possible :)
But we do that a lot and sometimes I think it would be good to discuss with upstream (ie python-dev/python-ideas) about adding new functions/classes so that MicroPython doesn't diverge too much (eg Paul Sokolovsky had a discussion about time.sleep_ms, time.sleep_us etc, which was fruitful). sys.optimize(value) is a pretty simple addition so I thought it would be a straightforward discussion to have here.
I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like? I don't want to push CPython to actually add such a thing if it's not seen as a useful addition. Instead I want to see how others would implement it if they needed to.
Hello,

The API you proposed here is similar to something I see a lot in MicroPython libraries: functions/methods that combine a getter and setter. For example, to set the value on a pin, you do:

    pin.value(1)

and to read, you do:

    result = pin.value()

If an API like this was added to the stdlib, I'd expect it to use a property, e.g.

    pin.value = 1
    result = pin.value

I was wondering, what's the story of this aspect of the MicroPython API? Does it have hidden advantages? Were you inspired by another library? Or was it just the easiest way to get the functionality (I assume you implemented functions before properties), and then it stuck?
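For illustration, a minimal sketch of the property spelling with a hypothetical Pin class:

    class Pin:
        def __init__(self):
            self._value = 0

        @property
        def value(self):
            # a real driver would read the hardware register here
            return self._value

        @value.setter
        def value(self, new_value):
            # a real driver would write the hardware register here
            self._value = new_value

    pin = Pin()
    pin.value = 1        # set
    result = pin.value   # get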

In eGenix PyRun we expose a new helper in the sys module for this:

--- ./Python/sysmodule.c        2015-12-07 02:39:11.000000000 +0100
+++ ./Python/sysmodule.c        2016-05-03 19:22:35.793193862 +0200
@@ -1205,6 +1205,50 @@
 Return True if Python is exiting.");

+/*** PyRun Extension ***************************************************/
+
+#define SYS_SETFLAG(c_flag, flag_name) \
+    if (strcmp(flagname, flag_name) == 0) { \
+        old_value = c_flag; \
+        if (value != -1) \
+            c_flag = value; \
+    } else
+
+static PyObject *
+sys_setflag(PyObject* self, PyObject* args)
+{
+    char *flagname;
+    int value = -1, old_value = value;
+
+    if (!PyArg_ParseTuple(args, "s|i", &flagname, &value))
+        goto onError;
+
+    SYS_SETFLAG(Py_DebugFlag, "debug")
+    SYS_SETFLAG(Py_OptimizeFlag, "optimize")
+    SYS_SETFLAG(Py_DontWriteBytecodeFlag, "dont_write_bytecode")
+    SYS_SETFLAG(Py_VerboseFlag, "verbose")
+    SYS_SETFLAG(Py_HashRandomizationFlag, "hash_randomization")
+    SYS_SETFLAG(Py_InspectFlag, "inspect")
+    {
+        PyErr_SetString(PyExc_ValueError,
+                        "unknown flag name");
+        goto onError;
+    }
+    return PyLong_FromLong((long)old_value);
+
+ onError:
+    return NULL;
+}
+
+#undef SYS_SETFLAG
+
+PyDoc_STRVAR(sys_setflag__doc__,
+"_setflag(flagname, value) -> old_value\n\
+Set the given interpreter flag and return its previous value.");
+
+/*** End of PyRun Extension ***********************************************/
+
+
 static PyMethodDef sys_methods[] = {
     /* Might as well keep this in alphabetic order */
     {"callstats", (PyCFunction)PyEval_GetCallStats, METH_NOARGS,
@@ -1268,6 +1312,7 @@
     {"setdlopenflags", sys_setdlopenflags, METH_VARARGS,
      setdlopenflags_doc},
 #endif
+    {"_setflag", sys_setflag, METH_VARARGS, sys_setflag__doc__},
     {"setprofile", sys_setprofile, METH_O, setprofile_doc},
     {"getprofile", sys_getprofile, METH_NOARGS, getprofile_doc},
     {"setrecursionlimit", sys_setrecursionlimit, METH_VARARGS,

We need this for similar reasons, since PyRun does the command line processing in Python rather than in C. The helper doesn't only expose the optimize flag, but also several other flags.

On 10.09.2016 02:04, Damien George wrote:
What is to stop you from adding this to micropython as a library extension?
That's what I would like to do, and usually we do just go ahead and add our own extensions, trying to be as Pythonic as possible :)
But we do that a lot and sometimes I think it would be good to discuss with upstream (ie python-dev/python-ideas) about adding new functions/classes so that MicroPython doesn't diverge too much (eg Paul Sokolovsky had a discussion about time.sleep_ms, time.sleep_us etc, which was fruitful). sys.optimize(value) is a pretty simple addition so I thought it would be a straightforward discussion to have here.
I guess my main question to this list is: if CPython were to add a function to change the optimisation level at runtime, what would it look like? I don't want to push CPython to actually add such a thing if it's not seen as a useful addition. Instead I want to see how others would implement it if they needed to.
A nice outcome would be that MicroPython adds the function now (eg sys.optimize), and in the future if CPython finds that it needs it, it also adds the same function with the same name/module/signature. Hence why I would like to add it correctly into MicroPython from the very beginning.
Alternatively we can just use the micropython module and add micropython.optimize(value), and then if CPython does add it later on we'll have to change how we do it.
I've never felt the urge to do this and I don't think I've ever heard it requested before. If you think it should be in the stdlib, given the timing of the 3.6 feature freeze the earliest time it could land would be Python 3.7.
No, I definitely don't want to make a rush for 3.6. I guess the timing of this post was not the best considering the 3.6 release :)
Finally, let's not define APIs with British spelling. :-)
Haha, the spelling never even crossed my mind! Well, sys.opt() is definitely too short and confusing, so "optimize" would be the choice.
Cheers,
Damien.
--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Sep 10 2016)

Python Projects, Coaching and Consulting ... http://www.egenix.com/
Python Database Interfaces ... http://products.egenix.com/
Plone/Zope Database Interfaces ... http://zope.egenix.com/

::: We implement business ideas - efficiently in both time and costs :::

eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/
http://www.malemburg.com/
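A usage sketch of the PyRun helper shown above (this exists only in a PyRun-patched interpreter, not in stock CPython):

    import sys

    # Enable optimizations for subsequent compiles; the old level is returned:
    old = sys._setflag("optimize", 1)

    ...  # import / compile / exec with optimizations enabled

    sys._setflag("optimize", old)  # restore the previous level

    # Omitting the value reads the current setting without changing it:
    level = sys._setflag("optimize")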
participants (11)
- Andrew Svetlov
- Brett Cannon
- Damien George
- Guido van Rossum
- Ivan Levkivskyi
- M.-A. Lemburg
- MRAB
- Nick Coghlan
- Petr Viktorin
- Steven D'Aprano
- Victor Stinner