Re: [Python-ideas] SI scale factors in Python

Here is a fairly typical example that illustrates the usefulness of supporting SI scale factors and units in Python. This is simple simulation code that is used to test a current mirror driving a 100kOhm resistor ...

Here is some simulation code that uses SI scale factors:

    for delta in [-500nA, 0, 500nA]:
        input = 2.75uA + delta
        wait(1us)
        expected = 100kOhm*(2.75uA + delta)
        tolerance = 2.2mV
        fails = check_output(expected, tolerance)
        print('%s: I(in)=%rA, measured V(out)=%rV, expected V(out)=%rV, diff=%rV.' % (
            'FAIL' if fails else 'pass', input, get_output(), expected,
            get_output() - expected
        ))

with the output being:

    pass: I(in)=2.25uA, measured V(out)=226.7mV, expected V(out)=225mV, diff=1.7mV.
    pass: I(in)=2.75uA, measured V(out)=276.8mV, expected V(out)=275mV, diff=1.8mV.
    FAIL: I(in)=3.25uA, measured V(out)=327.4mV, expected V(out)=325mV, diff=2.4mV.

And the same code in Python today ...

    for delta in [-5e-7, 0, 5e-7]:
        input = 2.75e-6 + delta
        wait(1e-6)
        expected = 1e5*(2.75e-6 + delta)
        tolerance = 2.2e-3
        fails = check_output(expected, tolerance)
        print('%s: I(in)=%eA, measured V(out)=%eV, expected V(out)=%eV, diff=%eV.' % (
            'FAIL' if fails else 'pass', input, get_output(), expected,
            get_output() - expected
        ))

with the output being:

    pass: I(in)=2.25e-6A, measured V(out)=226.7e-3V, expected V(out)=225e-3V, diff=1.7e-3V.
    pass: I(in)=2.75e-6A, measured V(out)=276.8e-3V, expected V(out)=275e-3V, diff=1.8e-3V.
    FAIL: I(in)=3.25e-6A, measured V(out)=327.4e-3V, expected V(out)=325e-3V, diff=2.4e-3V.

There are two things to notice. In the first example the numbers are easier to read: for example, 500nA is easier to read than 5e-7. Second, there is useful information in the units: one can easily see that the input signal is a current and that the output is a voltage. Furthermore, anybody can look at this code and do a simple sanity check on the expressions even if they don't understand the system being simulated. They can check the units on the input and output expressions to assure that they are all consistent.

-Ken
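[Editorial aside: the output-formatting half of this (printing 226.7mV rather than 2.267e-01) is already achievable in today's Python with a small helper. The sketch below is purely illustrative; the si_format name and its prefix table are invented here, not part of any proposal or library, and third-party packages cover this ground more thoroughly.]

    # Minimal SI-prefix formatter -- illustrative only.
    def si_format(value, unit=''):
        if value == 0:
            return '0' + unit
        prefixes = [(1e9, 'G'), (1e6, 'M'), (1e3, 'k'), (1.0, ''),
                    (1e-3, 'm'), (1e-6, 'u'), (1e-9, 'n'), (1e-12, 'p')]
        for scale, prefix in prefixes:
            if abs(value) >= scale:
                return '%.4g%s%s' % (value / scale, prefix, unit)
        return '%.4g%s' % (value, unit)   # smaller than pico: plain notation

    print(si_format(5e-7, 'A'))     # -> 500nA
    print(si_format(0.2267, 'V'))   # -> 226.7mV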

On Fri, Aug 26, 2016 at 6:06 AM, Ken Kundert <python-ideas@shalmirane.com> wrote:
Rename a few things:

    for deltaA in [-5e-7, 0, 5e-7]:
        inputA = 2.75e-6 + deltaA
        wait(1e-6)
        expectedA = 1e5*(2.75e-6 + deltaA)
        toleranceV = 2.2e-3
        fails = check_output(expectedA, toleranceV)
        print('%s: I(in)=%eA, measured V(out)=%eV, expected V(out)=%eV, diff=%eV.' % (
            'FAIL' if fails else 'pass', inputA, get_output(), expectedA,
            get_output() - expectedA
        ))

I may have something wrong here (for instance, I'm not sure if check_output should be taking an expected amperage and a tolerance in volts), but you get the idea: it's the variables, not the literals, that get tagged.

Another way to do this would be to use MyPy and type hinting to create several subtypes of floating-point number:

    class V(float): pass
    class A(float): pass

    for delta in [A(-5e-7), A(0), A(5e-7)]:
        input = A(2.75e-6) + delta
        # etc

or in some other way have static analysis tag the variables, based on their origins.
There are two things to notice. In the first example the numbers are easier to read: for example, 500nA is easier to read than 5e-7.
This is useful, but doesn't depend on the "A" at the end.
These two are better served by marking the variables rather than the literals; in fact, if properly done, the tagging can be verified by a program, not just a human. (That's what tools like MyPy are aiming to do.)

ChrisA
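[Editorial aside: a minimal sketch of the "tag the variables and let a checker verify it" idea, using float subclasses that mypy can check. The class and function names below are illustrative, not taken from the posts above.]

    class V(float):
        """A value in volts."""

    class A(float):
        """A value in amperes."""

    def check_output(expected: V, tolerance: V) -> bool:
        return False          # stand-in body; a real harness would take a measurement

    expected = V(1e5 * 2.75e-6)           # 100kOhm * 2.75uA, tagged as volts
    check_output(expected, V(2.2e-3))     # fine
    check_output(A(2.75e-6), V(2.2e-3))   # mypy: incompatible type "A"; expected "V"

    # Caveat: arithmetic returns plain floats (V(1.0) + V(2.0) is a float),
    # so the tags only persist where results are re-wrapped.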

On 25/08/2016 22:06, Ken Kundert wrote:
And the same code with Python today:

    def wait(delay): pass
    def check_output(expected, tolerance): return False
    def get_output(): return 1*uA

    def f():
        for delta in [-500*nA, 0, 500*nA]:
            input = 2.75*uA + delta
            wait(1*us)
            expected = 100*kOhm*(2.75*uA + delta)
            tolerance = 2.2*mV
            fails = check_output(expected, tolerance)
            print('%s: I(in)=%rA, measured V(out)=%rV, expected V(out)=%rV, diff=%rV.' % (
                'FAIL' if fails else 'pass', input, get_output(), expected,
                get_output() - expected
            ))

    f()

    pass: I(in)=2.25e-05A, measured V(out)=1e-05V, expected V(out)=22.5V, diff=-22.49999V.
    pass: I(in)=2.75e-05A, measured V(out)=1e-05V, expected V(out)=27.5V, diff=-27.49999V.
    pass: I(in)=3.2500000000000004e-05A, measured V(out)=1e-05V, expected V(out)=32.50000000000001V, diff=-32.499990000000004V.

Do you really see a fundamental difference from your original code?
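[Editorial aside: the definitions of nA, uA, us, kOhm and mV are not shown in the post above. One plausible, purely hypothetical set of plain-float definitions is sketched below, although the printed output suggests the values actually used may have differed.]

    # Hypothetical scale-factor constants -- not shown in the original post.
    nA = 1e-9
    uA = 1e-6
    us = 1e-6
    mV = 1e-3
    kOhm = 1e3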

On 26 August 2016 at 06:06, Ken Kundert <python-ideas@shalmirane.com> wrote:
Here is a fairly typical example that illustrates the usefulness of supporting SI scale factors and units in Python.
Ken,

To build a persuasive case, you'll find it's necessary to stop comparing Python-with-syntactic-support to Python-with-no-syntactic-support-and-no-third-party-libraries, and instead bring in the existing dimensional analysis libraries and tooling, and show how this change would improve *those*.

That is, compare your proposed syntax *not* to plain Python code, but to Python code using some of the support libraries Steven D'Aprano mentioned, whether that's SymPy (as in http://docs.sympy.org/latest/modules/physics/unitsystems/examples.html ) or one of the other unit conversion libraries (as in http://stackoverflow.com/questions/2125076/unit-conversion-in-python ).

It's also worth explicitly highlighting that the main intended beneficiaries would *NOT* be traditional software applications (where the data entry UI is clearly distinct from the source code), but rather engineers, scientists, and other data analysts using environments like Project Jupyter, where the code frequently *is* the data entry UI.

And given that target audience, a further question that needs to be addressed is whether or not native syntactic support would be superior to what's already possible through Jupyter/IPython cell magics like https://bitbucket.org/birkenfeld/ipython-physics

The matrix multiplication PEP (https://www.python.org/dev/peps/pep-0465/ ) is one of the best examples to study on how to make this kind of case well - it surveys the available options, explains how they all face a shared challenge under the status quo, and requests the simplest possible enabling change to the language definition to help them solve the problem.

One potentially fruitful argument to pursue might be to make Python a better tool for teaching maths and science concepts at primary and secondary level (since Jupyter et al are frequently seen as introducing too much tooling complexity to be accessible at that level), but again, you'd need to explore whether or not anyone is currently using Python in that way, and what their feedback is in terms of the deficiencies of the status quo.

Regards,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
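[Editorial aside: for concreteness, the sketch below shows roughly what Ken's inner loop might look like with one of those third-party unit libraries. Pint is used here only as an example; exact unit spellings and printed formats depend on the library and version.]

    import pint                      # third-party: pip install pint

    ureg = pint.UnitRegistry()
    Q_ = ureg.Quantity

    for delta in (Q_(-500, 'nA'), Q_(0, 'nA'), Q_(500, 'nA')):
        input_current = Q_(2.75, 'uA') + delta
        expected = (Q_(100, 'kohm') * input_current).to('mV')
        print(input_current, expected)   # e.g. "2.25 microampere 225.0 millivolt"

Whether the proposed literal syntax is clearer than this style of code is exactly the comparison Nick is asking for.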

participants (4):
  - Chris Angelico
  - Ken Kundert
  - Nick Coghlan
  - Xavier Combelle