Non-intrusive debug logging

Everyone uses logging during code development to help with debugging. Whether using a logging module or plain old print statements, this usually requires introducing one or (many) more lines of code into the module being worked on, making the existing, functional code more difficult to read. It is also easy to leave logging code in place accidentally when cleaning up.

I have a possibly odd suggestion for dramatically improving debug logging, in an almost automated fashion, without any of the usual problems. I'm willing to jump through PEP hoops, but won't waste anyone's time arguing for the idea if others don't embrace it.

In brief, make an extremely succinct comment, such as "#l" or "#dbgl", semantically meaningful when appended to a line of code, such as:

    x = 1
    y = 2
    z = 3
    x = y + z  #l

When executed, the marked line would be logged (presumably to stdout) as something like:

    x=1 -> y=2 + z=3 -> 5

In one log line you see the variables and their values, the operations being performed with them, and the result of the evaluated expression. Placing "#l" on a def line would apply "#l" logging to every line in that function or method.

That's it. Adding these log requests would be easy and would not clutter the code. Removing them reliably would be a trivial global operation in any text editor. The resulting debug information would be fully diagnostic.

Of course there are lots of details and edge cases to consider. There always are. For example, what if x had not been previously defined? A reasonable answer:

    x=. -> y=2 + z=3 -> 5

In every case the solution should minimize the volume of output while maximizing the logged information. The precise formatting of the output is open to debate, as is the text used for the semantically meaningful comment string. The output pipe could be made controllable. There are natural extensions to allow periodic and conditional logging. I have ideas for reducing (already unlikely) collisions with existing comments.
But I want to keep this simple for now, and I think this captures the core of the idea. For your consideration. - larryy
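[Editor's note: the proposed output format can be approximated today with an explicit helper. The following is a minimal, hypothetical sketch (the name `logexpr` and its exact output are illustrative, not part of the proposal): it inspects the caller's frame, reports each name appearing in an expression with its current value, "." when the name is undefined, then the evaluated result.]

```python
import ast
import inspect

def logexpr(expr):
    """Hypothetical helper approximating the proposed '#l' output.

    Prints each name appearing in `expr` with its current value in the
    caller's scope (or "." if undefined), then the evaluated result.
    """
    frame = inspect.currentframe().f_back
    scope = {**frame.f_globals, **frame.f_locals}
    # Collect the names referenced by the expression, in source order.
    names = [n.id for n in ast.walk(ast.parse(expr, mode="eval"))
             if isinstance(n, ast.Name)]
    parts = [f"{n}={scope[n]!r}" if n in scope else f"{n}=." for n in names]
    result = eval(expr, frame.f_globals, frame.f_locals)
    print(f"{' '.join(parts)} -> {result!r}")
    return result
```

With y = 2 and z = 3 in scope, `logexpr("y + z")` prints "y=2 z=3 -> 5" and returns 5. The point of the proposal is precisely to get this output *without* the explicit call, which is the part that would need interpreter or source-transformation support.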

On 24/01/2018 23:25, Larry Yaeger wrote:
I think that this idea has some merit, obviously with some details to be worked out as well. I would suggest, however, that if this feature is introduced it be controlled via a run-time switch and/or environment variable which defaults to off. Then, rather than the developer having to do global replaces, they simply turn off the switch (or reduce it to zero). It may also be preferable to use decorators rather than syntactically significant comments (personally I don't like the latter, having had too many bad experiences with them).

-- 
Steve (Gadget) Barnes

Any opinions in this message are my personal opinions and do not reflect those of my employer.
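[Editor's note: the run-time switch suggested here could look something like the following hedged sketch. The `PYTRACE` variable name and the `traced` decorator are illustrative inventions, not an existing API; when the switch is off the decorator returns the function unchanged, so there is nothing to delete before shipping.]

```python
import functools
import os

# Illustrative switch: tracing is off unless PYTRACE is set to a non-"0" value.
TRACE_ENABLED = os.environ.get("PYTRACE", "0") != "0"

def traced(func):
    """Log calls and return values, but only when the switch is on."""
    if not TRACE_ENABLED:
        return func  # zero overhead when disabled, nothing to remove later

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"{func.__name__}{args!r} -> {result!r}")
        return result
    return wrapper
```

Decorated functions behave identically either way; only the logging side effect is gated by the environment.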

Some thoughts:

1. I too dislike the idea of using comments as semantically significant annotations. I think it's quite OK for annotations that are aimed at external tools (e.g. '# nocover' or '# noqa') but not for runtime behavior.

2. It's probably possible to do interesting things using decorators at the function / class level. The "q" project (https://pypi.python.org/pypi/q) already does some useful things in that direction. Still, when using q (or another decorator-based approach), you need to first 'import q', which means that you can easily end up with spurious 'import q' in your code after you're done debugging. Or even worse, commit them but forget to add 'q' in your setup.py / requirements.txt / Pipfile, and break CI or production.

3. Doing things unintrusively at the statement level seems much harder.

S.

On Thu, Jan 25, 2018 at 6:54 AM, Steve Barnes <gadgetsteve@live.co.uk> wrote:
-- Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/ Chairman, Free&OSS Group / Systematic Cluster - http://www.gt-logiciel-libre.org/ Co-Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/ Founder & Organiser, PyData Paris - http://pydata.fr/ --- “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” — R. Buckminster Fuller
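[Editor's note: on point 3, the standard library already offers a partial escape hatch: sys.settrace can observe a function statement by statement without touching its source. A hedged sketch follows; the `trace_lines` name is illustrative and this is a demonstration, not a production tracer.]

```python
import sys

def trace_lines(func):
    """Print the local variables as each line of `func` executes."""
    def tracer(frame, event, arg):
        # Only report 'line' events from the decorated function's own frames.
        if frame.f_code is func.__code__ and event == "line":
            print(f"line {frame.f_lineno}: {frame.f_locals}")
        return tracer

    def wrapper(*args, **kwargs):
        previous = sys.gettrace()
        sys.settrace(tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(previous)  # restore whatever tracer was installed
    return wrapper
```

This gets line-level visibility today, but at a substantial interpreter-overhead cost while the trace function is installed, which is part of why an in-interpreter feature keeps being requested.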

This strikes me as something a debugger should do, rather than the regular interpreter. And using comment-based syntax means that they would get ignored by the regular interpreter, which is exactly what you want. As for a decorator approach, that wouldn't let you do anything on a line-by-line basis.

-CHB

Sent from my iPhone

On Jan 25, 2018, at 2:44 AM, Stéfane Fermigier <sf@fermigier.com> wrote:
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

On Thu, Jan 25, 2018 at 11:44:55AM +0100, Stéfane Fermigier <sf@fermigier.com> wrote:
That is, you don't like the ``# coding:`` directive? ;-)

Oleg.
-- 
Oleg Broytman http://phdru.name/ phd@phdru.name
Programmers don't die, they just GOSUB without RETURN.

This can be accomplished as a decorator. Jim Crist wrote a version of this using the codetransformer library. The usage is pretty simple:

    @trace()
    def sum_word_lengths(words):
        total = 0
        for w in words:
            word_length = len(w)
            total += word_length
        return total

    21

The source for the trace decorator is available here: https://gist.github.com/jcrist/2b97c9bcc0b95caa73ce (I can't figure out how to link to a section; it is under "Tracing").

On Thu, Jan 25, 2018 at 4:03 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:

I think part of the reason that logging appears complicated is because logging actually is complicated. In the myriad different contexts a Python program runs (daemon, command line tool, interactively), the logging output should be going to all sorts of different places. Thus were born handlers. If you want "all the logs", do you really want all the logs from some library and the library it calls? Thus were born filters. For better or worse, the "line cost" of a logging call encourages them to be useful.

That said, I'd maybe like a plugin for my editor that could hide all logging statements for some "mature" projects so I could try to see the control flow a bit better.

On Thu, Jan 25, 2018 at 3:03 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
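[Editor's note: the handler/filter machinery described above can be shown in a few lines. This is a minimal sketch using the standard `logging` module: one handler decides *where* records go, one filter decides *which* records get through. The "app" and "app.noisy" logger names are made up for the example.]

```python
import io
import logging

# Destination: a StringIO here, but it could just as well be a file or syslog.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
# Filter: drop records from the chatty library subtree (callable filters
# are accepted since Python 3.2).
handler.addFilter(lambda rec: not rec.name.startswith("app.noisy"))

app_logger = logging.getLogger("app")
app_logger.setLevel(logging.DEBUG)
app_logger.addHandler(handler)

# Child loggers propagate up to the "app" handler.
logging.getLogger("app.core").debug("useful detail")
logging.getLogger("app.noisy.lib").debug("chatty library output")
# buf now contains only "useful detail"; the filter dropped the library record.
```

This is exactly the complexity being defended: routing and filtering are separate, composable decisions rather than properties of the individual log call.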

On Thu, Jan 25, 2018 at 5:09 PM, Nick Timkovich <prometheus235@gmail.com> wrote:
I think that last bit is the OP's primary ask. Truly great and useful logs are genuinely hard to write. They want a cross-cutting, differentially updated context that no single section of code a) cares about or b) willingly wants to incur the costs of... especially when unused.

In my mind, the most significant barrier to fantastic logging, DEBUG in particular, is that you must 100% understand, ahead of time, which data, and in which situations, will yield solutions to unknown future problems, and then must limit that extra data to relevant inputs only (e.g. only DEBUG logging a specific user or condition), ideally for a defined capture window, so you avoid writing 100 MB/s to /dev/null all day long. While this proposal does limit the line noise, I don't think it makes logging any more accessible or useful.

The desire to tap into a running program and dynamically inspect useful data at the time it's needed is what led to this:

Dynamic logging after the fact (injects "logging call sites" into a target function's __code__): https://pypi.python.org/pypi/retrospect/0.1.4

It never went beyond a basic POC and it's not meant to be a plug, only another point of interest. Instead of explicitly calling out to some logging library every 0.1 lines of code, I'd rather attach "something" to a function, as needed, and tell it what I am interested in (function args, function return, symbols, etc.). This makes logging more like typing, where you could even move logging information to a stub file of sorts and bind it with the application, using it... or not! Sort of like a per-function sys.settrace.

I believe this approach (i.e. with something like __code__.settrace) could fulfill the OP's original ask with additional possibilities.

-- 
C Anthony
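[Editor's note: the "attach something to a function and declare what interests you" idea can be approximated with a parameterized decorator. This is an illustrative sketch only; the `observe` name and its options are hypothetical and this is not the retrospect API, which works by patching code objects after the fact rather than by decoration.]

```python
import functools

def observe(args=False, ret=False):
    """Attach logging to a function, declaring what to capture."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*a, **kw):
            if args:
                print(f"{func.__name__} args={a!r} kwargs={kw!r}")
            result = func(*a, **kw)
            if ret:
                print(f"{func.__name__} -> {result!r}")
            return result
        return wrapper
    return decorate
```

`@observe(ret=True)` prints each return value; with no options it is a no-op wrapper. The interesting part of the original idea, doing this *after the fact* on an already-imported function without editing its source, still needs something like code-object patching, which is what retrospect experimented with.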

participants (9)
- C Anthony Risinger
- Chris Barker - NOAA Federal
- Greg Ewing
- Joseph Jevnik
- Larry Yaeger
- Nick Timkovich
- Oleg Broytman
- Steve Barnes
- Stéfane Fermigier