Should __builtins__ have some kind of pass-through print function, for debugging?

Hi all,

This came up in passing in one of the PEP 572 threads, and I'm curious if folks think it's a good idea or not.

When debugging, sometimes you have a somewhat complicated expression that's not working:

    # Hmm, is func2() returning the right thing?
    while (func1() + 2 * func2()) < func3():
        ...

It'd be nice to print out what func2() returns, but to do that we have to refactor this code, which might be rather tricky in a case like this. I think if you want to use print() directly here, the simplest way to do that is:

    while True:
        tmp = func2()
        print(tmp)
        if not (func1() + 2 * tmp < func3()):
            break
        ...

Obviously this is annoying and error-prone -- especially for beginners, who are the ones most likely to need to print out lots of stuff to figure out why their code isn't working. (Chris Angelico mentioned that he finds this to be a common problem when teaching beginners.)

There is a better way: if you define a trivial helper like:

    # "debug print": prints and then returns its argument
    def dp(obj):
        print(repr(obj))
        return obj

then the rewritten code becomes:

    while (func1() + 2 * dp(func2())) < func3():
        ...

Of course, this is trivial -- for me or you. But the leap to first realize that this is a useful thing, and then implement it correctly, is really asking a lot of beginners, who by assumption are struggling to do *anything* with Python syntax. And similarly, putting a package on PyPI is useful (cf. the venerable 'q' package), but still adds a significant barrier to entry: you need to be able to install packages, and you need to add an import. In fact, I can imagine that you might want to teach this trick even before you teach what imports are.

So, would it make sense to include a utility like this in __builtins__?

PEP 553, the breakpoint() builtin, provides some relevant precedent. Looking at it, I see it also emphasized the value of letting IDEs override the debugger, and I can see some similar value here: e.g. fancy REPLs like Spyder or Jupyter could potentially capture the objects passed to dp() and make them available for interactive viewing (imagine they're something like a large dataframe).

Points to argue over if people like the general idea:

- The name: p(), dp(), debug(), debugprint(), ...?
- __str__ or __repr__? Presumably __repr__, since it's a debugging tool.
- Exact semantics: there should probably be some way to add a bit of metadata that gets printed out, for cases like:

      while (dp(func1(), "func1") + 2 * dp(func2(), "func2")) < dp(func3(), "func3"):
          ...

Maybe other tweaks would be useful as well.

-n

-- Nathaniel J. Smith -- https://vorpus.org
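One possible shape for the "metadata" variant sketched in the last bullet above -- a minimal sketch only, not a spec, and `label` is just an illustrative parameter name:

    # "debug print": prints (optionally with a label) and returns its argument
    def dp(obj, label=None):
        if label is None:
            print(repr(obj))
        else:
            print(f"{label}: {obj!r}")
        return obj

    # so the loop condition could read:
    # while (dp(func1(), "func1") + 2 * dp(func2(), "func2")) < dp(func3(), "func3"):
    #     ...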

I think that this is either a great idea or pointless, depending on what the built-in actually does. If all it does is literally the debug print function you give, then it is just a trivial helper as you say, and in my opinion too trivial to bother making a builtin. As far as discoverability by beginners goes, I think that having their instructor teach them to write such a simple helper would be a good lesson.

But suppose we were willing to add a bit of compiler magic to the language, something that would be hard to do in pure Python: give dp() access to the source code of the argument it is called with, and then print out that source as well as the value's repr, plus the line number and name of the module it is called from. An example:

    # module.py
    x = 1
    y = dp(x + 99) + 2
    print("y is", y)

Then running that module would output:

    Line 2 of module.py, "x + 99", result 100
    y is 102

Compare that to the pretty anaemic output of the dp() helper you give:

    100
    y is 102

I know which I would rather see when debugging.

Obviously dp() would have to be magic. There's no way that I know of for a Python function to see the source code of its own arguments. I have no idea what sort of deep voodoo would be required to make this work. But if it could work, wow, that would really be useful. And not just for beginners.

Some objections...

Objection 1: dp() looks like an ordinary function call. Magic in Python is usually a statement, like assert.
Answer: Very true. But on the other hand, there's super() inside classes.

Objection 2: Yes, but even super() isn't this magical.
Answer: Details, details. I'm just the ideas man, somebody else can work out the implementation... *wink*

Objection 3: What if the caller has shadowed or replaced dp()?
Answer: Don't do that. Let's make dp() a reserved name.

Objection 4: You're kidding, right? That needs a full deprecation cycle, it will break code, etc.
Answer: Okay, okay. Maybe the compiler could be smart enough to only pass the extra information on (line number, module, source code of the argument) when dp() is the actual genuine builtin dp() function, and not if it has been shadowed.

Objection 5: Even if there is a way to do that, it would require an expensive runtime check that will slow down calls to anything called dp().
Answer: Yes, but that's only one name out of millions. All other function calls will be unaffected. And besides, performance regressions don't count as breakage. Much.

Yeah, I don't think this is going to fly either. But boy would it be useful if it could...

-- Steve

On 27 April 2018 at 21:27, Steven D'Aprano <steve@pearwood.info> wrote:
If you relax the enhancement to just noting the line where the debug print came from, it doesn't need to be deep compiler magic - the same kind of stack introspection that warnings and tracebacks use would suffice. (Stack introspection to find the caller's module, filename and line number, linecache to actually retrieve the line if we want to print that.)

Cheers,
Nick.

P.S. While super() is a *little* magic, it isn't *that* magic - it gets converted from "super()" to "super(name_of_first_param, __class__)". And even that limited bit of magic has proven quirky enough to be a recurring source of irritation when it comes to interpreter maintenance.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
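A minimal sketch of the line-level version Nick describes (assuming the caller's source file is available to linecache; if not, the line shows up as '?'):

    import linecache
    import sys

    def dp(obj):
        # Report where the call came from, plus the source line if we can
        # find it, then pass the value through unchanged.
        frame = sys._getframe(1)                  # caller's frame
        module = frame.f_globals.get("__name__", "?")
        filename = frame.f_code.co_filename
        lineno = frame.f_lineno
        line = linecache.getline(filename, lineno).strip()
        print(f"Line {lineno} of {module} ({line or '?'}): {obj!r}")
        return obj

With this, `y = dp(x + 99) + 2` in a script prints the whole calling line plus the repr of the value -- not just the "x + 99" part, which is the bit that would still need compiler help (or fragile source parsing).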

On Fri, Apr 27, 2018 at 10:53:39PM +1000, Nick Coghlan wrote:
If we relax the enhancement to just noting the line, I don't think this is worth bothering with: there are already ways to get that result, and people can just add a helper to their startup file or personal toolbox module. It doesn't need to be a builtin. Maybe in the std lib?

    from pdb import debugprint as dp

perhaps, but not a builtin. Although it will require compiler support, I think that being able to drill down to the level of individual expressions would be a fantastic aid to debugging.

-- Steve

On Fri, Apr 27, 2018 at 6:24 PM Nick Coghlan <ncoghlan@gmail.com> wrote:
I spent a bit of time and now there's a dprint <https://pypi.org/p/dprint> project on PyPI. It uses stack introspection to print out some details. If someone wants to take it for a spin and provide feedback on how it feels, this thread is as good a place as any, I suppose. :) Cheers, Pradyun
-- Pradyun

On Fri, Apr 27, 2018 at 9:27 PM, Steven D'Aprano <steve@pearwood.info> wrote:
It's a debugging function. It's okay if the magic has some restrictions on it. How about:

1) The dp() function is CPython-specific. Other Pythons may or may not include it, and may or may not have this magic.

2) For the magic to work, the calling module must have source code available. Otherwise, dp() will do as much as it can, but it might not be able to do everything.

3) The magic may not work if you use any name other than "dp" to call the function.

Then, the function can be written much more plausibly. It can use sys._getframe to find the calling function, fetch up the source code from disk, and look at the corresponding line of code. The hardest part will be figuring out code like this:

    x = dp(spam) if qq else dp(ham)

In theory, frm.f_lasti (the last bytecode instruction executed) should be able to help with this, but I'm not sure how well you could parse through that to figure out which of multiple dp() calls we're in.

At this point, it's DEFINITELY too large for an instructor to dictate to a beginner as part of a lesson on debugging, but it could be a great addition to the 'inspect' module. You could teach students to add "from inspect import dp" to their imports, and the rest would 'just work'.

I don't think this needs any specific compiler magic or making 'dp' a reserved name, but it might well be a lot easier to write if there were some compiler features provided to _all_ functions. For instance, column positions are currently available in SyntaxErrors, but not other exceptions:
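(Roughly what a CPython 3.6-era interactive session shows; illustrative, not a verbatim transcript:)

    >>> x = 1 +
      File "<stdin>", line 1
        x = 1 +
               ^
    SyntaxError: invalid syntax
    >>> x = 1 + "spam"
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: 'int' and 'str'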
Imagine if the TypeError could show a caret, pointing to the plus sign. That would require that a function store column positions, not just line numbers. I'm not sure how much overhead it would add, nor how much benefit you'd really get from those markers, but it would then be the same mechanic for exception tracebacks and for semi-magical functions like this.

ChrisA
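A rough sketch of the sys._getframe-plus-source approach described above (the helper name is hypothetical, and it deliberately punts on the "which of several dp() calls on this line" ambiguity):

    import linecache
    import sys

    def _argument_source(line, name="dp"):
        # Naive helper: return the text of the first name(...) call on the
        # line by matching parentheses.  Parens inside string literals will
        # confuse it, and it cannot tell multiple dp() calls on one line
        # apart -- that is the ambiguity f_lasti (or column numbers) would
        # be needed to resolve.
        start = line.find(name + "(")
        if start == -1:
            return None
        depth = 0
        for i, ch in enumerate(line[start + len(name):], start + len(name)):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0:
                    return line[start + len(name) + 1:i]
        return None

    def dp(obj):
        # Pass-through debug print: report the call site and (best effort)
        # the source text of the argument, then return the value unchanged.
        frame = sys._getframe(1)                  # caller's frame
        filename = frame.f_code.co_filename
        lineno = frame.f_lineno
        line = linecache.getline(filename, lineno).strip()
        src = _argument_source(line)
        print(f"{filename}:{lineno}: dp({src or '?'}) -> {obj!r}")
        return obj

With `x = dp(spam) if qq else dp(ham)`, this version always reports whichever dp() call appears first on the line, regardless of which branch actually ran -- exactly the limitation Chris points out.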

On Fri, Apr 27, 2018 at 5:58 AM, Chris Angelico <rosuav@gmail.com> wrote:
Being able to add carets to tracebacks in general would be quite nice, actually. Imagine:

    Traceback (most recent call last):
      File "/tmp/blah.py", line 16, in <module>
        print(foo())
              ^^^^^
      File "/tmp/blah.py", line 6, in foo
        return bar(1) + bar(2)
                        ^^^^^^
      File "/tmp/blah.py", line 10, in bar
        return baz(2 * x) / baz(2 * x + 1)
               ^^^^^^^^^^
      File "/tmp/blah.py", line 13, in baz
        return 1 + 1 / (x - 4)
                   ^^^^^^^^^^^
    ZeroDivisionError: division by zero

This is how I report error messages in patsy [1], and people seem to appreciate it... it would also help Python catch back up with other languages whose error reporting has gotten much friendlier in recent years (e.g., Rust, Clang).

Threading column numbers through the compiler might be tedious, but AFAICT should be straightforward in principle. (Peephole optimizations and similar might be a bit of a puzzle, but you can do pretty crude things like saying new_span_start = min(*old_span_starts); new_span_end = max(*old_span_ends) and still get something that's at least useful, even if not necessarily 100% theoretically accurate.)

The runtime overhead would be essentially zero, since this would be a static table that only gets consulted when printing tracebacks, similar to the lineno table. (Tracebacks already preserve f_lasti.) So I think the main issue would be the extra memory in each code object to hold the bytecode offset -> column numbers table. We'd need some actual numbers to judge this for real, but my guess is that the gain in usability+friendliness would be easily worth it for 99% of users, and the other 1% are already plotting how to add options to strip out unnecessary things like type annotations, so if it's a problem then this could be another thing for them to add to their list: leave out these tables at -OOO or whatever.

-n

[1] https://patsy.readthedocs.io/en/latest/overview.html

-- Nathaniel J. Smith -- https://vorpus.org

On Sat, Apr 28, 2018 at 7:29 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
but he sent it in HTML using a proportional font, which spoils the effect!
Uh...? https://vorpus.org/~njs/tmp/monospace.png It looks like my client used "font-family: monospace", maybe yours only understands <pre> or something? Anyway, if anyone else is having trouble viewing it, it seems to have come through correctly in the archives: https://mail.python.org/pipermail/python-ideas/2018-April/050137.html -n -- Nathaniel J. Smith -- https://vorpus.org

Nathaniel Smith wrote:
It looks like my client used "font-family: monospace", maybe yours only understands <pre> or something?
Hmmm, looking at the message source, it does indeed specify monospace. It seems the version of Thunderbird I'm using does a spectacularly bad job of interpreting HTML. Sorry for the false alarm, Nathaniel! -- Greg

Actually, I think I can think of a way to make this work, if we're willing to resurrect some old syntax. On Fri, Apr 27, 2018 at 09:27:34PM +1000, Steven D'Aprano wrote:
I changed my mind... let's add this as a builtin, under the name debugprint. It is a completely normal, non-magical function, which takes four (not one) arguments:

    def debugprint(obj, lineno=None, module=None, source=None):
        out = []
        if module is not None:
            if lineno is None:
                lineno = "?"
            out.append(f"Line {lineno} of {module}")
        if source is not None:
            out.append(ascii(source))
        out.append(f"result {repr(obj)}")
        print(', '.join(out))
        return obj

Now let's put all the magic into some syntax. I'm going to suggest resurrecting the `` backtick syntax from Python 2. If that's not visually distinct enough, we could double them: ``expression``.

When the compiler sees an expression inside backticks, it grabs the name of the module, the line number, and the expression source, and compiles a call to:

    debugprint(expression, lineno, module, source)

in its place. That's the only magic needed, and since it is entirely at compile-time, all that information should be easily available. (I hope.) If not, then simply replace the missing values with None.

If the caller shadows debugprint, it is their responsibility to either give it the correct signature, or not to use the backticks. Since it's just a normal function call, the worst that happens is that a mismatch in arguments gives you a TypeError. Shadowing debugprint would be an easy way to disable backticks on a per-module basis, at runtime. Simply define:

    def debugprint(obj, *args):
        return obj

and Bob's yer uncle.

-- Steve
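To make the proposed desugaring concrete (reusing the module.py example from earlier in the thread; the line number and quoting below are illustrative, not a spec):

    # What you would write (proposed syntax, not valid Python today):
    #     y = ``x + 99`` + 2
    # What the compiler would emit in its place, roughly:
    y = debugprint(x + 99, 2, "module.py", "x + 99") + 2
    # With the debugprint above and x = 1, this prints
    #     Line 2 of module.py, 'x + 99', result 100
    # and binds y to 102.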

I've had a 'dprint' in sitecustomize for years. It clones 'print' and adds a couple of keyword parameters, 'show_stack' and 'depth', which give control over traceback output (walks back up sys._getframe for 'depth' entries). It returns the final argument if there is one, otherwise None. It can be used anywhere and everywhere that builtin print is used, plus anywhere in any expression just passing a single argument. I thought about replacing standard print with it, but I like the greppability of 'dprint' when it comes time to clean things. On Fri, Apr 27, 2018 at 6:05 AM, Steven D'Aprano <steve@pearwood.info> wrote:

When I teach decorators, I start with a "logged" decorator example: https://uwpce-pythoncert.github.io/PythonCertDevel/modules/Decorators.html#a...

    def logged_func(func):
        def logged(*args, **kwargs):
            print("Function {} called".format(func.__name__))
            if args:
                print("\twith args: {}".format(args))
            if kwargs:
                print("\twith kwargs: {}".format(kwargs))
            result = func(*args, **kwargs)
            print("\t Result --> {}".format(result))
            return result
        return logged

Interestingly, I don't actually use such a thing in any kind of production, but it could be a way to accomplish what's been proposed here. As a decorator, we usually would expect to use it with the decoration syntax:

    @logged_func
    def a_new_func():
        ...

But it could also be used to re-bind an already defined function for testing. Using the original example:

    while (func1() + 2 * func2()) < func3():

could become "logged" by adding:

    func2 = logged_func(func2)
    while (func1() + 2 * func2()) < func3():

I actually like that better than inserting extra code into the line you want to test. And I'm pretty clueless about what you can do with inspect -- but maybe some more magic could be added in the decorator if you wanted that.

-CHB

On Fri, Apr 27, 2018 at 6:27 AM, Eric Fahlgren <ericfahlgren@gmail.com> wrote:
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov

On Sat, Apr 28, 2018 at 4:27 AM, Steven D'Aprano <steve@pearwood.info> wrote:
Sorry, no, it's part of work code, but it's pretty simple stuff. 'get_stack' is a debug-quality stack dumper (my memory failed, it now uses inspect instead of the more primitive sys._getframe), used whenever anyone wants to see where they are. The 'all' parameter lets you filter out some stdlib entries.

    import inspect

    def get_stack(all=False, skip=0, depth=0):
        stack = inspect.stack()[1:]  # Implicitly ignore the frame containing get_stack.
        stack.reverse()              # And make it oldest first.
        length = len(stack)
        upper = length - skip
        lower = (upper - depth) if depth else 0
        max_len = 0
        for i_frame in range(lower, upper):
            # Preprocessing loop to format the output frames and
            # calculate the length of the fields in the output.
            stack[i_frame] = list(stack[i_frame])
            path = _cleaner().clean(stack[i_frame][1])  # Canonical form for paths...
            if all or '/SITE-P' not in path.upper() and '<FROZEN ' not in path.upper():
                max_len = max(max_len, len(path))
            stack[i_frame][1] = path
        formatted_frames = list()
        for i_frame, frame in enumerate(stack[lower:upper], lower + 1):
            # Now that we know max_len, we can do the actual formatting.
            # frame is [frame, filename, lineno, function, ...] from inspect.stack().
            if all or '/SITE-P' not in frame[1].upper() and '<FROZEN ' not in frame[1].upper():
                formatted_frames.append('%02d/%02d %*s %4d %s\n' %
                                        (i_frame, length, -max_len, frame[1], frame[2], frame[3]))
        return ''.join(formatted_frames)

Off the top of my head, dprint looks like this:

    def dprint(*args, **kwds):
        depth = kwds.pop('depth', 0)
        show_stack = kwds.pop('show_stack', False)
        print(*args, **kwds)
        if show_stack:
            print(get_stack(depth=depth))
        return args[-1] if args else None
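Usage would look something like this (a hedged example based on the description above; func2 is the hypothetical function from earlier in the thread, and the exact stack formatting depends on get_stack and on where it is called from):

    # Behaves like print(), but also returns its last argument, so it can be
    # dropped into the middle of an expression:
    value = dprint("func2 returned:", func2(), show_stack=True, depth=3)
    # Prints "func2 returned: <value>", then the three most recent stack
    # frames, and binds value to whatever func2() returned.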

I don't want to hijack the thread on a digression, but instead of bringing `` back for just this one purpose, it could be used as a prefix to define a candidate pool of new keywords:

    ``debugprint(obj)    # instead of ``obj`` meaning debugprint(obj)

Any ``-prefixed word would either be a defined keyword or a syntax error.

-- Clint

participants (9)
- Chris Angelico
- Chris Barker
- Clint Hepner
- Eric Fahlgren
- Greg Ewing
- Nathaniel Smith
- Nick Coghlan
- Pradyun Gedam
- Steven D'Aprano