Fwd: Keyword only argument on function call

I'm still not sure why all this focus on new syntax or convoluted IDE enhancements. I presented a very simple utility function that accomplishes exactly the stated goal of DRY in keyword arguments. Yes, I wrote a first version that was incomplete. And perhaps these 8-9 lines miss some corner case. But the basic goal is really, really easy to accomplish with existing Python.
On Tue, Sep 25, 2018, 6:31 AM Anders Hovmöller <boxed@killingar.net> wrote:

I'm still not sure why all this focus on new syntax or convoluted IDE enhancements. I presented a very simple utility function that accomplishes exactly the stated goal of DRY in keyword arguments.
And I’ve already stated my reasons for rejecting this specific solution, but I’ll repeat them for onlookers:

1. Huge performance penalty
2. Rather verbose, so it somewhat fails on the stated goal of improving readability
3. Tooling* falls down very hard on this

My macropy implementation that I linked to solves 1, improves 2 somewhat (but not much), and handles half of 3: it results in code where tooling can validate that the passed variables exist, but tooling still won't validate that the arguments actually correspond to existing parameters.

* by tooling I mean editors like PyCharm and static analysis tools like mypy

/ Anders

On Tue, Sep 25, 2018 at 8:32 AM Anders Hovmöller <boxed@killingar.net> wrote:
Huh? Have you actually benchmarked this in some way?! A couple of lookups into the namespace are really not pricey operations. The cost is definitely more than zero, but for any function that does anything even slightly costly, the lookups would be barely in the noise.
2. Rather verbose, so somewhat fails on the stated goal of improving readability
The "verbose" idea I propose is 3-4 characters more, per function call, than your `fun(a, b, *, this, that)` proposal. It will actually be shorter than your newer `fun(a, b, =this, =that)` proposal once you use 4 or more keyword arguments.
3. Tooling* falls down very hard on this
It's true that tooling doesn't currently support my hypothetical function. It also does not support your hypothetical syntax. It would be *somewhat easier* to add special support for a function with a special name like `use()` than for new syntax. But obviously that varies by which tool and what purpose it is accomplishing. Of course, PyCharm and MyPy and PyLint aren't going to bother special-casing a `use()` function unless or until it is widely used and/or part of the builtins or standard library. I don't actually advocate for such inclusion, but I wouldn't be stridently against it, since it's just another function name, nothing really special.

--
Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

David, I saw now that I missed the biggest problem with your proposal: yet again you deliberately throw away errors. I'm talking about making Python code _less_ error prone, while you seem to want to make it _more_. Anyway, I'll modify your reach() to drop the `if` that has this error-hiding property; that also simplifies it a lot. It should look like this:

def reach(name):
    return inspect.stack()[-2][0].f_locals[name]
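A minimal, self-contained sketch of this stricter behavior (names like `callee` are illustrative, not from the thread; note this sketch indexes the stack from the top, so it works at any call depth, whereas the thread's `[-2]` indexes from the bottom and assumes the caller was invoked directly from module level):

```python
import inspect

def reach(name):
    # Frame 0 is reach() itself, frame 1 is use(), frame 2 is use()'s caller.
    # With the `if` removed, a missing name raises KeyError instead of being
    # silently dropped.
    return inspect.stack()[2][0].f_locals[name]

def use(names):
    # Build a kwargs dict from same-named locals in the caller.
    kws = {}
    for name in names.split():
        kws[name] = reach(name)
    return kws

def callee(a=None, b=None):
    return (a, b)

def caller():
    a, b = 1, 2
    return callee(**use('a b'))  # equivalent to callee(a=a, b=b)

print(caller())  # (1, 2); use('some_missing_name') would raise KeyError
```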
1. Huge performance penalty
Huh? Have you actually benchmarked this in some way?! A couple of lookups into the namespace are really not pricey operations. The cost is definitely more than zero, but for any function that does anything even slightly costly, the lookups would be barely in the noise.
I'm talking about using this for all or most function calls that aren't positional-only. So no, you can absolutely not assume I only use it to call expensive functions. And yes, I did benchmark it, and since you didn't define what you would consider an acceptable benchmark, you've left the door open for me to define it. This is the result of a benchmark for 10k calls (full source at the very end of this email):

CPython 3.6:
    time with use: 0:00:02.587355
    time with standard kwargs: 0:00:00.003079
    time with positional args: 0:00:00.003023

pypy 6.0:
    time with use: 0:00:01.177555
    time with standard kwargs: 0:00:00.002565
    time with positional args: 0:00:00.001953

So for CPython 3.6 it's 2.587355/0.003079 = 840x slower, and for pypy 1.177555/0.002565 = 460x slower. I'm quite frankly a bit amazed pypy is so good. I was under the impression it would be much worse there. They've clearly improved the speed of the stack inspection since I last checked.
2. Rather verbose, so somewhat fails on the stated goal of improving readability
The "verbose" idea I propose is 3-4 characters more, per function call, than your `fun(a, b, *, this, that)` proposal. It will actually be shorter than your newer `fun(a, b, =this, =that)` proposal once you use 4 or more keyword arguments.
True enough.
3. Tooling* falls down very hard on this
It's true that tooling doesn't currently support my hypothetical function. It also does not support your hypothetical syntax.
If it was included in Python it would of course be added super fast, while the use() function would not. This argument is just bogus.
It would be *somewhat easier* to add special support for a function with a special name like `use()` than for new syntax. But obviously that varies by which tool and what purpose it is accomplishing.
Easier how? Technically? Maybe. Politically? Absolutely not. If it's in Python then all tools _must_ follow. This solves the political problem of getting tool support, and that is the only hard one. The technical problem is a rounding error in this situation.
Of course, PyCharm and MyPy and PyLint aren't going to bother special casing a `use()` function unless or until it is widely used and/or part of the builtins or standard library. I don't actually advocate for such inclusion, but I wouldn't be stridently against that since it's just another function name, nothing really special.
Ah, yea, I see here you're granting my point above. Good to see we can agree on this at least.

/ Anders

Benchmark code:
-----------------------

import inspect
from datetime import datetime

def reach(name):
    return inspect.stack()[-2][0].f_locals[name]

def use(names):
    kws = {}
    for name in names.split():
        kws[name] = reach(name)
    return kws

def function(a=11, b=22, c=33, d=44):
    pass

def foo():
    a, b, c = 1, 2, 3
    function(a=77, **use('b'))

c = 10000

start = datetime.now()
for _ in range(c):
    foo()
print('time with use: %s' % (datetime.now() - start))

def bar():
    a, b, c = 1, 2, 3
    function(a=77, b=b)

start = datetime.now()
for _ in range(c):
    bar()
print('time with standard kwargs: %s' % (datetime.now() - start))

def baz():
    a, b, c = 1, 2, 3
    function(77, b)

start = datetime.now()
for _ in range(c):
    baz()
print('time with positional args: %s' % (datetime.now() - start))

On Wed, Sep 26, 2018, 3:19 AM Anders Hovmöller <boxed@killingar.net> wrote:
Beyond the belligerent tone, is there an actual POINT here? It's the middle of the night and I'm on my tablet. I'm not sure what sort of error, or in what circumstance, my toy code "throws away" errors. Actually saying so rather than playing a coy guessing game would be helpful.
So for CPython 3.6 it's 2.587355/0.003079 = 840x slower
and pypy: 1.177555/0.002565 = 460x slower
Yes, for functions whose entire body consists of `pass`, adding pretty much any cost to unpacking the arguments will slow down the operation a lot. I'm actually slightly surprised the gap is not bigger in PyPy than in CPython. I'd kinda expect them to optimize away the entire call when a function call is a NOOP.

Anyway, this is 100% consistent with what I said. For functions with actual bodies, the lookup is negligible. It could be made a lot faster, I'm sure, if you wrote `use()` in C. Probably even just by optimizing the Python version (`reach()` doesn't need to be a separate call, for example; it's just better to illustrate that way).

Changing the basic syntax of Python to optimize NOOPs really is a non-starter. In general, changing syntax at all to avoid something easily accomplished with existing forms is, and should be, a very high barrier to cross.

I haven't used macropy. I should play with it. I'm guessing it could be used to create a zero-cost `use()` that had exactly the same API as my toy `use()` function. If so, you could start using and publishing a toy version today and provide the optimized version as an alternative also.
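One plausible way to do the in-Python optimization hinted at here (a sketch, not code from the thread): fold `reach()` into `use()` and fetch only the caller's frame with `sys._getframe`, instead of `inspect.stack()`, which builds frame-info records for the entire stack on every call:

```python
import sys

def use(names):
    # sys._getframe(1) is the frame of use()'s caller; reading its locals
    # directly avoids materializing the whole call stack.
    caller_locals = sys._getframe(1).f_locals
    return {name: caller_locals[name] for name in names.split()}

def function(a=11, b=22, c=33, d=44):
    return (a, b)

def demo():
    a, b, c = 1, 2, 3
    return function(a=77, **use('b'))

print(demo())  # (77, 2)
```

This still pays a per-call frame lookup, so it is cheaper rather than free; only a macro or AST approach moves the cost to import time entirely.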

I saw now that I missed the biggest problem with your proposal: yet again you deliberately throw away errors. I'm talking about making Python code _less_ error prone, while you seem to want to make it _more_.
Beyond the belligerent tone, is there an actual POINT here?
Yes, there is a point: you keep insisting that I shut up about my ideas, and you justify it by giving first totally broken code, then error-prone and slow code, and then you are upset that I point out these facts. I think it's a bit much when you complain about the tone after all that. Especially after you wrote "If someone steps out of line of being polite and professional, just ignore it" on the 9th of September in this very thread.
It's the middle of the night and I'm on my tablet.
Maybe you could just reply later?
I'm not sure what sort of error, or in what circumstance, my toy code "throws away" errors. Actually saying so rather than playing a coy guessing game would be helpful.
You explicitly wrote your code so that it tries to pass a local variable "d" that does not exist and that the function does not take, and it doesn't crash on that. I guess you forgot? You've done it several times now and I've already pointed this out.
If you add sleep(0.001) it's still a factor of 1.3! This is NOT a trivial overhead.
Changing the basic syntax of Python to optimize NOOPs really is a non-starter.
This is not a belligerent tone, you think?
In general, changing syntax at all to avoid something easily accomplished with existing forms is, and should be, a very high barrier to cross.
Sure. I'm not arguing that it should be a low barrier, I'm arguing that it's worth it. And I'm trying to discuss alternatives.
I haven't used macropy. I should play with it. I'm guessing it could be used to create a zero-cost `use()` that had exactly the same API as my toy `use()` function. If so, you could start using and publishing a toy version today and provide the optimized version as an alternative also.
Let me quote my mail from yesterday: "3. I have made a sort-of implementation with MacroPy: https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is a dead end, but it was easy to implement and fun to try!" Let me also clarify another point: I wanted to open up the discussion to people who are interested in the general problem and just discuss some ideas. I am not at this point trying to get a PEP through. If that was my agenda, I would have already submitted the PEP. I have not. / Anders

On Wed, Sep 26, 2018, 5:12 AM Anders Hovmöller <boxed@killingar.net> wrote:
That's fine. I'm not really bothered by your belligerent tone so much as I'm trying to find the point underneath it. I guess... and I'm just guessing from your hints... that you don't like the "default to None" behavior of my *TOY* code. That's fine. It's a throwaway demonstration, not an API I'm attached to.

You're new here. You may not understand that, in Python, we have a STRONG preference for doing things with libraries before changing syntax. The argument that one can do something using existing, available techniques is prima facie weight against new syntax. Obviously there ARE times when syntax is added, so the fact isn't an absolute conclusion. But so far, your arguments have only seemed to amount to "I (Anders) like this syntax." The supposed performance win, the brevity, and the hypothetical future tooling are just hand waving so far.

Oh, I see that you indeed implemented a macropy version at https://github.com/boxed/macro-kwargs/blob/master/test.py. Other than use() vs grab() as the function name, it's the same thing. Is it true that the macro version has no performance cost? So it's now perfectly straightforward to provide both a function and a macro for grab(), and users can play with that API, right? Without changing Python, programmers can use this "shortcut keyword arguments corresponding to local names."

On Wed, Sep 26, 2018, 5:31 AM David Mertz <mertz@gnosis.cx> wrote:

Oh, I see that you indeed implemented a macropy version at https://github.com/boxed/macro-kwargs/blob/master/test.py. Other than use() vs grab() as the function name, it's the same thing.
Well, except that it's import time, and you do get tooling checks that the local variables exist. You still don't get any check that the function has the parameters you're trying to match with your keyword arguments, so it's a bit of a half measure. Steve had a fun idea of using the syntax foo=☃, where you could transform ☃ to the real name at import time. It's similar to the MacroPy solution but can be implemented with a super tiny import hook doing an AST transformation, and you get the tooling part the MacroPy version is missing, but of course you lose the parts you get with MacroPy. So again it's a half measure... just the other half.
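A sketch of the AST-transformation half of that idea (the snowman replaced by `_` for ASCII, and all names hypothetical since the thread doesn't spell out a design): a real import hook would run this transformer over each module before compiling it; here it is applied to a source string directly.

```python
import ast

class ExpandKeywordShorthand(ast.NodeTransformer):
    # Rewrites f(a=1, b=_) into f(a=1, b=b): any keyword argument whose
    # value is the bare placeholder name `_` is replaced by a Name node
    # matching the keyword itself.
    def visit_Call(self, node):
        self.generic_visit(node)
        for kw in node.keywords:
            if kw.arg and isinstance(kw.value, ast.Name) and kw.value.id == '_':
                kw.value = ast.Name(id=kw.arg, ctx=ast.Load())
        return node

source = """
def f(a=None, b=None):
    return (a, b)

def caller():
    b = 42
    return f(a=1, b=_)
"""

tree = ExpandKeywordShorthand().visit(ast.parse(source))
ast.fix_missing_locations(tree)  # new Name nodes need line/column info
ns = {}
exec(compile(tree, '<shorthand>', 'exec'), ns)
print(ns['caller']())  # (1, 42)
```

Because the output is ordinary `f(b=b)` calls by the time it is compiled, anything downstream of the transform sees standard Python.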
Is it true that the macro version has no performance cost?
In Python 2, pretty much, since it's compile time. In Python 3, no, because MacroPy3 has a bug in how pyc files are cached (they aren't). But even in the Python 3 case the performance impact is at import time, so not really significant.
So it's now perfectly straightforward to provide both a function and a macro for grab(), and users can play with that API, right? Without changing Python, programmers can use this "shortcut keyword arguments corresponding to local names."
Sort of. But for my purposes I don't really think it's a valid approach. I'm working on a 240kloc code base (real lines, not comments and blank lines). I don't think it's a good idea, and I wouldn't be able to sell it to the team either, to introduce macropy into a significant enough chunk of the code base to make a difference. Plus the tooling problem mentioned above would make this worse than normal kwargs anyway from a robustness point of view.

I'm thinking that point 4 of my original list of ideas (PyCharm code folding) is the way to go. This would mean I could change huge chunks of code to the standard Python keyword argument syntax and then still get the readability improvement in my editor without affecting anyone else. It has the downside that you don't get to see this new syntax in other tools, of course, but I think that's fine for trying out the syntax.

The biggest problem I see is that I feel rather scared about trying to implement this in PyCharm. I've tried to find the code for soft line breaks to implement a much nicer version of that, but I ended up giving up because I just couldn't find where this happened in the code! My experience with the PyCharm code base is basically "I'm amazed it works at all!". If you know anyone who feels comfortable with the PyCharm code who could point me in the right direction, I would of course be very grateful!

/ Anders

participants (2):
- Anders Hovmöller
- David Mertz