Keyword only argument on function call

I have a working implementation for a new syntax which would make using keyword arguments a lot nicer. Wouldn't it be awesome if instead of:

    foo(a=a, b=b, c=c, d=3, e=e)

we could just write:

    foo(*, a, b, c, d=3, e)

and it would mean the exact same thing? This would not just be shorter but would create an incentive for consistent naming across the code base.

So the idea is to generalize the * keyword-only marker from function definitions to have the same meaning at the call site: everything after * is a keyword argument. With this feature we can simplify keyword arguments, making them more readable and concise. (This syntax does not conflict with existing Python code.)

The full PEP-style suggestion is here: https://gist.github.com/boxed/f72221e7e77370be3e5703087c1ba54d

I have also written an analysis tool you can use on your code base to see what kind of impact this suggestion might have. It's available at https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c . The results for Django and Twisted are posted as comments to the gist. We've run this on our two big code bases at work (both around 250 kloc excluding comments and blank lines). The results show that ~30% of all arguments would benefit from this syntax.

My colleague Johan Lübcke and I have also written an implementation that is available at: https://github.com/boxed/cpython

/ Anders Hovmöller
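[The kind of measurement the linked tool performs can be sketched with the `ast` module. This is my own approximation of the idea, not the actual gist code: it counts keyword arguments of the form `name=name` against all keyword arguments.]

```python
import ast

def count_matching_kwargs(source):
    """Count keyword arguments spelled name=name vs. all keyword arguments."""
    matching = total = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if kw.arg is None:  # a **kwargs expansion, not a named keyword
                    continue
                total += 1
                # name=name: the value is a bare Name identical to the keyword
                if isinstance(kw.value, ast.Name) and kw.value.id == kw.arg:
                    matching += 1
    return matching, total

print(count_matching_kwargs("foo(a=a, b=b, c=c, d=3)"))  # (3, 4)
```

Running a counter like this over a real code base gives the sort of percentage quoted above, though the real tool may count differently.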

On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovmöller wrote:
No.
This would not just be shorter but would create an incentive for consistent naming across the code base.
You say that as if consistent naming is *in and of itself* a good thing, merely because it is consistent. I'm in favour of consistent naming when it helps the code, when the names are clear and relevant. But why should I feel bad about failing to use the same names as the functions I call? If some library author names the parameter to a function "a", why should I be encouraged to use that same name *just for the sake of consistency*?
It's certainly more concise, provided those named variables already exist, but how often does that happen? You say 30% in your code base. (By the way, well done for writing an analysis tool! I mean it, I'm not being sarcastic. We should have more of those.)

I disagree that f(*, page) is more readable than an explicit named keyword argument f(page=page).

My own feeling is that this feature would encourage what I consider a code smell: function calls requiring large numbers of arguments. Your argument about being concise makes a certain amount of sense if you are frequently making calls like this:

    # choosing a real function, not a made-up example
    open(file, mode=mode, buffering=buffering, encoding=encoding,
         errors=errors, newline=newline, closefd=closefd, opener=opener)

If 30% of your function calls look like that, I consider it a code smell. The benefit is a lot smaller if your function calls look more like this:

    open(file, encoding=encoding)

and even less here:

    open(file, 'r', encoding=self.encoding or self.default_encoding,
         errors=self.errors or self.default_error_handler)

for example. To get benefit from your syntax, I would need to extract the arguments into temporary variables:

    encoding = self.encoding or self.default_encoding
    errors = self.errors or self.default_error_handler
    open(file, 'r', *, encoding, errors)

which completely cancels out the "conciseness" argument.

First version, with in-place arguments: 1 statement, 2 lines, 120 characters including whitespace.

Second version, with temporary variables: 3 statements, 3 lines, 138 characters including whitespace.

However you look at it, it's longer and less concise if you have to create temporary variables to make use of this feature.

-- Steve

On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano <steve@pearwood.info> wrote:
I've been asking this same question on the JavaScript/ES6 side of my work ever since unpacking was introduced there, which baked hash lookup into the unpacking at the syntax level. In that world it's had exactly this effect of encouraging "consistency" between local variable names and the parameters of called functions, and it certainly seems popular in that ecosystem. The practice still feels weird to me and I'm on the fence about it. Although, to be honest, I'm definitely leaning towards "No, actually, it is a good thing." I grew up, development-speaking, in the Python world with a strong emphasis drilled into me that style constraints make better code, and maybe this is just an extension of that. Of course, you might not always want the same name, but it is only encouraged, not required. You can always rename variables. That said... I'm not actually a fan of the specific suggested syntax:
foo(*, a, b, c, d=3, e)
I just wanted to give my two cents on the name consistency issue.

I'm trying to see how it can be done with current Python:

    from somelib import auto

    auto(locals(), function, 'a', 'b', 'c', d=5)
    auto(locals(), function).call('a', 'b', 'c', d=5)
    auto(locals(), function)('a', 'b', 'c', d=5)
    auto(locals()).bind(function).call('a', 'b', 'c', d=5)

One of those syntaxes for a class `auto` could be chosen; it lets you pass locals in at the call. However, locals() gives a copy of the variables at the moment it is called, so it must be passed at the call site, as this code illustrates:

    def f(x):
        y = x + 1
        a = locals()
        g = 4
        print(a)

    f(5)  # {'y': 6, 'x': 5}

On Thu, Sep 6, 2018 at 15:18, Calvin Spealman <cspealma@redhat.com> wrote:
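[A minimal runnable version of the first spelling might look like this. It is a sketch: `auto` and its exact signature are illustrative, not an existing library.]

```python
def auto(namespace, func, *names, **explicit):
    """Call func with name=namespace[name] for each name, plus explicit kwargs.

    The caller passes locals() so the helper can look the names up, since
    locals() is a snapshot of the calling scope at the moment of the call.
    """
    kwargs = {name: namespace[name] for name in names}
    kwargs.update(explicit)
    return func(**kwargs)

def function(a, b, c, d):
    return (a, b, c, d)

def demo():
    a, b, c = 1, 2, 3
    return auto(locals(), function, 'a', 'b', 'c', d=5)

print(demo())  # (1, 2, 3, 5)
```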

On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
Heh. I did expect the first mail to be uncivil :P
If it's the same thing yes. Otherwise no.
I'm in favour of consistent naming when it helps the code, when the names are clear and relevant.
Which is what I'm saying.
But why should I feel bad about failing to use the same names as the functions I call?
Yea, why would you feel bad? If you should have different names, then do. Of course.
It would encourage library authors to name their parameters well. It wouldn't do anything else.
(Caveat: 30% of the cases where my super simple and stupid tool can find.) It's similar for django btw.
I disagree that f(*, page) is more readable than an explicit named keyword argument f(page=page).
People prefer f(page) today. For some reason. That might refute your statement or not, depending on why they do it.
I don't see how that's relevant (or true, but let's stick with relevant). There are actual APIs that have lots of arguments. GUI toolkits are a great example. Another great example is sending a context dict to a template engine.

To get benefit from your syntax, I would need to
Ok. Sure, but that's a straw man.... / Anders

On 06/09/18 15:05, Anders Hovmöller wrote:
For comparison, my reaction did indeed involve awe. It was full of it, in fact :-p Sorry, but that syntax looks at best highly misleading -- how many parameters are we passing? I don't like it at all.
Actually you are not. Adding specific syntax support is a strong signal that you expect people to use it and (in this case) use consistent naming. Full stop. It's a much stronger statement than you seem to think.
Evidence?

-- Rhodri James, Kynesim Ltd

Rhodri James wrote:
that syntax looks at best highly misleading -- how many parameters are we passing? I don't like it at all.
Maybe something like this would be better:

    f(=a, =b, =c)

Much more suggestive that you're passing a keyword argument.

As for whether consistent naming is a good idea, it seems to me it's the obvious thing to do when e.g. you're overriding a method, to keep the signature the same for people who want to pass arguments by keyword. You'd need to have a pretty strong reason *not* to keep the parameter names the same. Given that, it's natural to want a way to avoid repeating yourself so much when passing them on.

So I think the underlying idea has merit, but the particular syntax proposed is not the best.

-- Greg
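[The status quo Greg describes, keeping an overriding method's parameter names identical to the parent's and then repeating each one when passing them on, can be sketched like this. The class and parameter names are illustrative, not from the thread.]

```python
class Parent:
    def spam(self, eggs, cheese, ham=None):
        return ('parent', eggs, cheese, ham)

class Child(Parent):
    # Same parameter names as Parent.spam, so keyword callers keep working;
    # passing them on today means writing each name twice -- the repetition
    # the proposal wants to shorten.
    def spam(self, eggs, cheese, ham=None):
        return super().spam(eggs=eggs, cheese=cheese, ham=ham)

print(Child().spam(1, 2))  # ('parent', 1, 2, None)
```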

Maybe something like this would be better:
f(=a, =b, =c)
Haha. Look at my PEP, it's under "rejected alternative syntax", because of the super angry replies I got on this very mailing list when I suggested this syntax a few years ago :P I think that syntax is pretty nice personally, but everyone at work I've discussed this with, and I, think the f(*, a, b, c) syntax is even nicer since it mirrors "def f(*, a, b, c)" so nicely. Most replies to my new syntax have been along the lines of "seems obvious" and "ooooh" :P

On Fri, Sep 7, 2018 at 04:49, Anders Hovmöller <boxed@killingar.net> wrote:
I must say I like the idea of being able to write it the way you propose. Sometimes we make a function only to be called once at a specific location, more as a matter of factoring out some functions for clarity. I've been doing that myself lately for scripting, and I think it'd increase clarity. However, it looks very much like f(a, b, c), which does something totally different. It -might- become something of a newb trap, as myfunc(*, a, b, c) would be 100% equal to myfunc(*, c, a, b), but that's not true for the f(c, a, b) case. I dislike the f(=arg) syntax.

I've seen beginners make the mistake of calling f(c, a, b) and being confused why it doesn't work the way they expected, so I think the newb trap might go in the other direction. If by "newb" one means "totally new to programming" then I think the keyword style is probably less confusing but if you come from a language with only positional arguments (admittedly most languages!) then the trap goes in the other direction. Of course, I don't have the resources or time to make a study about this to figure out which is which, but I agree it's an interesting question.

On Fri, Sep 7, 2018, 12:00 AM Jacco van Dorp <j.van.dorp@deonet.nl> wrote:
Sometimes we make a function only to be called once at a specific location, more because of factoring out some functions for clarity.
I've found myself making the opposite refactoring recently, improving clarity by eliminating unnecessary extra functions, where the local scope is passed to the helper function.

On Fri, Sep 07, 2018 at 10:39:07AM +1200, Greg Ewing wrote:
But the proposal isn't just for a way to avoid repeating oneself when overriding methods:

    class Parent:
        def spam(self, spam, eggs, cheese):
            ...

    class Child(Parent):
        def spam(self, foo, bar, baz):  # why the change in names?
            ...

I agree that inconsistency here is a strange thing to do, and it's a minor annoyance to have to manually repeat the names each time you override a class. Especially during rapid development, when the method signatures haven't yet reached a stable API. (But I don't know of any alternative which isn't worse, given that code is read far more often than it's written and we don't design our language to only be usable for people using IntelliSense.)

The proposal is for syntax to make one specific pattern shorter and more concise when *calling arbitrary functions*. Nothing to do with inheritance at all, except as a special case. It is pure syntactic sugar for one specific case, "name=name" when calling a function.

Syntactic sugar is great, in moderation. I think this is too much sugar for not enough benefit. But I acknowledge that's because little of my code uses that name=name idiom. (Most of my functions take no more than three arguments. I rarely need to use keywords, but when I do, they hardly ever end up looking like name=name. A quick and dirty manual search of my code suggests this would be useful to me in less than 1% of function calls.) But for those who use that idiom a lot, this may seem more appealing.

With the usual disclaimer that I understand it will never be manditory to use this syntax, nevertheless I can see it leading to the "foolish consistency" quote from PEP 8:

    "We have syntax to write shorter code, shorter code is better, so if
    we want to be Pythonic we must design our functions to use the same
    names for local variables as the functions we call."
    -- hypothetical blog post, Stackoverflow answer, opinionated tutorial, etc.

I don't think this is a pattern we want to encourage.
We have a confluence of a few code smells, each of which in isolation is not *necessarily* bad but often represents poor code:

- complex function signatures;
- function calls needing lots of arguments;
- needing to use keyword arguments (as otherwise the function call is too hard to read);
- a one-to-one correspondence between local variables and arguments;

and syntax designed to make this case easier to use, and hence discourage people from refactoring to remove the pain. (If they can.) I stress that none of these are necessarily poor code, but they are frequently seen in poor code.

As a simplified example:

    def function(alpha, beta, gamma):
        ...

    # later, perhaps another module
    def do_something_useful(spam, eggs, cheese):
        result = function(alpha=eggs, beta=spam, gamma=cheese)
        ...

In this case, the proposed syntax cannot be applied, but the argument from consistency would suggest that I ought to change the signature of do_something_useful so I can use the syntax:

    # consistency is good, m'kay?
    def do_something_useful(beta, alpha, gamma):
        result = function(*, alpha, beta, gamma)
        ...

Alternatively, I could keep the existing signature:

    def do_something_useful(spam, eggs, cheese):
        alpha, beta, gamma = eggs, spam, cheese
        result = function(*, alpha, beta, gamma)
        ...

To save seventeen characters on one line, the function call, we add an extra line and thirty-nine characters. We haven't really ended up with more concise code.

In practice, I think the number of cases where people *actually can* take advantage of this feature by renaming their own local variables or function parameters will be pretty small. (Aside from inheritance.) But given the "consistency is good" meme, I reckon people would be always looking for opportunities to use it, and sad when they can't. (I know that *I* would, if I believed that consistency was a virtue for its own sake. I think that DRY is a virtue, and I'm sad when I have to repeat myself.)
We know from other proposals [don't mention assignment expressions...] that syntax changes can be accepted even when they have limited applicability and can be misused. It comes down to a value judgement as to whether the pros are sufficiently pro and the cons insufficiently con. I don't think they are.

Pros:

- makes one specific, and probably unusual, pain point slightly less painful;
- rewards consistency in naming when consistency in naming is justified.

Cons:

- creates yet another special meaning for the * symbol;
- implicit name binding instead of explicit;
- discourages useful refactoring;
- potentially encourages a bogus idea that consistency is a virtue for its own sake, regardless of whether it makes the code better or not;
- similarly, it rewards consistency in naming even when consistency in naming is not needed or justified;
- it's another thing for people to learn, more documentation needed, extra complexity in the parser, etc.;
- it may simply *shift* complexity, being even more verbose than the status quo under some circumstances.

-- Steve

Steve wrote:
-- hypothetical blog post, Stackoverflow answer, opinionated tutorial, etc.
I don't think this is a pattern we want to encourage.
Steve's "hypothetical blog post" is a pattern he doesn't like, and he said that it's not a pattern we want to encourage. And he proceeds to demolish this pattern in the rest of his post.

According to https://en.wikipedia.org/wiki/Straw_man

<quote>
The typical straw man argument creates the illusion of having completely refuted or defeated an opponent's proposition through the covert replacement of it with a different proposition (i.e., "stand up a straw man") and the subsequent refutation of that false argument ("knock down a straw man") instead of the opponent's proposition.
</quote>

So what was the original proposition? I summarise from the original post. It was to allow

    foo(*, a, b, c, d=3, e)

as a shorthand for

    foo(a=a, b=b, c=c, d=3, e=e)

And also that on two big code bases about 30% of all arguments would benefit from this syntax. And also that it would create an incentive for consistent naming across the code base.

To me, the "30% of all arguments" deserves more careful examination. Does the proposal significantly improve the reading and writing of this code? And are there other, perhaps better, ways of improving this code? I'm very keen to dig into this. I'll start a new thread for this very topic.

-- Jonathan

Maybe my tool should be expanded to produce more nuanced data? Like how many of those 30% are:

- arity 1, 2, 3, etc.? (Arity 1 maybe should be discarded as being counted unfairly? I don't think so, but some clearly do.)
- matches 1 argument, 2, 3, 4, etc.? Matching just one is of less value than matching 5.

Maybe some other statistics?

/ Anders

A finer-grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do. I.e. this is probably a small win:

    # function(a=a, b=b, c=c, d=d)
    function(*, a, b, c, d)

But this feels like it invites confusion and bugs:

    # function(a=my_a, b=b, c=my_c, d=d)
    function(*, a=my_a, b, c=my_c, d)

I recognize that if the syntax were added it wouldn't force anyone to use the second version... But that means no one who WRITES the code. As a reader I would certainly have to parse some of the bad uses along with the good ones. I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions.

On Sat, Sep 8, 2018, 8:24 AM Anders Hovmöller <boxed@killingar.net> wrote:

A finer grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do.
I’ll try to find some time to tune it when I get back to work then.
That example could also be rewritten as

    function(a=my_a, c=my_c, *, b, d)

or

    function(*, b, c, d, a=my_a, c=my_c)

Both are much nicer imo. Hmmm... maybe my suggestion is actually better if the special case applies only after *, so the first of those is legal and the rest not. Hadn't considered that option before now.
I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions.
Stronger in what direction? :P / Anders

On Sat, Sep 8, 2018 at 9:34 AM Anders Hovmöller <boxed@killingar.net> wrote:
function(a=my_a, c=my_c, *, b, d) function(*, b, c, d, a=my_a, c=my_c)
Yes, those look less bad. They also almost certainly should get this message rather than working:

    TypeError: function() got multiple values for keyword argument 'c'

But they also force changing the order of keyword arguments in the call. That doesn't do anything to the *behavior* of the call, but it often affects readability. For functions with lots of keyword arguments there is often a certain convention about the order they are passed in that readers expect to see. Those examples of opening and reading files that several people have given are good examples of this. I.e. most optional arguments are not used, but when they are used they have certain relationships among them that lead readers to expect them in a certain order.

Here's a counter-proposal that does not require any new syntax. Is there ANYTHING your new syntax would really get you that this solution does not accomplish?! (Other than save 4 characters; fewer if you came up with a one-character name for the helper.)
We could implement this helper function like this:
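[A minimal version of such a `use()` helper might look like this. This is a sketch only: the helper's name, its space-separated-string argument, and the raise-NameError choice follow the description in the follow-up message, but the code actually posted may have differed.]

```python
import inspect

def use(names):
    """Return {name: value} for each space-separated name, looked up in the
    caller's scope, so function(**use('a b c')) behaves like
    function(a=a, b=b, c=c)."""
    caller = inspect.currentframe().f_back
    scope = {**caller.f_globals, **caller.f_locals}
    result = {}
    for name in names.split():
        if name not in scope:
            # The alternative choice discussed below would be: result[name] = None
            raise NameError(name)
        result[name] = scope[name]
    return result

def function(a, b, c, d):
    return (a, b, c, d)

def demo():
    a, b, c = 1, 2, 3
    return function(d=4, **use('a b c'))

print(demo())  # (1, 2, 3, 4)
```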
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

I'm not sure whether my toy function is better to assume None for a name that is "used" but does not exist, or to raise a NameError. I can see arguments in both directions, but either behavior is a very small number of lines (and the same decision exists for the proposed syntax). You might also allow the `use()` function to take some argument(s) other than a space-separated string, but that's futzing with a demonstration API. On Sat, Sep 8, 2018 at 10:05 AM David Mertz <mertz@gnosis.cx> wrote:

On Sat, Sep 8, 2018, 6:34 AM Anders Hovmöller <boxed@killingar.net> wrote:
Even better would be to show full context on one or a few cases where this syntax helps. I've found that many proposals in this mailing list have better solutions when one can see the complete code. If your proposal seems like the best solution after seeing the context, that can be more compelling than some assertion about 30% of parameters. If you can't share proprietary code, why not link to a good example in the Django project? If nothing else, maybe Django could get a pull request out of this.

I've updated the tool to also print statistics on how many arguments there are at the places where it can perform the analysis. I also added statistics on the length of the variable names it finds. I'm pretty sure almost all places that pass variable names of length 1 or 2 would be better off if the names had been synchronized. Those places are also an argument for my suggestion, I think, because if you gain something by synchronizing, then you're less likely to shorten variable names down to 1 or 2 characters for brevity. Maybe...

If you exclude calls to functions with just one argument (not parameter) then the hit percentage on the code base at work drops from ~36% to ~31%. Not a big difference overall.

I've updated the gist: https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c

/ Anders

On Sat, Sep 08, 2018 at 12:05:33PM +0100, Jonathan Fine wrote:
This is called Poisoning the Well. You have carefully avoided explicitly accusing me of making a straw man argument while nevertheless making a completely irrelevant mention of it, associating me with the fallacy. That is not part of an honest or open discussion.

Anders made a proposal for a change in syntax. I made a prediction of the possible unwelcome consequences of that suggested syntax. In no way, shape or form is that a straw man. To give an analogy:

Politician A: "We ought to invade Iranistan, because reasons."

Politician B: "If we do that, it will cost a lot of money, people will die, we'll bring chaos to the region leading to more terrorism, we might not even accomplish our aims, and our international reputation will be harmed."

Politician A: "That's a straw man! I never argued for those bad things. I just want to invade Iranistan."

Pointing out unwelcome consequences of a proposal is not a straw man.

-- Steve

I read that as him accusing you very directly.
You kept saying I was “forcing” people to use the new syntax. You said it over and over even after we pointed out this was not the actual suggestion. This is a classic straw man. But ok, let’s be more charitable and interpret it as you wrote it later: that it won’t be forcing per se, but that the feature will be *so compelling* it will be preferred at all times over both normal keyword arguments *and* positional arguments.

For someone who doesn’t like the proposal you seem extremely convinced that everyone else will think it’s so super awesome they will actually try to force it on their colleagues etc. I like my proposal obviously, but even I don’t think it’s *that* great. It would almost certainly become the strongly preferred way to do it for some cases like .format() and sending a context to a template renderer in web apps. But that’s because in those cases it is very important to match the names.

/ Anders

On Sun, Sep 9, 2018 at 3:37 PM, Anders Hovmöller <boxed@killingar.net> wrote:
Creating a new and briefer syntax for something is not actually *forcing* people to use it, but it is an extremely strong encouragement. It's the language syntax yelling "HERE! DO THIS!". I see it all the time in JavaScript, where ES2015 introduced a new syntax {name} equivalent to {"name":name} - people will deliberately change their variable names to match the desired object keys. So saying "forcing" is an exaggeration, but a very slight one. ChrisA

On Sun, Sep 9, 2018 at 5:32 PM, Anders Hovmöller <boxed@killingar.net> wrote:
Often neutral, sometimes definitely evil. Pretty much never good. That said, my analysis is skewed towards the times when (as an instructor) I am asked to assist - the times when a student has run into trouble. But even compensating for that, I would say that the balance still tips towards the bad. ChrisA

On Sun, Sep 09, 2018 at 07:37:21AM +0200, Anders Hovmöller wrote:
Okay.
Over and over again, you say. Then it should be really easy for you to link to a post from me saying that. I've only made six posts in this thread (seven including this one) so it should only take you a minute to justify (or retract) your accusation: https://mail.python.org/pipermail/python-ideas/2018-September/author.html

Here are a couple of quotes to get you started:

    Of course I understand that with this proposal, there's nothing
    *forcing* people to use it.
    https://mail.python.org/pipermail/python-ideas/2018-September/053282.html

    With the usual disclaimer that I understand it will never be
    manditory [sic] to use this syntax ...
    https://mail.python.org/pipermail/python-ideas/2018-September/053257.html
Vigorous debate is one thing. Misrepresenting my position is not. This isn't debate club where the idea is to win by any means, including by ridiculing exaggerated versions of the other side's argument. (There's a name for that fallacy, you might have heard of it.) We're supposed to be on the same side, trying to determine what is the best features for the language. We don't have to agree on what those features are, but we do have to agree to treat each other's position with fairness. -- Steve

Can we all just PLEASE stop the meta-arguments enumerating logical fallacies and recriminating about who made it personal first?! Yes, let's discuss specific proposals and alternatives, and so on. If someone steps out of line of being polite and professional, just ignore it. On Sun, Sep 9, 2018, 8:52 AM Steven D'Aprano <steve@pearwood.info> wrote:

On Sun, Sep 9, 2018 at 7:37 AM, Anders Hovmöller <boxed@killingar.net> wrote:

I've spent this whole thread thinking: "Who in the world is writing code with a lot of spam=spam arguments? If you are transferring that much state in a function call, maybe you should have a class that holds that state? Or pass in a **kwargs dict?"

Note: I write a lot of methods (mostly __init__) with a lot of keyword parameters -- but they all tend to have sensible defaults, and/or will have many values specified by literals. Then this:
OK -- those are indeed good use cases, but:

For .format() -- that's why we now have f-strings -- done.

For templates -- are you really passing all that data in from a bunch of variables?? As opposed to, say, a dict? That strikes me as getting code and data confused (which is sometimes hard not to do...).

So still looking for a compelling use-case

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov

On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote:
So still looking for a compelling use-case
In my day job I spend a lot of time writing/customizing modules for a framework called OpenERP (now Odoo*). Those modules are all subclasses, and most work will require updating at least a couple of parent methods -- so most calls look something like:

    def a_method(self, cr, uid, ids, values, context=None):
        ...
        super(self, parent).a_method(cr, uid, ids, values, context=context)

Not a perfect example as these can all be positional, but it's the type of code where this syntax would shine.

I think, however, that we shouldn't worry about a leading * to activate it; just use a leading '=' and let it show up anywhere, following the same semantics/restrictions as current positional vs keyword args:

    def example(filename, mode, spin, color, charge, orientation):
        pass

    example('a name', 'ro', =spin, =color, charge=last, =orientation)

So +0 with the above proposal.

--
~Ethan~

On 10/09/2018 22:00, Ethan Furman wrote:
Couldn't just about all of the use cases mentioned so far be met in quite a neat manner by providing access to a dictionary, called __params__, giving the parameters as supplied in the call (or filled in by the defaults)? If this were accessible externally, as fn.__defaults__ is, then examples such as:
would become:

    def a_method(self, cr, uid, ids, values, context=None):
        ...
        params = {k: v for k, v in __params__.items()
                  if k in inspect.signature(parent.a_method).parameters}
        # Possibly add some additional entries here!
        super(self, parent).a_method(**params)

-- Steve (Gadget) Barnes

Any opinions in this message are my personal opinions and do not reflect those of my employer.

On Tue, Sep 11, 2018 at 06:48, Steve Barnes <gadgetsteve@live.co.uk> wrote:
So... deep black magic? That's what this looks like. Having =spam for same-named kwargs sounds easier to comprehend for new people than a __magic__ object you can only access in function bodies and that will give headaches if you have to write decorators:

    def other_function_defaults(*args, **kwargs):
        outer_params = __params__.copy()
        def deco(func):
            def inner(self, yo_momma):
                # overwrite with specifically provided arguments
                return func(self, **outer_params, **__params__)
            return inner
        return deco

I think that magic objects like that aren't really pythonic -- if they were, "self" would be the same kind of magic, instead of us having to name it on every function definition (a decision I'm really a fan of, tbh).

My 3 cents:

1. My most objective objection against the f(*, foo, bar, baz) syntax is that it looks like positional arguments, and the syntactic marker * which dissuades you of that can be arbitrarily far away from the keyword.

2. The syntax f(=foo, =bar, =baz) at least solves that problem. Otherwise I find it quite ugly with the unbalanced =, but that is obviously more subjective.

3. I still am not convinced it is needed at all. IMHO, if your code is filled with f(foo=foo, bar=bar, baz=baz) then perhaps Python is telling you that foo, bar and baz want to become fields in a new object which you should pass around.

4. (Bonus cent) Somewhat tongue-in-cheek, I offer the following Vim mapping for those who find themselves typing longword=longword all the time:

    :inoremap <F8> =<Esc>hyiwt=lpa

Now you can just type longword<F8>.

Stephan

On Tue, Sep 11, 2018 at 08:55, Jacco van Dorp <j.van.dorp@deonet.nl> wrote:

On Tue, Sep 11, 2018 at 04:47:37AM +0000, Steve Barnes wrote:
I imagine it would be fairly easy to fill in such a special __params__ local variable when the function is called. The interpreter already has to process the positional and keyword arguments; it probably wouldn't be that hard to add one more implicitly declared local and fill it in:

    def function(spam, eggs, *args):
        print(__params__)

    function(2, 6, 99, 100)
    # prints {'spam': 2, 'eggs': 6, '*args': (99, 100)}

But this has some problems:

(1) It might be cheap, but it's not free. Function calling in Python is already a minor bottleneck; having to populate one more local whether it is needed or not can only make it slower, not faster.

(2) It leads to the same gotchas as locals(). What happens if you assign to the __params__ dict? What happens when the parameters change their local value? The __params__ dict probably won't change. (Like locals(), I expect that will depend on the interpreter.)
If this was accessible externally, as fn.__defaults__ is then examples such as:
Defaults are part of the function definition and are fixed when the function is created. The values assigned to parameters change every time you call the function, whether you need them or not. For non-trivial applications with many function calls, that's likely to add up to a measurable slow-down. It's also going to suffer from race conditions, unless someone much cleverer than me can think of a way to avoid them which doesn't slow down function calls even more:

- I call function(a=1, b=2);
- function.__params__ is set to {'a': 1, 'b': 2};
- meanwhile another thread calls function(a=98, b=99);
- setting function.__params__ to {'a': 98, 'b': 99};
- and I then access function.__params__, getting the wrong values.

I think that __params__ as an implicitly created local variable is just barely justifiable, if you don't care about slowing down all function calls for the benefit of a tiny number of them. But exposing that information as an externally visible attribute of the function object is probably unworkable and unnecessary.

-- Steve

On Tue, Sep 11, 2018 at 9:34 PM, Steven D'Aprano <steve@pearwood.info> wrote:
Rather than slowing down ALL function calls, you could slow down only those that use it. The interpreter could notice the use of the name __params__ inside a function and go "oh, then I need to include the bytecode to create that". It'd probably need to be made a keyword, or at least unassignable, to ensure that you never try to close over the __params__ of another function, or declare "global __params__", or anything silly like that. I'm still -1 on adding it, though. ChrisA

Summary: locals() and the suggested __params__ are similar, and roughly speaking each can be implemented from the other. Experts / pedants would prefer not to use the name __params__ for this purpose. Steve D'Aprano wrote:
[snip]
As far as I know, locals() does not suffer from a race condition. But it's not a local variable. Rather, it's a function that returns a dict, hence avoiding the race condition. Python has some keyword identifiers. Here's one:

>>> __debug__ = 1
SyntaxError: assignment to keyword

Notice that this is a SYNTAX error. If __params__ were similarly a keyword identifier, then it would avoid the race condition. It would simply be a handle that allows, for example, key-value access to the state of the frame on the execution stack. In other words, a lower-level object from which locals() could be built.

By the way, according to
https://www.quora.com/What-is-the-difference-between-parameters-and-argument...
<quote>
A parameter is a variable in a method definition. When a method is called, the arguments are the data you pass into the method's parameters. Parameter is variable in the declaration of function. Argument is the actual value of this variable that gets passed to function.
</quote>

In my opinion, the technically well-informed would prefer something like __args__ or __locals__ instead of __params__ for the current purpose. Finally, __params__ would simply be the value of __locals__ before any assignment has been done. Here's an example:

>>> def fn(a, b, c):
...     lcls = locals()
...     return lcls
...
>>> fn(1, 2, 3)
{'c': 3, 'b': 2, 'a': 1}

Note: Even though lcls is the identifier for a local variable, at the time locals() is called the lcls identifier is unassigned, so it is not picked up by locals().

So far as I can tell, __params__ and locals() can be implemented in terms of each other. There could be practical performance benefits in providing the lower-level command __params__ (but with the name __locals__ or the like).

-- Jonathan

I wrote:
Following this up, I did a search for "__locals__" Python. The most interesting link I found was <quote> Implement PEP 422: Simple class initialisation hook https://bugs.python.org/issue17044#msg184195 Nick Coghlan wrote: Oh, that's bizarre - the presence of __locals__ is a side effect of calling locals() in the class body. So perhaps passing the namespace as a separate __init_class__ parameter is a better option. </quote> So it looks like (i) there's some complexity associated with locals(), and (ii) if we wish, it seems that __locals__ is available as a keyword identifier. Finally, another way to see that there's no race condition. The Python debugger supports inspection of stack frames. And it's a pure Python module. https://docs.python.org/3/library/pdb.html https://github.com/python/cpython/tree/3.7/Lib/pdb.py -- Jonathan

On Tue, Sep 11, 2018 at 04:57:16PM +0100, Jonathan Fine wrote:
Summary: locals() and the suggested __params__ are similar, and roughly speaking each can be implemented from the other.
You cannot get a snapshot of the current locals just from the function parameters, since the current locals will include variables which aren't parameters. Likewise you cannot get references to the original function parameters from the current local variables, since the params may have been re-bound since the call was made. (Unless you can guarantee that locals() is immediately called before any new local variables were created, i.e. on entry to the function, before any other code can run. As you point out further below.) There's a similarity only in the sense that parameters of a function are included as local variables, but the semantics of __params__ as proposed and locals() are quite different. They might even share some parts of implementation, but I don't think that really matters one way or another. Whether they do or don't is a mere implementation detail.
Experts / pedants would prefer not to use the name __params__ for this purpose.
I consider myself a pedant (and on a good day I might pass as something close to an expert on some limited parts of Python) and I don't have any objection to the *name* __params__. From the perspective of *inside* a function, it is a matter of personal taste whether you refer to parameter or argument:

def func(a):
    # in the declaration, "a" is a parameter
    # inside the running function, once "a" has a value set,
    # it's a matter of taste whether you call it a parameter
    # or an argument or both; I suppose it depends on whether
    # you are referring to the *variable* or its *value*
    ...

# but here 1 is the argument bound to the parameter "a"
result = func(1)

It is the semantics that I think are problematic, not the choice of name.
Indeed. Each time you call locals(), it returns a new dict with a snapshot of the current local namespace. Because it all happens inside the same function call, no external thread can poke inside your current call to mess with your local variables.

But that's different from setting function.__params__ to passed-in arguments. By definition, each external caller is passing in its own set of arguments. If you have three calls to the function:

function(a=1, b=2)  # called by A
function(a=5, b=8)  # called by B
function(a=3, b=4)  # called by C

In single-threaded code, there's no problem here:

- A makes the first call; the interpreter sets function.__params__ to A's arguments; the function runs with A's arguments and returns;
- only then can B make its call; the interpreter sets function.__params__ to B's arguments; the function runs with B's arguments and returns;
- only then can C make its call; the interpreter sets function.__params__ to C's arguments; the function runs with C's arguments and returns.

But in multi-threaded code, unless there's some form of locking, the three sets can interleave in any unpredictable order, e.g.:

- A makes its call; B makes its call;
- the interpreter sets function.__params__ to B's arguments;
- the interpreter sets function.__params__ to A's arguments;
- the function runs with B's arguments and returns;
- C makes its call; the interpreter sets function.__params__ to C's arguments;
- the function runs with A's arguments and returns;
- the function runs with C's arguments and returns.

We could solve this race condition with locking, or by making the pair of steps:

- the interpreter sets function.__params__;
- the function runs and returns

a single atomic step. But that introduces a deadlock: once A calls function(), threads B and C will pause (potentially for a very long time) waiting for A's call to complete, before they can call the same function. I'm not an expert on threaded code, so it is possible I've missed some non-obvious fix for this, but I expect not.
In general, solving race conditions without deadlocks is a hard problem.
The problem isn't because the caller assigns to __params__ manually. At no stage does Python code need to try setting "__params__ = x"; in fact, that ought to be quite safe because it would only be a local variable. The race condition problem comes from trying to set function.__params__ on each call, even if it's the interpreter doing the setting.
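The bad interleaving described above can be simulated deterministically, without real threads, to show why a single function.__params__ attribute is shared mutable state. This is a hypothetical sketch: __params__ here is just an ordinary function attribute that we set by hand, standing in for what the interpreter would do:

```python
def function(a, b):
    # reads the shared attribute rather than this call's own arguments
    return function.__params__

# Simulate the race: B's write lands after A's write but before A's body runs.
function.__params__ = {'a': 1, 'b': 2}    # interpreter acting for caller A
function.__params__ = {'a': 98, 'b': 99}  # interpreter acting for caller B
seen_by_A = function(1, 2)                # A's body runs with B's snapshot
```

A was called with a=1, b=2 but observes B's arguments, which is exactly the wrong-values outcome the post describes.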
That wouldn't have the proposed semantics. __params__ is supposed to be a dict showing the initial values of the arguments passed in to the function, not merely a reference to the current frame. [...]
Oh well, that puts me in my place :-) I have no objection to __args__, but __locals__ would be very inappropriate, as locals refers to *all* the local variables, not just those which are declared as parameters. (Parameters are a *subset* of locals.)
Finally, __params__ would simply be the value of __locals__ before any assignment has been done.
Indeed. As Chris (I think it was) pointed out, we could reduce the cost of this with a bit of compiler magic. A function that never refers to __params__ would run just as it does today:

def func(a):
    print(a)

might look something like this:

  2           0 LOAD_GLOBAL              0 (print)
              2 LOAD_FAST                0 (a)
              4 CALL_FUNCTION            1
              6 POP_TOP
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

just as it does now. But if the compiler sees a reference to __params__ in the body, it could compile in special code like this:

def func(a):
    print(a, __params__)

  2           0 LOAD_GLOBAL              0 (locals)
              2 CALL_FUNCTION            0
              4 STORE_FAST               1 (__params__)

  3           6 LOAD_GLOBAL              1 (print)
              8 LOAD_FAST                0 (a)
             10 LOAD_FAST                1 (__params__)
             12 CALL_FUNCTION            2
             14 POP_TOP
             16 LOAD_CONST               0 (None)
             18 RETURN_VALUE

Although more likely we'd want a special op-code to populate __params__, rather than calling the built-in locals() function. I don't think that's a bad idea, but it does add more compiler magic, and I'm not sure that there is sufficient justification for it.

-- Steve
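Short of compiler magic, a decorator can produce the same per-call snapshot today using inspect.signature. A sketch of the intended semantics only: the explicit 'params' keyword parameter is my own device for illustration, not part of the proposal:

```python
import functools
import inspect

def with_params(func):
    """Pass a dict of the call's bound arguments as an extra 'params' kwarg."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        # exclude our own channel so the snapshot only holds real arguments
        snapshot = {k: v for k, v in bound.arguments.items() if k != "params"}
        return func(*args, params=snapshot, **kwargs)

    return wrapper

@with_params
def func(a, b=2, *, params=None):
    # 'params' plays the role the proposal gives to __params__
    return params

captured = func(1)
```

Because the snapshot lives in the call frame rather than on the function object, there is no shared attribute to race on; the cost is paid only by decorated functions, mirroring the "only slow down functions that use it" idea.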

Steve Barnes suggested adding __params__, as in
Steve D'Aprano commented
I'm puzzled here. Steve B provided the code fragment

    for k,v in __params__

while Steve D provided the code fragment

    function.__params__

by which I think he meant, in terms of Steve B's example,

    a_method.__params__

Perhaps Steve D thought Steve B wrote:

def a_method(self, cr, uid, ids, values, context=None):
    ...
    params = {k: v for k, v in a_method.__params__  # Is this what Steve D thought Steve B wrote?
              if k in parent.a_method.keys()}
    # Possibly add some additional entries here!
    super(self, parent).a_method(**params)

If Steve B had written this, then I would agree with Steve D's comment. But as it is, I see no race condition problem, should __params__ be properly implemented as a keyword identifier. Steve D: Please clarify or explain your use of function.__params__. Perhaps it was a misunderstanding.

By the way: I've made a similar mistake, on this very thread. So I hope no great shame is attached to such errors.

<quote>
https://mail.python.org/pipermail/python-ideas/2018-September/053224.html
Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem. Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders.
</quote>

-- Jonathan

On Wed, Sep 12, 2018 at 02:23:34PM +0100, Jonathan Fine wrote:
In context, what Steve Barnes said was:

    If this [__params__] was accessible externally, as fn.__defaults__ is [...]

https://mail.python.org/pipermail/python-ideas/2018-September/053322.html

Here is the behaviour of fn.__defaults__:

py> def fn(a=1, b=2, c=3):
...     pass
...
py> fn.__defaults__
(1, 2, 3)

Notice that it is an externally accessible attribute of the function object. If that's not what Steve Barnes meant, then I have no idea why fn.__defaults__ is relevant or what he meant.

I'll confess that I couldn't work out what Steve's code snippet was supposed to mean:

params = {k:v for k,v in __params__ if k in parent.a_method.keys()}

Does __params__ refer to the currently executing a_method, or the superclass method being called later on in the line? Why doesn't parent.a_method have parens? Since parent.a_method probably isn't a dict, why are we calling keys() on a method object? The whole snippet was too hard for me to comprehend, so I went by the plain meaning of the words he used to describe the desired semantics.

If __params__ is like fn.__defaults__, then that would require setting fn.__params__ on each call. Perhaps I'm reading too much into the "accessible externally" part, since Steve's example doesn't seem to actually be accessing it externally.

-- Steve

Hi Steve Thank you for your prompt reply. You wrote:
I'll confess that I couldn't work out what Steve B's code snippet was supposed to mean:
params = {k:v for k,v in __params__ if k in parent.a_method.keys()}
The Zen of Python (which might not apply here) says: In the face of ambiguity, refuse the temptation to guess. Now that we have more clarity, Steve D'A, please let me ask you a direct question. My question is about correctly implementing __params__ as a keyword identifier, with semantics as in Steve B's code snippet above. Here's my question: Do you think implementing this requires the avoidance of a race hazard? Or perhaps it can be done, as I suggested, entirely within the execution frame on the stack? -- Jonathan

On Wed, Sep 12, 2018 at 03:58:25PM +0100, Jonathan Fine wrote:
My question is about correctly implementing __params__ as a keyword identifier, with semantics as in Steve B's code snippet above.
The semantics of Steve's code snippet are ambiguous.
Here's my question: Do you think implementing this requires the avoidance of a race hazard?
I don't know what "this" is any more. I thought Steve wanted an externally accessible fn.__params__ dict, as that's what he said he wanted, but his code snippet doesn't show that. If there is no externally accessible fn.__params__ dict, then there's no race hazard. I see no reason why a __params__ local variable would be subject to race conditions. But as you so rightly quoted the Zen at me for guessing in the face of ambiguity, without knowing what Steve intends, I can't answer your question.

As a purely internal local variable, it would still have the annoyance that writing to the dict might not actually affect the local values, the same issue that locals() has. But if we cared enough, we could make the dict a proxy rather than a real dict. I see no reason why __params__ must be treated as a special keyword, like __debug__, although given that it is involved in special compiler magic, that might be prudent. (Although, in sufficiently old versions of Python, even __debug__ was just a regular name.)
Or perhaps it can be done, as I suggested, entirely within the execution frame on the stack?
Indeed. Like I said right at the start, there shouldn't be any problem for the compiler adding a local variable to each function (or just when required) containing the initial arguments bound to the function parameters. *How* the compiler does it, whether it is done during compilation or on entry to the function call, or something else, is an implementation detail which presumably each Python interpreter can choose for itself. All of this presumes that it is a desirable feature. -- Steve

On 12/09/2018 16:38, Steven D'Aprano wrote:
Hi,

My intent with __params__ (or whatever it might end up being called) was to provide a mechanism whereby we could:

a) find out, before calling, which parameters a function/method accepts (just as __defaults__ gives us which values the function/method has defaults for, so does not require in every call). Since this would normally be a compile-time operation I do not anticipate any race conditions. I suspect that this would also be of great use to IDE authors and others, as well as the use case on this thread.

b) have a convenient mechanism for accessing all of the supplied parameters/arguments (whether actually given or from defaults) from within the function/method: both the parameter names and the values supplied at the time of the specific call. The example I gave was a rough and ready filtering of the outer function's parameters down to those that are accepted by the function that is about to be called (I suspect that __locals__() might have been a better choice here). I don't anticipate race conditions here either, as the values would be local at this point.

The idea was to provide a similar mechanism to the examples of functions that accept a list and dictionary in addition to the parameters that they do consume, so as to be able to work with parameter lists/dictionaries that exceed the requirements. The difference is that, since we can query the function/method for what parameters it accepts and filter what we have to match, we do not need to alter the signature of the called item. This is important when providing wrappers for code that we do not have the freedom to alter.

I have done a little testing and found that:

a) if we have fn(a, b, c) and call it with fn(b=2, c=3, a=1) it is quite happy and assigns the correct values, so constructing a dictionary that satisfies all of the required parameters and calling with fn(**the_dict) is fine.

b) Calling dir() or __locals__() on the first line of the function gives the required information (but blocks the docstring, which would be a bad idea).

The one worry is how to get the required parameter/argument list for overloaded functions or methods, but AFAIK these are all calls to wrapped C/C++/other items so already take (*args, **kwargs) inputs. I would guess that we would need some sort of indicator for this type of function.

I hope I have made my thoughts clearer rather than muddier :-) thank you all for taking the time to think about this.

-- Steve (Gadget) Barnes

Any opinions in this message are my personal opinions and do not reflect those of my employer.
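The filtering use case described above, passing on only those arguments the callee accepts, can be done today with inspect.signature, no new syntax required. A sketch with hypothetical functions (call_with_accepted and a_method are illustrative names, not from the thread's real code):

```python
import inspect

def call_with_accepted(func, **available):
    """Call func with only those of the available kwargs that it accepts."""
    accepted = inspect.signature(func).parameters
    return func(**{k: v for k, v in available.items() if k in accepted})

# hypothetical callee that accepts a subset of what the caller holds
def a_method(cr, uid, context=None):
    return (cr, uid, context)

# 'values' is silently dropped because a_method does not accept it
result = call_with_accepted(a_method, cr="CR", uid=42,
                            values={"x": 1}, context="ctx")
```

Note this sketch would not see parameters hidden behind a bare (*args, **kwargs) signature, which is exactly the overloaded/wrapped-C case flagged as a worry above.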

On Wed, Sep 12, 2018 at 06:59:44AM -0700, Ethan Furman wrote:
[...]
I'm finding it hard to understand the documentation for threading.local():

https://docs.python.org/3/library/threading.html#threading.local

as there isn't any *wink*, although it does refer to the docstring of a private implementation module. But I can't get it to work. Perhaps I'm doing something wrong:

import time
from threading import Thread, local

def func():
    pass

def attach(value):
    func.__params__ = local()
    func.__params__.value = value

def worker(i):
    print("called from thread %s" % i)
    attach(i)
    assert func.__params__.value == i
    time.sleep(3)
    value = func.__params__.value
    if value != i:
        print("mismatch", i, value)

for i in range(5):
    t = Thread(target=worker, args=(i,))
    t.start()

print()

When I run that, each of the threads prints its "called from ..." message, the assertions all pass, then a couple of seconds later they consistently all raise exceptions:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "<stdin>", line 5, in worker
AttributeError: '_thread._local' object has no attribute 'value'

In any case, if Steve Barnes didn't actually intend for __params__ to be attached to the function object as an externally visible attribute, the whole point is moot.

-- Steve
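For what it's worth, the failure above comes from creating a fresh local() inside every attach() call, so each thread's later read hits a brand-new instance with no .value set for that thread. threading.local works when the instance is created once and shared; then its attributes are per-thread. A minimal sketch:

```python
import threading

storage = threading.local()  # created once and shared; attributes are per-thread
results = {}

def worker(i):
    storage.value = i             # each thread writes its own copy
    results[i] = storage.value    # and reads back its own copy, unclobbered

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even so, this only gives per-thread storage; it doesn't make a function attribute like func.__params__ safe, since each call (even in the same thread) would still overwrite the previous call's snapshot.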

On Mon, Sep 10, 2018 at 11:00 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
hmm -- this is a trick -- in those cases, I find myself using *args, **kwargs when overloading methods. But that does hide the method signature, which is really unfortunate. It works pretty well for things like GUI toolkits, where you might be subclassing a wx.Window, and the docs for wx.Window are pretty easy to find, but for your own custom classes with nested subclassing, it does get tricky.

For this case, I kinda like Steve Barnes's idea (I think it is his) to have a "magic object" of some type, so you can have BOTH specified parameters, and easy access to the *args, **kwargs objects. Though I'm also wary of the magic... Perhaps there's some way to make it explicit, like "self":

def fun(a, b, c, d=something, e=something, &args, &&kwargs):
    ...

(I'm not sure I like the &, so think of it as a placeholder.) In this case, &args would be the *args tuple, and &&kwargs would be the **kwargs dict (as passed in) -- completely redundant with the position and keyword parameters. So the above could be:

def a_method(self, cr, uid, ids, values, context=None, &args, &&kwargs):
    super(self, parent).a_method(*args, **kwargs)
    do_things_with(cr, uid, ...)

So you now have a clear function signature, access to the parameters, and also a clear and easy way to pass the whole batch on to the superclass' method. I just came up with this off the top of my head, so I'm sure there are big issues, but maybe it can steer us in a useful direction.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker@noaa.gov

Another possibility would be to have alternative signatures for a single function, the first being the one shown in inspection and for auto-completion, the other one(s?) just creating new references to the same variables. Like this:

def fun(a, b, c, d=something1, e=something2, f=something3)(_, *args, e=something2, **kwargs):
    # do whatever you need
    assert args[0] == b
    assert kwargs["d"] == something1
    super().fun("foo", *args, e="bar", **kwargs)

I'm not sure what would happen if we didn't provide the same defaults for `e` in the two signatures (probably an exception).

On Tue, Sep 11, 2018 at 10:12:56AM +0200, Chris Barker via Python-ideas wrote:
Do we need to solve this in the interpreter? Surely this is an argument for better tooling. A sophisticated IDE should never be a *requirement* for coding in Python, but good tools can make a big difference in the pleasantness or otherwise of coding. Those tools don't have to be part of the language. At least for methods, code completers ought to be able to search the MRO for the first non-**kwargs signature and display parameters from further up the MRO:

class Parent:
    def method(self, spam):
        pass

class Child(Parent):
    def method(self, **kwargs):
        pass

Now when I type Child().method(<TAB>) the IDE could search the MRO and find "spam" is the parameter. That becomes a "quality of IDE" issue, and various editors and IDEs can compete to have the best implementation. Or perhaps we could have an officially blessed way to give tools a hint as to what the real signature is:

class Child(Parent):
    @signature_hint(Parent.method)
    def method(self, **kwargs):
        pass

Statically, that tells the IDE that the "true" signature of Child.method can be found from Parent.method; dynamically, the decorator might copy that signature into Child.method.__signature_hint__ for runtime introspection by tools like help(). The beauty of this is that it is independent of inheritance. We could apply this decorator to any function, and point it to any other function or method, or even a signature object:

@signature_hint(open)
def my_open(*args, **kwargs):
    ...

And being optional, it won't increase the size of any functions unless you specifically decorate them.

-- Steve

On Tue, Sep 11, 2018 at 08:53:55PM +1000, Steven D'Aprano wrote: [...]
Here's an untested implementation:

import inspect

def signature_hint(callable_or_sig, *, follow_wrapped=True):
    if isinstance(callable_or_sig, inspect.Signature):
        sig = callable_or_sig
    else:
        sig = inspect.signature(callable_or_sig,
                                follow_wrapped=follow_wrapped)
    def decorator(func):
        func.__signature_hint__ = sig
        return func
    return decorator

inspect.signature would need to become aware of these hints too:

def f(a, b=1, c=2):
    pass

@signature_hint(f)
def g(*args):
    pass

@signature_hint(g)
def h(*args):
    pass

At this point h.__signature_hint__ ought to give:

<Signature (a, b=1, c=2)>

(Note that this is not quite the same as the existing follow_wrapped argument of inspect.signature.) This doesn't directly help Anders' problem of having to make calls like:

func(a=a, b=b, c=c)  # apologies for the toy example

but at least it reduces the pain of needing to Repeat Yourself when overriding methods, which indirectly may help in some (but not all) of Anders' cases.

-- Steve
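The single-hop case of the implementation above can be exercised as-is (the chained h example would still need the inspect.signature awareness the post mentions). The decorator is reproduced here so the sketch is self-contained and runnable:

```python
import inspect

def signature_hint(callable_or_sig, *, follow_wrapped=True):
    # same implementation as in the post above
    if isinstance(callable_or_sig, inspect.Signature):
        sig = callable_or_sig
    else:
        sig = inspect.signature(callable_or_sig,
                                follow_wrapped=follow_wrapped)
    def decorator(func):
        func.__signature_hint__ = sig
        return func
    return decorator

def f(a, b=1, c=2):
    pass

@signature_hint(f)
def g(*args):
    pass

# the hint attribute now carries f's signature, not g's (*args)
hint = str(g.__signature_hint__)
```

Chaining through g would need inspect.signature to prefer __signature_hint__ when present, which is exactly the change the post says is still required.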

(nitpick: we're passing arguments, not parameters) I don't see how this could be confusing. Do you think it's confusing how many parameters a function has in python now because of the keyword-only marker? This suggestion follows the same rules you should already be familiar with when counting parameters; why would you now have trouble counting when the line doesn't begin with "def " and end with ":"?
I expect this to be common enough to warrant nicer language constructs (like OCaml has). I expect people today to use positional arguments to get concise code, and I think python pushes people in this direction. This is a bad direction imo.
Run my analysis tool. Check the numbers. It's certainly true at work, and it's true for Django for example.

On 07/09/18 03:38, Anders Hovmöller wrote:
potayto, potahto
I counted commas. I came up with the wrong number. Simple. For what it's worth, I don't like the keyword-only marker or the proposed positional-only marker for exactly the same reason.
I disagree. Keyword arguments are a fine and good thing, but they are best used for optional arguments IMHO. Verbosity for the sake of verbosity is not a good thing.
OK, then your assertion didn't mean what I thought it means, and I'm very confused about what it does mean. Could you try that again? -- Rhodri James *-* Kynesim Ltd

There's also potentially trailing commas to confuse you further :P I'm not a big fan of the keyword argument only syntax either, but that ship has sailed long ago, so now I think we should consider it Pythonic and judge future suggestions accordingly. I do like the feature of keyword only and understand the tradeoffs made to make the syntax work, so I'm quite happy overall.
Hmm.. it seems to me like there are some other caveats to your position here. Like "no functions with more than two arguments!" or similar? Personally I think readability suffers greatly already at two arguments if none of the parameters are named. Sometimes you can sort of fix the readability with function names like do_something_with_a_foo_and_bar(foo, bar), but that is usually more ugly than just using keyword arguments.
Functions in real code have > 2 arguments. Often when reading the code, the only way to know what those arguments are is by reading the names of the parameters on the way in, because they're positional arguments. But those aren't checked. To me it's similar to bracing for indent: you're telling the human one thing and the machine something else, and no one is checking that those two are in sync. I have seen beginners try:

def foo(b, a):
    pass

a = 1
b = 2
foo(a, b)

and then be confused because a and b are flipped. I have no idea if any of that made more sense :P Email is hard.

/ Anders

On 07/09/18 14:59, Anders Hovmöller wrote:
No.
I'd have said three arguments in the general case, more if you've chosen your function name to make it obvious (*not* by that nasty foo_and_bar method!), though that's pretty rare. That said, I don't often find I need more than a few mandatory arguments.
I'll repeat; surprisingly few of my function have more than three mandatory (positional) arguments. Expecting to understand functions by just reading the function call and not the accompanying documentation (or code) is IMHO hopelessly optimistic, and just having keyword parameters will not save you from making mistaken assumptions.
I have seen teachers get their students to do that deliberately, to give them practical experience that the variable names they use in function calls are not in any way related to the names used in the function definition. I've not seen those students make the same mistake twice :-) I wonder if part of my dislike of your proposal is that you are deliberately blurring that disconnect? -- Rhodri James *-* Kynesim Ltd

On Fri, Sep 07, 2018 at 06:59:45AM -0700, Anders Hovmöller wrote:
Personally I think readability suffers greatly already at two arguments if none of the parameters are named.
*At* two arguments? As in this example?

map(len, sequence)

I'll admit that I struggle to remember the calling order of list.insert, I never know which of these I ought to write:

mylist.insert(0, 1)
mylist.insert(1, 0)

but *in general* I don't think two positional arguments is confusing.
It is difficult to judge the merit of that made-up example. Real examples are much more convincing and informative.
Functions in real code have > 2 arguments.
Functions in real code also have <= 2 arguments.
I don't understand that sentence. If taken literally, the way to tell what the arguments are is to look at the arguments. I think you might mean the only way to tell the mapping from arguments supplied by the caller to the parameters expected by the called function is to look at the called function's signature. If so, then yes, I agree. But why is this relevant? You don't have to convince us that for large, complex signatures (a hint that you may have excessively complex, highly coupled code!) keyword arguments are preferable to opaque positional arguments. That debate was won long ago. If a complex calling signature is unavoidable, keyword args are nicer.
But those aren't checked.
I don't understand this either. Excess positional arguments aren't silently dropped, and missing ones are an error.
No, you're telling the reader and the machine the same thing. func(a, b, c) tells both that the first parameter is given the argument a, the second is given argument b, and the third is given argument c. What's not checked is the *intention* of the writer, because it can't be. Neither the machine nor the reader has any insight into what I meant when I wrote the code (not even if I am the reader, six weeks after I wrote the code). Keywords help a bit with that... it's harder to screw up open(filename, 'r', buffering=-1, encoding='utf-8', errors='strict') than: open(filename, 'r', -1, 'utf-8', 'strict') but not impossible. But again, this proposal isn't for keyword arguments. You don't need to convince us that keyword arguments are good.
How would they know? Beginners are confused by many things. Coming from a background in Pascal, which has no keyword arguments, it took me a while to get to grips with keyword arguments:

def spam(a, b):
    print("a is", a)
    print("b is", b)

a = 1
b = 2
spam(a=b, b=a)
print(a, b)

The effect of this, and the difference between the global a, b and local a, b, is not intuitively obvious.

-- Steve

It’s often enough. But yes, map seems logical positional to me too but I can’t tell if it’s because I’ve programmed in positional languages for many years, or that I’m a Swedish and English native speaker. I don’t see why map would be clear and insert not so I’m guessing it has to do with language somehow. I think it’s a good thing to be more explicit in border cases. I don’t know what the intuitions of future readers are.
It is difficult to judge the merit of that made-up example. Real examples are much more convincing and informative.
Agreed. I just could only vaguely remember doing this sometimes but I had no idea what to grep for so couldn’t find a real example :P
Functions in real code have > 2 arguments.
Functions in real code also have <= 2 arguments.
Yea and they are ok as is.
Good to see we have common ground here. I won’t try to claim the code base at work doesn’t have way too many functions with way too many parameters :P It’s a problem that we are working to ameliorate, but it’s also a problem my suggested feature would help with. I think we should accept that such code bases exist even when managed by competent teams. Adding one parameter at a time is often ok, but over time you can create a problem. Refactoring to remove a substantial number of parameters is also not always feasible or worth the effort. I think we should expect such code bases to be fairly common, and more common in closed source big business line apps. I think it’s important to help with these uses, but I’m biased since it’s my job :P “We” did add @ for numerical work after all, and that’s way more niche than the types of code bases I’m discussing here. I think you’d agree on that point too?
Yea, the arity is checked, but if a refactor removes one parameter and adds another, all the existing call sites are obviously wrong if you look at the definition and the call at the same time, yet Python doesn’t know.
Just like with bracing and misleading indents yes. It blames the user for a design flaw of the language.
What's not checked is the *intention* of the writer, because it can't be.
That’s my point yes. And of course it can be. With keyword arguments it is. Today. If people used them drastically more the computer would check intention more.
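This point can be shown in a few lines (the function and names below are made up for illustration): with keyword arguments, the interpreter checks the caller's intent today, while a positional call passes silently through a rename.

```python
# Hypothetical refactor: the second parameter used to be called 'account'.
def transfer_v2(amount, account_id):
    return (amount, account_id)

# Positional call: Python cannot tell that the caller meant the old API.
print(transfer_v2(42, 7))              # silently "works"

# Keyword call: the stale name is rejected immediately.
try:
    transfer_v2(amount=42, account=7)
except TypeError as exc:
    print(exc)                         # unexpected keyword argument 'account'
```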
I’m not convinced I’m not in fact arguing this point :P There is a big and unfair advantage positional has over kw today due to the conciseness of one over the other. My suggestion cuts down this advantage somewhat, or drastically in some cases.
and then be confused because a and b are flipped.
How would they know?
How would they know what? They know it’s broken because their program doesn’t work. How would they know the computer didn’t understand that a is a and b is b, when it’s blatantly obvious to a human? That’s my argument, isn’t it? :P / Anders

I disagree; when you have more than one parameter it's sometimes complicated to remember the order. Therefore, when you name your args, you are much less likely to pass the wrong variable, even with only one arg. Verbosity adds redundancy, so that both caller and callee are sure they mean the same thing. That's why Java has types everywhere, so that the "declaration part" and the "use" part agree on the same idea (same type).

Here's a function found online (I'm too lazy to write my own, but it would be mostly the same). Tell me how keyword arguments could help this... Or WHAT names you'd give.

    def quad(a, b, c):
        """solves quadratic equations of the form
        aX^2+bX+c, inputs a,b,c,
        works for all roots (real or complex)"""
        root = b**2 - 4*a*c
        if root < 0:
            root = abs(complex(root))
            j = complex(0, 1)
            x1 = (-b + j*sqrt(root)) / (2*a)
            x2 = (-b - j*sqrt(root)) / (2*a)
            return x1, x2
        else:
            x1 = (-b + sqrt(root)) / (2*a)
            x2 = (-b - sqrt(root)) / (2*a)
            return x1, x2

After that, explain why forcing all callers to name their local variables a, b, c would be a good thing. On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde <robertve92@gmail.com> wrote:

If you want to force using pos args, go ahead and use Python docstring notation; we'd write def quad(a, b, c, /). The names should not be renamed because they already have a normal ordering x ** n. This notation is standard, so it would be a shame to use something people don't expect. However, I recently used a quad function in one of my uni courses where the different factors are computed with long expressions, so keyword arguments; I'd call:

    Vout = quad(
        a=... Some long expression spanning a lot of lines ...,
        b=... Same thing ...,
        c=... Same thing ...)

Without the a= reminder, one would have to count the indentation. And if you think it's a good idea to refactor it like that:

    a = ... Some long expression spanning a lot of lines ...
    b = ... Same thing ...
    c = ... Same thing ...
    Vout = quad(a, b, c)

Then you're in the case of quad(*, a, b, c) (even if here, one would never def quad(c, b, a)). Whether or not this refactor is clearer is a matter of "do you like functional programming". However, kwargs are more useful in contexts where some parameters are optional or less frequently used. But it makes sense (see the PEP about mandatory kwargs). Kwargs are a wonderful invention in Python (or, lisp). Le ven. 7 sept. 2018 à 18:54, David Mertz <mertz@gnosis.cx> a écrit :

Top posting for once, since no one is quoting well in this thread: Does this in any way answer David's question? I'm serious; you've spent a lot of words that, as best I can tell, say exactly nothing about how keyword arguments would help that quadratic function. If I'm missing something, please tell me. On 07/09/18 18:17, Robert Vanden Eynde wrote:
-- Rhodri James *-* Kynesim Ltd

On Fri, Sep 7, 2018 at 2:22 PM Rhodri James <rhodri@kynesim.co.uk> wrote:
I read Robert's response as saying, 1. The quadratic formula and its parameter list are well-known enough that you shouldn't use different names or orders. 2. Even still, there are cases where the argument expressions are long enough that you might want to bind them to local variable names. However, I don't think David's example/question is fair in the first place. Robert said that passing as keywords can be useful in cases where the order is hard to remember, and David responded with an example where the argument order is standardized (so you wouldn't forget order), then talked about "forcing" callers to use certain variable names (which I don't think is warranted). On Fri, Sep 7, 2018 at 2:22 PM Rhodri James <rhodri@kynesim.co.uk> wrote:

Do you want to change my PEP suggestion to be about forcing stuff? Because otherwise I don’t see why you keep bringing that up. We’ve explained to you twice (three times counting the original mail) that no one is saying anything about forcing anything.

On 09/06/2018 07:05 AM, Anders Hovmöller wrote:
On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovmöller wrote:
Direct disagreement is not uncivil, just direct. You asked a yes/no question and got a yes/no answer. D'Aprano's comments further down are also not uncivil, just explicative (not expletive ;) ) of his position. As for your proposal, I agree with D'Aprano -- this is a lot machinery to support a use-case that doesn't feel compelling to me, and I do tend to name my variables the same when I can. -- ~Ethan~

On Thu, 6 Sep 2018 at 09:51 Ethan Furman <ethan@stoneleaf.us> wrote:
It also wouldn't have hurt to say "I don't think so" versus the hard "no" as it means the same thing. You're right that blunt isn't necessarily uncivil, but bluntness is also interpreted differently in various cultures so it's something to avoid if possible. -Brett

On Thursday, September 6, 2018 at 6:51:12 PM UTC+2, Ethan Furman wrote:
It's a rhetorical question in a PR sense, not an actual yes/no question.
It's not a lot of machinery. It's super tiny. Look at my implementation. Generally these arguments against sound like the arguments against f-strings to me. I personally think f-strings are one of the best things to happen to Python in at least a decade; I don't know if people on this list agree?

On Thu, Sep 06, 2018 at 07:05:57AM -0700, Anders Hovmöller wrote:
On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
[...]
You are suggesting special syntax which encourages people to name their local variables the same as the parameters to functions which they call. That makes a value judgement that it is not just a good thing to match those names, but that it is *such* a good thing that the language ought to provide syntax to make it easier. If we make this judgement that consistency of names is Good, then naturally *inconsistency* of names is, if not outright Bad, at least *Less* Good and therefore to be avoided. If this suggestion is accepted, it's likely that there will be peer pressure to treat this as more Pythonic (i.e. better quality code) than the older explicit name=name style, which will quickly become unPythonic. See, for example, how quickly people have moved to the implicit f-strings over the explicit string.format form. Laziness and conciseness trumps the Zen. Whether this is a good thing or a bad thing, I leave to people to make up their own mind. If we believe that this consistency is desirable then maybe this would be a good thing. Linters could warn when you use "name=spam" instead of "*, name"; style guides can demand that code always uses this idiom whenever practical, tutorials and blog posts will encourage it, and the peer pressure to rename variables to match the called function's parameters would be a good thing too. But if consistency for consistency's sake is not generally a good thing, then we ought not to add such syntax just for conciseness.
If library authors are choosing bad names for their parameters, how would this syntax change that practice? If they care so little for their callers that they choose poorly-named parameters, I doubt this will change their practice. But I'm not actually talking about library authors choosing bad names. I only used "a" as the name following your example. I presumed it was a stand-in for a more realistic name. There's no reason to expect that there's only one good name that works equally well as a formal parameter and as a local argument. Formal parameters are often more generic, local arguments can be more specific to the caller's context. Of course I understand that with this proposal, there's nothing *forcing* people to use it. But it shifts the *preferred* idiom from explicit "name=spam" to implicit "*, name" and puts the onus on people to justify why they aren't naming their local variables the same as the function parameter, instead of treating "the same name" as just another name. [...]
Let's not :-) Regarding it being a code-smell: https://refactoring.guru/smells/long-parameter-list http://wiki.c2.com/?TooManyParameters For a defence of long parameter lists, see the first answer here: http://wiki.c2.com/?LongParameterList but that active preference for long parameter lists seems to be very rare; more common is the view that *at best* long parameter lists are a necessary evil that needs mitigation. I think this is an extreme position to take: https://www.matheus.ro/2018/01/29/clean-code-avoid-many-arguments-functions/ and I certainly wouldn't want to put a hard limit on the number of parameters allowed. But in general, I think it is unquestionable that long parameter lists are a code-smell. It is also relevant in this sense. Large, complex function calls are undoubtedly painful. We have mitigated that pain somewhat by various means, probably the best of which are named keyword arguments, and sensible default values. The unintended consequence of this is that it has reduced the pressure on developers to redesign their code to avoid long function signatures, leading to more technical debt in the long run. Your suggestion would also reduce the pain of functions that require many arguments. That is certainly good news if the long argument list is *truly necessary*, but it does nothing to reduce the amount of complexity or technical debt. The unintended consequence is likewise that it reduces the pressure on developers to avoid designing such functions in the first place. This might sound like I am a proponent of hair-shirt programming where everything is made as painful as possible so as to force people to program the One True Way. That's not my intention at all. I love my syntactic sugar as much as the next guy. But I'd rather deal with the trap of technical debt and excessive complexity by avoiding it in the first place, not by making it easier to fall into.
The issue I have is that the problem you are solving is *too narrow*: it singles out a specific special case of "function call is too complex with too many keyword arguments", namely the one where the arguments are simple names which duplicate the parameter exactly, but without actually reducing or mitigating the underlying problems with such code. (On the contrary, I fear it will *encourage* such code.) So I believe this feature would add complexity to the language, making keyword arguments implicit instead of explicit, for very little benefit. (Not withstanding your statement that 30% of function calls would benefit. That doesn't match my experience, but we're looking at different code bases.)
Indeed. And I'm sympathetic that some tasks are inherently complex and require many arguments. Its a matter of finding a balance between being able to use them, without encouraging them.
You claimed the benefit of "conciseness", but that doesn't actually exist unless your arguments are already local variables named the same as the parameters of the function you are calling. Getting those local variables is not always free: sometimes they're naturally part of your function anyway, and then your syntax would be a genuine win for conciseness. But often they're not, and you have to either forgo the benefit of your syntax, or add complexity to your function in order to gain that benefit. Pointing out that weakness in your argument is not a straw man. -- Steve

Steven's point is the same as my impression. It's not terribly uncommon in code I write or read to use the same name for a formal parameter (whether keyword or positional) in the calling scope. But it's also far from universal. Almost all the time where it's not the case, it's for a very good reason. Functions by their nature are *generic* in some sense. That is, they allow themselves to be called from many other places. Each of those places has its own semantic context where different names are relevant to readers of the code in that other place. As a rule, the names used in function parameters are less specific or descriptive because they have to be neutral about that calling context. So e.g. a toy example:

    for record in ledger:
        if record.amount > 0:
            bank_transaction(currency=currencies[record.country],
                             deposit=record.amount,
                             account_number=record.id)

Once in a while the names in the two scopes align, but it would be code obfuscation to *force* them to do so (either by actual requirement or because "it's shorter"). On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano <steve@pearwood.info> wrote:
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

On Thursday, September 6, 2018 at 4:13:45 PM UTC+2, David Mertz wrote:
Python's normal arguments already give people an option to write something else "because it's shorter" though: just use positional style. So your example is a bit dishonest because it would be:

    bank_transaction(currencies[record.country], record.amount, record.id)

...in many many or even most code bases. And I would urge you to try out my analysis tool on some large code base you have access to. I do have numbers to back up my claims. I don't have numbers on all the places where the names don't align but would be *better* if they did align though, because that's a huge manual task, but I think it's pretty obvious these places exist.

I have encountered situations like this, and generally I just use **kwargs for non-critical parameters and handle the parameter management in the body of the function. This also makes it easier to pass the arguments to another function. You can use a dict comprehension to copy over the keys you want, then unpack them as arguments to the next function. On Thu, Sep 6, 2018 at 6:16 AM Anders Hovmöller <boxed@killingar.net> wrote:
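The **kwargs pattern described above can be sketched in a few lines; the function names (process, render) are made up for illustration.

```python
def render(data, width=80, height=24):
    return (data, width, height)

def process(data, **kwargs):
    # Copy over only the keys the next function cares about, then unpack.
    render_opts = {k: v for k, v in kwargs.items() if k in ('width', 'height')}
    return render(data, **render_opts)

print(process("x", width=100, color="red"))  # ('x', 100, 24)
```

Unrecognised options (like color above) are simply ignored rather than causing a TypeError at the inner call.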

Hi Anders Thank you for your interesting message. I'm sure it's based on a real need. You wrote:
I assume you're talking about defining functions. Here's something that already works in Python.

    >>> def fn(*, a, b, c, d, e):
    ...     return locals()
    >>> fn.__kwdefaults__ = dict(a=1, b=2, c=3, d=4, e=5)
    >>> fn()
    {'d': 4, 'b': 2, 'e': 5, 'c': 3, 'a': 1}

And to pick up something from the namespace

    >>> eval('aaa', fn.__globals__)
    'telltale'

Aside: This is short, simple and unsafe. Here's a safer way

    >>> __name__
    '__main__'
    >>> import sys
    >>> getattr(sys.modules[__name__], 'aaa')
    'telltale'
From this, it should be easy to construct exactly the dict() that you want for the kwdefaults.
-- Jonathan

I missed an important line of code. Here it is:

    >>> aaa = 'telltale'

Once you have that, these will work:

    >>> eval('aaa', fn.__globals__)
    'telltale'
    >>> __name__
    '__main__'
    >>> import sys
    >>> getattr(sys.modules[__name__], 'aaa')
    'telltale'

-- Jonathan

Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem. Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders. For calling, we can use https://docs.python.org/3/library/functions.html#locals

    >>> lcls = locals()
    >>> a = 'apple'
    >>> b = 'banana'
    >>> c = 'cherry'
    >>> dict((k, lcls[k]) for k in ('a', 'b', 'c'))
    {'b': 'banana', 'c': 'cherry', 'a': 'apple'}

So in his example

    foo(a=a, b=b, c=c, d=3, e=e)

one could instead write

    foo(d=3, **helper(locals(), ('a', 'b', 'c', 'e')))

or perhaps better

    helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3)

where the helper() picks out items from the locals(). And in the second form, does the right thing with them. Finally, one might be able to use

    >>> def fn(*, a, b, c, d, e):
    ...     f, g, h = 3, 4, 5
    >>> fn.__code__.co_kwonlyargcount
    5
    >>> fn.__code__.co_varnames
    ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h')
    >>> fn.__code__.co_argcount
    0

to identify the names of all keyword arguments of the function foo(), and then provide the values in locals() as the defaults. Of course, this is somewhat magical, and requires strict conformance to conventions. So it might not be a good idea. The syntax could then be

    localmagic(foo, locals())(d=3)

which, for magicians, might be easier. But rightly in my opinion, Python is reluctant to use magic. On the other hand, for a strictly controlled Domain Specific Language, it might, just might, be useful. And this list is for "speculative language ideas" (see https://mail.python.org/mailman/listinfo/python-ideas). -- Jonathan
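One possible sketch of the second-form helper() mentioned above; this is a guess at its behaviour, not an existing library function.

```python
def helper(namespace, *names):
    # Pick the named items out of the given namespace (e.g. locals()).
    picked = {name: namespace[name] for name in names}
    def call(func, **extra):
        # Call func with the picked names as keyword arguments,
        # plus any explicitly supplied extras like d=3.
        return func(**picked, **extra)
    return call

def foo(a, b, c, d, e):
    return (a, b, c, d, e)

a, b, c, e = 1, 2, 3, 5
print(helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3))  # (1, 2, 3, 3, 5)
```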

Sure. This was the argument against f-strings too. In any case I'm not trying to solve a problem of how to extract things from the local namespace anymore than "foo(a, b)" is. I'm trying to minimize the advantage positional arguments have over keyword arguments in brevity. If that makes sense?

Le 06/09/2018 à 03:15, Anders Hovmöller a écrit :
It will make code harder to read. Indeed, now your brain has to make the distinction between:

    foo(a, *, b, c)

and:

    foo(a, b, *, c)

Which is very subtle, yet not at all the same thing. All in all, this means:

- you have to stop to get the meaning of this. Scanning the lines doesn't work anymore.
- this is a great opportunity for mistakes, and hence bugs.
- the combination of the two makes bugs that are hard to spot and fix.

-1

I agree that this is a familiar pattern, but I long since forgot the specifics of the domain it happens in. I borrowed your code, and added filename tracking to see what source files had high `could_have_been_a_matched_kwarg`. Here is the top one: https://github.com/django/django/blob/master/tests/migrations/test_autodetec... The argument-name-matches-the-local-variable-name pattern does appear to happen in many test files. I assume programmers are more agnostic about variable names in a test because they have limited impact on the rest of the program; matching the argument names makes sense. There are plenty of non-test files that can use this pattern, here are two intense ones: https://github.com/django/django/blob/master/django/contrib/admin/options.py (212 call parameters match) https://github.com/django/django/blob/master/django/db/backends/base/schema.... (69 call parameters match) Opening these in an IDE, and looking at the function definitions, there is a good chance you find a call where the local variable and argument names match. It is interesting to see this match, but I'm not sure how I feel about it. For example, the options.py has a lot of small methods that deal with (request, obj) pairs: eg `has_view_or_change_permission(self, request, obj=None)` Does that mean there should be a namedtuple("request_on_object", ["request", "obj"]) to "simplify" all these calls? There are also many methods that accept a single `request` argument; but I doubt they would benefit from the new syntax. On 2018-09-06 06:15, Anders Hovmöller wrote:
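The kind of call-site match being counted here can be sketched with the ast module. This is my own rough stand-in for the linked gist, not its actual code: count keyword arguments whose value is a plain variable of the same name, i.e. call sites written foo(bar=bar).

```python
import ast

def count_matching_kwargs(source):
    matches = total = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if kw.arg is None:      # **kwargs unpacking, skip
                    continue
                total += 1
                if isinstance(kw.value, ast.Name) and kw.value.id == kw.arg:
                    matches += 1
    return matches, total

print(count_matching_kwargs("foo(a=a, b=b, c=3, d=other)"))  # (2, 4)
```

Run over a whole file with count_matching_kwargs(open(path).read()) to get a per-file ratio.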

Hi, I'd like to reopen this discussion if anyone is interested. Some things have changed since I wrote my original proposal so I'll first summarize:

1. People seem to prefer the syntax `foo(=a)` over the syntax I suggested. I believe this is even more trivial to implement in CPython than my original proposal anyway...
2. I have updated my analysis tool: https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c It will now also give you statistics on the number of arguments function calls have. I would love to see some statistics for other closed source programs you might be working on and how big those code bases are.
3. I have made a sort-of implementation with MacroPy: https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is a dead end, but it was easy to implement and fun to try!
4. I have also recently had the idea that a foo=foo type pattern could be handled in for example PyCharm as a code folding feature (and maybe as a completion feature).

I still think that changing Python's syntax is the right way to go in the long run, but with point 4 above one could experience what this feature would feel like without running a custom version of Python and without changing your code. I admit to a lot of trepidation about wading into PyCharm's code though; I have tried to do this once before and I gave up. Any thoughts? / Anders

On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovmöller wrote:
No.
This would not just be shorter but would create an incentive for consistent naming across the code base.
You say that as if consistent naming is *in and of itself* a good thing, merely because it is consistent. I'm in favour of consistent naming when it helps the code, when the names are clear and relevant. But why should I feel bad about failing to use the same names as the functions I call? If some library author names the parameter to a function "a", why should I be encouraged to use that same name *just for the sake of consistency*?
It's certainly more concise, provided those named variables already exist, but how often does that happen? You say 30% in your code base. (By the way, well done for writing an analysis tool! I mean it, I'm not being sarcastic. We should have more of those.) I disagree that f(*, page) is more readable than an explicit named keyword argument f(page=page). My own feeling is that this feature would encourage what I consider a code-smell: function calls requiring large numbers of arguments. Your argument about being concise makes a certain amount of sense if you are frequently making calls like this:

    # choosing a real function, not a made-up example
    open(file, mode=mode, buffering=buffering, encoding=encoding,
         errors=errors, newline=newline, closefd=closefd, opener=opener)

If 30% of your function calls look like that, I consider it a code-smell. The benefit is a lot smaller if your function calls look more like this:

    open(file, encoding=encoding)

and even less here:

    open(file, 'r',
         encoding=self.encoding or self.default_encoding,
         errors=self.errors or self.default_error_handler)

for example. To get benefit from your syntax, I would need to extract out the arguments into temporary variables:

    encoding = self.encoding or self.default_encoding
    errors = self.errors or self.default_error_handler
    open(file, 'r', *, encoding, errors)

which completely cancels out the "conciseness" argument.

First version, with in-place arguments: 1 statement, 2 lines, 120 characters including whitespace.
Second version, with temporary variables: 3 statements, 3 lines, 138 characters including whitespace.

However you look at it, it's longer and less concise if you have to create temporary variables to make use of this feature. -- Steve

On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano <steve@pearwood.info> wrote:
I've been asking this same question on the Javascript/ES6 side of my work ever since unpacking was introduced there, which baked hash-lookup into the unpacking at a syntax level. In that world it has had this same effect of encouraging "consistency" between local variable names and parameters of called functions, and it certainly seems popular in that ecosystem. The practice still feels weird to me and I'm on the fence about it. Although, to be honest, I'm definitely leaning towards the "No, actually, it is a good thing." I grew up, development-speaking, in the Python world with a strong emphasis drilled into me that style constraints make better code, and maybe this is just an extension of that. Of course, you might not always want the same name, but it is only encouraged, not required. You can always rename variables. That said... I'm not actually a fan of the specific suggested syntax:
foo(*, a, b, c, d=3, e)
I just wanted to give my two cents on the name consistency issue.

I'm trying to see how it can be done with current python.

    from somelib import auto

    auto(locals(), function, 'a', 'b', 'c', d=5)
    auto(locals(), function).call('a', 'b', 'c', d=5)
    auto(locals(), function)('a', 'b', 'c', d=5)
    auto(locals()).bind(function).call('a', 'b', 'c', d=5)

One of those syntaxes for a class auto could be chosen; it allows you to give locals in the call. However, locals() gives a copy of the variables, so it must be given explicitly, as this code illustrates:

    def f(x):
        y = x + 1
        a = locals()
        g = 4
        print(a)

    f(5)  # {'y': 6, 'x': 5}

Le jeu. 6 sept. 2018 à 15:18, Calvin Spealman <cspealma@redhat.com> a écrit :

On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
Heh. I did expect the first mail to be uncivil :P
If it's the same thing yes. Otherwise no.
I'm in favour of consistent naming when it helps the code, when the names are clear and relevant.
Which is what I'm saying.
But why should I feel bad about failing to use the same names as the functions I call?
Yea, why would you feel bad? If you should have different names, then do. Of course.
It would encourage library authors to name their parameters well. It wouldn't do anything else.
(Caveat: 30% of the cases where my super simple and stupid tool can find.) It's similar for django btw.
I disagree that f(*, page) is more readable than an explicit named keyword argument f(page=page).
People prefer f(page) today. For some reason. That might refute your statement or not, depending on why they do it.
I don't see how that's relevant (or true, but let's stick with relevant). There are actual APIs that have lots of arguments. GUI toolkits are a great example. Another great example is to send a context dict to a template engine. To get benefit from your syntax, I would need to
Ok. Sure, but that's a straw man.... / Anders

On 06/09/18 15:05, Anders Hovmöller wrote:
For comparison, my reaction did indeed involve awe. It was full of it, in fact :-p Sorry, but that syntax looks at best highly misleading -- how many parameters are we passing? I don't like it at all.
Actually you are not. Adding specific syntax support is a strong signal that you expect people to use it and (in this case) use consistent naming. Full stop. It's a much stronger statement than you seem to think.
Evidence? -- Rhodri James *-* Kynesim Ltd

Rhodri James wrote:
that syntax looks at best highly misleading -- how many parameters are we passing? I don't like it at all.
Maybe something like this would be better: f(=a, =b, =c) Much more suggestive that you're passing a keyword argument. As for whether consistent naming is a good idea, seems to me it's the obvious thing to do when e.g. you're overriding a method, to keep the signature the same for people who want to pass arguments by keyword. You'd need to have a pretty strong reason *not* to keep the parameter names the same. Given that, it's natural to want a way to avoid repeating yourself so much when passing them on. So I think the underlying idea has merit, but the particular syntax proposed is not the best. -- Greg

Maybe something like this would be better:
f(=a, =b, =c)
Haha. Look at my PEP, it's under "rejected alternative syntax", because of the super angry replies I got on this very mailing list when I suggested this syntax a few years ago :P I think that syntax is pretty nice personally, but me and everyone at work I've discussed this with think that f(*, a, b, c) syntax is even nicer since it mirrors "def f(*, a, b, c)" so nicely. Most replies to my new syntax has been along the lines of "seems obvious" and "ooooh" :P

Op vr 7 sep. 2018 om 04:49 schreef Anders Hovmöller <boxed@killingar.net>:
I must say I like the idea of being able to write it the way you propose. Sometimes we make a function that is only called once, at a specific location, just to factor out some code for clarity. Been doing that myself lately for scripting, and I think it'd increase clarity. However, it looks a lot like f(a, b, c), which does something totally different. It -might- become something of a newb trap, as myfunc(*, a, b, c) would be 100% equal to myfunc(*, c, a, b), but that's not true for the f(c, a, b) case. I dislike the f(=arg) syntax.

I've seen beginners make the mistake of calling f(c, a, b) and being confused why it doesn't work the way they expected, so I think the newb trap might go in the other direction. If by "newb" one means "totally new to programming" then I think the keyword style is probably less confusing but if you come from a language with only positional arguments (admittedly most languages!) then the trap goes in the other direction. Of course, I don't have the resources or time to make a study about this to figure out which is which, but I agree it's an interesting question.
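The trap in both directions can be shown in a few lines (a made-up function, just for illustration): keyword calls are order-independent, while a positional call in the wrong order silently computes something else.

```python
def f(a, b, c):
    return a - b * c

a, b, c = 10, 2, 3

# Keyword call: argument order at the call site doesn't matter.
assert f(c=c, a=a, b=b) == f(a, b, c) == 4

# Positional call in the wrong order: no error, just a wrong answer.
print(f(c, a, b))  # 3 - 10*2 = -17
```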

On Fri, Sep 7, 2018, 12:00 AM Jacco van Dorp <j.van.dorp@deonet.nl> wrote:
Sometimes we make a function only to be called once at a specific location, more because of factoring out some functions for clarity.
I've found myself making the opposite refactoring recently, improving clarity by eliminating unnecessary extra functions, where the local scope is passed to the helper function.

On Fri, Sep 07, 2018 at 10:39:07AM +1200, Greg Ewing wrote:
But the proposal isn't just for a way to avoid repeating oneself when overriding methods:

    class Parent:
        def spam(self, spam, eggs, cheese):
            ...

    class Child(Parent):
        def spam(self, foo, bar, baz):  # why the change in names?
            ...

I agree that inconsistency here is a strange thing to do, and it's a minor annoyance to have to manually repeat the names each time you override a method. Especially during rapid development, when the method signatures haven't yet reached a stable API. (But I don't know of any alternative which isn't worse, given that code is read far more often than it's written and we don't design our language to only be usable for people using IntelliSense.) The proposal is for syntax to make one specific pattern shorter and more concise when *calling arbitrary functions*. Nothing to do with inheritance at all, except as a special case. It is pure syntactic sugar for one specific case, "name=name" when calling a function. Syntactic sugar is great, in moderation. I think this is too much sugar for not enough benefit. But I acknowledge that's because little of my code uses that name=name idiom. (Most of my functions take no more than three arguments; I rarely need to use keywords, but when I do, they hardly ever end up looking like name=name. A quick and dirty manual search of my code suggests this would be useful to me in less than 1% of function calls.) But for those who use that idiom a lot, this may seem more appealing. With the usual disclaimer that I understand it will never be mandatory to use this syntax, nevertheless I can see it leading to the "foolish consistency" quote from PEP 8. "We have syntax to write shorter code, shorter code is better, so if we want to be Pythonic we must design our functions to use the same names for local variables as the functions we call." -- hypothetical blog post, Stackoverflow answer, opinionated tutorial, etc. I don't think this is a pattern we want to encourage.
We have a confluence of a few code smells, each of which in isolation is not *necessarily* bad but often represents poor code:

- complex function signatures;
- function calls needing lots of arguments;
- needing to use keyword arguments (as otherwise the function call is too hard to read);
- a one-to-one correspondence between local variables and arguments;

and syntax designed to make this case easier to use, and hence discourage people from refactoring to remove the pain. (If they can.) I stress that none of these are necessarily poor code, but they are frequently seen in poor code. As a simplified example:

def function(alpha, beta, gamma):
    ...

# later, perhaps another module
def do_something_useful(spam, eggs, cheese):
    result = function(alpha=eggs, beta=spam, gamma=cheese)
    ...

In this case, the proposed syntax cannot be applied, but the argument from consistency would suggest that I ought to change the signature of do_something_useful to this so I can use the syntax:

# consistency is good, m'kay?
def do_something_useful(beta, alpha, gamma):
    result = function(*, alpha, beta, gamma)
    ...

Alternatively, I could keep the existing signature:

def do_something_useful(spam, eggs, cheese):
    alpha, beta, gamma = eggs, spam, cheese
    result = function(*, alpha, beta, gamma)
    ...

To save seventeen characters on one line, the function call, we add an extra line and thirty-nine characters. We haven't really ended up with more concise code.

In practice, I think the number of cases where people *actually can* take advantage of this feature by renaming their own local variables or function parameters will be pretty small. (Aside from inheritance.) But given the "consistency is good" meme, I reckon people would be always looking for opportunities to use it, and sad when they can't. (I know that *I* would, if I believed that consistency was a virtue for its own sake. I think that DRY is a virtue, and I'm sad when I have to repeat myself.)
We know from other proposals [don't mention assignment expressions...] that syntax changes can be accepted even when they have limited applicability and can be misused. It comes down to a value judgement as to whether the pros are sufficiently pro and the cons insufficiently con. I don't think they are:

Pros:

- makes one specific, and probably unusual, pain-point slightly less painful;
- rewards consistency in naming when consistency in naming is justified.

Cons:

- creates yet another special meaning for the * symbol;
- implicit name binding instead of explicit;
- discourages useful refactoring;
- potentially encourages a bogus idea that consistency is a virtue for its own sake, regardless of whether it makes the code better or not;
- similarly, it rewards consistency in naming even when consistency in naming is not needed or justified;
- it's another thing for people to learn, more documentation needed, extra complexity in the parser, etc;
- it may simply *shift* complexity, being even more verbose than the status quo under some circumstances.

-- Steve

Steve wrote:
-- hypothetical blog post, Stackoverflow answer, opinionated tutorial, etc.
I don't think this is a pattern we want to encourage.
Steve's "hypothetical blog post" is a pattern he doesn't like, and he said that it's not a pattern we want to encourage. And he proceeds to demolish this pattern in the rest of his post. According to https://en.wikipedia.org/wiki/Straw_man

<quote>
The typical straw man argument creates the illusion of having completely refuted or defeated an opponent's proposition through the covert replacement of it with a different proposition (i.e., "stand up a straw man") and the subsequent refutation of that false argument ("knock down a straw man") instead of the opponent's proposition.
</quote>

So what was the original proposition? I summarise from the original post. It was to allow

    foo(*, a, b, c, d=3, e)

as a shorthand for

    foo(a=a, b=b, c=c, d=3, e=e)

And also that on two big code bases about 30% of all arguments would benefit from this syntax. And also that it would create an incentive for consistent naming across the code base.

To me, the "30% of all arguments" deserves more careful examination. Does the proposal significantly improve the reading and writing of this code? And are there other, perhaps better, ways of improving this code? I'm very keen to dig into this. I'll start a new thread for this very topic.

-- Jonathan

Maybe my tool should be expanded to produce more nuanced data? Like how many of those 30% are:

- arity 1, 2, 3, etc? (Arity 1 maybe should be discarded as being counted unfairly? I don’t think so, but some clearly do.)
- matches 1 argument, 2, 3, 4, etc? Matching just one is of less value than matching 5.

Maybe some other statistics? / Anders

A finer grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do. I.e. this is probably a small win:

# function(a=a, b=b, c=c, d=d)
function(*, a, b, c, d)

But this feels like it invites confusion and bugs:

# function(a=my_a, b=b, c=my_c, d=d)
function(*, a=my_a, b, c=my_c, d)

I recognize that if the syntax were added it wouldn't force anyone to use the second version... But that means no one who WRITES the code. As a reader I would certainly have to parse some of the bad uses along with the good ones. I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions.

On Sat, Sep 8, 2018, 8:24 AM Anders Hovmöller <boxed@killingar.net> wrote:

A finer grained analysis tool would be helpful. I'm -0 on the idea because I believe it would discourage more expressive names in calling contexts in order to enable the proposed syntax. But I also see a big difference between cases where all keywords match calling names and cases where only a few of them do.
I’ll try to find some time to tune it when I get back to work then.
That example could also be rewritten as

function(a=my_a, c=my_c, *, b, d)

or

function(*, b, c, d, a=my_a, c=my_c)

Both are much nicer imo. Hmmm... maybe my suggestion is actually better if the special case applies only after *, so the first of those is legal and the rest not. Hadn’t considered that option before now.
I know these examples use simplified and artificial names, but I think the case is even stronger with more realistic names or expressions.
Stronger in what direction? :P / Anders

On Sat, Sep 8, 2018 at 9:34 AM Anders Hovmöller <boxed@killingar.net> wrote:
function(a=my_a, c=my_c, *, b, d) function(*, b, c, d, a=my_a, c=my_c)
Yes, those look less bad. They also almost certainly should get this message rather than working:

TypeError: function() got multiple values for keyword argument 'c'

But they also force changing the order of keyword arguments in the call. That doesn't do anything to the *behavior* of the call, but it often affects readability. For functions with lots of keyword arguments there is often a certain convention about the order they are passed in that readers expect to see. Those examples of opening and reading files that several people have given are good examples of this. I.e. most optional arguments are not used, but when they are used they have certain relationships among them that lead readers to expect them in a certain order.

Here's a counter-proposal that does not require any new syntax. Is there ANYTHING your new syntax would really get you that this solution does not accomplish?! (Other than save 4 characters; fewer if you came up with a one character name for the helper.)
We could implement this helper function like this:
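The helper's code did not survive in the archived message; here is a minimal sketch of how such a `use()` helper could work. The frame inspection via `sys._getframe` and the raise-NameError-on-missing-name behaviour are my assumptions, matching the design questions discussed in the follow-up:

```python
import sys

def use(names):
    # Hypothetical helper: build a kwargs dict from the caller's local
    # variables, so f(**use('a b c')) behaves like f(a=a, b=b, c=c).
    caller_locals = sys._getframe(1).f_locals
    result = {}
    for name in names.split():
        if name not in caller_locals:
            # Design choice: raise NameError for a missing name;
            # silently supplying None is the alternative.
            raise NameError(f"name {name!r} is not defined")
        result[name] = caller_locals[name]
    return result
```

With this helper, a call like `function(**use('alpha beta gamma'))` forwards the caller's variables without repeating each `name=name` pair.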
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

I'm not sure whether my toy function is better to assume None for a name that is "used" but does not exist, or to raise a NameError. I can see arguments in both directions, but either behavior is a very small number of lines (and the same decision exists for the proposed syntax). You might also allow the `use()` function to take some argument(s) other than a space-separated string, but that's futzing with a demonstration API. On Sat, Sep 8, 2018 at 10:05 AM David Mertz <mertz@gnosis.cx> wrote:

On Sat, Sep 8, 2018, 6:34 AM Anders Hovmöller <boxed@killingar.net> wrote:
Even better would be to show full context on one or a few cases where this syntax helps. I've found that many proposals in this mailing list have better solutions when one can see the complete code. If your proposal seems like the best solution after seeing the context, that can be more compelling than some assertion about 30% of parameters. If you can't share proprietary code, why not link to a good example in the Django project? If nothing else, maybe Django could get a pull request out of this.

I've updated the tool to also print statistics on how many arguments there are for the places where it can perform the analysis. I also added statistics on the length of the variable names it finds. I'm pretty sure almost all places where variable names of length 1 or 2 are passed would be better if they had been synchronized. Those places are also an argument for my suggestion, I think, because if you gain something by synchronizing then you will be less likely to shorten variable names down to 1 or 2 characters for brevity. Maybe...

If you exclude calls to functions with just one argument (not parameters) then the hit percentage on the code base at work drops from ~36% to ~31%. Not a big difference overall.

I've updated the gist: https://gist.github.com/boxed/610b2ba73066c96e9781aed7c0c0b25c

/ Anders

On Sat, Sep 08, 2018 at 12:05:33PM +0100, Jonathan Fine wrote:
This is called Poisoning the Well. You have carefully avoided explicitly accusing me of making a straw man argument while nevertheless making a completely irrelevant mention of it, associating me with the fallacy. That is not part of an honest or open discussion.

Anders made a proposal for a change in syntax. I made a prediction of the possible unwelcome consequences of that suggested syntax. In no way, shape or form is that a straw man. To give an analogy:

Politician A: "We ought to invade Iranistan, because reasons."

Politician B: "If we do that, it will cost a lot of money, people will die, we'll bring chaos to the region leading to more terrorism, we might not even accomplish our aims, and our international reputation will be harmed."

Politician A: "That's a straw man! I never argued for those bad things. I just want to invade Iranistan."

Pointing out unwelcome consequences of a proposal is not a straw man.

-- Steve

I read that as him accusing you very directly.
You kept saying I was “forcing” people to use the new syntax. You said it over and over, even after we pointed out this was not the actual suggestion. This is a classic straw man. But ok, let’s be more charitable and interpret it as you wrote it later: that it won’t be forcing per se, but that the feature will be *so compelling* it will be preferred at all times over both normal keyword arguments *and* positional arguments.

For someone who doesn’t like the proposal you seem extremely convinced that everyone else will think it’s so super awesome they will actually try to force it on their colleagues etc. I like my proposal obviously, but even I don’t think it’s *that* great. It would almost certainly become the strongly preferred way to do it for some cases like .format() and sending a context to a template renderer in web apps. But that’s because in those cases it is very important to match the names.

/ Anders

On Sun, Sep 9, 2018 at 3:37 PM, Anders Hovmöller <boxed@killingar.net> wrote:
Creating a new and briefer syntax for something is not actually *forcing* people to use it, but it is an extremely strong encouragement. It's the language syntax yelling "HERE! DO THIS!". I see it all the time in JavaScript, where ES2015 introduced a new syntax {name} equivalent to {"name":name} - people will deliberately change their variable names to match the desired object keys. So saying "forcing" is an exaggeration, but a very slight one. ChrisA

On Sun, Sep 9, 2018 at 5:32 PM, Anders Hovmöller <boxed@killingar.net> wrote:
Often neutral, sometimes definitely evil. Pretty much never good. That said, my analysis is skewed towards the times when (as an instructor) I am asked to assist - the times when a student has run into trouble. But even compensating for that, I would say that the balance still tips towards the bad. ChrisA

On Sun, Sep 09, 2018 at 07:37:21AM +0200, Anders Hovmöller wrote:
Okay.
Over and over again, you say. Then it should be really easy for you to link to a post from me saying that. I've only made six posts in this thread (seven including this one) so it should only take you a minute to justify (or retract) your accusation: https://mail.python.org/pipermail/python-ideas/2018-September/author.html

Here are a couple of quotes to get you started:

"Of course I understand that with this proposal, there's nothing *forcing* people to use it."
https://mail.python.org/pipermail/python-ideas/2018-September/053282.html

"With the usual disclaimer that I understand it will never be manditory [sic] to use this syntax ..."
https://mail.python.org/pipermail/python-ideas/2018-September/053257.html
Vigorous debate is one thing. Misrepresenting my position is not. This isn't debate club where the idea is to win by any means, including by ridiculing exaggerated versions of the other side's argument. (There's a name for that fallacy, you might have heard of it.) We're supposed to be on the same side, trying to determine what is the best features for the language. We don't have to agree on what those features are, but we do have to agree to treat each other's position with fairness. -- Steve

Can we all just PLEASE stop the meta-arguments enumerating logical fallacies and recriminating about who made it personal first?! Yes, let's discuss specific proposals and alternatives, and so on. If someone steps out of line of being polite and professional, just ignore it. On Sun, Sep 9, 2018, 8:52 AM Steven D'Aprano <steve@pearwood.info> wrote:

On Sun, Sep 9, 2018 at 7:37 AM, Anders Hovmöller <boxed@killingar.net> wrote: I've spent this whole thread thinking: "who in the world is writing code with a lot of spam=spam arguments? If you are transferring that much state in a function call, maybe you should have a class that holds that state? Or pass in a **kwargs dict? Note: I write a lot of methods (mostly __init__) with a lot of keyword parameters -- but they all tend have sensible defaults, and/or will have many values specified by literals. Then this:
OK -- those are indeed good use cases, but: for .format() -- that's why we now have f-strings -- done. for templates -- are you really passing all that data in from a bunch of variables?? as opposed to, say, a dict? That strikes me as getting code and data confused (which is sometimes hard not to do...) So still looking for a compelling use-case -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On 09/10/2018 12:52 PM, Chris Barker via Python-ideas wrote:
So still looking for a compelling use-case
In my day job I spend a lot of time writing/customizing modules for a framework called OpenERP (now Odoo*). Those modules are all subclasses, and most work will require updating at least a couple of parent methods -- so most calls look something like:

def a_method(self, cr, uid, ids, values, context=None):
    ...
    super(self, parent).a_method(cr, uid, ids, values, context=context)

Not a perfect example as these can all be positional, but it's the type of code where this syntax would shine.

I think, however, that we shouldn't worry about a leading * to activate it; just use a leading '=' and let it show up anywhere, following the same semantics/restrictions as current positional vs keyword args:

def example(filename, mode, spin, color, charge, orientation):
    pass

example('a name', 'ro', =spin, =color, charge=last, =orientation)

So +0 with the above proposal.

-- ~Ethan~

On 10/09/2018 22:00, Ethan Furman wrote:
Couldn't just about all of the use cases mentioned so far be met in quite a neat manner by providing access to a method, or dictionary, called __params__ which would give access, as a dictionary, to the parameters as supplied in the call, (or filled in by the defaults). If this was accessible externally, as fn.__defaults__ is then examples such as:
would become:

def a_method(self, cr, uid, ids, values, context=None):
    ...
    params = {k: v for k, v in __params__
              if k in parent.a_method.keys()}
    # Possibly add some additional entries here!
    super(self, parent).a_method(**params)

-- Steve (Gadget) Barnes

Any opinions in this message are my personal opinions and do not reflect those of my employer.

Op di 11 sep. 2018 om 06:48 schreef Steve Barnes <gadgetsteve@live.co.uk>:
So... deep black magic? That's what this looks like. Having =spam for same-named kwargs sounds easier to comprehend for new people than a __magic__ object you can only access in function bodies, and it will give headaches if you have to write decorators:

def other_function_defaults(*args, **kwargs):
    outer_params = __params__.copy()
    def deco(func):
        def inner(self, yo_momma):
            # overwrite with specifically provided arguments
            return func(self, **outer_params, **__params__)
        return inner
    return deco

I think that magic objects like that aren't really pythonic - if they were, "self" would be the same kind of magic, instead of us having to name it on every function call (a decision I'm really a fan of, tbh).

My 3 cents:

1. My most objective objection against the f(*, foo, bar, baz) syntax is that it looks like positional arguments, and the syntactic marker * which dissuades you of that can be arbitrarily far from the keyword.

2. The syntax f(=foo, =bar, =baz) at least solves that problem. Otherwise I find it quite ugly with the unbalanced =, but that is obviously more subjective.

3. I still am not convinced it is needed at all. IMHO, if your code is filled with f(foo=foo, bar=bar, baz=baz) then perhaps Python is telling you that foo, bar and baz want to become fields in a new object which you should pass around.

4. (Bonus cent) Somewhat tongue-in-cheek, I offer the following Vim mapping for those who find themselves typing longword=longword all the time:

:inoremap <F8> =<Esc>hyiwt=lpa

Now you can just type longword<F8>.

Stephan

Op di 11 sep. 2018 om 08:55 schreef Jacco van Dorp <j.van.dorp@deonet.nl>:
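The refactoring in point 3 can be sketched with a dataclass; the names here (RenderContext, render) are illustrative, not from the thread:

```python
from dataclasses import dataclass

# Instead of threading foo=foo, bar=bar, baz=baz through many calls,
# group the related values into one object and pass that around.
@dataclass
class RenderContext:
    foo: int
    bar: str
    baz: float

def render(ctx: RenderContext) -> str:
    # One parameter replaces three name=name keyword arguments.
    return f"{ctx.foo}-{ctx.bar}-{ctx.baz}"

ctx = RenderContext(foo=1, bar="x", baz=2.5)
```

Call sites then pass `ctx` once rather than repeating each field name at every function boundary.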

On Tue, Sep 11, 2018 at 04:47:37AM +0000, Steve Barnes wrote:
I imagine it would be fairly easy to fill in such a special __params__ local variable when the function is called. The interpreter already has to process the positional and keyword arguments; it probably wouldn't be that hard to add one more implicitly declared local and fill it in:

def function(spam, eggs, *args):
    print(__params__)

function(2, 6, 99, 100)
# prints {'spam': 2, 'eggs': 6, '*args': (99, 100)}

But this has some problems:

(1) It might be cheap, but it's not free. Function calling in Python is already a minor bottleneck; having to populate one more local whether it is needed or not can only make it slower, not faster.

(2) It leads to the same gotchas as locals(). What happens if you assign to the __params__ dict? What happens when the parameters change their local value? The __params__ dict probably won't change. (Like locals(), I expect that will depend on the interpreter.)
If this was accessible externally, as fn.__defaults__ is then examples such as:
Defaults are part of the function definition and are fixed when the function is created. The values assigned to parameters change every time you call the function, whether you need them or not. For non-trivial applications with many function calls, that's likely to add up to a measurable slow-down.

It's also going to suffer from race conditions, unless someone much cleverer than me can think of a way to avoid them which doesn't slow down function calls even more:

- I call function(a=1, b=2);
- function.__params__ is set to {'a': 1, 'b': 2};
- meanwhile another thread calls function(a=98, b=99);
- setting function.__params__ to {'a': 98, 'b': 99};
- and I then access function.__params__, getting the wrong values.

I think that __params__ as an implicitly created local variable is just barely justifiable, if you don't care about slowing down all function calls for the benefit of a tiny number of them. But exposing that information as an externally visible attribute of the function object is probably unworkable and unnecessary.

-- Steve

On Tue, Sep 11, 2018 at 9:34 PM, Steven D'Aprano <steve@pearwood.info> wrote:
Rather than slowing down ALL function calls, you could slow down only those that use it. The interpreter could notice the use of the name __params__ inside a function and go "oh, then I need to include the bytecode to create that". It'd probably need to be made a keyword, or at least unassignable, to ensure that you never try to close over the __params__ of another function, or declare "global __params__", or anything silly like that. I'm still -1 on adding it, though. ChrisA

Summary: locals() and suggestion __params__ are similar, and roughly speaking each can be implemented from the other. Experts / pedants would prefer not to use the name __params__ for this purpose. Steve D'Aprano wrote:
[snip]
As far as I know, locals() does not suffer from a race condition. But it's not a local variable. Rather, it's a function that returns a dict, hence avoiding the race condition.

Python has some keyword identifiers. Here's one:

>>> __debug__ = 1
SyntaxError: assignment to keyword

Notice that this is a SYNTAX error. If __params__ were similarly a keyword identifier, then it would avoid the race condition. It would simply be a handle that allows, for example, key-value access to the state of the frame on the execution stack. In other words, a lower-level object from which locals() could be built.

By the way, according to
https://www.quora.com/What-is-the-difference-between-parameters-and-argument...

<quote>
A parameter is a variable in a method definition. When a method is called, the arguments are the data you pass into the method's parameters. Parameter is variable in the declaration of function. Argument is the actual value of this variable that gets passed to function.
</quote>

In my opinion, the technically well-informed would prefer something like __args__ or __locals__ instead of __params__ for the current purpose. Finally, __params__ would simply be the value of __locals__ before any assignment has been done. Here's an example:

>>> def fn(a, b, c):
...     lcls = locals()
...     return lcls
...
>>> fn(1, 2, 3)
{'c': 3, 'b': 2, 'a': 1}

Note: even though lcls is the identifier for a local variable, at the time locals() is called the lcls identifier is unassigned, so it is not picked up by locals().

So far as I can tell, __params__ and locals() can be implemented in terms of each other. There could be practical performance benefits in providing the lower-level command __params__ (but with the name __locals__ or the like).

-- Jonathan

I wrote:
Following this up, I did a search for "__locals__" Python. The most interesting link I found was:

<quote>
Implement PEP 422: Simple class initialisation hook
https://bugs.python.org/issue17044#msg184195
Nick Coghlan wrote: Oh, that's bizarre - the presence of __locals__ is a side effect of calling locals() in the class body. So perhaps passing the namespace as a separate __init_class__ parameter is a better option.
</quote>

So it looks like (i) there's some complexity associated with locals(), and (ii) if we wish, it seems that __locals__ is available as a keyword identifier.

Finally, another way to see that there's no race condition: the Python debugger supports inspection of stack frames, and it's a pure Python module.

https://docs.python.org/3/library/pdb.html
https://github.com/python/cpython/tree/3.7/Lib/pdb.py

-- Jonathan

On Tue, Sep 11, 2018 at 04:57:16PM +0100, Jonathan Fine wrote:
Summary: locals() and suggestion __params__ are similar, and roughly speaking each can be implemented from the other.
You cannot get a snapshot of the current locals just from the function parameters, since the current locals will include variables which aren't parameters. Likewise you cannot get references to the original function parameters from the current local variables, since the params may have been re-bound since the call was made. (Unless you can guarantee that locals() is immediately called before any new local variables were created, i.e. on entry to the function, before any other code can run. As you point out further below.) There's a similarity only in the sense that parameters of a function are included as local variables, but the semantics of __params__ as proposed and locals() are quite different. They might even share some parts of implementation, but I don't think that really matters one way or another. Whether they do or don't is a mere implementation detail.
Experts / pedants would prefer not to use the name __params__ for this purpose.
I consider myself a pedant (and on a good day I might pass as something close to an expert on some limited parts of Python) and I don't have any objection to the *name* __params__. From the perspective of *inside* a function, it is a matter of personal taste whether you refer to parameter or argument:

def func(a):
    # In the declaration, "a" is a parameter.
    # Inside the running function, once "a" has a value set,
    # it's a matter of taste whether you call it a parameter
    # or an argument or both; I suppose it depends on whether
    # you are referring to the *variable* or its *value*.
    ...

# But here 1 is the argument bound to the parameter "a":
result = func(1)

It is the semantics that I think are problematic, not the choice of name.
Indeed. Each time you call locals(), it returns a new dict with a snapshot of the current local namespace. Because it all happens inside the same function call, no external thread can poke inside your current call to mess with your local variables.

But that's different from setting function.__params__ to passed-in arguments. By definition, each external caller is passing in its own set of arguments. If you have three calls to the function:

function(a=1, b=2)  # called by A
function(a=5, b=8)  # called by B
function(a=3, b=4)  # called by C

In single-threaded code, there's no problem here:

- A makes the first call;
- the interpreter sets function.__params__ to A's arguments;
- the function runs with A's arguments and returns;
- only then can B make its call;
- the interpreter sets function.__params__ to B's arguments;
- the function runs with B's arguments and returns;
- only then can C make its call;
- the interpreter sets function.__params__ to C's arguments;
- the function runs with C's arguments and returns.

But in multi-threaded code, unless there's some form of locking, the three sets of steps can interleave in any unpredictable order, e.g.:

- A makes its call;
- B makes its call;
- the interpreter sets function.__params__ to B's arguments;
- the interpreter sets function.__params__ to A's arguments;
- the function runs with B's arguments and returns;
- C makes its call;
- the interpreter sets function.__params__ to C's arguments;
- the function runs with A's arguments and returns;
- the function runs with C's arguments and returns.

We could solve this race condition with locking, or by making the pair of steps

- the interpreter sets function.__params__;
- the function runs and returns;

a single atomic step. But that introduces a deadlock: once A calls function(), threads B and C will pause (potentially for a very long time) waiting for A's call to complete, before they can call the same function. I'm not an expert on threaded code, so it is possible I've missed some non-obvious fix for this, but I expect not.
In general, solving race conditions without deadlocks is a hard problem.
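The shared-attribute design being criticised can be sketched with a toy decorator (record_params is a hypothetical name; it works single-threaded, but two threads can interleave the set-then-read steps exactly as described above):

```python
import functools

def record_params(fn):
    # Toy emulation of the proposed externally visible fn.__params__:
    # store the keyword arguments of the *most recent* call on the
    # wrapper. This is precisely the shared mutable state that races
    # when two threads call the function concurrently.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.__params__ = dict(kwargs)
        return fn(*args, **kwargs)
    return wrapper

@record_params
def function(a=None, b=None):
    # In a second thread, __params__ may already hold that other
    # thread's arguments by the time this body reads it.
    return function.__params__
```

Single-threaded, `function(a=1, b=2)` returns `{'a': 1, 'b': 2}` as expected; with threads, nothing stops another call from overwriting the attribute between the set and the read.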
The problem isn't because the caller assigns to __params__ manually. At no stage does Python code need to try setting "__params__ = x", in fact that ought to be quite safe because it would only be a local variable. The race condition problem comes from trying to set function.__params__ on each call, even if its the interpreter doing the setting.
That wouldn't have the proposed semantics. __params__ is supposed to be a dict showing the initial values of the arguments passed in to the function, not merely a reference to the current frame. [...]
Oh well, that puts me in my place :-) I have no objection to __args__, but __locals__ would be very inappropriate, as locals refers to *all* the local variables, not just those which are declared as parameters. (Parameters are a *subset* of locals.)
Finally, __params__ would simply be the value of __locals__ before any assignment has been done.
Indeed. As Chris (I think it was) pointed out, we could reduce the cost of this with a bit of compiler magic. A function that never refers to __params__ would run just as it does today:

def func(a):
    print(a)

might look something like this:

  2           0 LOAD_GLOBAL              0 (print)
              2 LOAD_FAST                0 (a)
              4 CALL_FUNCTION            1
              6 POP_TOP
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

just as it does now. But if the compiler sees a reference to __params__ in the body, it could compile in special code like this:

def func(a):
    print(a, __params__)

  2           0 LOAD_GLOBAL              0 (locals)
              2 CALL_FUNCTION            0
              4 STORE_FAST               1 (__params__)

  3           6 LOAD_GLOBAL              1 (print)
              8 LOAD_FAST                0 (a)
             10 LOAD_FAST                1 (__params__)
             12 CALL_FUNCTION            2
             14 POP_TOP
             16 LOAD_CONST               0 (None)
             18 RETURN_VALUE

Although more likely we'd want a special op-code to populate __params__, rather than calling the built-in locals() function. I don't think that's a bad idea, but it does add more compiler magic, and I'm not sure that there is sufficient justification for it.

-- Steve
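The first disassembly above can be reproduced today with the standard dis module (the exact opcode names vary by CPython version; recent releases use CALL rather than CALL_FUNCTION, for example):

```python
import dis

def func(a):
    print(a)

# Print func's bytecode: on CPython this shows the
# LOAD_GLOBAL / LOAD_FAST / call sequence quoted above,
# with opcode names differing slightly between versions.
dis.dis(func)
```

dis.get_instructions(func) gives the same information programmatically, one Instruction object per opcode.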

Steve Barnes suggested adding __params__, as in
Steve D'Aprano commented
I'm puzzled here. Steve B provided the code fragment

for k, v in __params__

while Steve D provided the code fragment

function.__params__

by which I think he meant, in terms of Steve B's example,

a_method.__params__

Perhaps Steve D thought Steve B wrote:

def a_method(self, cr, uid, ids, values, context=None):
    ...
    # Is this what Steve D thought Steve B wrote?
    params = {k: v for k, v in a_method.__params__
              if k in parent.a_method.keys()}
    # Possibly add some additional entries here!
    super(self, parent).a_method(**params)

If Steve B had written this, then I would agree with Steve D's comment. But as it is, I see no race condition problem, should __params__ be properly implemented as a keyword identifier.

Steve D: please clarify or explain your use of function.__params__. Perhaps it was a misunderstanding. By the way, I've made a similar mistake on this very thread, so I hope no great shame is attached to such errors.

<quote>
https://mail.python.org/pipermail/python-ideas/2018-September/053224.html
Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem. Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders.
</quote>

-- Jonathan

On Wed, Sep 12, 2018 at 02:23:34PM +0100, Jonathan Fine wrote:
In context, what Steve Barnes said was

    If this [__params__] was accessible externally, as fn.__defaults__ is [...]
    https://mail.python.org/pipermail/python-ideas/2018-September/053322.html

Here is the behaviour of fn.__defaults__:

py> def fn(a=1, b=2, c=3):
...     pass
...
py> fn.__defaults__
(1, 2, 3)

Notice that it is an externally accessible attribute of the function object. If that's not what Steve Barnes meant, then I have no idea why fn.__defaults__ is relevant or what he meant.

I'll confess that I couldn't work out what Steve's code snippet was supposed to mean:

    params = {k:v for k,v in __params__ if k in parent.a_method.keys()}

Does __params__ refer to the currently executing a_method, or the superclass method being called later on in the line? Why doesn't parent.a_method have parens? Since parent.a_method probably isn't a dict, why are we calling keys() on a method object? The whole snippet was too hard for me to comprehend, so I went by the plain meaning of the words he used to describe the desired semantics.

If __params__ is like fn.__defaults__, then that would require setting fn.__params__ on each call. Perhaps I'm reading too much into the "accessible externally" part, since Steve's example doesn't seem to actually be accessing it externally.

-- Steve

Hi Steve Thank you for your prompt reply. You wrote:
I'll confess that I couldn't work out what Steve B's code snippet was supposed to mean:
params = {k:v for k,v in __params__ if k in parent.a_method.keys()}
The Zen of Python (which might not apply here) says:

    In the face of ambiguity, refuse the temptation to guess.

Now that we have more clarity, Steve D'A, please let me ask you a direct question. My question is about correctly implementing __params__ as a keyword identifier, with semantics as in Steve B's code snippet above.

Here's my question: Do you think implementing this requires the avoidance of a race hazard? Or perhaps it can be done, as I suggested, entirely within the execution frame on the stack?

-- Jonathan

On Wed, Sep 12, 2018 at 03:58:25PM +0100, Jonathan Fine wrote:
My question is about correctly implementing __params__ as a keyword identifier, with semantics as in Steve B's code snippet above.
The semantics of Steve's code snippet are ambiguous.
Here's my question: Do you think implementing this requires the avoidance of a race hazard?
I don't know what "this" is any more. I thought Steve wanted an externally accessible fn.__params__ dict, as that's what he said he wanted, but his code snippet doesn't show that. If there is no externally accessible fn.__params__ dict, then there's no race hazard. I see no reason why a __params__ local variable would be subject to race conditions. But as you so rightly quoted the Zen at me for guessing in the face of ambiguity, without knowing what Steve intends, I can't answer your question.

As a purely internal local variable, it would still have the annoyance that writing to the dict might not actually affect the local values, the same issue that locals() has. But if we cared enough, we could make the dict a proxy rather than a real dict.

I see no reason why __params__ must be treated as a special keyword, like __debug__, although given that it is involved in special compiler magic, that might be prudent. (Although, in sufficiently old versions of Python, even __debug__ was just a regular name.)
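The locals() annoyance mentioned here is easy to demonstrate; a minimal sketch (in CPython, the dict returned by locals() inside a function is a snapshot, so writes to it do not rebind the real locals):

```python
def demo():
    x = 1
    snapshot = locals()   # a snapshot of the frame's local namespace
    snapshot['x'] = 99    # mutating the returned dict...
    return x              # ...does not rebind the actual local variable

assert demo() == 1        # x is still 1
```

A __params__ implemented on top of the same machinery would share this behaviour unless, as suggested, it were made a write-through proxy instead of a plain dict.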
Or perhaps it can be done, as I suggested, entirely within the execution frame on the stack?
Indeed. Like I said right at the start, there shouldn't be any problem for the compiler adding a local variable to each function (or just when required) containing the initial arguments bound to the function parameters. *How* the compiler does it, whether it is done during compilation or on entry to the function call, or something else, is an implementation detail which presumably each Python interpreter can choose for itself. All of this presumes that it is a desirable feature. -- Steve

On 12/09/2018 16:38, Steven D'Aprano wrote:
Hi,

My intent with __params__ (or whatever it might end up being called) was to provide a mechanism whereby we could:

a) find out, before calling, which parameters a function/method accepts (just as __defaults__ tells us which parameters have default values and so do not need to be supplied in every call). Since this would normally be a compile time operation I do not anticipate any race conditions. I suspect that this would also be of great use to IDE authors and others, as well as the use case on this thread.

b) have a convenient mechanism for accessing, from within the function/method, all of the supplied parameters/arguments (whether actually given or from defaults): both the parameter names and the values supplied at the time of the specific call.

The example I gave was a rough and ready filtering of the outer function's parameters down to those that are accepted by the function that is about to be called (I suspect that locals() might have been a better choice here). I don't anticipate race conditions here either, as the values would be local at this point.

The idea was to provide a similar mechanism to the examples of functions that accept a list and dictionary in addition to the parameters that they do consume, so as to be able to work with parameter lists/dictionaries that exceed the requirements. The difference is that, since we can query the function/method for what parameters it accepts and filter what we have to match, we do not need to alter the signature of the called item. This is important when providing wrappers for code that we do not have the freedom to alter.

I have done a little testing and found that:

a) if we have fn(a, b, c) and call it with fn(b=2, c=3, a=1) it is quite happy and assigns the correct values, so constructing a dictionary that satisfies all of the required parameters and calling with fn(**the_dict) is fine.
b) calling dir() or locals() on the first line of the function gives the required information (but blocks the docstring, which would be a bad idea).

The one worry is how to get the required parameter/argument list for overloaded functions or methods, but AFAIK these are all calls to wrapped C/C++/other items so already take (*args, **kwargs) inputs. I would guess that we would need some sort of indicator for this type of function.

I hope I have made my thoughts clearer rather than muddier :-) Thank you all for taking the time to think about this.

-- Steve (Gadget) Barnes

Any opinions in this message are my personal opinions and do not reflect those of my employer.
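The filtering described above can be written today with inspect.signature rather than a new __params__ mechanism. A sketch, with invented names modelled loosely on the a_method example (parent_method and accepted_kwargs are illustrative, not real APIs); locals() plays the role of __params__:

```python
import inspect

def accepted_kwargs(func, candidates):
    """Filter 'candidates' down to keys that name parameters of 'func'."""
    return {k: v for k, v in candidates.items()
            if k in inspect.signature(func).parameters}

# The wrapped callable accepts fewer parameters than the wrapper receives.
def parent_method(cr, uid, ids):
    return (cr, uid, ids)

def a_method(cr, uid, ids, values, context=None):
    # locals() here contains cr, uid, ids, values, context; only the
    # first three survive the filter, so the call below is valid.
    return parent_method(**accepted_kwargs(parent_method, dict(locals())))
```

Calling `a_method(1, 2, [3], values={})` forwards just `cr`, `uid` and `ids`, without altering the signature of either function, which is the property argued for above.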

On Wed, Sep 12, 2018 at 06:59:44AM -0700, Ethan Furman wrote:
[...]
I'm finding it hard to understand the documentation for threading.local():

https://docs.python.org/3/library/threading.html#threading.local

as there isn't any *wink* although it does refer to the docstring of a private implementation module. But I can't get it to work. Perhaps I'm doing something wrong:

import time
from threading import Thread, local

def func():
    pass

def attach(value):
    func.__params__ = local()
    func.__params__.value = value

def worker(i):
    print("called from thread %s" % i)
    attach(i)
    assert func.__params__.value == i
    time.sleep(3)
    value = func.__params__.value
    if value != i:
        print("mismatch", i, value)

for i in range(5):
    t = Thread(target=worker, args=(i,))
    t.start()

print()

When I run that, each of the threads print their "called from ..." message, the assertions all pass, then a couple of seconds later they consistently all raise exceptions:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "<stdin>", line 5, in worker
AttributeError: '_thread._local' object has no attribute 'value'

In any case, if Steve Barnes didn't actually intend for the __params__ to be attached to the function object as an externally visible attribute, the whole point is moot.

-- Steve
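For what it's worth, the AttributeError in the script above seems to come from attach() rebinding func.__params__ to a *fresh* local() on every call, discarding the instances the other threads wrote to. A sketch of the conventional usage, with a single shared local created once (names are illustrative):

```python
import threading

# Created once and shared by all threads; the *state* is per-thread.
_params = threading.local()

def attach(value):
    _params.value = value          # visible only in the setting thread

def worker(i, results):
    attach(i)
    results[i] = _params.value     # always this thread's own value

results = {}
threads = [threading.Thread(target=worker, args=(i, results))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {i: i for i in range(5)}   # no mismatches, no AttributeError
```

This per-thread isolation is also why a thread-local __params__ attribute would dodge the race condition, at the price of no longer being meaningfully "accessible externally".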

On Mon, Sep 10, 2018 at 11:00 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
hmm -- this is a trick -- in those cases, I find myself using *args, **kwargs when overloading methods. But that does hide the method signature, which is really unfortunate. It works pretty well for things like GUI toolkits, where you might be subclassing a wx.Window, and the docs for wx.Window are pretty easy to find, but for your own custom classes with nested subclassing, it does get tricky.

For this case, I kinda like Steve Barnes' idea (I think it is his) to have a "magic" object of some type, so you can have BOTH specified parameters, and easy access to the *args, **kwargs objects. Though I'm also wary of the magic... Perhaps there's some way to make it explicit, like "self":

def fun(a, b, c, d=something, e=something, &args, &&kwargs):

(I'm not sure I like the &, so think of it as a placeholder.) In this case, &args would be the *args tuple, and &&kwargs would be the **kwargs dict (as passed in) -- completely redundant with the positional and keyword parameters. So the above could be:

def a_method(self, cr, uid, ids, values, context=None, &args, &&kwargs):
    super(self, parent).a_method(*args, **kwargs)
    do_things_with(cr, uid, ...)

So you now have a clear function signature, access to the parameters, and also a clear and easy way to pass the whole batch on to the superclass' method.

I just came up with this off the top of my head, so I'm sure there are big issues, but maybe it can steer us in a useful direction.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R  (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115  (206) 526-6317 main reception

Chris.Barker@noaa.gov

Another possibility would be to be able to have alternative signatures for a single function, the first being the one shown in inspection and for auto-completion, the other one(s?) just creating new references to the same variables. Like this:

def fun(a, b, c, d=something1, e=something2, f=something3)(_, *args, e=something2, **kwargs):
    # do whatever you need
    assert args[0] == b
    assert kwargs["d"] == something1
    super().fun("foo", *args, e="bar", **kwargs)

I'm not sure what would happen if we didn't provide the same defaults for `e` in the two signatures (probably an exception).

On Tue, Sep 11, 2018 at 10:12:56AM +0200, Chris Barker via Python-ideas wrote:
Do we need to solve this in the interpreter? Surely this is an argument for better tooling. A sophisticated IDE should never be a *requirement* for coding in Python, but good tools can make a big difference in the pleasantness or otherwise of coding. Those tools don't have to be part of the language.

At least for methods, code completers ought to be able to search the MRO for the first non-**kwargs signature and display parameters from further up the MRO:

class Parent:
    def method(self, spam):
        pass

class Child(Parent):
    def method(self, **kwargs):
        pass

Now when I type Child().method(<TAB>) the IDE could search the MRO and find "spam" is the parameter. That becomes a "quality of IDE" issue, and various editors and IDEs can compete to have the best implementation.

Or perhaps we could have an officially blessed way to give tools a hint as to what the real signature is:

class Child(Parent):
    @signature_hint(Parent.method)
    def method(self, **kwargs):
        pass

Statically, that tells the IDE that the "true" signature of Child.method can be found from Parent.method; dynamically, the decorator might copy that signature into Child.method.__signature_hint__ for runtime introspection by tools like help().

The beauty of this is that it is independent of inheritance. We could apply this decorator to any function, and point it to any other function or method, or even a signature object:

@signature_hint(open)
def my_open(*args, **kwargs):
    ...

And being optional, it won't increase the size of any functions unless you specifically decorate them.

-- Steve

On Tue, Sep 11, 2018 at 08:53:55PM +1000, Steven D'Aprano wrote: [...]
Here's an untested implementation:

import inspect

def signature_hint(callable_or_sig, *, follow_wrapped=True):
    if isinstance(callable_or_sig, inspect.Signature):
        sig = callable_or_sig
    else:
        sig = inspect.signature(callable_or_sig,
                                follow_wrapped=follow_wrapped)
    def decorator(func):
        func.__signature_hint__ = sig
        return func
    return decorator

inspect.signature would need to become aware of these hints too:

def f(a, b=1, c=2):
    pass

@signature_hint(f)
def g(*args):
    pass

@signature_hint(g)
def h(*args):
    pass

At this point h.__signature_hint__ ought to give

<Signature (a, b=1, c=2)>

(Note that this is not quite the same as the existing follow_wrapped argument of inspect.signature.)

This doesn't directly help Anders' problem of having to make calls like

func(a=a, b=b, c=c)  # apologies for the toy example

but at least it reduces the pain of needing to Repeat Yourself when overriding methods, which indirectly may help in some (but not all) of Anders' cases.

-- Steve
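As an aside (not something either Steve proposed here): inspect already has a hook with much of this behaviour, since inspect.signature() consults a __signature__ attribute on the function before inspecting its code. So the hint can be surfaced to existing tools today, without teaching inspect about a new dunder. A sketch:

```python
import inspect

def f(a, b=1, c=2):
    pass

def g(*args, **kwargs):
    pass

# inspect.signature() checks for a __signature__ attribute first, so
# assigning one makes g report f's signature to introspection tools.
g.__signature__ = inspect.signature(f)

assert str(inspect.signature(g)) == '(a, b=1, c=2)'
```

help(g) and IDEs that go through inspect.signature will show the borrowed signature, though unlike the proposed __signature_hint__ this overrides rather than merely annotates the real one.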

(nitpick: we're passing arguments, not parameters) I don't see how this could be confusing. Do you think it's confusing how many parameters a function has in python now because of the keyword only marker? This suggestion follows the same rules you should already be familiar with when counting parameters, why would you now have trouble counting when the line doesn't begin with "def " and end with ":"?
I expect this to be common enough to warrant nicer language constructs (like OCaml has). I expect people today to use positional arguments to get concise code, and I think python pushes people in this direction. This is a bad direction imo.
Run my analysis tool. Check the numbers. It's certainly true at work, and it's true for Django for example.

On 07/09/18 03:38, Anders Hovmöller wrote:
potayto, potahto
I counted commas. I came up with the wrong number. Simple. For what it's worth, I don't like the keyword-only marker or the proposed positional-only marker for exactly the same reason.
I disagree. Keyword arguments are a fine and good thing, but they are best used for optional arguments IMHO. Verbosity for the sake of verbosity is not a good thing.
OK, then your assertion didn't mean what I thought it means, and I'm very confused about what it does mean. Could you try that again? -- Rhodri James *-* Kynesim Ltd

There's also potentially trailing commas to confuse you further :P I'm not a big fan of the keyword argument only syntax either, but that ship has sailed long ago, so now I think we should consider it Pythonic and judge future suggestions accordingly. I do like the feature of keyword only and understand the tradeoffs made to make the syntax work, so I'm quite happy overall.
Hmm.. it seems to me like there are some other caveats to your position here. Like "no functions with more than two arguments!" or similar? Personally I think readability suffers greatly already at two arguments if none of the parameters are named. Sometimes you can sort of fix the readability with function names like do_something_with_a_foo_and_bar(foo, bar), but that is usually more ugly than just using keyword arguments.
Functions in real code have > 2 arguments. Often when reading the code the only way to know what those arguments are is by reading the names of the parameters on the way in, because they're positional arguments. But those aren't checked. To me it's similar to bracing for indent: you're telling the human one thing and the machine something else, and no one is checking that those two are in sync.

I have seen beginners try:

def foo(b, a):
    pass

a = 1
b = 2
foo(a, b)

and then be confused because a and b are flipped.

I have no idea if any of that made more sense :P Email is hard.

/ Anders
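The flip in that beginner example is easy to reproduce, and keyword arguments, with today's syntax, already guard against it; a minimal sketch:

```python
def foo(b, a):
    # Return the bindings so the effect of each call style is visible.
    return {'b': b, 'a': a}

a = 1
b = 2

# Positional call: values land by position, so the caller's a is bound
# to the parameter b, and vice versa.
assert foo(a, b) == {'b': 1, 'a': 2}

# Keyword call: names, not positions, decide the binding.
assert foo(a=a, b=b) == {'b': 2, 'a': 1}
```

The proposal in this thread is precisely about making the second call style cheap enough to be the default choice.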

On 07/09/18 14:59, Anders Hovmöller wrote:
No.
I'd have said three arguments in the general case, more if you've chosen your function name to make it obvious (*not* by that nasty foo_and_bar method!), though that's pretty rare. That said, I don't often find I need more than a few mandatory arguments.
I'll repeat; surprisingly few of my function have more than three mandatory (positional) arguments. Expecting to understand functions by just reading the function call and not the accompanying documentation (or code) is IMHO hopelessly optimistic, and just having keyword parameters will not save you from making mistaken assumptions.
I have seen teachers get their students to do that deliberately, to give them practical experience that the variable names they use in function calls are not in any way related to the names used in the function definition. I've not seen those students make the same mistake twice :-) I wonder if part of my dislike of your proposal is that you are deliberately blurring that disconnect? -- Rhodri James *-* Kynesim Ltd

On Fri, Sep 07, 2018 at 06:59:45AM -0700, Anders Hovmöller wrote:
Personally I think readability suffers greatly already at two arguments if none of the parameters are named.
*At* two arguments? As in this example?

    map(len, sequence)

I'll admit that I struggle to remember the calling order of list.insert, I never know which of these I ought to write:

    mylist.insert(0, 1)
    mylist.insert(1, 0)

but *in general* I don't think two positional arguments is confusing.
It is difficult to judge the merit of that made-up example. Real examples are much more convincing and informative.
Functions in real code have > 2 arguments.
Functions in real code also have <= 2 arguments.
I don't understand that sentence. If taken literally, the way to tell what the arguments are is to look at the arguments. I think you might mean the only way to tell the mapping from arguments supplied by the caller to the parameters expected by the called function is to look at the called function's signature. If so, then yes, I agree. But why is this relevent? You don't have to convince us that for large, complex signatures (a hint that you may have excessively complex, highly coupled code!) keyword arguments are preferable to opaque positional arguments. That debate was won long ago. If a complex calling signature is unavoidable, keyword args are nicer.
But those aren't checked.
I don't understand this either. Excess positional arguments aren't silently dropped, and missing ones are an error.
No, you're telling the reader and the machine the same thing. func(a, b, c) tells both that the first parameter is given the argument a, the second is given argument b, and the third is given argument c. What's not checked is the *intention* of the writer, because it can't be. Neither the machine nor the reader has any insight into what I meant when I wrote the code (not even if I am the reader, six weeks after I wrote the code). Keywords help a bit with that... it's harder to screw up open(filename, 'r', buffering=-1, encoding='utf-8', errors='strict') than: open(filename, 'r', -1, 'utf-8', 'strict') but not impossible. But again, this proposal isn't for keyword arguments. You don't need to convince us that keyword arguments are good.
How would they know? Beginners are confused by many things. Coming from a background in Pascal, which has no keyword arguments, it took me a while to get to grips with keyword arguments:

def spam(a, b):
    print("a is", a)
    print("b is", b)

a = 1
b = 2
spam(a=b, b=a)
print(a, b)

The effect of this, and the difference between the global a, b and local a, b, is not intuitively obvious.

-- Steve

It’s often enough. But yes, map seems logically positional to me too, but I can’t tell if it’s because I’ve programmed in positional languages for many years, or because I’m a Swedish and English native speaker. I don’t see why map would be clear and insert not, so I’m guessing it has to do with language somehow. I think it’s a good thing to be more explicit in border cases. I don’t know what the intuitions of future readers are.
It is difficult to judge the merit of that made-up example. Real examples are much more convincing and informative.
Agreed. I just could only vaguely remember doing this sometimes but I had no idea what to grep for so couldn’t find a real example :P
Functions in real code have > 2 arguments.
Functions in real code also have <= 2 arguments.
Yea and they are ok as is.
Good to see we have common ground here. I won’t try to claim the code base at work doesn’t have way too many functions with way too many parameters :P It’s a problem that we are working to ameliorate, but it’s also a problem my suggested feature would help with. I think we should accept that such code bases exist even when managed by competent teams. Adding one parameter is often ok, but over time this creates a problem. Refactoring to remove a substantial number of parameters is also not always feasible or worth the effort. I think we should expect such code bases to be fairly common, and more common in closed source big business line apps. I think it’s important to help with these uses, but I’m biased since it’s my job :P “We” did add @ for numerical work after all, and that’s way more niche than the types of code bases I’m discussing here. I think you’d also agree on that point?
Yea the arity is checked but if a refactor removes one parameter and adds another all the existing call sites are super obviously wrong if you look at the definition and the call at the same time, but Python doesn’t know.
Just like with bracing and misleading indents yes. It blames the user for a design flaw of the language.
What's not checked is the *intention* of the writer, because it can't be.
That’s my point yes. And of course it can be. With keyword arguments it is. Today. If people used them drastically more the computer would check intention more.
I’m not convinced I’m not in fact arguing this point :P There is a big and unfair advantage positional has over kw today due to the conciseness of one over the other. My suggestion cuts down this advantage somewhat, or drastically in some cases.
and then be confused because a and b are flipped.
How would they know?
How would they know what? They know it’s broken because their program doesn’t work. How would they know the computer didn’t understand a is a and b is b when it’s blatantly obvious to a human? That’s my argument isn’t it? :P / Anders

I disagree: when you have more than one parameter it's sometimes complicated to remember the order. Therefore, when you name your args, you have a much lower probability of passing the wrong variable, even with only one arg. Verbosity adds redundancy, so that both caller and callee are sure they mean the same thing. That's why Java has types everywhere, such that the "declaration" part and the "use" part agree on the same idea (same type).

Here's a function found online (I'm too lazy to write my own, but it would be mostly the same). Tell me how keyword arguments could help this... Or WHAT names you'd give.

from math import sqrt

def quad(a, b, c):
    """solves quadratic equations of the form
    aX^2+bX+c, inputs a,b,c,
    works for all roots (real or complex)"""
    root = b**2 - 4*a*c
    if root < 0:
        root = abs(complex(root))
        j = complex(0, 1)
        x1 = (-b + j*sqrt(root)) / (2*a)
        x2 = (-b - j*sqrt(root)) / (2*a)
        return x1, x2
    else:
        x1 = (-b + sqrt(root)) / (2*a)
        x2 = (-b - sqrt(root)) / (2*a)
        return x1, x2

After that, explain why forcing all callers to name their local variables a, b, c would be a good thing.

On Fri, Sep 7, 2018, 12:18 PM Robert Vanden Eynde <robertve92@gmail.com> wrote:

If you want to force using positional args, go ahead and use the Python docstring notation for positional-only parameters: we'd write

def quad(a, b, c, /)

The names should not be renamed because they already have a normal ordering, x ** n. This notation is standard, so it would be a shame to use something people don't use.

However, I recently used a quad function in one of my uni courses where the different factors are computed with a long expression, hence keyword arguments, so I'd call:

Vout = quad(
    a=... some long expression spanning a lot of lines ...,
    b=... same thing ...,
    c=... same thing ...)

Without the a= reminder, one could only count the indentation. And if you think it's a good idea to refactor it like that:

a = ... some long expression spanning a lot of lines ...
b = ... same thing ...
c = ... same thing ...
Vout = quad(a, b, c)

then you're in the case of quad(*, a, b, c) (even if here, one would never def quad(c,b,a)). Whether or not this refactor is clearer is a matter of "do you like functional programming".

However, kwargs are more useful in contexts where some parameters are optional or less frequently used. But it makes sense (see the PEP about mandatory kwargs). Kwargs are a wonderful invention in Python (or, lisp).

Le ven. 7 sept. 2018 à 18:54, David Mertz <mertz@gnosis.cx> a écrit :
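The `/` marker mentioned above, a documentation convention at the time of this thread, later became actual syntax with PEP 570 (Python 3.8+): parameters before it are positional-only, so callers cannot use the names at all. A sketch of the behaviour (the discriminant-only body is just for illustration):

```python
def quad(a, b, c, /):
    # Returns only the discriminant, to keep the example short.
    return b * b - 4 * a * c

assert quad(1, 2, 3) == -8        # positional call works

keyword_rejected = False
try:
    quad(a=1, b=2, c=3)           # keyword call is a TypeError
except TypeError:
    keyword_rejected = True
assert keyword_rejected
```

This is the mirror image of the thread's proposal: `/` forbids keyword use at the call site, while `*` in the signature (and the proposed `*` at the call site) pushes toward it.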

Top posting for once, since no one is quoting well in this thread: Does this in any way answer David's question? I'm serious; you've spent a lot of words that, as best I can tell, say exactly nothing about how keyword arguments would help that quadratic function. If I'm missing something, please tell me. On 07/09/18 18:17, Robert Vanden Eynde wrote:
-- Rhodri James *-* Kynesim Ltd

On Fri, Sep 7, 2018 at 2:22 PM Rhodri James <rhodri@kynesim.co.uk> wrote:
I read Robert's response as saying:

1. The quadratic formula and its parameter list are well-known enough that you shouldn't use different names or orders.

2. Even still, there are cases where the argument expressions are long enough that you might want to bind them to local variable names.

However, I don't think David's example/question is fair in the first place. Robert said that passing as keywords can be useful in cases where the order is hard to remember, and David responded with an example where the argument order is standardized (so you wouldn't forget order), then talked about "forcing" callers to use certain variable names (which I don't think is warranted).

Do you want to change my PEP suggestion to be about forcing stuff? Because otherwise I don’t see why you keep bringing that up. We’ve explained to you two times (three counting the original mail) that no one is saying anything about forcing anything.

On 09/06/2018 07:05 AM, Anders Hovmöller wrote:
On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
On Thu, Sep 06, 2018 at 12:15:46PM +0200, Anders Hovmöller wrote:
Direct disagreement is not uncivil, just direct. You asked a yes/no question and got a yes/no answer. D'Aprano's comments further down are also not uncivil, just explicative (not expletive ;) ) of his position. As for your proposal, I agree with D'Aprano -- this is a lot machinery to support a use-case that doesn't feel compelling to me, and I do tend to name my variables the same when I can. -- ~Ethan~

On Thu, 6 Sep 2018 at 09:51 Ethan Furman <ethan@stoneleaf.us> wrote:
It also wouldn't have hurt to say "I don't think so" versus the hard "no" as it means the same thing. You're right that blunt isn't necessarily uncivil, but bluntness is also interpreted differently in various cultures so it's something to avoid if possible. -Brett

On Thursday, September 6, 2018 at 6:51:12 PM UTC+2, Ethan Furman wrote:
It's a rhetorical question in a PR sense, not an actual yes/no question.
It's not a lot of machinery. It's super tiny. Look at my implementation. Generally, these arguments sound to me like the arguments against f-strings. I personally think f-strings are one of the best things to happen to Python in at least a decade; I don't know if people on this list agree?

On Thu, Sep 06, 2018 at 07:05:57AM -0700, Anders Hovmöller wrote:
On Thursday, September 6, 2018 at 3:11:46 PM UTC+2, Steven D'Aprano wrote:
[...]
You are suggesting special syntax which encourages people to name their local variables the same as the parameters to functions which they call. That makes a value judgement that it is not just a good thing to match those names, but that it is *such* a good thing that the language ought to provide syntax to make it easier. If we make this judgement that consistency of names is Good, then naturally *inconsistency* of names is, if not outright Bad, at least *Less* Good and therefore to be avoided. If this suggestion is accepted, it's likely that there will be peer pressure to treat this as more Pythonic (i.e. better quality code) than the older explicit name=name style, which will quickly become unPythonic. See, for example, how quickly people have moved to the implicit f-strings over the explicit string.format form. Laziness and conciseness trumps the Zen. Whether this is a good thing or a bad thing, I leave to people to make up their own mind. If we believe that this consistency is desirable then maybe this would be a good thing. Linters could warn when you use "name=spam" instead of "*, name"; style guides can demand that code always uses this idiom whenever practical, tutorials and blog posts will encourage it, and the peer pressure to rename variables to match the called function's parameters would be a good thing too. But if consistency for consistency's sake is not generally a good thing, then we ought not to add such syntax just for conciseness.
If library authors are choosing bad names for their parameters, how would this syntax change that practice? If they care so little for their callers that they choose poorly-named parameters, I doubt this will change their practice. But I'm not actually talking about library authors choosing bad names. I only used "a" as the name following your example. I presumed it was a stand-in for a more realistic name. There's no reason to expect that there's only one good name that works equally well as a formal parameter and as a local argument. Formal parameters are often more generic, local arguments can be more specific to the caller's context. Of course I understand that with this proposal, there's nothing *forcing* people to use it. But it shifts the *preferred* idiom from explicit "name=spam" to implicit "*, name" and puts the onus on people to justify why they aren't naming their local variables the same as the function parameter, instead of treating "the same name" as just another name. [...]
Let's not :-)

Regarding it being a code-smell:

https://refactoring.guru/smells/long-parameter-list
http://wiki.c2.com/?TooManyParameters

For a defence of long parameter lists, see the first answer here:

http://wiki.c2.com/?LongParameterList

but that active preference for long parameter lists seems to be very rare; more common is the view that *at best* long parameter lists are a necessary evil that needs mitigation. I think this is an extreme position to take:

https://www.matheus.ro/2018/01/29/clean-code-avoid-many-arguments-functions/

and I certainly wouldn't want to put a hard limit on the number of parameters allowed. But in general, I think it is unquestionable that long parameter lists are a code-smell.

It is also relevant in this sense. Large, complex function calls are undoubtedly painful. We have mitigated that pain somewhat by various means, probably the best of which are named keyword arguments, and sensible default values. The unintended consequence of this is that it has reduced the pressure on developers to redesign their code to avoid long function signatures, leading to more technical debt in the long run.

Your suggestion would also reduce the pain of functions that require many arguments. That is certainly good news if the long argument list is *truly necessary*, but it does nothing to reduce the amount of complexity or technical debt. The unintended consequence is likewise that it reduces the pressure on developers to avoid designing such functions in the first place.

This might sound like I am a proponent of hair-shirt programming where everything is made as painful as possible so as to force people to program the One True Way. That's not my intention at all. I love my syntactic sugar as much as the next guy. But I'd rather deal with the trap of technical debt and excessive complexity by avoiding it in the first place, not by making it easier to fall into.
The issue I have is that the problem you are solving is *too narrow*: it singles out a specific special case of "function call is too complex with too many keyword arguments", namely the one where the arguments are simple names which duplicate the parameter exactly, but without actually reducing or mitigating the underlying problems with such code. (On the contrary, I fear it will *encourage* such code.) So I believe this feature would add complexity to the language, making keyword arguments implicit instead of explicit, for very little benefit. (Notwithstanding your statement that 30% of function calls would benefit. That doesn't match my experience, but we're looking at different code bases.)
Indeed. And I'm sympathetic that some tasks are inherently complex and require many arguments. It's a matter of finding a balance between being able to use them, without encouraging them.
You claimed the benefit of "conciseness", but that doesn't actually exist unless your arguments are already local variables named the same as the parameters of the function you are calling. Getting those local variables is not always free: sometimes they're naturally part of your function anyway, and then your syntax would be a genuine win for conciseness. But often they're not, and you have to either forgo the benefit of your syntax, or add complexity to your function in order to gain that benefit. Pointing out that weakness in your argument is not a straw man. -- Steve

Steven's point is the same as my impression. It's not terribly uncommon in code I write or read to use the same name for a formal parameter (whether keyword or positional) in the calling scope. But it's also far from universal. Almost all the time where it's not the case, it's for a very good reason.

Functions by their nature are *generic* in some sense. That is, they allow themselves to be called from many other places. Each of those places has its own semantic context where different names are relevant to readers of the code in that other place. As a rule, the names used in function parameters are less specific or descriptive because they have to be neutral about that calling context. So e.g. a toy example:

    for record in ledger:
        if record.amount > 0:
            bank_transaction(currency=currencies[record.country],
                             deposit=record.amount,
                             account_number=record.id)

Once in a while the names in the two scopes align, but it would be code obfuscation to *force* them to do so (either by actual requirement or because "it's shorter").

On Thu, Sep 6, 2018 at 9:11 AM Steven D'Aprano <steve@pearwood.info> wrote:
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

On Thursday, September 6, 2018 at 4:13:45 PM UTC+2, David Mertz wrote:
Python's normal arguments already give people an option to write something else "because it's shorter" though: just use positional style. So your example is a bit dishonest, because in many or even most code bases it would be:

    bank_transaction(currencies[record.country], record.amount, record.id)

And I would urge you to try out my analysis tool on some large code base you have access to. I do have numbers to back up my claims. I don't have numbers on all the places where the names don't align but would be *better* if they did align, because that's a huge manual task, but I think it's pretty obvious these places exist.
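[Editorial aside: the analysis tool itself isn't reproduced in this thread, but the core check it performs — counting call-site keyword arguments of the form name=name — can be sketched with the standard ast module. This is a simplified stand-in, not the actual tool.]

```python
import ast


def count_matched_kwargs(source):
    # Count keyword arguments at call sites where the argument is a bare
    # name identical to the parameter name, i.e. the foo(a=a) pattern
    # that the proposed syntax would shorten.
    matched = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg is not None                     # skip **kwargs
                        and isinstance(kw.value, ast.Name)
                        and kw.value.id == kw.arg):
                    matched += 1
    return matched


# a=a and b=b match; d=3 and e=other do not.
count_matched_kwargs("foo(a=a, b=b, d=3, e=other)")  # returns 2
```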

I have encountered situations like this, and generally I just use **kwargs for non-critical parameters and handle the parameter management in the body of the function. This also makes it easier to pass the arguments to another function. You can use a dict comprehension to copy over the keys you want, then unpack them as arguments to the next function. On Thu, Sep 6, 2018 at 6:16 AM Anders Hovmöller <boxed@killingar.net> wrote:
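[Editorial aside: a minimal sketch of the pattern Kyle describes — accept **kwargs, filter with a dict comprehension, and forward the subset the inner function wants. All function names here are hypothetical.]

```python
def draw(color="black", width=1):
    # Inner function with its own keyword parameters.
    return {"color": color, "width": width}


def render(shape, **kwargs):
    # Copy over only the keys the inner function cares about,
    # then unpack them as keyword arguments to it.
    wanted = {k: v for k, v in kwargs.items() if k in ("color", "width")}
    return shape, draw(**wanted)


# Extra keys like dpi are silently ignored by the filter.
render("circle", color="red", dpi=300)  # returns ('circle', {'color': 'red', 'width': 1})
```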

Hi Anders,

Thank you for your interesting message. I'm sure it's based on a real need. You wrote:
I assume you're talking about defining functions. Here's something that already works in Python.

    >>> def fn(*, a, b, c, d, e): return locals()
    >>> fn.__kwdefaults__ = dict(a=1, b=2, c=3, d=4, e=5)
    >>> fn()
    {'d': 4, 'b': 2, 'e': 5, 'c': 3, 'a': 1}

And to pick up something from the namespace:

    >>> eval('aaa', fn.__globals__)
    'telltale'

Aside: This is short, simple and unsafe. Here's a safer way:

    >>> __name__
    '__main__'
    >>> import sys
    >>> getattr(sys.modules[__name__], 'aaa')
    'telltale'
From this, it should be easy to construct exactly the dict() that you want for the kwdefaults.
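[Editorial aside: a sketch of constructing that dict from names already in the namespace and installing it as the keyword defaults. The names a, b, c and the use of globals() here are illustrative, not from the thread.]

```python
# Module-level values we want to become the keyword-only defaults.
a, b, c = 1, 2, 3


def fn(*, a, b, c):
    return (a, b, c)


# Build the defaults dict from the enclosing namespace and install it,
# so fn() can then be called with no arguments at all.
fn.__kwdefaults__ = {name: globals()[name] for name in ("a", "b", "c")}

fn()  # returns (1, 2, 3)
```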
-- Jonathan

I missed an important line of code. Here it is:

    >>> aaa = 'telltale'

Once you have that, these will work:

    >>> eval('aaa', fn.__globals__)
    'telltale'
    >>> __name__
    '__main__'
    >>> import sys
    >>> getattr(sys.modules[__name__], 'aaa')
    'telltale'

-- Jonathan

Summary: I addressed the DEFINING problem. My mistake. Some rough ideas for the CALLING problem.

Anders has kindly pointed out to me, off-list, that I solved the wrong problem. His problem is CALLING the function fn, not DEFINING fn. Thank you very much for this, Anders.

For calling, we can use https://docs.python.org/3/library/functions.html#locals

    >>> lcls = locals()
    >>> a = 'apple'
    >>> b = 'banana'
    >>> c = 'cherry'
    >>> dict((k, lcls[k]) for k in ('a', 'b', 'c'))
    {'b': 'banana', 'c': 'cherry', 'a': 'apple'}

So in his example

    foo(a=a, b=b, c=c, d=3, e=e)

one could instead write

    foo(d=3, **helper(locals(), ('a', 'b', 'c', 'e')))

or perhaps better

    helper(locals(), 'a', 'b', 'c', 'e')(foo, d=3)

where the helper() picks out items from the locals(), and in the second form does the right thing with them.

Finally, one might be able to use

    >>> def fn(*, a, b, c, d, e): f, g, h = 3, 4, 5
    >>> fn.__code__.co_kwonlyargcount
    5
    >>> fn.__code__.co_varnames
    ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h')
    >>> fn.__code__.co_argcount
    0

to identify the names of all keyword arguments of the function foo(), and then provide the values in locals() as the defaults. Of course, this is somewhat magical, and requires strict conformance to conventions, so it might not be a good idea. The syntax could then be

    localmagic(foo, locals())(d=3)

which, for magicians, might be easier. But rightly in my opinion, Python is reluctant to use magic. On the other hand, for a strictly controlled Domain Specific Language it might, just might, be useful. And this list is for "speculative language ideas" (see https://mail.python.org/mailman/listinfo/python-ideas).

-- Jonathan
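[Editorial aside: the first helper() form above is straightforward to write out. A minimal sketch, where foo and caller are illustrative names:]

```python
def helper(namespace, *names):
    # Pick out the named items from the given namespace (e.g. locals()).
    return {name: namespace[name] for name in names}


def foo(*, a, b, c, d, e):
    return (a, b, c, d, e)


def caller():
    a, b, c, e = 1, 2, 3, 5
    # Equivalent to: foo(a=a, b=b, c=c, d=3, e=e)
    return foo(d=3, **helper(locals(), "a", "b", "c", "e"))


caller()  # returns (1, 2, 3, 3, 5)
```

Note that helper() must be handed locals() explicitly: it cannot reach back into the caller's frame on its own without frame-inspection magic.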

Sure. This was the argument against f-strings too. In any case, I'm not trying to solve the problem of how to extract things from the local namespace any more than "foo(a, b)" is. I'm trying to minimize the advantage positional arguments have over keyword arguments in brevity. If that makes sense?

Le 06/09/2018 à 03:15, Anders Hovmöller a écrit :
It will make code harder to read. Indeed, now your brain has to make the distinction between:

    foo(a, *, b, c)

and:

    foo(a, b, *, c)

Which is very subtle, yet not at all the same thing. All in all, this means:

- you have to stop to get the meaning of this. Scanning the lines doesn't work anymore.
- this is a great opportunity for mistakes, and hence bugs.
- the combination of the two makes bugs that are hard to spot and fix.

-1

I agree that this is a familiar pattern, but I long since forgot the specifics of the domain it happens in. I borrowed your code, and added filename tracking to see what source files had a high `could_have_been_a_matched_kwarg` count. Here is the top one:

https://github.com/django/django/blob/master/tests/migrations/test_autodetec...

The argument-name-matches-the-local-variable-name pattern does appear to happen in many test files. I assume programmers are more agnostic about variable names in a test because they have limited impact on the rest of the program; matching the argument names makes sense.

There are plenty of non-test files that can use this pattern; here are two intense ones:

https://github.com/django/django/blob/master/django/contrib/admin/options.py (212 call parameters match)
https://github.com/django/django/blob/master/django/db/backends/base/schema.... (69 call parameters match)

Opening these in an IDE, and looking at the function definitions, there is a good chance you find a call where the local variable and argument names match. It is interesting to see this match, but I'm not sure how I feel about it. For example, options.py has a lot of small methods that deal with (request, obj) pairs, e.g. `has_view_or_change_permission(self, request, obj=None)`. Does that mean there should be a namedtuple("request_on_object", ["request", "obj"]) to "simplify" all these calls? There are also many methods that accept a single `request` argument, but I doubt they would benefit from the new syntax.

On 2018-09-06 06:15, Anders Hovmöller wrote:

Hi,

I'd like to reopen this discussion if anyone is interested. Some things have changed since I wrote my original proposal, so I'll first summarize:

1. People seem to prefer the syntax `foo(=a)` over the syntax I suggested. I believe this is even more trivial to implement in CPython than my original proposal anyway...

2. I have updated my analysis tool: https://gist.github.com/610b2ba73066c96e9781aed7c0c0b25c It will now also give you statistics on the number of arguments function calls have. I would love to see some statistics for other closed-source programs you might be working on, and how big those code bases are.

3. I have made a sort-of implementation with MacroPy: https://github.com/boxed/macro-kwargs/blob/master/test.py I think this is a dead end, but it was easy to implement and fun to try!

4. I have also recently had the idea that a foo=foo type pattern could be handled in, for example, PyCharm as a code-folding feature (and maybe as a completion feature).

I still think that changing Python's syntax is the right way to go in the long run, but with point 4 above one could experience what this feature would feel like without running a custom version of Python and without changing your code. I admit to a lot of trepidation about wading into PyCharm's code though; I have tried to do this once before and I gave up.

Any thoughts?

/ Anders
participants (21)
- Anders Hovmöller
- Brett Cannon
- Brice Parent
- Calvin Spealman
- Chris Angelico
- Chris Barker
- David Mertz
- Ethan Furman
- Franklin? Lee
- Greg Ewing
- Jacco van Dorp
- Jonathan Fine
- Kyle Lahnakoski
- Michael Selik
- Michel Desmoulin
- Rhodri James
- Robert Vanden Eynde
- Stephan Houben
- Steve Barnes
- Steven D'Aprano
- Todd