Default arguments in Python - the return
Hello,

I'm surely not original in any way here, but I'd like to put back on the table the matter of "default argument values". Or, more precisely, the "one shot" handling of default values, which means that the same mutable objects, given once as default arguments, come back again and again at each function call. They thus become a kind of "static variable", which gets polluted by previous calls, whereas many, many Python users still believe that they get a fresh new value at each function call.
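For concreteness, the behavior in question, in a stock CPython session:

    >>> def f(x, L=[]):
    ...     L.append(x)
    ...     return L
    ...
    >>> f(1)
    [1]
    >>> f(2)          # the same list object survives between calls
    [1, 2]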
I think I understand how default arguments are currently implemented (and so "why", technically, it behaves this way), but I'm still unsure of "why", semantically, it must be so. I've browsed lots of Google entries on the subject, but as far as I'm concerned, I've found nothing in favor of the current semantics. I've rather found dozens, hundreds of posts of people complaining that they got bitten by this gotcha, many of them finishing with "Never put mutable values in default arguments, unless you're very very sure of what you're doing!". And no one seemed to enjoy the possibilities of getting "potentially static variables" this way.

Static variables are imo a rather bad idea, since they create "stateful functions" that make debugging and maintenance more difficult; but when such static variables are, furthermore, potentially non-static (i.e. when the corresponding function argument is supplied), I guess they become totally useless and dangerous - a perfect way to get hard-to-debug behaviours. On the other hand, when people write "def func(mylist=[]):", they basically DO want a fresh new list at each call, be it given by the caller or by the default argument system. So it's really a pity to need tricks like

    def f(a, L=None):
        if L is None:
            L = []

to get what we want (and if None were also a possible value? What other value should we put as a placeholder for "I'd like None or a fresh new list, but I can't say it directly"?).

So I'd like to know: are there other "purely intellectual" arguments for/against the current semantics of default arguments? (I might have missed some discussions on this subject; feel free to point me to them.) Currently, this default argument handling looks like a huge gotcha for newcomers, and, I feel, like an embarrassing wart to most Pythonistas. Couldn't it be worth finding a new way of doing it? Maybe there are strong arguments against a change at that level; for example, performance issues (I'm not good in those matters). But I need to be sure.

So here are my rough ideas on what we might do - if, after having the suggestions of expert people, it looks like it's worth writing a PEP, I'll be willing to participate in it. Basically, I'd change the Python system so that, when a default argument expression is encountered, instead of being executed, it's wrapped in some kind of zero-argument lambda expression, which gets pushed into the "func_defaults" attribute of the function. And then, each time a default argument is required in a function call, this lambda expression gets evaluated and gives the expected value.

I guess this will mean some overhead during the function call, so it might become another issue. It's also a non-backward-compatible change, so I assume we'd have to use a "from __future__ import XXX" until Python4000. But I think the change is worth a try, because it's a trap which waits for all Python beginners.

So, if this matter hasn't already been marked somewhere as a no-go, I eagerly await the feedback of users and core developers on the subject. :)
By the way, I'm becoming slightly allergic to C-like languages (too much hassle for too little gain, compared to high-level dynamic languages), but if this proposition goes ahead and no one wants to handle the implementation details, I'll put my hands into the engine ^^

Regards,
Pascal
On Fri, May 8, 2009 at 1:31 PM, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote:
Hello,
I'm surely not original in any way here, but I'd like to put back on the table the matter of "default argument values". Or, more precisely, the "one shot" handling of default values, which means that the same mutable objects, given once as default arguments, come back again and again at each function call. They thus become a kind of "static variable", which gets polluted by previous calls, whereas many, many Python users still believe that they get a fresh new value at each function call. I think I understand how default arguments are currently implemented (and so "why", technically, it behaves this way), but I'm still unsure of "why", semantically, it must be so. <snip> So I'd like to know: are there other "purely intellectual" arguments for/against the current semantics of default arguments? (I might have missed some discussions on this subject; feel free to point me to them.)
Point-point: http://mail.python.org/pipermail/python-ideas/2007-January/000121.html And see also the links below.
Currently, this default argument handling looks like a huge gotcha for newcomers, and, I feel, like an embarrassing wart to most Pythonistas. Couldn't it be worth finding a new way of doing it? Maybe there are strong arguments against a change at that level; for example, performance issues (I'm not good in those matters). But I need to be sure.
So here are my rough ideas on what we might do - if, after having the suggestions of expert people, it looks like it's worth writing a PEP, I'll be willing to participate in it. Basically, I'd change the Python system so that, when a default argument expression is encountered, instead of being executed, it's wrapped in some kind of zero-argument lambda expression, which gets pushed into the "func_defaults" attribute of the function. And then, each time a default argument is required in a function call, this lambda expression gets evaluated and gives the expected value.
I guess this will mean some overhead during the function call, so it might become another issue. It's also a non-backward-compatible change, so I assume we'd have to use a "from __future__ import XXX" until Python4000. But I think the change is worth a try, because it's a trap which waits for all Python beginners.
So, if this matter hasn't already been marked somewhere as a no-go, I eagerly await the feedback of users and core developers on the subject. :)
It's basically been rejected. See the GvR Pronouncement:
http://mail.python.org/pipermail/python-3000/2007-February/005715.html
regarding the pre-PEP "Default Argument Expressions":
http://mail.python.org/pipermail/python-3000/2007-February/005704.html

Unless your exact idea somehow differs significantly from my pre-PEP (sounds like it doesn't, IMHO), it's not gonna happen. It's basically too magical.

Cheers,
Chris
--
http://blog.rebertia.com
On Fri, May 8, 2009 at 5:12 PM, Chris Rebert <pyideas@rebertia.com> wrote:
On Fri, May 8, 2009 at 1:31 PM, Pascal Chambon
So, if this matter hasn't already been marked somewhere as a no-go, I eagerly await the feedback of users and core developers on the subject. :)
It's basically been rejected. See GvR Pronouncement: http://mail.python.org/pipermail/python-3000/2007-February/005715.html regarding the pre-PEP "Default Argument Expressions": http://mail.python.org/pipermail/python-3000/2007-February/005704.html
Unless your exact idea somehow differs significantly from my pre-PEP (sounds like it doesn't IMHO), it's not gonna happen. It's basically too magical.
FWIW I don't find the dual semantics, with explicit syntax for the new semantics ("def foo(bar=new baz)") mentioned in the PEP, too magical. If even C, a relatively small language, affords two calling semantics, why would it be too confusing for Python? Perhaps that PEP might have had better luck if it didn't propose replacing the current semantics with the new one.

George
Pascal Chambon wrote:
I'm surely not original in any way there, but I'd like to put back on the table the matter of "default argument values".
There have been two proposals:

1. Evaluate the expression once, store the result, and copy it on each function call.
   - Expensive.
   - Nearly always not needed.
   - Not always possible.

2. Store the expression and evaluate it on each function call (your re-proposal).
   - Expensive.
   - The result may be different for each function call, and might raise an exception.
   - This is the job of the suite! Which is to say, run-time code belongs in the function body, not the header.
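For reference, proposal 1 can be approximated today with a decorator; here is a rough sketch (copy_defaults is a hypothetical helper, and the deepcopy call is exactly where "expensive" and "not always possible" bite):

    import copy
    import functools
    import inspect

    def copy_defaults(func):
        # Emulate proposal 1: deep-copy each stored default value at call time.
        spec = inspect.getargspec(func)
        defaults = spec.defaults or ()
        # Names of the trailing parameters that carry defaults, in order.
        names = spec.args[len(spec.args) - len(defaults):]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            filled = spec.args[:len(args)]    # parameters filled positionally
            for name, default in zip(names, defaults):
                if name not in filled and name not in kwargs:
                    kwargs[name] = copy.deepcopy(default)    # may be slow, may fail outright
            return func(*args, **kwargs)
        return wrapper

    @copy_defaults
    def f(x, L=[]):
        L.append(x)
        return L

    print f(1), f(2)    # [1] [2] - each call works on its own copy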
And no one seemed to enjoy the possibilities of getting "potentially static variables" this way.
You did not search hard enough.
Static variables are imo a rather bad idea,
So you want to take them away from everyone else. I think *that* is a rather bad idea ;-). No one is forcing you to use them.
On the other hand, when people write "def func(mylist=[]):", they basically DO want a fresh new list at each call,
Maybe, maybe not.
be it given by the caller or the default argument system. So it's really a pity to need tricks like
    def f(a, L=None):
        if L is None:
            L = []
Or don't supply a default arg if that is not what you really want. Putting call-time code in the function body is not a trick.
to get what we want (and if None were also a possible value?
    __none = object()

    def f(par=__none):
        if par is __none:
            ...

as has been posted each time this question has been asked.
I guess this will mean some overhead during function call,
I absolutely guarantee that it will. Function calls are expensive. Adding a function call for each default arg (and many functions have more than one) multiplies the calling overhead.
so this might become another issue.
Is and always has been. Terry Jan Reedy
Thanks everyone for the feedback and the links (I was obviously too confident in Google's first pages, to have missed such things >_<)

Terry Reedy wrote:
And no one seemed to enjoy the possibilities of getting "potentially static variables" this way.
You did not search hard enough.
Well, for sure some people here and there have used that semantic to have, for example, a "default cache" handling the requests for which a specific cache isn't provided. But that behavior can as easily be obtained in a much more explicit way, which furthermore lets you access your default cache easily from inside the function code, even when a specific cache is provided:

    class A:
        cache = [1, 2, 3]

        def func(self, x, newcache=cache):
            print "Current cache state:", newcache
            newcache.append(x)
            print "The static, default cache is", A.cache

So I don't see the default argument trick as a "neat feature", rather as a way of making simple things obscure.
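A quick session with the class above:

    >>> obj = A()
    >>> obj.func(4)
    Current cache state: [1, 2, 3]
    The static, default cache is [1, 2, 3, 4]
    >>> obj.func(5, newcache=[])
    Current cache state: []
    The static, default cache is [1, 2, 3, 4]

Note how the default cache stays reachable (as A.cache) even on the call that supplies its own cache.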
Static variables are imo a rather bad idea,
So you want to take them away from everyone else. I think *that* is a rather bad idea ;-). No one is forcing you to use them.
I don't want to annihilate all traces of static variables :p ; I just find them ugly, because they create stateful functions whose state is hidden inside them (like some do with free variables, too), and that's imo not a "robust code" best practice.

But what kills me with current default arguments is that they aren't even real static variables: they're "potentially static variables", and as far as I've seen, you have no easy way to check whether, for instance, the argument value that you've gotten is the default, static one, or a new one provided by the caller (of course, you can store the default value somewhere else for reference, but that's lamely redundant).

If people want static variables in Python, for example to avoid OO programming and still have stateful functions, we can add an explicit "static" keyword or its equivalent. But using the ambiguous value given via a default-valued argument is not pretty, imo. Unless we have a way to access, from inside a code block, the function object to which this code block belongs. Does it exist? Do we have any way, from inside a call block, to browse the default arguments that this code block might receive?
I guess this will mean some overhead during function call,
I absolutely guarantee that this will. Functions calls are expensive. Adding a function call for each default arg (and many functions have more than one) multiplies the calling overhead.
so this might become another issue.
Is and always has been.
Well, if, as was proposed in previous threads, the expression is only reevaluated in particular circumstances (i.e. if the user asks for it with a special syntax), it won't take more time than the usual "if myarg is None: myarg = []"; but I agree that alternate syntaxes have led to infinite and complex discussions, and that the simpler solution I proposed is likely to be too CPU-intensive, more than I expected...
to get what we want (and if None were also a possible value?
    __none = object()

    def f(par=__none):
        if par is __none:
            ...
as had been posted each time this question has been asked.
Well, it seems I didn't phrase my rhetorical question properly ^^. I wholly agree that you can always use another object as a placeholder; I just don't quite like the idea of creating new instances only to signify "that's not a valid value you can use; create a brand new one".

On the other hand, would anyone support my alternative wish of having a builtin "NotGiven", similar to "NotImplemented", dedicated to this fairly common task of "placeholder"? There would be two major pros to this, imo:

- giving programmers a handy object for all unwanted "mutable default argument" situations, without having to think "is None a value I might want to get?"
- *Important*: by appearing at the beginning of the docs near True and False, this name would be much more visible to beginners than the deep pages on "default argument handling"; thus, they'd have a much better chance of crossing warnings about this gotcha than they currently have (and seeing "NotGiven" in tutorials would force them to wonder why it's there; it's imo much more explicit than seeing "None" values instead).

So, since reevaluation of arguments actually *is* a no-go, and forbidding mutable arguments is obviously a no-go too, would you people support integrating "NotGiven" (or any other name) into the builtins? It sounds to me like a good practice.

Regards,
Pascal
On Sat, 9 May 2009 08:56:20 pm Pascal Chambon wrote:
But what kills me with current default arguments is that those aren't even real static variables : they're "potentially static variables", and as far as I've seen, you have no easy way to check whether, for instance, the argument value that you've gotten is the default, static one, or a new one provided by the caller (of course, you can store the default value somewhere else for reference, but it's lamely redundant).
I'm not really sure why you would want to do that. The whole point of default values is to avoid needing to care whether or not the caller has provided an argument. [...]
Does it exist ? Do we have any way, from inside a call block, to browse the default arguments that this code block might receive ?
    >>> def spam(n=42):
    ...     return "spam " * n
    ...
    >>> spam.func_defaults
    (42,)
dir(func_object) is your friend :) [...]
but I agree that alternate syntaxes have led to infinite and complex discussions, and that the simpler solution I proposed is likely to be too CPU-intensive, more than I expected...
I would support... no, that's too strong. I wouldn't oppose the suggestion that Python grow syntax for "evaluate this default argument every time the function is called (unless the argument is given by the caller)". The tricky part is coming up with good syntax and a practical mechanism. [...]
On the other hand, would anyone support my alternative wish of having a builtin "NotGiven", similar to "NotImplemented", dedicated to this fairly common task of "placeholder"?
There already is such a beast: None is designed to be used as a placeholder for Not Given, Nothing, No Result, etc. If None is not suitable, NotImplemented is also a perfectly good built-in singleton object which can be used as a sentinel. It's already used as a sentinel for a number of built-in functions and operators. There's no reason you can't use it as well.
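A minimal sketch of the second option (the function and parameter names are just illustrative):

    def append_to(x, L=NotImplemented):
        if L is NotImplemented:    # the caller gave nothing; make a fresh list
            L = []
        L.append(x)
        return L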
There would be two major pros to this, imo: - giving programmers a handy object for all unwanted "mutable default argument" situations, without having to think "is None a value I might want to get?"
But then they would need to think "Is NotGiven a value I might want to get, so I can pass it on to another function unchanged?", and you would then need to create another special value ReallyNotGiven. And so on.
- *Important*: by appearing at the beginning of the docs near True and False, this name would be much more visible to beginners than the deep pages on "default argument handling"; thus, they'd have a much better chance of crossing warnings about this gotcha than they currently have (and seeing "NotGiven" in tutorials would force them to wonder why it's there; it's imo much more explicit than seeing "None" values instead).
Heh heh heh, he thinks beginners read manuals :-)
So, since reevaluation of arguments actually *is* a no-go, and forbidding mutable arguments is obviously a no-go too, would you people support integrating "NotGiven" (or any other name) into the builtins? It sounds to me like a good practice.
-1 on an extra builtin. There are already two obvious ones, and if for some reason you need to accept None and NotImplemented as valid data, then you can create an unlimited number of sentinels with object(). The best advantage of using object() is that because the sentinel is unique to your module, you can guarantee that nobody can accidentally pass it, or expect to use it as valid data. -- Steven D'Aprano
Well well well, lots of interesting points have flowed there, I won't have any chance of reacting to each one ^^
And no one seemed to enjoy the possibilities of getting "potentially static variables" this way.

You did not search hard enough.
Would anyone mind pointing me to people that have made sweet use of mutable default arguments? At the moment I've only run into "memoization" thingies, which look rather awkward to me: either the "memo" has to be used from elsewhere in the program, and having to access it by browsing func_defaults doesn't look "KISS" at all, or it's really some private data of the function, and having it exposed in its interface is rather error-prone.
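For reference, the memoization idiom in question usually looks like this - the dict persists across calls precisely because the default expression is evaluated only once:

    def fib(n, _memo={0: 0, 1: 1}):
        if n not in _memo:
            _memo[n] = fib(n - 1) + fib(n - 2)
        return _memo[n]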
[...]
Does it exist ? Do we have any way, from inside a call block, to browse the default arguments that this code block might receive ?
[...]
dir(func_object) is your friend :)
Whoops, I meant "from inside a *code* block" :p But that's the "self function" thread you noted; I missed that one... thanks for pointing it out ^^
On the other hand, would anyone support my alternative wish of having a builtin "NotGiven", similar to "NotImplemented", dedicated to this fairly common task of "placeholder"?
There already is such a beast: None is designed to be used as a placeholder for Not Given, Nothing, No Result, etc.
If None is not suitable, NotImplemented is also a perfectly good built-in singleton object which can be used as a sentinel. It's already used as a sentinel for a number of built-in functions and operators. There's no reason you can't use it as well.
Well, to me there was something like a ternary semantic for arguments:

* None -> make do without this argument / don't use this feature
* NotGiven -> create the parameter yourself
* some value -> use this value as the parameter

But on reflection, it's quite rare that all three meanings have to be used in the same place, so I guess it's OK as it is... Even though, still, I'd not be against new, more explicit builtins. "None" has too many meanings to be "self-documenting", and I feel "NotImplemented" doesn't really fit where we mean "not given". It's the same thing for exceptions: I've seen people forced into ugly workarounds because they got "ValueError" where they would have loved to get "EmptyIterableError" or other precise exceptions. But maybe I'm worrying about details there :p
Heh heh heh, he thinks beginners read manuals :-)
^^ I'm maybe the only one, but I've always found the quickest way to learn a language/library was to read the doc. Wanna learn Python? Read the language reference, then the library reference. Wanna know the object model of PHP5 versus PHP4? Read the 50-page chapter on the matter. Wanna play with Qt? Read the class libraries first. :p Good docs get read like novels, and it's fun to cross most of the gotchas and implementation limitations without having been bitten by them first. People might have the feeling that they gain time by jumping straight into practice; I rather have the feeling that they lose a hell of a lot of it that way.

Back to the "mutable default arguments" thingy: I think I perfectly see the points in favor of having them as attributes of the function object, as is the case currently. It does make sense in many ways, even if less sense than "class attributes", for sure (the object model of Python is rock solid to me, whereas default arguments are on a thin border that few languages have had the courage to explore - most of them forbidding non-literal constants).

But imo, it'd be worth having a simple and consensual way of obtaining dynamic evaluation of default arguments, without the "if arg is None:" pattern. The decorators that people have submitted have sweet uses, but they either deepcopy arguments, use dynamic evaluation of code strings, or force you to lambda-wrap all arguments (see the sketch at the end of this message); so they're not equivalent to what newbies would most of the time expect - a reevaluation of the Python code they entered after '='.

So the best, imo, would really be a keyword or some other form that reproduces, with an easy syntax, the "lambda-wrapping" we had. If adding keywords is too violent, what would you people think of some notation similar to what we already have in the "function arguments world", i.e. stars?

    def func(a, c = *[]):
        pass

Having 1, 2 or 3 stars in the "default argument" expression, wouldn't that be OK? I guess they have no meaning there at the moment, so we could give them one: "keep that code as a lambda function and evaluate it at each function call". Couldn't we? The risk would be confusion with the other "*" and "**", but in that case we might put 3 stars (yeah, that's much, but...).

Any comment on this?

Regards,
Pascal

PS: Has anyone read Dale Carnegie's books here? That guy is a genius of social interactions, and I guess that if a third of posters/mailers had read him, there would never be any flames anymore on forums and mailing lists. Contrary to what his titles might suggest, he doesn't promote hypocrisy or cowardice; he simply points out lots of attitudes (often unconscious) that ruin discussions without in any way helping the matter move forward. I'm myself used to letting scornful or aggressive sentences pass by; but others don't like them, and I fully understand why. So couldn't we smooth the edges, in order to keep the discussion as it's supposed to be - a harmless sharing of pro and con arguments, which endangers no one's life - instead of having it randomly turn into a confrontation of egos, pushing each other down as if in an attempt not to drown?
http://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influence_People
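The "lambda-wrapping" mentioned above, spelled as a decorator under today's syntax (fresh_defaults is a hypothetical helper, not an existing library; it maps parameter names to zero-argument callables invoked on every call that omits the parameter):

    import functools
    import inspect

    def fresh_defaults(**factories):
        def decorator(func):
            spec = inspect.getargspec(func)
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # Which parameters did the caller actually supply?
                supplied = set(spec.args[:len(args)]) | set(kwargs)
                for name, factory in factories.items():
                    if name not in supplied:
                        kwargs[name] = factory()    # fresh value, every call
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @fresh_defaults(c=list)
    def func(a, c=None):
        c.append(a)
        return c

    print func(1), func(2)    # [1] [2] - a fresh list on every call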
On Mon, May 11, 2009 at 12:49 PM, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote: <snip>
So the best, imo, would really be a keyword or some other form that reproduces, with an easy syntax, the "lambda-wrapping" we had.

If adding keywords is too violent, what would you people think of some notation similar to what we already have in the "function arguments world", i.e. stars?

    def func(a, c = *[]):
        pass

Having 1, 2 or 3 stars in the "default argument" expression, wouldn't that be OK? I guess they have no meaning there at the moment, so we could give them one: "keep that code as a lambda function and evaluate it at each function call". Couldn't we? The risk would be confusion with the other "*" and "**", but in that case we might put 3 stars (yeah, that's much, but...).
Any comment on this ?
Seems unnecessarily confusing and sufficiently unrelated to the current use of stars in Python. -1 on this syntax. I'd look for a different punctuation/keyword. Cheers, Chris -- http://blog.rebertia.com
Pascal Chambon writes:
So couldn't we smooth the edges, in order to keep the discussion as it's supposed to be - a harmless sharing of pro and con arguments, which endangers no one's life -
In discussions about Python development, misuse of the term "Pythonic" to support one's personal preference is not harmless. It leads to confusion of newbies, and ambiguity in a term that is already rather precise, and becoming more so with every PEP (though it is hard to express in a few words as a definition). The result is that the BDFL may use that term at his pleasure, but the rest of us risk being brought up short by somebody who knows better.
instead of having it randomly turn into a confrontation of egos,
This was not a random event. It was triggered by, *and responded only to*, the misuse of the word "Pythonic".
pushing each other down as if in an attempt not to drown?
Hey, I haven't seen one claim that "this feature would look good in Perl" yet. The gloves are still on.<wink>
On Tue, May 12, 2009 at 1:36 PM, Stephen J. Turnbull <stephen@xemacs.org>wrote:
Pascal Chambon writes:
So couldn't we smooth the edges, in order to keep the discussion as it's supposed to be - a harmless sharing of pro and con arguments, which endangers no one's life -
In discussions about Python development, misuse of the term "Pythonic" to support one's personal preference is not harmless. It leads to confusion of newbies, and ambiguity in a term that is already rather precise, and becoming more so with every PEP (though it is hard to express in a few words as a definition). The result is that the BDFL may use that term at his pleasure, but the rest of us risk being brought up short by somebody who knows better.
instead of having it randomly turn into a confrontation of egos,
This was not a random event. It was triggered by, *and responded only to*, the misuse of the word "Pythonic".
I guess it's never occurred to me, and I wouldn't have thought it would be immediately clear to everyone, that Pythonic simply means "whatever the BDFL thinks". I've always thought it meant "elegant and in keeping with the design philosophy of Python", and up for discussion and interpretation by everyone. I never thought that it would be used as a means of *preventing* discussion about what was or was not 'Pythonic'. *Obviously*, the BDFL's opinions on the language are authoritative, but that doesn't make them beyond discussion. This is the Python-Ideas list, not the dev list, and I was discussing my own interpretation, not trying to force anyone into anything.

To recall a quote I heard once, "You are entitled to your own opinion, but not your own facts". I would have thought that expressing one's opinion about what is or is not Pythonic is a wonderful thing to encourage. It's like encouraging people to discuss what elegant code looks like, or the merits of a piece of writing.

I thought I was very clear that I was talking about my interpretation of what was Pythonic, and clear that I was in no way trying to claim authority. I feel a bit like I've been targeted by the thought police, truth be told, although that overstates matters. I didn't think I was in any way saying "My way is absolutely more Pythonic, you should all think like me", but much more along the lines of, "Hey, I think my solution captures something elegant and Pythonic, surely that's worth talking about even if there are some practical considerations involved". I just thought I'd be clear in saying "seems to me to be more Pythonic" rather than "is more Pythonic".

Where are people going to talk freely about their interpretation of what is and isn't Pythonic, if not the ideas list? I'm also subscribed to the python-dev list, and I've never attempted to force an opinion there. Isn't *this* list the right place to have conversations about these concepts? I don't think people should be pulled up short for talking about Pythonicity, only for trying to impose their world-view. That's what rubs wrongly - being told you're not even supposed to *talk* about something, or not entitled to an opinion on something. I would have thought that getting involved in discussing the Zen of Python should be a part of everyone's learning and growth, rather than something delivered like dogma. That's not to say there isn't a right answer on many issues, but it has to be acceptable to discuss the issues and to hold personal opinions.

Cheers,
-T
On Tue, May 12, 2009, Tennessee Leeuwenburg wrote:
I thought I was very clear that I was talking about my interpretation of what was Pythonic, and clear that I was in no way trying to claim authority. I feel a bit like I've been targeted by the thought police, truth be told, although that overstates matters. I didn't think I was in any way saying "My way is absolutely more Pythonic, you should all think like me", but much more along the lines of, "Hey, I think my solution captures something elegant and Pythonic, surely that's worth talking about even if there are some practical considerations involved". I just thought I'd be clear in saying "seems to me to be more Pythonic" rather than "is more Pythonic".
That may have been your intent, but it sure isn't what I read in your original post. I suggest you re-read it looking for what might get interpreted as obstreperous banging on the table: http://mail.python.org/pipermail/python-ideas/2009-May/004601.html If you still don't see it, I'll discuss it with you (briefly!) off-list; that kind of tone discussion is really off-topic for this list. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "It is easier to optimize correct code than to correct optimized code." --Bill Harlan
On Tue, May 12, 2009 at 2:42 PM, Aahz <aahz@pythoncraft.com> wrote:
On Tue, May 12, 2009, Tennessee Leeuwenburg wrote:
I thought I was very clear that I was talking about my interpretation of what was Pythonic, and clear that I was in no way trying to claim authority. I feel a bit like I've been targeted by the thought police, truth be told, although that overstates matters. I didn't think I was in any way saying "My way is absolutely more Pythonic, you should all think like me", but much more along the lines of, "Hey, I think my solution captures something elegant and Pythonic, surely that's worth talking about even if there are some practical considerations involved". I just thought I'd be clear in saying "seems to me to be more Pythonic" rather than "is more Pythonic".
That may have been your intent, but it sure isn't what I read in your original post. I suggest you re-read it looking for what might get interpreted as obstreperous banging on the table:
http://mail.python.org/pipermail/python-ideas/2009-May/004601.html
If you still don't see it, I'll discuss it with you (briefly!) off-list; that kind of tone discussion is really off-topic for this list.
Agreed. Anyone else who wants to chime in, feel free to email me off-list. Regardless of the rights and wrongs, I'll of course be extra careful in future to be crystal clear about my meaning. As far as I can tell, my original email is littered with terms like 'seems to me', 'in my opinion', 'personally', etc., which I would think would convey to anyone that I'm talking about a personal opinion and not trying to discredit anyone or anything. Not that it's about counting, but I count no fewer than seven occasions where I point out that I am advancing a personal opinion rather than making a universal statement. I'm not sure what else I should have done. Cheers, -T
Well, since adding new keywords or operators is very sensitive, and the existing ones are rather exhausted, it won't be easy to propose a new syntax... One last idea I might have: what about something like

    def myfunc(a, b, c = yield []):
        pass

I'm no expert in English, but I'd say the following "equivalents" of yield (dixit WordWeb) are in a rather good semantic area:
* Be the cause or source of
* Give or supply
* Cause to happen or be responsible for
* Bring in

Of course the behaviour of this yield is not so close to the one we know, but there is no interpretation conflict for the parser, and we might quickly get used to it:
* yield in a default argument => reevaluate the expression each time
* yield in a function body => return a value and prepare to receive one

How do you people feel about this?
Regards,
Pascal

PS: I heard some months ago someone who compared the new high-level languages to new religions - with the appearance of notions like "py-evangelists" and such - whereas it had (in his opinion) never been so for older languages :p That's imo somewhat true (and rather a good sign for those languages); I feel phrases like "pythonic" or Perl's "TIMTOWTDI" have gained some kind of sacred aura ^^
Pascal Chambon writes:
One last idea I might have : what about something like
    def myfunc(a, b, c = yield []): pass
As syntax, I'd be -0.5. But my real problem is that the whole concept seems to violate several of the "Zen" tenets. Besides those that Steven D'Aprano mentioned, I would add two more.

First is "explicit is better than implicit" at the function call. True, for the leading cases of "[]" and "{}", "def foo(bar=[])" is an easy mistake for novice Python programmers to make with current semantics. And I agree that it is very natural to initialize an unspecified list or dictionary with the empty object. But those also have a convenient literal syntax that makes it clear that the object is constructed at function call time: "myfunc([])" and "myfunc({})". Since that syntax is always available even with the proposed new semantics, I feel this proposal also violates "there's one -- and preferably only one -- obvious way to do it". I understand the convenience argument, but that is frequently rejected on Python-Dev with "your editor can do that for you; if it doesn't, that's not a problem with Python, it's a problem with your editor."

I also see the elegance and coherence of Tennessee's proposal to *always* dynamically evaluate, but I don't like it. Given that always evaluating dynamically is likely to have a performance impact that is as surprising as the behavior of "def foo(bar=[])", I find it easy to reject that proposal on the grounds of "although practicality beats purity".

I may be missing something, but it seems to me that the proponents of this change have yet to propose any concrete argument that it is more Pythonic than the current behavior of evaluating default expressions once, at function definition time, and expressing dynamic evaluation by explicitly invoking the expression conditionally in the function's suite, or as an argument.

Regarding the syntax, the recommended format for argument defaults is

    def myfunc(a, b, c=None): pass

Using a keyword (instead of an operator) to denote dynamically evaluated defaults gives:

    def myfunc(a, b, c=yield []): pass

which looks like a typo to me. I feel there should be a comma after "yield", or an index in the brackets. (Obviously, there can't be a comma because "[]" can't be a formal parameter, and yield is a keyword so it can't be the name of a sequence or mapping. So the syntax probably 'works' in terms of the parser. I'm describing my esthetic feeling, not a technical objection.) As for "yield" itself, I would likely be confused as to when the evaluation takes place. "Yield" strikes me as an imperative, "give me the value *now*", i.e. at definition time.
On Wed, 13 May 2009 16:34:02 +0900, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
But those also have a convenient literal syntax that makes it clear that the object is constructed at function call time: "myfunc([])" and "myfunc({})".
I totally agree with you here. And as you say, "explicit, etc."... Then the argument must not be 'defaulted' at all in the function def. This is a semantic change, and a pity when the argument has an "obvious & natural" default value. While we're at removing default args and requiring them to be explicit instead, why have default args at all in Python? Especially, why should immutable defaults be OK, and mutable ones be wrong and replaced with an explicit value at call time? I mean that this distinction makes no sense conceptually: the user just wants a default; this is rather a Python internal-soup detail raising an issue.

I would rather require the contrary, namely that static vars should not be mixed up with default args. This is not only confusion in code, but also a design flaw. Here's how I see the various possibilities (with some amount of exaggeration ;-):

    def f(arg, lst=[]):
        # !!! lst is no default arg,       !!!
        # !!! it's a static var instead    !!!
        # !!! that will be updated in code !!!
        <do things>

    def f(arg):
        <do things>
    f.lst = []    # init static var

or

    def f(arg):
        # init static var on first call
        try:
            f.lst
        except AttributeError:
            f.lst = []
        <do things>

Denis
------
la vita e estrany
spir writes:
Especially, why should immutable defaults be OK, and mutable ones be wrong and replaced with explicit value at call time?
"Although practicality beats purity." Consider a wrapper for something like Berkeley DB which has a plethora of options for creating external objects, but which most users are not going to care about. It makes sense for those options to be optional. Otherwise people will be writing def create_db(path): """Create a DB with all default options at `path`.""" create_db_with_a_plethora_of_options(path,a,b,c,d,e,f,g,h,i,j,k,l,m) def create_db_with_nondefault_indexing(path,indexer): """Create a DB with `indexer`, otherwise default options, at `path`.""" create_db_with_a_plethora_of_options(path,indexer,b,c,d,e,f,g,h,i,j,k,l,m) etc, etc, ad nauseum. No, thank you!
I would rather require the contrary, namely that static vars should not be mixed up with default args. This is not only confusion in code, but also a design flaw. Here's how I see the various possibilities (with some amount of exaggeration ;-):
    def f(arg, lst=[]):
        # !!! lst is no default arg,       !!!
        # !!! it's a static var instead    !!!
        # !!! that will be updated in code !!!
        <do things>
    def f(arg):
        <do things>
    f.lst = []    # init static var
Don't generators do the right thing here?

    def f(arg):
        lst = []
        while True:
            <do things>
            yield None
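For instance, a concrete version of that pattern (2.x generator method names) - the list persists as ordinary local state, with no default arguments involved:

    def accumulator():
        lst = []
        while True:
            item = yield lst
            lst.append(item)

    >>> acc = accumulator()
    >>> acc.next()      # prime the generator
    []
    >>> acc.send(1)
    [1]
    >>> acc.send(2)
    [1, 2]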
On Wed, 13 May 2009 16:34:02 +0900, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
I also see the elegance and coherence of Tennessee's proposal to *always* dynamically evaluate, but I don't like it. Given that always evaluating dynamically is likely to have performance impact that is as surprising as the behavior of "def foo(bar=[])", I find it easy to reject that proposal on the grounds of "although practicality beats purity".
I do not understand why defaults should be evaluated dynamically (at runtime, I mean) in the proposal. This can happen, I guess, only for defaults that depend on the non-local scope:

    def f(arg, x=a):
        <body>

If we want this to change at runtime, then we'd better be very clear and explicit:

    def f(arg, x=UNDEF):
        # case x is not provided: use the current value of a
        if x is UNDEF:
            x = a
        <body>

(Also, if x is not intended as a user-supplied argument, then there is no need for a default at all. We simply have a non-referentially-transparent func.)

Denis
------
la vita e estrany
Pascal Chambon wrote:
One last idea I might have : what about something like
    def myfunc(a, b, c = yield []): pass
[...], but there is no interpretation conflict for the parser, and we might quickly get used to it
I am surprised that there is no conflict, but it looks like you are technically right. The parentheses around the yield expression are required in the following (valid) code:
    >>> def gen():
    ...     def func(arg=(yield 'starting')):
    ...         return arg
    ...     yield func
    ...
    >>> g = gen()
    >>> g.next()
    'starting'
    >>> f = g.send(42)
    >>> f()
    42
I would hate to see the meaning of the above change depending on whether the parentheses around the yield expression were there or not, so -1 on using "yield" for this. I'm +0 on the general idea of adding a keyword for delayed evaluation of default argument expressions. - Jacob
Jacob Holm wrote:
Pascal Chambon wrote:
One last idea I might have : what about something like
    def myfunc(a, b, c = yield []): pass
[...], but there is no interpretation conflict for the parser, and we might quickly get used to it
I am surprised that there is no conflict, but it looks like you are technically right. The parentheses around the yield expression are required in the following (valid) code:
    >>> def gen():
    ...     def func(arg=(yield 'starting')):
    ...         return arg
    ...     yield func
    ...
    >>> g = gen()
    >>> g.next()
    'starting'
    >>> f = g.send(42)
    >>> f()
    42
I would hate to see the meaning of the above change depending on whether the parentheses around the yield expression were there or not, so -1 on using "yield" for this.
I'm +0 on the general idea of adding a keyword for delayed evaluation of default argument expressions.
There's the suggestion that Carl Johnson gave:

    def myfunc(a, b, c else []): pass

or there's:

    def myfunc(a, b, c def []): pass

where 'def' stands for 'default' (or "defaults to").
To someone who's a novice at this, could someone explain to me why it has to be an existing keyword at all? Since no identifier is valid in that context anyway, why couldn't it be a new keyword that can still be used as an identifier in valid contexts? For example (not that I advocate this choice of keyword at all):

    def foo(bar reinitialize_default []):    # <-- it's a keyword here
        reinitialize_default = "It's an identifier here!"

That would be a syntax error now, and if it were defined as a keyword only in that context it wouldn't introduce backwards-compatibility problems and wouldn't force us to reuse an existing keyword in a context that may be a bit of a stretch. Is there a reason that this wouldn't be a viable approach?

On 2009-05-13, MRAB <google@mrabarnett.plus.com> wrote:
Jacob Holm wrote:
Pascal Chambon wrote:
One last idea I might have : what about something like
    def myfunc(a, b, c = yield []): pass
[...], but there is no interpretation conflict for the parser, and we might quickly get used to it
I am surprised that there is no conflict, but it looks like you are technically right. The parentheses around the yield expression are required in the following (valid) code:
    >>> def gen():
    ...     def func(arg=(yield 'starting')):
    ...         return arg
    ...     yield func
    ...
    >>> g = gen()
    >>> g.next()
    'starting'
    >>> f = g.send(42)
    >>> f()
    42
I would hate to see the meaning of the above change depending on whether the parentheses around the yield expression were there or not, so -1 on using "yield" for this.
I'm +0 on the general idea of adding a keyword for delayed evaluation of default argument expressions.
There's the suggestion that Carl Johnson gave:
def myfunc(a, b, c else []): pass
or there's:
def myfunc(a, b, c def []): pass
where 'def' stands for 'default' (or "defaults to").
On Wed, May 13, 2009 at 10:52 AM, Jeremy Banks <jeremy@jeremybanks.ca> wrote:
To someone who's a novice at this, could someone explain to me why it has to be an existing keyword at all? Since no identifier is valid in that context anyway, why couldn't it be a new keyword that can still be used as an identifier in valid contexts? For example (not that I advocate this choice of keyword at all):
    def foo(bar reinitialize_default []):    # <-- it's a keyword here
        reinitialize_default = "It's an identifier here!"
That would be a syntax error now and if it were defined as a keyword only in that context it wouldn't introduce backwards compatibility problems and wouldn't force us to reuse an existing keyword in a context that may be a bit of a stretch.
Is there a reason that this wouldn't be a viable approach?
Traditionally, keywords are recognized at the lexer level, which then passes tokens to the parser. Lexers are pretty simple (typically constants and regular expressions) and don't take the context into account. In principle what you're saying could work, but given the significant reworking of the lexer/parser it would require, it's quite unlikely to happen, for better or for worse. George
On Wed, 13 May 2009 11:52:57 -0300, Jeremy Banks <jeremy@jeremybanks.ca> wrote:
To someone who's a novice at this, could someone explain to me why it has to be an existing keyword at all? Since no identifier is valid in that context anyway, why couldn't it be a new keyword that can still be used as an identifier in valid contexts? For example (not that I advocate this choice of keyword at all):
    def foo(bar reinitialize_default []):    # <-- it's a keyword here
        reinitialize_default = "It's an identifier here!"
That would be a syntax error now and if it were defined as a keyword only in that context it wouldn't introduce backwards compatibility problems and wouldn't force us to reuse an existing keyword in a context that may be a bit of a stretch.
Is there a reason that this wouldn't be a viable approach?
My opinion on this is that you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, because the parse patterns are different:

    print_statement : "print" expression
    assignment      : name '=' expression

So you could safely have "print" as a name, or inside an expression. Even "print print" should work!

But traditionally, grammars are not built as a single & total definition of the whole language (as is often done using e.g. PEG, see http://en.wikipedia.org/wiki/Parsing_Expression_Grammar) but as a 2-layer definition: one for tokens (lexicon & morphology) and one for higher-level patterns (syntax & structure). The token layer is performed by a lexer that does not take context into account to recognize tokens, so it could not distinguish several syntactically & semantically different occurrences of "print" like the ones above. As a consequence, in most languages, keyword = reserved word.

There may be other reasons I'm not aware of.

Denis
------
la vita e estrany
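For what it's worth, CPython's own pure-Python tokenizer shows this division of labor: keywords come out as plain NAME tokens, and it is the grammar that reserves them (2.x session):

    >>> import tokenize, token, StringIO
    >>> src = StringIO.StringIO("print print\n")
    >>> for tok in tokenize.generate_tokens(src.readline):
    ...     print token.tok_name[tok[0]], repr(tok[1])
    ...
    NAME 'print'
    NAME 'print'
    NEWLINE '\n'
    ENDMARKER ''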
spir wrote:
My opinion on this is that you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, because the parse patterns are different:

    print_statement : "print" expression
    assignment      : name '=' expression

So you could safely have "print" as a name, or inside an expression. Even "print print" should work!
But you would not want print print and print(print) to have two different meanings. In Python, extra parens are fair around expressions, and print(print) is clearly a function call. --Scott David Daniels Scott.Daniels@Acm.Org
On Wed, 13 May 2009 12:30:04 -0700, Scott David Daniels <Scott.Daniels@Acm.Org> wrote:
spir wrote:
My opinion on this is that you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, because the parse patterns are different:

    print_statement : "print" expression
    assignment      : name '=' expression

So you could safely have "print" as a name, or inside an expression. Even "print print" should work!
But you would not want print print and print(print) to have two different meanings. In Python, extra parens are fair around expressions, and print(print) is clearly a function call.
You're right ;-) Denis ------ la vita e estrany
print(print) is not a function call in 2.x:
    >>> import types
    >>> def f(): pass
    ...
    >>> isinstance(f, types.FunctionType)
    True
    >>> isinstance(print, types.FunctionType)
      File "<stdin>", line 1
        isinstance(print, types.FunctionType)
                 ^
    SyntaxError: invalid syntax
    >>> p = "hi there"
    >>> print p
    hi there
    >>> print(p)
    hi there
(print_) is interpreted as an expression, which is then passed to the print statement On Wed, May 13, 2009 at 3:30 PM, Scott David Daniels <Scott.Daniels@acm.org> wrote:
spir wrote:
My opinion on this is that you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, because the parse patterns are different:

    print_statement : "print" expression
    assignment      : name '=' expression

So you could safely have "print" as a name, or inside an expression. Even "print print" should work!
But you would not want print print and print(print) to have two different meanings. In Python, extra parens are fair around expressions, and print(print) is clearly a function call.
--Scott David Daniels Scott.Daniels@Acm.Org
-- Gerald Britton
Typo:
(print_) is interpreted as an expression, which is then passed to the print statement
should be:

(p) is interpreted as an expression, which is then passed to the print statement

On Thu, May 14, 2009 at 10:29 AM, Gerald Britton <gerald.britton@gmail.com> wrote:
print(print) is not a function call in 2.x:
    >>> import types
    >>> def f(): pass
    ...
    >>> isinstance(f, types.FunctionType)
    True
    >>> isinstance(print, types.FunctionType)
      File "<stdin>", line 1
        isinstance(print, types.FunctionType)
                 ^
    SyntaxError: invalid syntax
    >>> p = "hi there"
    >>> print p
    hi there
    >>> print(p)
    hi there
(print_) is interpreted as an expression, which is then passed to the print statement
On Wed, May 13, 2009 at 3:30 PM, Scott David Daniels <Scott.Daniels@acm.org> wrote:
spir wrote:
My opinion on this is that you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, because the parse patterns are different:

    print_statement : "print" expression
    assignment      : name '=' expression

So you could safely have "print" as a name, or inside an expression. Even "print print" should work!
But you would not want print print and print(print) to have two different meanings. In Python, extra parens are fair around expressions, and print(print) is clearly a function call.
--Scott David Daniels Scott.Daniels@Acm.Org
-- Gerald Britton
-- Gerald Britton
Jeremy Banks wrote:
To someone who's a novice at this, could someone explain to me why it has to be an existing keyword at all? Since no identifier is valid in that context anyway, why couldn't it be a new keyword that can still be used as an identifier in valid contexts? For example (not that I advocate this choice of keyword at all):
    def foo(bar reinitialize_default []):    # <-- it's a keyword here
        reinitialize_default = "It's an identifier here!"
That would be a syntax error now and if it were defined as a keyword only in that context it wouldn't introduce backwards compatibility problems and wouldn't force us to reuse an existing keyword in a context that may be a bit of a stretch.
Is there a reason that this wouldn't be a viable approach?
At one time, 'as' was only a keyword in the context of import. So it is 'viable'. But it was a bit confusing for programmers and messy implementation-wise and I think the developers were glad to promote 'as' to a full keyword and would be reluctant to go down that road again.
MRAB wrote:
There's the suggestion that Carl Johnson gave:
def myfunc(a, b, c else []): pass
or there's:
def myfunc(a, b, c def []): pass
where 'def' stands for 'default' (or "defaults to").
I had the idea of

    def f(c=:[]):

where ':' is intended to invoke the idea of lambda, since the purpose is to turn the expression into a function that is automatically called (which is why lambda alone is not enough). So I would prefer

    c = def []

where def reads 'auto function defined by...', or

    c = lambda::[]

where the extra ':' indicates that the function is auto-called, or

    c = lambda():[]

(now illegal), where () is intended to show that the default arg is the result of calling the function defined by the expression. lambda:[]() (now legal) would mean to (uselessly) call the function immediately.

Thinking about it, I think those who want a syntax to indicate that the expression should be compiled into a function and called at runtime should build on the existing syntax (lambda...) for indicating that an expression should be compiled into a function, rather than inventing a replacement for it.

Terry Jan Reedy
On May 12, 3:56 pm, Pascal Chambon <chambon.pas...@wanadoo.fr> wrote:
Well, since adding new keywords or operators is very sensitive, and the existing ones are rather exhausted, it won't be easy to propose a new syntax...
One last idea I might have : what about something like
    def myfunc(a, b, c = yield []): pass
I'm no expert in English, but I'd say the following "equivalents" of yield (dixit WordWeb) are in a rather good semantic area:
* Be the cause or source of
* Give or supply
* Cause to happen or be responsible for
* Bring in
Of course the behaviour of this yield is not so close to the one we know, but there is no interpretation conflict for the parser, and we might quickly get used to it:
* yield in a default argument => reevaluate the expression each time
* yield in a function body => return a value and prepare to receive one
How do you people feel about this ? Regards, Pascal
I'm not a fan. If you thought not reevaluating default expressions was confusing for newbies, wait until you see what making up a new kind of yield will do to them. Why not just push for some decorators that do this to be included in the stdlib? I see the utility, but not the point of adding extra syntax.
    >>> @Runtime
    ... def f(x=a**2+2*b+c):
    ...     return x
    ...
    >>> a = 1
    >>> b = 2
    >>> c = 3
    >>> f()
    8
This seems much more intuitive and useful to me than adding new meanings to yield. Geremy Condra
On 13 May 2009, at 20:18, CTO wrote:
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
    >>> @Runtime
    ... def f(x=a**2+2*b+c):
    ...     return x
    ...
    >>> a = 1
    >>> b = 2
    >>> c = 3
    >>> f()
    8
This seems much more intuitive and useful to me than adding new meanings to yield.
This is not possible.

    def f(x=a**2+2*b+c):
        return x

is compiled to something very much like:

    _tmp = a**2+2*b+c
    def f(x=_tmp):
        return x

So it is impossible to find out what expression yields the default value of x by just looking at f. You have to use lambda, or use George Sakkis' idea of using strings for defaults and evaluating them at call time (but I'm not sure this will work reliably with nested functions).

-- Arnaud
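A quick session makes the point concrete - only the computed value survives in the function object:

    >>> a, b, c = 1, 2, 3
    >>> def f(x=a**2+2*b+c):
    ...     return x
    ...
    >>> f.func_defaults
    (8,)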
On May 13, 3:44 pm, Arnaud Delobelle <arno...@googlemail.com> wrote:
On 13 May 2009, at 20:18, CTO wrote:
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
    >>> @Runtime
    ... def f(x=a**2+2*b+c):
    ...     return x
    ...
    >>> a = 1
    >>> b = 2
    >>> c = 3
    >>> f()
    8
This seems much more intuitive and useful to me than adding new meanings to yield.
This is not possible.
    def f(x=a**2+2*b+c):
        return x
is compiled to something very much like:
    _tmp = a**2+2*b+c
    def f(x=_tmp):
        return x
So it is impossible to find out what expression yields the default value of x by just looking at f. You have to use lambda, or use George Sakkis' idea of using strings for defaults and evaluating them at call time (but I'm not sure this will work reliably with nested functions).
-- Arnaud
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>. For those with hyperlink allergies, the snippet posted above reevaluates the function whenever it is called, and can be used like so:
    >>> from runtime import runtime
    >>> @runtime
    ... def example1(x, y=[]):
    ...     y.append(x)
    ...     return y
    ...
    >>> example1(1)
    [1]
    >>> example1(2)
    [2]
or, as posted above,
    >>> a, b, c = 0, 1, 2
    >>> @runtime
    ... def example2(x=a**2+2*b+c):
    ...     return x
    ...
    >>> example2()
    4
    >>> a = 5
    >>> example2()
    29
The code given is slow and ugly, but it does appear - at least to me - to do what is being asked for here. Geremy Condra
A caveat on my previously posted code: as mentioned in another thread earlier today, it will not work on functions entered into the interpreter. Geremy Condra
2009/5/14 CTO <debatem1@gmail.com>:
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>.
I should have said "it's impossible short of looking at the source code or doing some very sophisticated introspection of the bytecode of the module the function is defined in". Even so, your recipe doesn't quite work in several cases, aside from when the source code is not accessible. Two examples:

    def nesting():
        default = 3
        @runtime
        def example3(x=default):
            return x
        example3()
    nesting()

    @runtime
    def name(x=a):
        return x
    name()

* The first one fails because default is not a global variable, and thus not accessible from within the runtime decorator. I don't know if this can be fixed. Note that for the function to exec() at all, you need to e.g. modify remove_decorators so that it also removes initial whitespace, something like:

    def remove_decorators(source):
        """Removes the decorators from the given function"""
        lines = source.splitlines()
        lines = [line for line in lines if not line.startswith('@')]
        indent = 0
        while lines[0][indent] == ' ':
            indent += 1
        new_source = '\n'.join(line[indent:] for line in lines)
        return new_source

* The second one fails because of a clash of names. I guess that can be fixed by specifying explicitly what the locals and globals are in the calls to exec and eval.

-- Arnaud
I should have said "it's impossible short of looking at the source code or doing some very sophisticated introspection of the bytecode of the module the function is defined in".
Any which way you slice this it will require that literal code *not* be interpreted until execution time. There are other ways to do that- storing it in strings, as George Sakkis does, modifying the language itself, as is the proposal here, or reading and parsing the original source. But you're right- more info is needed than what the bytecode contains.
Even so, your recipe doesn't quite work in several cases, aside from when the source code is not accessible.
Obviously, you are quite correct. Scoping in particular is difficult both to understand and to properly handle - had me chasing my tail for about twenty minutes earlier, actually - and I'm sure this is a security nightmare, but it does (generally) what is being asked for here. And it does so without recourse to changing the syntax. Here's another possible mechanism:

from functools import wraps
from inspect import getfullargspec

def runtime(f):
    """Evaluates a function's annotations at runtime."""
    annotations = getfullargspec(f)[-1]
    @wraps(f)
    def wrapped(*args, **kwargs):
        defaults = {k: eval(v) for k, v in annotations.items()}
        defaults.update(kwargs)
        return f(*args, **defaults)
    return wrapped

@runtime
def example1(x, y:'[]'):
    y.append(x)
    return y

@runtime
def example2(x:'a**2+2*b+c'):
    return x

Pretty simple, although it messes with the call syntax pretty badly, effectively treating a non-keyword argument as a keyword-only argument. There's probably a way around that but I doubt I'm going to see it tonight. The point is, I don't really see the point in adding a new syntax. There are *lots* of incomplete solutions floating around to this issue, and it will probably take a lot less work to make one of those into a complete solution than it will to add a new syntax, if that makes any sense at all. Also, do you mind posting any problems you find in that to the activestate message board so there is a record there?

Geremy Condra
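Assuming the annotation-based decorator above (it needs Python 3 for annotations, and plain eval for the string defaults), a session might look like this - a sketch, not verified output:

>>> example1(1)
[1]
>>> example1(2)    # y is rebuilt from its '[]' annotation, so no state leaks
[2]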
On Thu, 14 May 2009 09:03:34 am CTO wrote:
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>.
[...]
The code given is slow and ugly, but it does appear - at least to me - to do what is being asked here.
Your code seems to work only if the source to the function is available. That will mean it can't be used by people who want to distribute .pyc files only. -- Steven D'Aprano
On May 14, 6:27 pm, Steven D'Aprano <st...@pearwood.info> wrote:
On Thu, 14 May 2009 09:03:34 am CTO wrote:
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>.
[...]
The code given is slow and ugly, but it does appear - at least to me - to do what is being asked here.
Your code seems to work only if the source to the function is available. That will mean it can't be used by people who want to distribute .pyc files only.
-- Steven D'Aprano
I think the list is eating my replies, but suffice to say that there's a new version of the recipe at <URL: http://code.activestate.com/recipes/576754/> that doesn't have that limitation and looks pretty close to the syntax proposed above. Example:
>>> @runtime
... def myfunc(x, y, z: lambda: []):
...     z.extend((x, y))
...     return z
...
>>> myfunc(1, 2)
[1, 2]
>>> myfunc(3, 4)
[3, 4]
>>> myfunc(1, 2, z=[3, 4])
[3, 4, 1, 2]
Geremy Condra
On Fri, 15 May 2009 09:28:02 am CTO wrote:
On May 14, 6:27 pm, Steven D'Aprano <st...@pearwood.info> wrote:
On Thu, 14 May 2009 09:03:34 am CTO wrote:
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>.
[...]
The code given is slow and ugly, but it does appear - at least to me - to do what is being asked here.
Your code seems to work only if the source to the function is available. That will mean it can't be used by people who want to distribute .pyc files only.
-- Steven D'Aprano
I think the list is eating my replies, but suffice to say that there's a new version of the recipe at <URL: http://code.activestate.com/recipes/576754/> that doesn't have that limitation and looks pretty close to the syntax proposed above.
And instead has another limitation, namely that it only works if you pass the non-default argument by keyword.

f(123, y=456)  # works
f(123, 456)    # fails if y has been given a default value.

-- Steven D'Aprano
On May 14, 8:19 pm, Steven D'Aprano <st...@pearwood.info> wrote:
On Fri, 15 May 2009 09:28:02 am CTO wrote:
On May 14, 6:27 pm, Steven D'Aprano <st...@pearwood.info> wrote:
On Thu, 14 May 2009 09:03:34 am CTO wrote:
Thanks for the input, but I've already written the code to do this. It is available at <URL:http://code.activestate.com/recipes/576751/>.
[...]
The code given is slow and ugly, but it does appear - at least to me - to do what is being asked here.
Your code seems to work only if the source to the function is available. That will mean it can't be used by people who want to distribute .pyc files only.
-- Steven D'Aprano
I think the list is eating my replies, but suffice to say that there's a new version of the recipe at <URL: http://code.activestate.com/recipes/576754/> that doesn't have that limitation and looks pretty close to the syntax proposed above.
And instead has another limitation, namely that it only works if you pass the non-default argument by keyword.
f(123, y=456)  # works
f(123, 456)    # fails if y has been given a default value.
-- Steven D'Aprano
Correct. However, I remain confident that someone with ever so slightly more skill than myself can correct that problem - since you already seem to have taken a look at it, maybe that's something you could do?

Thanks in advance,
Geremy Condra
On Thu, 14 May 2009 05:18:37 am CTO wrote:
If you thought not reevaluating function expressions was confusing for newbies, wait until you see what making up a new kind of yield will do for them.
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
Even if a decorator solution can be made to work, it seems to me that the difficulty with a decorator solution is that it is all-or-nothing -- you can decorate the entire parameter list, or none of the parameters, but not some of the parameters. You can bet that people will say they want delayed evaluation of some default arguments and compile-time evaluation of others, in the same function definition.

There are work-arounds, of course, but there are perfectly adequate work-arounds for the lack of delayed evaluation defaults now, and it hasn't stopped the complaints.

I'm going to suggest that any syntax should be applied to the formal parameter name, not the default value. This feels right to me -- we're saying that it's the formal parameter that is "special" for using delayed semantics, not that the default object assigned to it is special. Hence it should be the formal parameter that is tagged, not the default value.

By analogy with the use of the unary-* operator, I suggest we use a new unary-operator to indicate the new semantics. Inside the parameter list, &x means to delay evaluation of the default argument to x to runtime:

def parrot(a, b, x=[], &y=[], *args, **kwargs):

As a bonus, this will allow for a whole new series of bike-shedding arguments about which specific operator should be used. *grin*

Tagging a parameter with unary-& but failing to specify a default value should be a syntax error:

def parrot(&x, &y=[]):

Likewise for unary-& outside of a parameter list. Bike-shedding away... *wink*

-- Steven D'Aprano
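For concreteness, a sketch of how a tagged parameter might be desugared - entirely hypothetical, since no implementation of unary-& exists; &y=[] would behave roughly like the sentinel idiom written out by hand:

_missing = object()  # stand-in for a compiler-generated sentinel

def parrot(a, b, x=[], y=_missing, *args, **kwargs):
    if y is _missing:
        y = []    # the tagged default expression, re-evaluated on each call
    ...           # body as before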
Steven D'Aprano wrote:
On Thu, 14 May 2009 05:18:37 am CTO wrote:
If you thought not reevaluating function expressions was confusing for newbies, wait until you see what making up a new kind of yield will do for them.
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
Even if a decorator solution can be made to work, it seems to me that the difficulty with a decorator solution is that it is all-or-nothing -- you can decorate the entire parameter list, or none of the parameters, but not some of the parameters. You can bet that people will say they want delayed evaluation of some default arguments and compile-time evaluation of others, in the same function definition.
There are work-arounds, of course, but there are perfectly adequate work-arounds for the lack of delayed evaluation defaults now, and it hasn't stopped the complaints.
I'm going to suggest that any syntax should be applied to the formal parameter name, not the default value. This feels right to me -- we're saying that it's the formal parameter that is "special" for using delayed semantics, not that the default object assigned to it is special. Hence it should be the formal parameter that is tagged, not the default value.
By analogy with the use of the unary-* operator, I suggest we use a new unary-operator to indicate the new semantics. Inside the parameter list, &x means to delay evaluation of the default argument to x to runtime:
def parrot(a, b, x=[], &y=[], *args, **kwargs):
As a bonus, this will allow for a whole new series of bike-shedding arguments about which specific operator should be used. *grin*
Tagging a parameter with unary-& but failing to specify a default value should be a syntax error:
def parrot(&x, &y=[]):
Likewise for unary-& outside of a parameter list.
Bike-shedding away... *wink*
Well, going back to 'def', it could mean 'deferred until call-time':

def parrot(a, b, x=[], y=def [], *args, **kwargs):
Steven D'Aprano wrote:
On Thu, 14 May 2009 05:18:37 am CTO wrote:
If you thought not reevaluating function expressions was confusing for newbies, wait until you see what making up a new kind of yield will do for them.
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
Even if a decorator solution can be made to work, it seems to me that the difficulty with a decorator solution is that it is all-or-nothing -- you can decorate the entire parameter list, or none of the parameters, but not some of the parameters. You can bet that people will say they want delayed evaluation of some default arguments and compile-time evaluation of others, in the same function definition.
Not all or nothing, and selection is easy. A decorator could only call callable objects, and could/should be limited to calling function objects or even function objects named '<lambda>'. And if one wanted the resulting value to be such a function, escape the default lambda expression with lambda.

x = [1, 2]

@call_lambdas
def f(a=len(x), lst = lambda: [], func = lambda: lambda x: 2*x):
    # a is int 2, lst is a fresh list, func is a one-parameter function
    ...

Terry Jan Reedy
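A rough sketch of what such a call_lambdas decorator might look like - hypothetical code, not Terry's; for each argument the caller omits, it calls the stored default if and only if that default is a function named '<lambda>':

from functools import wraps
from inspect import getfullargspec

def call_lambdas(f):
    spec = getfullargspec(f)
    names = spec.args[-len(spec.defaults):] if spec.defaults else []
    defaults = dict(zip(names, spec.defaults or ()))
    @wraps(f)
    def wrapper(*args, **kwargs):
        for name, default in defaults.items():
            if (name not in kwargs
                    and name not in spec.args[:len(args)]
                    and getattr(default, '__name__', None) == '<lambda>'):
                kwargs[name] = default()   # fresh value on every call
        return f(*args, **kwargs)
    return wrapper

Under this sketch, Terry's f above would see a == 2 (len(x) is still evaluated once, at definition time), a fresh list for lst, and the one-parameter function for func.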
On Thu, 14 May 2009 09:02:08 am Terry Reedy wrote:
Steven D'Aprano wrote:
On Thu, 14 May 2009 05:18:37 am CTO wrote:
If you thought not reevaluating function expressions was confusing for newbies, wait until you see what making up a new kind of yield will do for them.
Why not just push for some decorators that do this to be included in stdlib? I see the utility, but not the point of adding extra syntax.
Even if a decorator solution can be made to work, it seems to me that the difficulty with a decorator solution is that it is all-or-nothing -- you can decorate the entire parameter list, or none of the parameters, but not some of the parameters. You can bet that people will say they want delayed evaluation of some default arguments and compile-time evaluation of others, in the same function definition.
Not all or nothing, and selection is easy. A decorator could only call callable objects, and could/should be limited to calling function objects or even function objects named '<lambda>'.
Some people don't like writing:

def f(x=SENTINEL):
    if x is SENTINEL:
        x = []

and wish to have syntax so they can write something approaching:

def f(x=[]): ...

but have a fresh [] bound to x. You're supporting the syntax:

@call_lambdas  # Geremy Condra uses the name 'runtime'
def f(x=lambda: []): ...

(For the record, I've suggested creating a unary-& operator so that we can write "def f(&x=[])" to get late-binding of x.)

If I were to use the proposed late-binding feature, I would want it to be easy to use and obvious. I don't mind having to learn special syntax -- I'm not asking for it to be intuitive or guessable. But having to define the default value as a function (with or without lambda!) *and* call a decorator doesn't seem either easy or obvious. It feels like a kludge designed to get around a limitation of the language. (If you don't like the negative connotations of 'kludge', read it as 'hack' instead.)

In other words, it looks like your suggestion is "let's find another idiom for late-binding default arguments" rather than "let's give Python built-in support for optional late-binding of default arguments". If the first one is your intention, then I'll just walk away from this discussion. I already have a perfectly obvious and explicit idiom for late-binding of default arguments. I don't need a second one, especially one which I find exceedingly inelegant and ugly. If you want to use that in your own code, go right ahead, but I hope it never makes it into any code I ever need to read.

-1 from me on any solution which requires both a decorator and special treatment of defaults in the parameter list. In my opinion, only a solution with built-in support from the compiler is worth supporting. Anything else is a heavyweight, complicated solution for a problem that already has a lightweight, simple solution: use a sentinel. We already have a concise, fast, straightforward idiom which is easily learned and easily written, and while it's not intuitively obvious to newbies, neither is the suggested decorator+lambda solution. We don't need a complicated, verbose, hard-to-explain, hard-to-implement solution as well.

-- Steven D'Aprano
Some people don't like writing:
def f(x=SENTINEL):
    if x is SENTINEL:
        x = []
and wish to have syntax so they can write something approaching:
def f(x=[]): ...
And I understand that. However, I don't think it's important enough to make it worth changing the language, adding to Python's already significant function call overhead, or making the job of parsing function signatures more difficult. If there is a mechanism to do this inside of Python- and there are several- it is my personal opinion that those should be used in preference to modifying the language. As I am neither the smartest nor most competent programmer here, feel free to disregard my opinion- but the code I have produced matches one of the proposed syntaxes very closely, even if it is not the one you prefer.
but have a fresh [] bound to x. You're supporting the syntax:
@call_lambdas  # Geremy Condra uses the name 'runtime'
def f(x=lambda: []): ...
For the record, I'm not supporting a syntax. I'm simply stating that this can be done in Python as it currently stands, and that I am most emphatically not in favor of making function signatures any more complex than they already are.
(For the record, I've suggested creating a unary-& operator so that we can write "def f(&x=[])" to get late-binding of x.)
It's simple, short, and concise. If I were to get behind a proposal to change the language to support this feature, I would probably either get behind this one or perhaps a more general system for adding a metaclass equivalent to functions. However, as things stand I remain unconvinced that any of these things are necessary, or even particularly desirable, given the aforementioned complexity of function signatures.
If I were to use the proposed late-binding feature, I would want it to be easy to use and obvious. I don't mind having to learn special syntax -- I'm not asking for it to be intuitive or guessable. But having to define the default value as a function (with or without lambda!) *and* call a decorator doesn't seem either easy or obvious. It feels like a kludge designed to get around a limitation of the language. (If you don't like the negative connotations of 'kludge', read it as 'hack' instead.) In other words, it looks like your suggestion is "let's find another idiom for late-binding default arguments" rather than "let's give Python built-in support for optional late-binding of default arguments".
My suggestion is neither to find another idiom or to build in late-binding support. Some people- yourself included- want a new syntax. I demonstrated that close approximations of some of the mentioned syntaxes were possible in the language already, and while I appreciate that your preferred syntax is not on that list, I remain unconvinced that its purported benefits outweigh what I perceive to be its drawbacks.
If the first one is your intention, then I'll just walk away from this discussion. I already have a perfectly obvious and explicit idiom for late-binding of default arguments. I don't need a second one, especially one which I find exceedingly inelegant and ugly. If you want to use that in your own code, go right ahead, but I hope it never makes it into any code I ever need to read. -1 from me on any solution which requires both a decorator and special treatment of defaults in the parameter list.
If you are satisfied with the existing idiom, then use it. If you're not, my code is out there. If you don't like that, then write your own.
In my opinion, only a solution with built-in support from the compiler is worth supporting.
I'm afraid I'm unconvinced on that point.
Anything else is a heavyweight, complicated solution for a problem that already has a lightweight, simple solution: use a sentinel. We already have a concise, fast, straightforward idiom which is easily learned and easily written, and while it's not intuitively obvious to newbies, neither is the suggested decorator+lambda solution. We don't need a complicated, verbose, hard-to-explain, hard-to-implement solution as well.
-- Steven D'Aprano
I think I've already addressed this point, but once more for the record, I'm just not convinced that any of this- my code or your proposed changes- are needed. Until then you can have my -1. Geremy Condra
On Fri, 15 May 2009 12:37:19 pm CTO wrote:
Some people don't like writing:
def f(x=SENTINEL):
    if x is SENTINEL:
        x = []
and wish to have syntax so they can write something approaching:
def f(x=[]): ...
And I understand that. However, I don't think it's important enough to make it worth changing the language, adding to Python's already significant function call overhead, or making the job of parsing function signatures more difficult. If there is a mechanism to do this inside of Python- and there are several- it is my personal opinion that those should be used in preference to modifying the language. As I am neither the smartest nor most competent programmer here, feel free to disregard my opinion- but the code I have produced matches one of the proposed syntaxes very closely, even if it is not the one you prefer.
Your code also "add[s] to Python's already significant function call overhead" as well as "making the job of parsing function signatures more difficult". I don't mean to dump on your code. What you are trying to do is obviously very difficult from pure Python code, and the solutions you have come up with are neat kludges. But a kludge is still a kludge, no matter how neat it is :) [...]
(For the record, I've suggested creating a unary-& operator so that we can write "def f(&x=[])" to get late-binding of x.)
It's simple, short, and concise. If I were to get behind a proposal to change the language to support this feature, I would probably either get behind this one or perhaps a more general system for adding a metaclass equivalent to functions. However, as things stand I remain unconvinced that any of these things are necessary, or even particularly desirable, given the aforementioned complexity of function signatures.
I think we two at least agree. I don't think there's anything wrong with the current sentinel idiom. It's not entirely intuitive to newbies, or those who don't fully understand Python's object-binding model, but I don't consider that a flaw. So I don't see the compile-time binding of default args to be a problem that needs solving. But other people do, and they are loud and consistent in their complaints. Given that the squeaky wheel (sometimes) gets the grease, I'd just like to see a nice solution to a (non-)problem rather than an ugly solution. So I'm +0 on my proposal -- I don't think it solves a problem that needs solving, but other people do. I'm -1 on decorator+lambda solutions, because not only do they not solve a problem that needs solving, but they don't solve it in a particularly ugly and inefficient way *wink*
My suggestion is neither to find another idiom or to build in late-binding support. Some people- yourself included-
I think you've misunderstood my position. I'm one of the people defending the current semantics of default arg binding. But since others want optional late binding, I'm just trying to find a syntax that doesn't bite :)
want a new syntax. I demonstrated that close approximations of some of the mentioned syntaxes were possible in the language already, and while I appreciate that your preferred syntax is not on that list, I remain unconvinced that its purported benefits outweigh what I perceive to be its drawbacks.
Just out of curiosity, what do you see as the drawbacks? The ones that come to my mind are:

* people who want late binding to be standard will be disappointed (but that will be true of any solution)

* requires changes to Python's parser, to allow unary-& (but that will probably be very simple)

* requires changes to Python's compiler, to allow for some sort of late-binding semantics (thunks?) (but that will probably be very hard)

* requires people to learn one more feature (so newbies will still be confused that def f(x=[]) doesn't behave as they expect).

-- Steven D'Aprano
A thought from another direction... Any chance we could have the interpreter raise a warning for the case

def foo(a = []):
    # stuff

? The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.

Cheers,
-T
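For what it's worth, such a warning is easy to prototype outside the interpreter. A sketch of a checker - my own illustration, not a patch - that flags list/dict/set displays used as defaults:

import ast
import warnings

def check_defaults(source, filename='<string>'):
    """Warn about mutable literals used as default argument values."""
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    warnings.warn('%s:%d: mutable default in %r'
                                  % (filename, node.lineno, node.name))

check_defaults('def foo(a=[]): pass')   # triggers a UserWarning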
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types. Any object passed as a default argument would be checked to see that its type implements __immutable__ and that __immutable__() is True. Failure would mean a warning or even an error in subsequent versions. User-defined types could implement __immutable__ as they saw fit, in the traditional Pythonic consenting-adults-ly way.

On Thu, May 14, 2009 at 9:16 PM, Tennessee Leeuwenburg <tleeuwenburg@gmail.com> wrote:
A thought from another direction...
Any chance we could have the interpreter raise a warning for the case
def foo(a = []): #stuff
?
The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.
Cheers, -T
On Thu, May 14, 2009 at 9:16 PM, Tennessee Leeuwenburg <tleeuwenburg@gmail.com> wrote:
A thought from another direction...
Any chance we could have the interpreter raise a warning for the case
def foo(a = []): #stuff
?
The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.
On Thu, May 14, 2009 at 9:31 PM, Curt Hagenlocher <curt@hagenlocher.org> wrote:
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types. Any object passed as a default argument would be checked to see that its type implements __immutable__ and that __immutable__() is True. Failure would mean a warning or even an error in subsequent versions.
User-defined types could implement __immutable__ as they saw fit, in the traditional Pythonic consenting-adults-ly way.
(A) Python's new Abstract Base Classes would probably be a better way of doing such checking rather than introducing a new special method.

(B) What about having an __immutable__() that returned an immutable version of the object if possible? Then all default arguments could be converted to immutables at definition-time, with errors if a default cannot be made immutable. It would eliminate the performance concerns since the overhead would only be incurred once (when the function gets defined), rather than with each function call.

Cheers,
Chris
--
http://blog.rebertia.com
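A sketch of idea (B) as a plain decorator rather than compiler support - hypothetical, and limited to the obvious container types precisely because there is no general freeze protocol (and no frozendict, so dicts are left alone):

def freeze_defaults(f):
    """Convert known-mutable default values to immutable equivalents,
    once, at definition time."""
    conversions = {list: tuple, set: frozenset}
    if f.__defaults__:
        f.__defaults__ = tuple(
            conversions.get(type(d), lambda x: x)(d)
            for d in f.__defaults__)
    return f

@freeze_defaults
def f(x=[1, 2]):
    return x    # x now defaults to the tuple (1, 2)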
On Thu, May 14, 2009 at 9:31 PM, Curt Hagenlocher <curt@hagenlocher.org> wrote:
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types.
Python already has something *vaguely* like this; __hash__ is only supposed to be implemented on immutable objects. So if the object supports __hash__ one could declare that it is supposed to be immutable. However, it's occasionally useful to define __hash__ on mutable objects, and indeed it's defined by default on user-defined classes. So __hash__ isn't a viable substitute for __immutable__ (&c).

Chris Rebert wrote:
(A) Python's new Abstract Base Classes would probably be a better way of doing such checking rather than introducing a new special method
FWIW, I'm +0.3 for either __immutable__ or an ABC to express the concept. I don't know which would be the "right" way to do it--ultimately I defer to my betters.
(B) What about having an __immutable__() that returned an immutable version of the object if possible?
The "freeze" protocol was proposed in PEP 351. http://www.python.org/dev/peps/pep-0351/ It was rejected in 2006. It's a "can of worms". Reading PEPs is fun, /larry/
2009/5/15 Larry Hastings <larry@hastings.org>:
On Thu, May 14, 2009 at 9:31 PM, Curt Hagenlocher <curt@hagenlocher.org> wrote:
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types.
Python already has something *vaguely* like this; __hash__ is only supposed to be implemented on immutable objects. So if the object supports __hash__ one could declare that it is supposed to be immutable.
However, it's occasionally useful to define __hash__ on mutable objects, and indeed it's defined by default on user-defined classes. So __hash__ isn't a viable substitute for __immutable__ (&c).
However, immutability is a shallow thing: tuples are immutable, but the list inside ([],) can still be changed! -- Arnaud
On Fri, May 15, 2009 at 6:52 AM, Arnaud Delobelle <arnodel@googlemail.com> wrote:
2009/5/15 Larry Hastings <larry@hastings.org>:
On Thu, May 14, 2009 at 9:31 PM, Curt Hagenlocher <curt@hagenlocher.org> wrote:
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types.
Python already has something *vaguely* like this; __hash__ is only supposed to be implemented on immutable objects. So if the object supports __hash__ one could declare that it is supposed to be immutable.
However, it's occasionally useful to define __hash__ on mutable objects, and indeed it's defined by default on user-defined classes. So __hash__ isn't a viable substitute for __immutable__ (&c).
However, immutability is a shallow thing: tuples are immutable, but the list inside ([],) can still be changed!
More to the point, immutability is *not* the issue as Steven D'Aprano showed. There are perfectly legitimate reasons for using a default value that just happens to be mutable, without mutating it in the function body though. Dict is the most common example (especially since there is no frozendict type that could be used in its place). George
On Fri, May 15, 2009 at 5:59 AM, George Sakkis <george.sakkis@gmail.com> wrote:
More to the point, immutability is *not* the issue as Steven D'Aprano showed. There are perfectly legitimate reasons for using a default value that just happens to be mutable, without mutating it in the function body though. Dict is the most common example (especially since there is no frozendict type that could be used in its place).
There seem to be two separate "wants" that relate to this topic:

1. Preventing the "noob" mistake of saying "def f(x = {})" and expecting that a new empty dictionary will be produced for each call, and

2. Creating a more concise syntax for saying

def f(x = UNDEF):
    if x is UNDEF:
        x = {}

So far, the discussion seems to have revolved entirely around the second request -- which I find by far less compelling than the first; it's simply not a painful-enough pattern to warrant a special bit of syntax. Furthermore, it doesn't do anything to address the first desire.

--
Curt Hagenlocher
curt@hagenlocher.org
Curt Hagenlocher wrote:
There seem to be two separate "wants" that relate to this topic:
Good observation.
1. Preventing the "noob" mistake of saying "def f(x = {})" and expecting that a new empty dictionary will be produced for each call, and
As a couple of us have suggested, this, like similar jobs, should be handled by program checkers. I leave it to someone else to see if existing programs already check and warn and, if not, suggest this to their authors.
2. Creating a more concise syntax for saying

def f(x = UNDEF):
    if x is UNDEF:
        x = {}
So far, the discussion seems to have revolved entirely around the second request -- which I find by far less compelling than the first; it's simply not a painful-enough pattern to warrant a special bit of syntax. Furthermore, it doesn't do anything to address the first desire.
The existing pattern explicitly says what one wants done. I suspect editors with a macro facility could be given a macro to do most of the boilerplate writing. tjr
On Fri, May 15, 2009 at 03:33:13PM -0400, Terry Reedy wrote:
Curt Hagenlocher wrote:
1. Preventing the "noob" mistake of saying "def f(x = {})" and expecting that a new empty dictionary will be produced for each call, and
As a couple of us have suggested, this, like similar jobs, should be handled by program checkers. I leave it to someone else to see if existing programs already check and warn
PyLint certainly does.

Oleg.
--
Oleg Broytmann http://phd.pp.ru/ phd@phd.pp.ru
Programmers don't die, they just GOSUB without RETURN.
Terry Reedy wrote:
Curt Hagenlocher wrote:
1. Preventing the "noob" mistake of saying "def f(x = {})" and expecting that a new empty dictionary will be produced for each call, and
As a couple of us have suggested, this, like similar jobs, should be handled by program checkers. I leave it to someone else to see if existing programs already check and warn and, if not, suggest this to their authors.
Indeed, Pylint does handle this in some way:

"W0102: *Dangerous default value %s as argument* Used when a mutable value as list or dictionary is detected in a default value for an argument."

Some have talked about thunks or other concepts here; I too feel that this issue could be the occasion of introducing new features that exceed the scope of default arguments. Could you people develop a little what you think about with "thunk" or "early action"? Is the former different from an argument-less lambda function?

Else, concerning the syntax for "dynamic" default arguments, I guess something like:

def func(a, b @= []): pass

would be OK, wouldn't it?

Regards,
Pascal
On Mon, May 18, 2009 at 1:53 PM, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote: <snip>
Else, concerning the syntax for "dynamic" default arguments, I guesse somethng like :
def func(a, b @= []): pass
would be OK, wouldn't it ?
The BDFL has condemned introducing new assignment operators. See http://www.python.org/dev/peps/pep-3099/ : "There will be no alternative binding operators such as :=." Cheers, Chris -- http://blog.rebertia.com
On May 18, 5:17 pm, Chris Rebert <pyid...@rebertia.com> wrote:
On Mon, May 18, 2009 at 1:53 PM, Pascal Chambon<chambon.pas...@wanadoo.fr> wrote:
<snip>
Else, concerning the syntax for "dynamic" default arguments, I guesse somethng like :
def func(a, b @= []): pass
would be OK, wouldn't it ?
The BDFL has condemned introducing new assignment operators. See http://www.python.org/dev/peps/pep-3099/ :
"There will be no alternative binding operators such as :=."
Cheers, Chris
To be fair, the last discussion on thunks turned into a discussion of metalanguages and macros, which were also pretty thoroughly canned on that list. Geremy Condra
CTO wrote:

On May 18, 5:17 pm, Chris Rebert <pyid...@rebertia.com> wrote:
The BDFL has condemned introducing new assignment operators. See http://www.python.org/dev/peps/pep-3099/ :
"There will be no alternative binding operators such as :=."
Cheers, Chris
That's weird: in the archives quoted, I've found no exchange around the pros and cons of alternative binding operators, except the BDFL's "Brrh". ---> http://mail.python.org/pipermail/python-dev/2006-July/066995.html

I guess that the operators rejected there mostly concerned the differentiation between binding and rebinding, although I couldn't be sure.

Without a new keyword or operator, a good looking solution for dynamic defaults is unlikely to appear, imo. I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing "lambda : lambda : []" doesn't look good). Would there be any way of separating "to-be-called" lambdas from normal ones? Except with a syntax like "b = dyn(lambda: [])"?

Regards,
Pascal
On Tue, May 19, 2009 at 10:31:00PM +0200, Pascal Chambon wrote:
I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing "lambda : lambda : []" doesn't look good). Would there be any way of separating "to-be-called" lambdas from normal ones? Except with a syntax like "b = dyn(lambda: [])"?
@dynamic('b')
def func (a, b = lambda : []): pass

Oleg.
--
Oleg Broytmann http://phd.pp.ru/ phd@phd.pp.ru
Programmers don't die, they just GOSUB without RETURN.
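The same wrapper shape as the call_lambdas sketch earlier, made selective - a hypothetical dynamic(*names) factory that calls the stored default of just the named parameters when the caller omits them:

from functools import wraps
from inspect import getfullargspec

def dynamic(*dyn_names):
    def decorate(f):
        spec = getfullargspec(f)
        names = spec.args[-len(spec.defaults):] if spec.defaults else []
        factories = dict(zip(names, spec.defaults or ()))
        @wraps(f)
        def wrapper(*args, **kwargs):
            for name in dyn_names:
                if name not in kwargs and name not in spec.args[:len(args)]:
                    kwargs[name] = factories[name]()   # e.g. a fresh []
            return f(*args, **kwargs)
        return wrapper
    return decorate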
Oleg Broytmann wrote:
On Tue, May 19, 2009 at 10:31:00PM +0200, Pascal Chambon wrote:
I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing "lambda : lambda : []" doesn't look good). Would there be any way of separating "to-be-called" lambdas from normal ones? Except with a syntax like "b = dyn(lambda: [])"?
@dynamic('b')
def func (a, b = lambda : []): pass
Oleg.
*shame on me - /me must be tired this evening*

Well, with that solution we don't avoid some amount of boilerplate code, but imo the pros are:

- we don't have to care about copy/deepcopy constraints, or giving the default argument expression as a string (with all the problems eval() might raise)

- most important for me: we have a pattern that could be broadcast as a "standard practice", and thus warn newbies about the normal behaviour of default arguments.

If the "dynamic" (or any other proper name) decorator became part of the builtin ones, like staticmethod or classmethod, in my opinion newbies would quickly run into it, get used to it, and we'd have a common practice instead of the numerous ones currently possible to handle dynamic defaults (sentinels, other decorators...).

What do you think? Is that worth the change compared to the sentinel stuff? In my opinion, it's clearly more explicit, and the newbie will never think "that programmer is dumb, he has put None as a default whereas we can directly put lists or any other expressions in the signature, we're not restricted to constants like in other languages" as it might currently be the case.

I'm eventually +1 on such a decorator/lambda mix (in the absence of more straightforward but language-changing syntax).

++
Pascal
Oleg Broytmann wrote:
On Tue, May 19, 2009 at 10:31:00PM +0200, Pascal Chambon wrote:
I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing
To repeat: I think one-usage default function objects defined by lambda are rather rare. The most obvious is

def ident(ob): return ob

which, if used once, would likely be used more than once and defined as above, and which always could be so defined. I suspect even more rare is such defaults used in the same function as a mutable default such as [] or {} that needs protecting. And in such rare cases, one could either pull the function definition into a def statement or prefix it by a second lambda.
"lambda : lambda : []") doesn't look good).
and I would not suggest it unless what one wanted was for the default arg for each call, after the lambda call, to be 'lambda: []' and not '[]'.
Would there be any way of separating "to-be-called" lambdas from normal ones ? Except with a syntax like "b = dyn(lambda: [])" ?
@dynamic('b')
def func (a, b = lambda : []): pass
Or dynamic could have an optional explicit list. But that option would be rarely needed, I think. tjr
Pascal Chambon wrote:
Without new keyword or operator, a good looking solution for dynamic defaults is unlikely to appear, imo.
I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing "lambda : lambda : []" doesn't look good). Would there be any way of separating "to-be-called" lambdas from normal ones? Except with a syntax like "b = dyn(lambda: [])"?
Use function annotations.
>>> def f(a: dynamic.override=list, b=1): pass
...
>>> f.__annotations__
{'a': dynamic.override}
>>> f.__defaults__
(<class 'list'>, 1)
Just make a decorator that looks in the annotations to figure out what to replace and what to leave be. Also "lambda: []" is clearly inferior to "list". -- Carl
On May 19, 9:04 pm, Carl Johnson <cmjohnson.mailingl...@gmail.com> wrote:
Pascal Chambon wrote:
Without new keyword or operator, a good looking solution for dynamic defaults is unlikely to appear, imo.
I could content myself with the proposed solution:

@dynamic
def func (a, b = lambda : []): pass

But I just dislike the fact that the "dynamic" applies to all the defaults, even those which weren't supposed to be dynamic (and writing "lambda : lambda : []" doesn't look good). Would there be any way of separating "to-be-called" lambdas from normal ones? Except with a syntax like "b = dyn(lambda: [])"?
Use function annotations.
>>> def f(a: dynamic.override=list, b=1): pass
...
>>> f.__annotations__
{'a': dynamic.override}
>>> f.__defaults__
(<class 'list'>, 1)
Just make a decorator that looks in the annotations to figure out what to replace and what to leave be.
Also "lambda: []" is clearly inferior to "list".
-- Carl
Already done, as mentioned further up in this list: <URL: http://code.activestate.com/recipes/576754/> Geremy Condra
On Sat, 16 May 2009 01:52:44 am Curt Hagenlocher wrote:
There seem to be two separate "wants" that relate to this topic:
1. Preventing the "noob" mistake of saying "def f(x = {})" and expecting that a new empty dictionary will be produced for each call, and

2. Creating a more concise syntax for saying

def f(x = UNDEF):
    if x is UNDEF:
        x = {}
So far, the discussion seems to have revolved entirely around the second request -- which I find by far less compelling than the first; it's simply not a painful-enough pattern to warrant a special bit of syntax. Furthermore, it doesn't do anything to address the first desire.
Agreed. Sort of. I don't believe that we can do anything about #1. Changing the current behaviour of default arguments will almost certainly not happen -- I think Guido has ruled No on that one, although I might be mistaken. But even if it was changed, it would just lead to a different set of newbie mistakes. I personally don't find the boilerplate code in #2 particularly onerous, but it *is* boilerplate, and as a general rule, boilerplate is a bad thing. If we could agree on a syntax, then we could move that boilerplate out of our code (where it marginally complicates the structure of the function), into the bytecode, which would be a small but valuable win for readability. There's also a #3: one possible solution to this is to use thunks. Do thunks have uses outside of default arguments? I imagine they do -- Algol used them extensively. What else can thunks be used for? -- Steven D'Aprano
On Thu, May 14, 2009 at 9:16 PM, Tennessee Leeuwenburg <tleeuwenburg@gmail.com> wrote:
A thought from another direction...
Any chance we could have the interpreter raise a warning for the case
def foo(a = []): #stuff
?
The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.
+1 on throwing a ValueError for non-hash()-able (and thus probably mutable) default argument values. It's by no means perfect since objects are hash()-able by default using their ID, but it would at least help in the frequent "well-behaved mutable container object" cases. The barrier to this idea would be the code breakage involved; IMHO, code exploiting mutable defaults as static variables is in poor style anyway, but backward compatibility is a significant concern of the BDFL and Python devs; though I would hope the breakage might be seen as justifiable in this case. Cheers, Chris -- http://blog.rebertia.com
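As a decorator rather than an interpreter change, the check Chris describes is only a few lines - a sketch, with the caveat he already notes that hash() is a poor proxy for immutability:

def reject_mutable_defaults(f):
    """Raise ValueError at definition time for unhashable defaults."""
    for default in f.__defaults__ or ():
        try:
            hash(default)
        except TypeError:
            raise ValueError('unhashable default %r in %s()'
                             % (default, f.__name__))
    return f

@reject_mutable_defaults
def good(x=()):   # fine: tuples are hashable
    pass

@reject_mutable_defaults   # raises ValueError: [] is unhashable
def bad(x=[]):
    pass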
On Fri, 15 May 2009 04:13:23 pm Chris Rebert wrote:
+1 on throwing a ValueError for non-hash()-able (and thus probably mutable) default argument values.
You're not serious, are you? What could possibly be either surprising or objectionable about a function like this?

def f(data, substitutions = {}):
    ...
    name = data['name']
    obj.name = substitutions.get(name, name)
    ...

Example taken from: http://mail.python.org/pipermail/python-list/2008-August/504811.html

-- Steven D'Aprano
Chris Rebert writes:
+1 on throwing a ValueError for non-hash()-able (and thus probably mutable) default argument values. It's by no means perfect since objects are hash()-able by default using their ID, but it would at least help in the frequent "well-behaved mutable container object" cases.
-1

nonet_default_options = { 'dryrun': False, 'verbose': False }
net_default_options = { 'dryrun': True, 'verbose': True }

def command1(options=nonet_default_options):
    pass

def command2(options=net_default_options):
    pass

def command3(options=net_default_options):
    pass
from mystuff import command1, command2, command3, net_default_options

command1()
command2()
command3()

net_default_options['dryrun'] = False

command2()
command3()
is a common use-case for me, both during development and in scripts for occasional personal use. AFAICS this would break under your suggestion. Really, the only commonly-encountered problematic cases I can think of are the anonymous empty objects, because they're typically used not for their contents (d'oh), but rather as containers. pylint and friends can easily detect [] and {} as default values for arguments.
The barrier to this idea would be the code breakage involved; IMHO, code exploiting mutable defaults as static variables is in poor style anyway,
Sure, but it also breaks "globals" as above.
On Fri, 15 May 2009 18:32:32 +0900, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Chris Rebert writes:
+1 on throwing a ValueError for non-hash()-able (and thus probably mutable) default argument values. It's by no means perfect since objects are hash()-able by default using their ID, but it would at least help in the frequent "well-behaved mutable container object" cases.
-1
nonet_default_options = { 'dryrun': False, 'verbose': False }
net_default_options = { 'dryrun': True, 'verbose': True }
def command1(options=nonet_default_options):
    pass

def command2(options=net_default_options):
    pass

def command3(options=net_default_options):
    pass
Yop. The main issue is not that a default is mutable -- and the example above is perfect to show this -- but that when it is changed in the func body, the change affects the default itself and propagates to later calls. /This/ could be used as a criterion for a warning (or error), but it's probably much more costly to detect (parameter name on the left side of "=").

Denis
------
la vita e estrany
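Sketching the costlier check Denis has in mind - a deliberately naive illustration against the Python 3 ast module: flag a function when a parameter that has a default value is mutated through a well-known mutating method in the body:

import ast

MUTATORS = {'append', 'extend', 'insert', 'add', 'update', 'setdefault', 'pop'}

def find_mutated_defaults(source):
    """Return (function, parameter) pairs where a defaulted parameter
    is mutated in the function body."""
    hits = []
    for fn in ast.walk(ast.parse(source)):
        if not isinstance(fn, ast.FunctionDef):
            continue
        n = len(fn.args.defaults)
        defaulted = {a.arg for a in fn.args.args[-n:]} if n else set()
        for node in ast.walk(fn):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id in defaulted
                    and node.func.attr in MUTATORS):
                hits.append((fn.name, node.func.value.id))
    return hits

find_mutated_defaults('def f(x, y=[]):\n    y.append(x)')   # [('f', 'y')]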
On Fri, 15 May 2009 02:16:04 pm Tennessee Leeuwenburg wrote:
A thought from another direction...
Any chance we could have the interpreter raise a warning for the case
def foo(a = []): #stuff
?
The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.
I made that same suggestion nine months ago: http://mail.python.org/pipermail/python-list/2008-August/504701.html Responses were mixed, some people supported it, others did not, but it went nowhere. -- Steven D'Aprano
Tennessee Leeuwenburg wrote:
A thought from another direction...
Any chance we could have the interpreter raise a warning for the case
def foo(a = []): #stuff
This would be appropriate for any of the code check programs: PyChecker, PyLint, whatever. They already warn about things that are legal, might be wanted, but have a good chance of being an error.
[super-snip]
Just out of curiosity, what do you see as the drawbacks? [snip]
1) It adds to the complexity (and therefore overhead) of calling functions - not just the functions which use it, but even functions which operate as normal. Python already has a hefty penalty for calling functions, and I really don't want it to get any heavier. My 'solutions', as incomplete as they are, at least don't slow down anything else.

2) It adds to the complexity of introspecting functions. Take a good look at inspect.getfullargspec - it's a nightmare, and either it gets worse under this (bad) or it doesn't include information that is available to the compiler (even worse).

In addition to those minuses, it doesn't actually add to the capabilities of the language. If this were a proposal to add early action to Python (the equivalent of metaclasses or, to some extent, macro replacement) I would be much more likely to support it, despite the heavier syntax.

So, the existing idiom works pretty well, there doesn't seem to be a very good substitute, it slows the whole language down to implement, and it doesn't add any power if you do. Like I say, I'm unconvinced.

Geremy Condra
On Fri, 15 May 2009 02:48:53 pm CTO wrote:
[super-snip]
Just out of curiosity, what do you see as the drawbacks?
[snip]
1) It adds to the complexity (and therefore overhead) of calling functions- not just the functions which use it, but even functions which operate as normal.
Without an implementation, how can you possibly predict the cost of it?
Python already has a hefty penalty for calling functions,
I think you're badly mistaken. Python has a hefty cost for looking up names, but the overhead to *call* a function once you have looked up the name is minimal.
>>> from timeit import Timer
>>> def f():
...     pass
...
>>> min(Timer('f', 'from __main__ import f').repeat())
0.32181000709533691
>>> min(Timer('f()', 'from __main__ import f').repeat())
0.35797882080078125
No significant difference between looking up f and looking up f and calling it. Even if you give the function a complex signature, it's still relatively lightweight:
>>> def g(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, *args, **kwargs):
...     pass
...
>>> min(Timer('g()', 'from __main__ import g').repeat())
0.55176901817321777
and I really don't want it to get any heavier. My 'solutions', as incomplete as they are, at least don't slow down anything else.
Oh the irony. Decorators are very heavyweight. Here's a decorator that essentially does nothing at all, and it triples the cost of calling the function:
>>> from functools import wraps
>>> def decorator(f):
...     @wraps(f)
...     def inner(*args, **kwargs):
...         return f(*args, **kwargs)
...     return inner
...
>>> @decorator
... def h():
...     pass
...
>>> min(Timer('h()', 'from __main__ import h').repeat())
1.1645870208740234
I think, before making claims as to what's costly and what isn't, you should actually do some timing measurements.
2) It adds to the complexity of introspecting functions. Take a good look at inspect.getfullargspec- its a nightmare, and either it gets worse under this (bad) or it doesn't include information that is available to the compiler (even worse).
Well obviously this is going to make getfullargspec more complicated. But tell me, what do you think your solution using decorators does to getfullargspec?
In addition to those minuses, it doesn't actually add to the capabilities of the language.
It's an incremental improvement. Currently, late-binding of defaults requires boilerplate code. This will eliminate that boilerplate code.
If this were a proposal to add early action to Python (the equivalent of metaclasses or, to some extent, macro replacement) I would be much more likely to support it, despite the heavier syntax.
So, the existing idiom works pretty well,
100% agreed!
there doesn't seem to be a very good substitute,
Not without support in the compiler.
it slows the whole language down to implement,
You can't know that.
and it doesn't add any power if you do.
It reduces boilerplate, which is a good thing. Probably the *only* good thing, but still a good thing. -- Steven D'Aprano
On May 15, 4:14 am, Steven D'Aprano <st...@pearwood.info> wrote:
On Fri, 15 May 2009 02:48:53 pm CTO wrote:
[super-snip]
Just out of curiosity, what do you see as the drawbacks?
[snip]
1) It adds to the complexity (and therefore overhead) of calling functions- not just the functions which use it, but even functions which operate as normal.
Without an implementation, how can you possibly predict the cost of it?
Python already has a hefty penalty for calling functions,
I think you're badly mistaken. Python has a hefty cost for looking up names, but the overhead to *call* a function once you have looked up the name is minimal.
>>> from timeit import Timer
>>> def f():
...     pass
...
>>> min(Timer('f', 'from __main__ import f').repeat())
0.32181000709533691
>>> min(Timer('f()', 'from __main__ import f').repeat())
0.35797882080078125
No significant difference between looking up f and looking up f and calling it.
Even if you give the function a complex signature, it's still relatively lightweight:
>>> def g(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, *args, **kwargs):
...     pass
...
>>> min(Timer('g()', 'from __main__ import g').repeat())
0.55176901817321777
and I really don't want it to get any heavier. My 'solutions', as incomplete as they are, at least don't slow down anything else.
Oh the irony. Decorators are very heavyweight. Here's a decorator that essentially does nothing at all, and it triples the cost of calling the function:
>>> from functools import wraps
>>> def decorator(f):
...     @wraps(f)
...     def inner(*args, **kwargs):
...         return f(*args, **kwargs)
...     return inner
...
>>> @decorator
... def h():
...     pass
...
>>> min(Timer('h()', 'from __main__ import h').repeat())
1.1645870208740234
I think, before making claims as to what's costly and what isn't, you should actually do some timing measurements.
2) It adds to the complexity of introspecting functions. Take a good look at inspect.getfullargspec- its a nightmare, and either it gets worse under this (bad) or it doesn't include information that is available to the compiler (even worse).
Well obviously this is going to make getfullargspec more complicated. But tell me, what do you think your solution using decorators does to getfullargspec?
In addition to those minuses, it doesn't actually add to the capabilities of the language.
It's an incremental improvement. Currently, late-binding of defaults requires boilerplate code. This will eliminate that boilerplate code.
If this were a proposal to add early action to Python (the equivalent of metaclasses or, to some extent, macro replacement) I would be much more likely to support it, despite the heavier syntax.
So, the existing idiom works pretty well,
100% agreed!
there doesn't seem to be a very good substitute,
Not without support in the compiler.
it slows the whole language down to implement,
You can't know that.
and it doesn't add any power if you do.
It reduces boilerplate, which is a good thing. Probably the *only* good thing, but still a good thing.
-- Steven D'Aprano
On May 15, 4:14 am, Steven D'Aprano <st...@pearwood.info> wrote: [snip]
Without an implementation, how can you possibly predict the cost of it?
[snip] You're right. Please provide code. Geremy Condra
On Fri, 15 May 2009 06:45:16 pm CTO wrote:
On May 15, 4:14 am, Steven D'Aprano <st...@pearwood.info> wrote: [snip]
Without an implementation, how can you possibly predict the cost of it?
[snip]
You're right. Please provide code.
I think that should be up to some person who actually wants delayed evaluation of default arguments. As I've said repeatedly in the past, I'm a very strong -1 on removing the current behaviour, and +0 on allowing delayed evaluation of defaults as an optional feature. But since so many people want it, if the Python-Dev team decide to add it to the language I will need to live with whatever syntax is chosen. -- Steven D'Aprano
On May 15, 4:57 am, Steven D'Aprano <st...@pearwood.info> wrote:
On Fri, 15 May 2009 06:45:16 pm CTO wrote:
On May 15, 4:14 am, Steven D'Aprano <st...@pearwood.info> wrote: [snip]
Without an implementation, how can you possibly predict the cost of it?
[snip]
You're right. Please provide code.
I think that should be up to some person who actually wants delayed evaluation of default arguments. As I've said repeatedly in the past, I'm a very strong -1 on removing the current behaviour, and +0 on allowing delayed evaluation of defaults as an optional feature. But since so many people want it, if the Python-Dev team decide to add it to the language I will need to live with whatever syntax is chosen.
-- Steven D'Aprano
Hmm. Well, that doesn't sound like a productive way forward to me, but as I say I'm neither the most intelligent nor the most experienced programmer here, so maybe it's the right way to go. I guess in that case, my current stance is -1 on this, both in added syntax and decorator form, with the caveat that I'd be happy to change my vote if anybody can produce code that does this without mangling performance or introspection. Geremy Condra
On Fri, 15 May 2009 13:38:43 +1000, Steven D'Aprano <steve@pearwood.info> wrote:
Just out of curiosity, what do you see as the drawbacks? The ones that come to my mind are:
[...]
* requires people to learn one more feature (so newbies will still be confused that def f(x=[]) doesn't behave as they expect).
That's the relevant drawback for me. A solution that does not solve the issue. A new syntactic pattern to allow call-time evaluation of defaults is a (costly) solution for people who don't need it.

Denis
------
la vita e estrany
On Sat, 16 May 2009 01:51:31 am spir wrote:
* requires people to learn one more feature (so newbies will still be confused that def f(x=[]) doesn't behave as they expect).
That's the relevant drawback for me. A solution that does not solve the issue. A new syntactic pattern to allow call time evaluation of defaults is a (costly) solution for people who don't need it.
There is no solution to the problem of newbies' confusion. The standard behaviour will remain in Python 2.x and almost certainly Python 3.x. The earliest it could change is Python 3.3: it could be introduced with a "from __future__ import defaults" in 3.2 and become standard in 3.3. (It almost certainly will never be the standard behaviour, but if it did, that would be the earliest it could happen.) And even if it did change, then newbies will be surprised and upset that def f(x=y) doesn't behave as they expect. Here's the current behaviour:
>>> y = result_of_some_complex_calculation()  # => 11
>>> def f(x=y):
...     return x+1
...
>>> f()
12
>>> y = 45
>>> f()
12
Given the proposed behaviour, that second call to f() would surprisingly return 46, or worse, raise a NameError if y is no longer in scope. The real problem is that people don't have a consistent expectation for default arguments. No matter what behaviour Python uses, people will be caught out by it sometimes. -- Steven D'Aprano
On Sat, 16 May 2009 12:21:13 +1000, Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, 16 May 2009 01:51:31 am spir wrote:
* requires people to learn one more feature (so newbies will still be confused that def f(x=[]) doesn't behave as they expect).
That's the relevant drawback for me. A solution that does not solve the issue. A new syntactic pattern to allow call time evaluation of defaults is a (costly) solution for people who don't need it.
Let me expand on 'costly'. You (nicely) totally rejected a previous proposal of mine, but it addressed the issues pointed out here (probably not clearly enough, though). I stated that defaults are part of a func def, and should be cached when the definition is evaluated, in a way that keeps them separate from local vars. This means that a local var should not point to the same object as the one cached. I did not go into implementation details, but this obviously requires, I guess, that defaults are (deep?)copied into locals at call time, when the object is mutable. Pseudo-code for def f(arg=whatever):

# at definition time
f.__defaults__["arg"] = whatever

# at call time
if <arg not provided by caller>:
    if <cached object is safely immutable>:
        arg = f.__defaults__["arg"]
    else:
        arg = copy(f.__defaults__["arg"])

The advantage is that if ever "whatever" is a complex expression, it will not be re-evaluated on each call, unlike with the late-binding proposal. As I see it, re-evaluating 'whatever' at call time does not serve any purpose -- except possibly that the result may change at runtime intentionally, which is another topic. Actually, a default may be changed from inside (a mutable object updated through a local var in the func's own body) or from outside (when the expression holds variable items). In the latter case, this may be intentional, to get a kind of runtime-changing default value. See also below.
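[A rough, runnable sketch of this copy-at-call-time idea as a decorator; the name `cached_defaults` and the details are illustrative only, not part of any proposal in the thread:

import copy
import functools

def cached_defaults(func):
    # cache the evaluated defaults once, at decoration time
    defaults = func.__defaults__ or ()
    names = func.__code__.co_varnames[:func.__code__.co_argcount]
    first_default = len(names) - len(defaults)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for offset, default in enumerate(defaults):
            index = first_default + offset
            name = names[index]
            if index >= len(args) and name not in kwargs:
                # deepcopy returns the same object for immutables,
                # so only mutable defaults actually get copied
                kwargs[name] = copy.deepcopy(default)
        return func(*args, **kwargs)
    return wrapper

@cached_defaults
def f(x=[]):
    x.append(1)
    return x

f()   # -> [1]
f()   # -> [1] again: a fresh copy each call, no re-evaluation
]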
There is no solution to the problem of newbies' confusion. The standard behaviour will remain in Python 2.x and almost certainly Python 3.x. The earliest it could change is Python 3.3: it could be introduced with a "from __future__ import defaults" in 3.2 and become standard in 3.3.
(It almost certainly will never be the standard behaviour, but if it did, that would be the earliest it could happen.)
I agree with that, it will certainly never change; I would never have brought this topic back again myself. (It's such an obvious issue that I was sure it had been discussed a hundred times since the late eighties ;-) But it seems to come back regularly anyway.
And even if it did change, then newbies will be surprised and upset that def f(x=y) doesn't behave as they expect. Here's the current behaviour:
>>> y = result_of_some_complex_calculation()  # => 11
>>> def f(x=y):
...     return x+1
...
>>> f()
12
>>> y = 45
>>> f()
12
Given the proposed behaviour, that second call to f() would surprisingly return 46, or worse, raise a NameError if y is no longer in scope.
Yes, "surprisingly". That's the reason why I stated earlier that when a default in *intended* to change at runtime, it is worth making it clear with explicit code and even comments. My previous proposal was precisely to make the default fixed, so that this (silent) behaviour disappears. In the case of an intentional runtime-changing default, it is thus a Good Thing to use a sentinel -- and with my proposal we would have to do it because of defaults beeing cached at definition time. so to have the above behaviour, one would need to write: def f(x=SENTINEL): # y will change at runtime if x is SENTINEL: x = y return x+1 For an example: def writeIndent(indent_level, indent_token=SENTINEL): # 'indent_token' may be provided by the caller # to conform to the source beeing edited. # Else read it from current user config. if indent_token is SENTINEL: indent_token = config.indent_token .......
The real problem is that people don't have a consistent expectation for default arguments. No matter what behaviour Python uses, people will be caught out by it sometimes.
I rather think that an important fraction of experienced python programmers have expectations dictated by the current semantics they're used to. (Which is indeed correct. This is a case of "intuitive = familiar".) But expectations from, so to say, "ordinary" people unaware of the said semantics are clearly different. The fact that people are bitten (when the default is changed from inside the func), or merely surprised (your case above, with the default changed from outside) rather shows what they expect the func def to mean (both when writing and reading it). Denis ------ la vita e estrany
On Sat, 16 May 2009 14:05:25 +0200 spir <denis.spir@free.fr> wrote:
On Sat, 16 May 2009 12:21:13 +1000, Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, 16 May 2009 01:51:31 am spir wrote:
* requires people to learn one more feature (so newbies will still be confused that def f(x=[]) doesn't behave as they expect).
That's the relevant drawback for me. A solution that does not solve the issue. A new syntactic pattern to allow call time evaluation of defaults is a (costly) solution for people who don't need it.
Let me expand on 'costly'.
By "expand", you mean make things even more costly?
I stated that defaults are part of a func def, and should be cached when the definition is evaluated, in a way that keeps them separate from local vars. This means that a local var should not point to the same object as the one cached. I did not go into implementation details, but this obviously requires, I guess, that defaults are (deep?)copied into locals at call time, when the object is mutable. Pseudo-code for
def f(arg=whatever):

# at definition time
f.__defaults__["arg"] = whatever

# at call time
if <arg not provided by caller>:
    if <cached object is safely immutable>:
        arg = f.__defaults__["arg"]
    else:
        arg = copy(f.__defaults__["arg"])
The advantage is that if ever "whatever" is a complex expression, it will not be re-evaluated on each call, unlike with the late-binding proposal.
Right. It'll be *copied*. So consider:

x = 1000000 * [[]]
def f(y=x):
    y[18][0] = None
    y[23] = None
    y[0][1] = None

So instead of simply creating a new reference to the object (which you get with either the current semantics or the re-evaluate at call time semantics), you now copy a list with a million elements on every call. For this case, the current semantics are the only ones that work well: you can put the expression into the call list, and don't need either an extra variable to avoid rebuilding your long list on every call (if you re-evaluate the argument) or using a sentinel and assignment from that extra variable to avoid copying it if you copy the values.
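[A quick illustration of the relative costs Mike describes; the timeit harness is mine and the absolute numbers will vary by machine:

import timeit

setup = "import copy; x = 1000000 * [[]]"
# rebinding a name: what both the current and the late-binding
# semantics do -- essentially free
print timeit.timeit("y = x", setup=setup, number=1)
# shallow copy: already walks a million slots
print timeit.timeit("y = list(x)", setup=setup, number=1)
# deep copy: what copy-at-call-time semantics would pay on *every* call
print timeit.timeit("y = copy.deepcopy(x)", setup=setup, number=1)
]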
As I see it, re-evaluating 'whatever' at call time does not serve any purpose -- except possibly that the result may change at runtime intentionally, which is another topic.
The problem is *not* with the behavior of default values to arguments. The problem is with the behavior of multiple references to mutable objects. People who aren't used to object/reference semantics don't understand them (of course, they don't understand multiple references to immutable objects either, but that's a different problem). They will be confused when the body of f above throws an exception when you don't pass it y - no matter *what* the calling semantics! Admittedly, default values for arguments are nastier than others, because people don't see them as multiple references to one object until it's pointed out. Both copying and reevaluation change that. <mike -- Mike Meyer <mwm@mired.org> http://www.mired.org/consulting.html Independent Network/Unix/Perforce consultant, email for more information. O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
Steven D'Aprano writes:
(For the record, I've suggested creating a unary-& operator so that we can write "def f(&x=[])" to get late-binding of x.)
Could you summarize that discussion briefly?
On Fri, 15 May 2009 03:08:18 pm Stephen J. Turnbull wrote:
Could you summarize that discussion briefly?
Many newbies, and some more experienced programmers, are confused by the behaviour of functions when parameters are given default mutable arguments:
>>> def f(x=[]):
...     x.append(1)
...     return x
...
>>> f()
[1]
>>> f()
[1, 1]
Some people are surprised by this behaviour, and would prefer that the default value for x be freshly created each time it is needed. This is one of the most common, and most acrimonious, topics of discussion on comp.lang.python. The standard idiom for the expected behaviour is to insert boilerplate code that checks for a sentinel:

def f(x=None):
    if x is None:
        x = []
    x.append(1)
    return x

The chances of having the standard behaviour changed are slim, at best, for various reasons including backward compatibility and runtime efficiency. Also, I believe Guido has ruled that the standard behaviour will not be changed.

However, some have suggested that if the standard compile-time creation of defaults won't be changed, perhaps it could be made optional, with special syntax, or perhaps a decorator, controlling the behaviour. See these two proof-of-concept decorators, by Geremy Condra, for example:

http://code.activestate.com/recipes/576751/
http://code.activestate.com/recipes/576754/

I'm not convinced by decorator-based solutions, so I'll pass over them. I assume that any first-class solution will require cooperation from the compiler, and thus move the boilerplate out of the function body into the byte code. (Or whatever implementation is used -- others have suggested using thunks.)

Assuming such compiler support is possible, it only remains to decide on syntax for it. Most suggested syntax I've seen has marked the default value itself, e.g.: def f(x = new []). Some have suggested overloading lambda, perhaps with some variation like def f(x = *lambda:[]). I suggest that the markup should go on the formal parameter name, not the default value: we're marking the formal parameter as "special" for using delayed semantics, not that the default object (usually [] or {}) will be special.

Some years ago, Python overloaded the binary operators * and ** for use as special markers in parameter lists. I suggest we could do the same, by overloading the & operator in a similar fashion: inside the parameter list, &x would mean to delay evaluation of the default argument:

def f(x=[], &y=[])

x would use the current compile-time semantics, y would get the new runtime semantics. I don't have any particular reason for choosing & over any other binary operator. I think ^ would also be a good choice.

Tagging a parameter with unary-& but failing to specify a default value should be a syntax error. Likewise for unary-& outside of a parameter list. (At least until such time as somebody suggests a good use for such a thing.)

-- Steven D'Aprano
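[A minimal sketch of how `def f(x=[], &y=[])` might be compiled, assuming thunk-based semantics; this expansion is purely illustrative, the thread never fixes an implementation:

_missing = object()
_x_default = []            # evaluated once, at definition time (current rule)
_y_thunk = lambda: []      # stored unevaluated, called on demand

def f(x=_x_default, y=_missing):
    if y is _missing:
        y = _y_thunk()     # fresh object on every call
    x.append(1)
    y.append(1)
    return x, y

f()   # -> ([1], [1])
f()   # -> ([1, 1], [1]) -- y starts fresh each time, x does not
]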
Steven D'Aprano wrote:
Some years ago, Python overloaded the binary operators * and ** for use as special markers in parameter lists. I suggest we could do the same, by overloading the & operator in a similar fashion: inside the parameter list, &x would mean to delay evaluation of the default argument:
Yikes, that syntax would seriously confuse any C++ programmer (me included). I wouldn't be able to avoid thinking of those parameters as pass by reference (i.e. actually referring to the memory location of the passed in argument so you can fiddle with immutable values in the caller, not just pass-a-reference-by-value the way Python does for normal arguments). Marking the parameter name strikes me as wrong anyway - it's only the evaluation of the default argument which is special, not the parameter itself. Cheers, Nick. P.S. (The subject line change caught my attention. The recap of the discussion was very handy, and it does appear to have evolved in a more useful direction, but I'm going back to largely ignoring the thread now...) -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia ---------------------------------------------------------------
On Fri, 15 May 2009 08:26:30 pm Nick Coghlan wrote:
Steven D'Aprano wrote:
Some years ago, Python overloaded the binary operators * and ** for use as special markers in parameter lists. I suggest we could do the same, by overloading the & operator in a similar fashion: inside the parameter list, &x would mean to delay evaluation of the default argument: [...]
Marking the parameter name strikes me as wrong anyway - it's only the evaluation of the default argument which is special, not the parameter itself.
But it is the parameter that is special. The default object itself is not. Consider the function definition:

def f(x=[], &y=[]):

(or any other syntax you prefer). The empty list you get as the default value for x is exactly the same as the empty list you get in y, in every way except for identity.

-- Steven D'Aprano
On 5/15/09, Steven D'Aprano <steve@pearwood.info> wrote:
On Fri, 15 May 2009 08:26:30 pm Nick Coghlan wrote:
Steven D'Aprano wrote:
...inside the parameter list, &x would mean to delay evaluation ...
Marking the parameter name strikes me as wrong anyway - it's only the evaluation of the default argument which is special, not the parameter itself.
But it is the parameter that is special. The default object itself is not. Consider the function definition:
def f(x=[], &y=[]):
(or any other syntax you prefer). The empty list you get as the default value for x is exactly the same as the empty list you get in y, in every way except for identity.
Logically, you're correct. But I think the ('&' ==> addressof) meme may have already grown too strong. What it suggests to me is that normally you *would* create a new list, but the ampersand says not to in just this rare case. -jJ
On Fri, May 15, 2009 at 10:14 AM, Jim Jewett <jimjjewett@gmail.com> wrote:
Logically, you're correct. But I think the ('&' ==> addressof) meme may have already grown too strong. What it suggests to me is that normally you *would* create a new list, but the ampersand says not to in just this rare case.
Well, the connotations are not much stronger than with '*' and '**'. I've literally been asked by an experienced C/C++/Perl guy "what's this pointer-to-a-pointer parameter used for in this function?". How about '@' instead? A mnemonic here could be "just like '@decorator\ndef f():' is a shortcut for 'f = decorator(f)', the '@arg=expr' parameter is a shortcut for 'arg = (lambda: expr)()' if 'arg' is not passed". Admittedly far from a perfect analogy, but probably less controversial than '&'. George
Steven D'Aprano wrote:
On Fri, 15 May 2009 08:26:30 pm Nick Coghlan wrote:
Steven D'Aprano wrote:
Some years ago, Python overloaded the binary operators * and ** for use as special markers in parameter lists. I suggest we could do the same, by overloading the & operator in a similar fashion: inside the parameter list, &x would mean to delay evaluation of the default argument: [...]
Marking the parameter name strikes me as wrong anyway - it's only the evaluation of the default argument which is special, not the parameter itself.
But it is the parameter that is special. The default object itself is not.
It's not the object that is being marked as special: it's the expression to create the object. The new syntax is about delaying evaluation of that expression - the parameter itself is perfectly normal, as is the object that is ultimately bound to it. But moving the default argument evaluation to call time instead of definition time - that's special. It may be worth using something like "make_default()" in examples instead of "[]" and see if that makes my point any clearer. I doubt it's possible to come up with a concise syntax for this (particularly one that plays well with function annotations), but best of luck in the search :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia ---------------------------------------------------------------
On 5/15/09, Nick Coghlan <ncoghlan@gmail.com> wrote:
It's not the object that is being marked as special: it's the expression to create the object. The new syntax is about delaying evaluation of that expression - the parameter itself is perfectly normal, as is the object that is ultimately bound to it. But moving the default argument evaluation to call time instead of definition time - that's special.
It may be worth using something like "make_default()" in examples instead of "[]" and see if that makes my point any clearer.
I doubt it's possible to come up with a concise syntax for this (particularly one that plays well with function annotations), but best of luck in the search :)
I think this is inching towards the variable-defining keywords, like once, final, static ... This one might be "late" or "lazy", though "new" would work if you kept it only to function definitions.

def f(a=make_default_once(),
      lazy b=make_default_each_time()):
    ...

VAR_IN_MODULE_API = lazy whoops_wanted_a_property_after_all()

-jJ
Le Sat, 16 May 2009 00:33:20 +1000, Nick Coghlan <ncoghlan@gmail.com> s'exprima ainsi:
It's not the object that is being marked as special: it's the expression to create the object. The new syntax is about delaying evaluation of that expression - the parameter itself is perfectly normal, as is the object that is ultimately bound to it. But moving the default argument evaluation to call time instead of definition time - that's special.
I rather agree. Then we should mark the binding sign '=' as special (not the parameter / the object)! E.g.

def f(arg &= [])

or

def f(arg @= [])

It looks a bit strange, but we have augmented assignment already.

Denis ------ la vita e estrany
On May 15, 1:26 pm, spir <denis.s...@free.fr> wrote:
Le Sat, 16 May 2009 00:33:20 +1000, Nick Coghlan <ncogh...@gmail.com> s'exprima ainsi:
It's not the object that is being marked as special: it's the expression to create the object. The new syntax is about delaying evaluation of that expression - the parameter itself is perfectly normal, as is the object that is ultimately bound to it. But moving the default argument evaluation to call time instead of definition time - that's special.
I rather agree. Then we should mark the binding sign '=' as special (not the parameter / the object)! E.g.
def f(arg &= [])
or
def f(arg @= [])
It looks a bit strange, but we have augmented assignment already.
Denis ------ la vita e estrany
Maybe ::?

>>> def f(x::a**2+2*b+c):
...     return x
...
>>> a, b, c = 0, 1, 2
>>> f()
4

Also, is there any reason why this has to be specific to function signatures at this point?

Geremy Condra
How about changing the '=' sign, i.e.

def (foo <= []):
    # stuff

i.e. instead of foo 'equals' [], foo 'gets' an []

-T
On May 15, 6:03 pm, Tennessee Leeuwenburg <tleeuwenb...@gmail.com> wrote:
How about changing the '=' sign, i.e.

def (foo <= []):
    # stuff

i.e. instead of foo 'equals' [], foo 'gets' an []
-T
I see :: as having the advantage of not being used for anything currently, as well as being vaguely reminiscent of :=, which wouldn't be too bad either. It would also leave the door open to adding thunks to the language as a whole rather than just here, and assuming (warning! possibly invalid conclusion ahead!) that this would require thunks to work, that seems like a good idea to me. Geremy Condra
On Sat, 09 May 2009 12:56:20 +0200, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote:
If people want static variables in python, for example to avoid OO programming and still have stateful functions, we can add an explicit "static" keyword or its equivalent.
This is far from being pythonic anyway, I guess. Ditto for storing data on the func itself (as shown in another post). It provides a way of linking together data and behaviour; similar techniques are used e.g. in Lisp, so that many Lisp people find OO pretty useless. But python has OO built in, and even as its mainstream paradigm. Data related to behaviour should be set on an object.
But using the ambiguous value given via a default-valued argument is not pretty, imo. Unless we have a way to access, from inside a code block, the function object to which this code block belongs.
Does it exist? Do we have any way, from inside a call block, to browse the default arguments that this code block might receive?
This is a feature of much more reflexive/meta languages like Io (or again Lisp), which were indeed designed from scratch with this capacity in mind, intended as a major programming feature. In Io you can even access the 'raw' message _before_ evaluation, so that you get the expression of the argument, not only the resulting value. Denis ------ la vita e estrany
On Fri, 08 May 2009 22:31:57 +0200, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote:
And no one seemed to enjoy the possibilities of getting "potentially static variables" this way. Static variables are imo a rather bad idea, since they create "stateful functions", that make debugging and maintenance more difficult ; but when such static variable are, furthermore, potentially non-static (i.e when the corresponding function argument is supplied), I guess they become totally useless and dangerous - a perfect way to get hard-to-debug behaviours.
If we want static vars, there are better places than default args for this. (See also the thread about memoizing.) E.g. on the object when it's a method, or even on the func itself.

def squares(n):
    square = n * n; print square
    if square not in squares.static_list:
        squares.static_list.append(n)
squares.static_list = []

squares(1); squares(2); squares(1); squares(3)
print squares.static_list

Denis ------ la vita e estrany
spir wrote:
On Fri, 08 May 2009 22:31:57 +0200, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote:
And no one seemed to enjoy the possibilities of getting "potentially static variables" this way. Static variables are imo a rather bad idea, since they create "stateful functions", that make debugging and maintenance more difficult ; but when such static variable are, furthermore, potentially non-static (i.e when the corresponding function argument is supplied), I guess they become totally useless and dangerous - a perfect way to get hard-to-debug behaviours.
If we want static vars, there are better places than default args for this. (See also the thread about memoizing). E.g. on the object when it's a method, or even on the func itself.
def squares(n):
    square = n * n; print square
    if square not in squares.static_list:
        squares.static_list.append(n)
squares.static_list = []

squares(1); squares(2); squares(1); squares(3)
print squares.static_list
Denis ------ la vita e estrany
Well, I've just realized I'd sent a semi-dumb question in my previous answer :p I'd never quite realized it was possible to store stuff inside the function object, by retrieving it from inside the code object. And it works pretty well... In classes you can access your function via self, in closures it gets caught in cells... it's only in global scope that there are problems: here, if you rename squares (newsquares = squares ; squares = None), you'll get an error when calling newsquares, because it searches for "squares" in the global scope, without the help of "self" or closures. Still, an explicit way of targeting "the function I'm in" would be sweet imo, but retrieving it that way is not far from being as handy. Thanks for the tip that opened my eyes, regards, Pascal
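[A short demonstration of the renaming problem Pascal describes; the attribute name `seen` is only for illustration:

def squares(n):
    squares.seen.append(n * n)   # the function finds *itself* by global name
    return squares.seen
squares.seen = []

newsquares = squares
squares = None
newsquares(2)   # AttributeError: 'NoneType' object has no attribute 'seen'
]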
On Sat, 9 May 2009 09:16:46 pm Pascal Chambon wrote:
Still, an explicit way of targeting "the function I'm in" would be sweet imo, but retrieving it that way is not far from being as handy.
There's a rather long discussion on the comp.lang.python newsgroup at the moment about that exact question. Look for the recent thread titled "Self function". If you can't get Usenet and don't like Google Groups, the c.l.py newsgroup is also available as a python mailing list, and a gmane mailing list. -- Steven D'Aprano
On Sat, 9 May 2009 09:36:12 pm Steven D'Aprano wrote:
There's a rather long discussion on the comp.lang.python newsgroup at the moment about that exact question
Er, to be precise, by "at the moment" I actually mean "over the last few days". The thread seems to have more-or-less finished now. Of course, no Usenet thread is ever *completely* finished. Please feel free to resurrect it if you have any good ideas, questions or insight into the issue. -- Steven D'Aprano
Pascal Chambon wrote:
Still, an explicit way of targeting "the function I'm in" would be sweet imo, but retrieving it that way is not far from being as handy.
You could use e.g.

from functools import wraps

def selffunc(func):
    @wraps(func)
    def newfunc(*args, **kwds):
        return func(func, *args, **kwds)
    return newfunc

@selffunc
def foo(func, a):
    func.cache = a

to avoid the "ugly" lookup of the function in the global namespace.

Georg

-- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out.
Hi Pascal,

Taking the example of

def foo(bar=[]):
    bar.append(4)
    print(bar)

I'm totally with you in thinking that what is 'natural' is to expect to get a new, empty, list every time. However this isn't what happens. As far as I'm concerned, that should more or less be the end of the discussion in terms of what should ideally happen.

The responses to the change in behaviour which I see as more natural are, to summarise, as follows:
-- For all sorts of technical reasons, it's too hard
-- It changes the semantics of the function definition being evaluated at compile time
-- It's not what people are used to

With regards to the second point, it's not like the value of arguments is set at compile time, so I don't really see that this stands up. I don't think it's intuitive, it's just that people become accustomed to it. There is indeed, *some sense* in understanding that the evaluation occurs at compile time, but there is also a lot of sense (and in my opinion, more sense) in understanding the evaluation as happening dynamically when the function is called.

With regards to the first point, I'm not sure that this is as significant as all that, although of course I defer to the language authors here. However, it seems as though it could be no more costly than the lines of code which most frequently follow to initialise these variables.

On the final point, that's only true for some people. For a whole lot of people, they stumble over it and get it wrong. It's one of the most un-Pythonic things which I have to remember about Python when programming -- a real gotcha. I don't see it as changing one way of doing things for another equally valid way of doing things, but changing something that's confusing and unexpected for something which is far more natural and, to me, Pythonic.

For me, Python 3k appears to be a natural place to do this. Python 3 still appears to be regarded as a work-in-progress by most people, and I don't think that it's 'too late' to change for Python 3k. Perhaps, given the timing, the people involved, the complexity of change etc, then for pragmatic reasons this may have to be delayed, but I don't think that's a good thing. I'd much rather see it done, personally. I think that many people would feel the same way.

Regards,
-Tennessee

On Sat, May 9, 2009 at 6:31 AM, Pascal Chambon <chambon.pascal@wanadoo.fr> wrote:
[snip original message]
By the way, I'm becoming slightly allergic to C-like languages (too much hassle for too little gain, compared to high-level dynamic languages), but if that proposition goes ahead, and no one wants to handle the implementation details, I'll put my hands in the engine ^^
Regards, Pascal
--
--------------------------------------------------
Tennessee Leeuwenburg
http://myownhat.blogspot.com/
"Don't believe everything you think"
On Sun, 10 May 2009 10:19:01 am Tennessee Leeuwenburg wrote:
Hi Pascal, Taking the example of
def foo(bar=[]):
    bar.append(4)
    print(bar)
I'm totally with you in thinking that what is 'natural' is to expect to get a new, empty, list every time.
That's not natural to me. I would be really, really surprised by the behaviour you claim is "natural":
>>> DEFAULT = 3
>>> def func(a=DEFAULT):
...     return a+1
...
>>> func()
4
>>> DEFAULT = 7
>>> func()
8
For deterministic functions, the same argument list should return the same result each time. By having default arguments be evaluated every time they are required, any function with a default argument becomes non-deterministic. Late evaluation of defaults is, essentially, equivalent to making the default value a global variable. Global variables are rightly Considered Harmful: they should be used with care, if at all.
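[The global-variable flavour of late binding that Steven describes is easy to reproduce today with the sentinel idiom; a small illustration of mine, not code from the thread:

DEFAULT = 3

def func(a=None):
    if a is None:      # simulate call-time evaluation of the default...
        a = DEFAULT    # ...so the default silently tracks the global
    return a + 1

func()        # 4
DEFAULT = 7
func()        # 8 -- same call, different result
]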
However this isn't what happens. As far as I'm concerned, that should more or less be the end of the discussion in terms of what should ideally happen.
As far as I'm concerned, what Python does now is the ideal behaviour. Default arguments are part of the function *definition*, not part of the body of the function. The definition of the function happens *once* -- the function isn't recreated each time you call it, so default values shouldn't be recreated either.
The responses to the change in behaviour which I see as more natural are, to summarise, as follows:
-- For all sorts of technical reasons, it's too hard
-- It changes the semantics of the function definition being evaluated at compile time
-- It's not what people are used to
And it's not what many people want. You only see the people who complain about this feature. For the multitude of people who expect it or like it, they have no reason to say anything (except in response to complaints). When was the last time you saw somebody write to the list to say "Gosh, I really love that Python uses + for addition"? Features that *just work* never or rarely get mentioned.
With regards to the second point, it's not like the value of arguments is set at compile time, so I don't really see that this stands up.
I don't see what relevance that has. If the arguments are provided at runtime, then the default value doesn't get used.
I don't think it's intuitive,
Why do you think that intuitiveness is more valuable than performance and consistency? Besides, intuitiveness is a fickle thing. Given this pair of functions:

def expensive_calculation():
    time.sleep(60)
    return 1

def useful_function(x=expensive_calculation()):
    return x + 1

I think people would be VERY surprised that calling useful_function() with no arguments would take a minute *every time*, and would complain that this slowness was "unintuitive".
it's just that people become accustomed to it. There is indeed, *some sense* in understanding that the evaluation occurs at compile-time, but there is also a lot of sense (and in my opinion, more sense) in understanding the evaluation as happening dynamically when the function is called.
No. The body of the function is executed each time the function is called. The definition of the function is executed *once*, at compile time. Default arguments are part of the definition, not the body, so they too should only be executed once. If you want them executed every time, put them in the body:

def useful_function(x=SENTINEL):
    if x is SENTINEL:
        x = expensive_calculation()
    return x+1
With regards to the first point, I'm not sure that this is as significant as all of that, although of course I defer to the language authors here. However, it seems as though it could be no more costly than the lines of code which most frequently follow to initialise these variables.
On the final point, that's only true for some people. For a whole lot of people, they stumble over it and get it wrong. It's one of the most un-Pythonic things which I have to remember about Python when programming -- a real gotcha.
I accept that it is a Gotcha. The trouble is, the alternative behaviour you propose is *also* a Gotcha, but it's a worse Gotcha, because it leads to degraded performance, surprising introduction of global variables where no global variables were expected, and a breakdown of the neat distinction between creating a function and executing a function. But as for it being un-Pythonic, I'm afraid that if you really think that, your understanding of Pythonic is weak. From the Zen:

The Zen of Python, by Tim Peters
Special cases aren't special enough to break the rules.
Although practicality beats purity.
If the implementation is hard to explain, it's a bad idea.

(1) Assignments outside of the body of a function happen once, at compile time. Default values are outside the body of the function. You want a special case for default values so that they too happen at runtime. That's not special enough to warrant breaking the rules.

(2) The potential performance degradation of re-evaluating default arguments at runtime is great. For practical reasons, it's best to evaluate them once only.

(3) In order to get the behaviour you want, the Python compiler would need a more complicated implementation which would be hard to explain.
I don't see it as changing one way of doing things for another equally valid way of doing things, but changing something that's confusing and unexpected for something which is far more natural and, to me, Pythonic.
I'm sorry, while re-evaluation of default arguments is sometimes useful, it's more often NOT useful. Most default arguments are simple objects like small ints or None. What benefit do you gain from re-evaluating them every single time? Zero benefit. (Not much cost either, for simple cases, but no benefit.) But for more complex cases, there is great benefit to evaluating default arguments once only, and an easy work-around for those rare cases that you do want re-evaluation.
For me, Python 3k appears to be a natural place to do this. Python 3 still appears to be regarded as a work-in-progress by most people, and I don't think that it's 'too late' to change for Python 3k.
Fortunately you're not Guido, and fortunately this isn't going to happen. I recommend you either accept that this behaviour is here to stay, or if you're *particularly* enamoured of late evaluation behaviour of defaults, that you work on some sort of syntax to make it optional. -- Steven D'Aprano
I think this is a case where there are pros and cons on both sides. There are a lot of pros to the current behavior (performance, flexibility, etc.), but it comes with the con of confusing newbies and making people go through the same song and dance to set a "sentinel value" when they want the other behavior and they can't ensure that None won't be passed. The newbie problem can't be fixed from now until Python 4000, since it would break a lot of existing uses of default values, but we could cut down on the annoyance of setting and checking a sentinel value by introducing a new keyword, eg.

def f(l=fresh []):
    ...

instead of

__blank = object()

def f(l=__blank):
    if l is __blank:
        l = []
    ...

The pros of a new keyword are saving 3 lines and being more clear upfront about what's going on with the default value. The con is that adding a new keyword bloats the language. We could try reusing an existing keyword, but none of the current ones seem to fit:

and       elif      import    return
as        else      in        try
assert    except    is        while
break     finally   lambda    with
class     for       not       yield
continue  from      or
def       global    pass
del       if        raise

(I copied this from Python 3.0's help, but there seems to be a documentation error: nonlocal, None, True, and False are also keywords in Python 3+.) The best one on the current list, it seems to me, would be "else", as in

def f(l else []):
    ...

But I dunno… It's just not quite right, you know? So, I'm -0 on changing the current behavior, but I'm open to it if someone can find a way to do it that isn't just an ad hoc solution to this one narrow problem but has a wider general use.
On Sat, 9 May 2009 16:06:06 -1000, Carl Johnson <cmjohnson.mailinglist@gmail.com> wrote:
I think this is a case where there are pros and cons on both sides. There are a lot of pros to the current behavior (performance, flexibility, etc.), but it comes with the con of confusing newbies and making people go through the same song and dance to set a "sentinel value" when they want the other behavior and they can't ensure that None won't be passed. The newbie problem can't be fixed from now until Python 4000, since it would break a lot of existing uses of default values, but we could cut down on the annoyance of setting and checking a sentinel value by introducing a new keyword, eg.
def f(l=fresh []): ...
instead of
__blank = object()

def f(l=__blank):
    if l is __blank:
        l = []
    ...

[...]
Maybe the correctness of the current behaviour can be checked by a little mental experiment.
=======
Just imagine python hasn't got default arguments yet, and they are the object of a PEP. An implementation similar to the current one is proposed. Then, people realise that in the case where the given value happens to be mutable _and_ updated in the function body,... What do you think should/would be decided?

-1- Great, we get static variables for free. It is a worthwhile feature we have expected for a while. In numerous use cases they will allow easier and much more straightforward code. Let's go for it.

-2- Well, static variables may be considered useful, in which case there should be a new PEP for them. Conceptually, they are a totally different feature; we shall certainly not mess up both, shall we?
=======
I bet on n°2, for the reasoning of people stating it's a major gotcha will be hard to ignore. But I may be wrong. Still, default arguments actually *are* called "default arguments", which means they should be considered as such, while they do not behave as such in all cases.

Now, we must consider the concrete present situation, in which their real behaviour is used as a common workaround. I do not really understand why default args are used as static vars while at least one other possibility exists in python which is semantically much more consistent:

### instead of "def callSave(number, record=[])"
### just set record on the func:
def callSave(value):
    callSave.record.append(value)
    return callSave.record
callSave.record = []

print callSave(1) ; print callSave(2) ; print callSave(3)
==>
[1]
[1, 2]
[1, 2, 3]

Also, func attributes are an alternative for another common (mis)use of default arguments, namely the case of a function factory:

def paramPower(exponent):
    ### instead of "def power(number, exponent=exponent)"
    ### just set exponent on the func:
    def power(number):
        return number**power.exponent
    power.exponent = exponent
    return power

power3 = paramPower(3) ; power5 = paramPower(5)
print power3(2) ; print power5(2)
==>
8
32

In both cases, the notion of a func attribute rather well matches the variable value's meaning. As a consequence, I find this solution much nicer for a func factory as well as for a static variable.

Denis ------ la vita e estrany
spir wrote:
Also, func attributes are an alternative for another common (mis)use of default arguments, namely the case of a function factory:
def paramPower(exponent):
    ### instead of "def power(number, exponent=exponent)"
    ### just set exponent on the func:
    def power(number):
        return number**power.exponent
    power.exponent = exponent
    return power

power3 = paramPower(3) ; power5 = paramPower(5)
print power3(2) ; print power5(2)
==>
8
32
You don't need a function attribute here; just "exponent" will work fine. The problem is where you define multiple functions, e.g. in a "for" loop, and function attributes don't help there:

adders = []
for i in range(10):
    def adder(x):
        return x + i
    adders.append(adder)

This will fail to do what it seems to do; you need some kind of binding for "i" in a scope where it stays constant. You could use a "make_adder" factory function, similar to your paramPower, but that is more kludgy than necessary, because it can easily be solved by a default argument, as shown below.

Georg

-- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out.
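[For completeness, the default-argument idiom Georg refers to; a standard sketch, not code from the original message:

adders = []
for i in range(10):
    def adder(x, i=i):   # capture the *current* value of i at definition time
        return x + i
    adders.append(adder)

adders[3](10)   # -> 13 as intended; without i=i, every adder would add 9
]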
For me, Python 3k appears to be a natural place to do this. Python 3 still appears to be regarded as a work-in-progress by most people, and I don't think that it's 'too late' to change for Python 3k.
Fortunately you're not Guido, and fortunately this isn't going to happen. I recommend you either accept that this behaviour is here to stay, or if you're *particularly* enamoured of late evaluation behaviour of defaults, that you work on some sort of syntax to make it optional.
Thank you for the rest of the email, which was (by and large) well-considered and (mostly) stuck to the points of the matter. I will get to them in proper time when I have been able to add to the argument in a considered way after fully understanding your points. However, this last section really got under my skin.

It seems completely inappropriate to devolve any well-intentioned email discussion into an appalling self-serving ad-hominem attack. Your assertion of your ethical viewpoint (use of "Fortunately" without a backing argument) and attempt to bully me out of my position ("recommend you accept this behaviour is here to stay") are not appreciated. You have *your* view of what is fortunate, right and appropriate. I took every care NOT to assert my own viewpoint as universally true; you have not done so. Guido is just a person, as you are just a person, as I am just a person.

Can we not please just stick to a simple, civilised discussion of the point without trying to win cheap debating points or use the "Zen" of Python to denigrate people who have either genuinely failed to grasp some aspect of a concept, or whose intuition is simply different? Without people whose intuition is different, no advancement is possible. Without debate about what constitutes the "Zen" of Python, the "Zen" of Python must always be static, unchanging, unchallenged and therefore cannot grow. I do not think that is what anyone meant when they were penning the "Zen" of Python.

This list is not best served by grandstanding. It may not even be best served by the now effectively personal debate which you have drawn me into through your personalisation of the issue (I quote: "your understanding is weak"). Terms such as weak and strong are inherently laden with ethical and social overtones -- incomplete, misplaced, or any number of other qualifiers could have kept the debate at the factual level.

Regards,
-Tennessee
Any argument for changing to a more "dynamic" default scheme had better have a definition of the behavior of the following code, and produce a good rationale for that behavior:

x = 5
def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    return inner

x = 6
f1 = function_producer(x)
x = 4.1
y = 7
f2 = function_producer(3)
print x, y, f1(), f2()
del y
x = 45
print x, f1(), f2()
del x
print f1(), f2()

--Scott David Daniels
Scott.Daniels@Acm.Org
On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels <Scott.Daniels@acm.org> wrote:
Any argument for changing to a more "dynamic" default scheme had better have a definition of the behavior of the following code, and produce a good rationale for that behavior:
x = 5
def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    return inner
I don't think the proposed scheme was ever accused of not being well-defined. Here's the current equivalent dynamic version:

x = 5
def function_producer(y):
    missing = object()
    def inner(arg=missing):
        if arg is missing:
            arg = x+y
        return arg + 2
    return inner

-1 for changing the current semantics (too much potential breakage), +0.x for a new keyword that adds dynamic semantics (and removes the need for the sentinel kludge).

George
George Sakkis wrote:
+0.x for a new keyword that adds dynamic semantics (and removes the need for the sentinel kludge).
We don't need new syntax for it. Here's a proof-of-concept hack that you can do it with a function decorator.

import copy

def clone_arguments(f):
    default_args = list(f.func_defaults)
    if len(default_args) < f.func_code.co_argcount:
        delta = f.func_code.co_argcount - len(default_args)
        default_args = ([None] * delta) + default_args

    def fn(*args):
        if len(args) < len(default_args):
            args = args + tuple(copy.deepcopy(default_args[len(args):]))
        return f(*args)

    return fn

@clone_arguments
def thing_taking_array(a, b=[]):
    b.append(a)
    return b

print thing_taking_array('123')
print thing_taking_array('abc')

-1 on changing Python one iota for this,

/larry/
Larry Hastings wrote:
George Sakkis wrote:
+0.x for a new keyword that adds dynamic semantics (and removes the need for the sentinel kludge).
We don't need new syntax for it. Here's a proof-of-concept hack that you can do it with a function decorator.
Your decorator only works for mutables where you just want a deep copy. It doesn't work for cases where you want a whole expression to be re-evaluated from scratch. (Maybe for the side effects or something.) That said, it couldn't be that hard to work out a similar decorator using lambda thunks instead. The internals of the decorator would be something like:

for n, arg in enumerate(args):
    if arg is defaults[n]:       # if you didn't get passed anything
        args[n] = defaults[n]()  # unthunk the lambda

The usage might be:

@dynamicdefaults
def f(arg=lambda: dosomething()):
    ...

It cuts 3 lines of boilerplate down to one line, but makes all your function calls a little slower.

-- Carl
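[Fleshed out into something runnable -- my version of the decorator Carl sketches; it pads missing trailing arguments rather than comparing identity, but the effect is what he describes, and it uses the Python 2 spellings found elsewhere in the thread:

import functools

def dynamicdefaults(f):
    defaults = f.func_defaults or ()   # the thunks, cached once
    n_args = f.func_code.co_argcount
    first = n_args - len(defaults)

    @functools.wraps(f)
    def wrapper(*args):
        args = list(args)
        for i in range(len(args), n_args):
            args.append(defaults[i - first]())   # unthunk the lambda
        return f(*args)
    return wrapper

@dynamicdefaults
def f(arg=lambda: []):
    arg.append(1)
    return arg

print f()   # [1]
print f()   # [1] again -- a fresh list on every call
]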
On 10 May 2009, at 11:14, Larry Hastings wrote:
George Sakkis wrote:
+0.x for a new keyword that adds dynamic semantics (and removes the need for the sentinel kludge).
We don't need new syntax for it. Here's a proof-of-concept hack that you can do it with a function decorator.

import copy
Note that we don't need syntax for default arguments either :) Here is a decorator that does it:

def default(**defaults):
    defaults = defaults.items()
    def decorator(f):
        def decorated(*args, **kwargs):
            for name, val in defaults:
                kwargs.setdefault(name, val)
            return f(*args, **kwargs)
        return decorated
    return decorator

Here it is in action:
>>> z = 1
>>> @default(z=z)
... def foo(a, z):
...     print a + z
...
>>> z = None
>>> foo(3)
4
>>> @default(history=[])
... def bar(x, history):
...     history.append(x)
...     return list(history)
...
>>> map(bar, 'spam')
[['s'], ['s', 'p'], ['s', 'p', 'a'], ['s', 'p', 'a', 'm']]
Let's get rid of default arguments altogether, and we will have solved the problem! Furthermore, by removing default arguments from the language, we can let people choose what semantics they want for default arguments. I.e., if they want them to be reevaluated each time, they could write the default decorator as follows (it is exactly the same as the one above, except for a pair of parentheses that have been added on one line):

def dynamic_default(**defaults):
    defaults = defaults.items()
    def decorator(f):
        def decorated(*args, **kwargs):
            for name, val in defaults:
                kwargs.setdefault(name, val())  # ^^
            return f(*args, **kwargs)
        return decorated
    return decorator

Example:
>>> @dynamic_default(l=list)
... def baz(a, l):
...     l.append(a)
...     return l
...
>>> baz(2)
[2]
>>> baz(3)
[3]
;) -- Arnaud
On Sun, May 10, 2009 at 12:10 PM, Arnaud Delobelle <arnodel@googlemail.com> wrote:
Furthermore, by removing default arguments from the language, we can let people choose what semantics they want for default arguments. I.e., if they want them to be reevaluated each time, they could write the default decorator as follows (it is exactly the same as the one above, except for a pair of parentheses that have been added on one line).
Cute, but that's still a subset of what the dynamic semantics would provide; the evaluated thunks would have access to the previously defined arguments:

def foo(a, b, d=(a*a+b+b)**0.5, s=1/d):
    return (a,b,d,s)

would be equivalent to

missing = object()

def foo(a, b, d=missing, s=missing):
    if d is missing:
        d = (a*a+b+b)**0.5
    if s is missing:
        s = 1/d
    return (a,b,d,s)

George
On Sun, May 10, 2009 at 1:00 PM, George Sakkis <george.sakkis@gmail.com> wrote:
On Sun, May 10, 2009 at 12:10 PM, Arnaud Delobelle <arnodel@googlemail.com> wrote:
Furthermore, by removing default arguments from the language, we can let people choose what semantics they want for default arguments. I.e., if they want them to be reevaluated each time, they could write the default decorator as follows (it is exactly the same as the one above, except for a pair of parentheses that have been added on one line).
Cute, but that's still a subset of what the dynamic semantics would provide; the evaluated thunks would have access to the previously defined arguments:
def foo(a, b, d=(a*a+b+b)**0.5, s=1/d):
    return (a,b,d,s)
would be equivalent to
missing = object()

def foo(a, b, d=missing, s=missing):
    if d is missing:
        d = (a*a+b+b)**0.5
    if s is missing:
        s = 1/d
    return (a,b,d,s)
Just for kicks, here's a decorator that supports dependent dynamically computed defaults; it uses eval() to create the lambdas on the fly:

@evaldefaults('s','d')
def foo(a, b, d='(a*a+b*b)**0.5', t=0.1, s='(1+t)/(d+t)'):
    return (a,b,d,t,s)

print foo(3,4)

#=======
import inspect
import functools
# from http://code.activestate.com/recipes/551779/
from getcallargs import getcallargs

def evaldefaults(*eval_params):
    eval_params = frozenset(eval_params)
    def decorator(f):
        params,_,_,defaults = inspect.getargspec(f)
        param2default = dict(zip(params[-len(defaults):], defaults)) if defaults else {}
        param2lambda = {}
        for p in eval_params:
            argsrepr = ','.join(params[:params.index(p)])
            param2lambda[p] = eval('lambda %s: %s' % (argsrepr, param2default[p]),
                                   f.func_globals)
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            allkwargs, missing = getcallargs(f, *args, **kwargs)
            missing_eval_params = eval_params.intersection(missing)
            f_locals = {}
            for i, param in enumerate(params):
                value = allkwargs[param]
                if param in missing_eval_params:
                    allkwargs[param] = value = param2lambda[param](**f_locals)
                f_locals[param] = value
            return f(**allkwargs)
        return wrapped
    return decorator

George
Arnaud Delobelle wrote:
Not that we don't need syntax for default arguments either :) Here is a decorator that does it: [...] Let's get rid of default arguments altogether, and we will have solved the problem! Furthemore, by removing default arguments from the language, we can let people choose what semantics they want for default arguments. [...] ;)
Comedy or not, I don't support getting rid of default arguments. Nor do I support changing the semantics of default arguments so they represent code that is run on each function invocation. As I demonstrated, people can already choose what semantics they want for default arguments, by choosing whether or not to decorate their functions with clone_arguments or the like. We don't need to remove default arguments from the language for that to happen. /larry/
George Sakkis wrote:
On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels <Scott.Daniels@acm.org> wrote:
Any argument for changing to a more "dynamic" default scheme had better have a definition of the behavior of the following code, and produce a good rationale for that behavior:
x = 5
def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    return inner
I don't think the proposed scheme was ever accused of not being well-defined. Here's the current equivalent dynamic version:
x = 5
def function_producer(y):
    missing = object()
    def inner(arg=missing):
        if arg is missing:
            arg = x+y
        return arg + 2
    return inner
So sorry.

def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    y *= 10
    return inner

I was trying to point out that it becomes much trickier building functions with dynamic parts, and fluffed the example.

--Scott David Daniels
Scott.Daniels@Acm.Org
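[To spell out the ambiguity in Scott's corrected example -- the annotation is mine; under the current semantics the default is fixed when `def inner` executes, before `y *= 10`, while call-time evaluation would see the rebound y:

x = 5

def function_producer(y):
    def inner(arg=x + y):   # evaluated here, once: 5 + 2 == 7
        return arg + 2
    y *= 10                 # under call-time semantics the default would
    return inner            # instead see y == 20 (and the *current* x)

f = function_producer(2)
print f()   # 9 today; call-time evaluation would give 5 + 20 + 2 == 27
]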
George Sakkis wrote:
On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels <Scott.Daniels@acm.org> wrote:
Any argument for changing to a more "dynamic" default scheme had better have a definition of the behavior of the following code, and produce a good rationale for that behavior:
x = 5
def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    return inner
In this version, x is resolved each time function_producer is called, so it could be different each time. The x = 5 line is possibly irrelevant.
I don't think the proposed scheme was ever accused of not being well-defined. Here's the current equivalent dynamic version:
x = 5

def function_producer(y):
    missing = object()
    def inner(arg=missing):
        if arg is missing:
            arg = x + y
        return arg + 2
    return inner
In this version, x is resolved each time each of the returned inners is called, which could be different each time and which could be different from any of the times function_producer was called. Not equivalent.

Given how often this issue is resuscitated, I am willing to consider raising my personal vote from -1 to -0.

My friendly advice to anyone trying to promote a change is to avoid categorical (and false or debatable) statements like "Newbies get confused" (some do, some do not) or "The intuitive meaning is" (to you maybe, not to me). Also avoid metaphysical issues, especially those that divide people.* Do focus on practical issues, properly qualified statements, and how a proposal would improve Python.

Terry Jan Reedy

* The initial argument for changing the meaning of int/int amounted to this: "integers are *merely* a subset of reals, therefore....". Those with a different philosophy dissented and the actual proposal was initially lost in the resulting fire.
Terry Reedy writes:
Given how often this issue is resuscitated, I am willing to consider raising my personal vote from -1 to -0.
Do focus on practical issues, properly qualified statements, and how a proposal would improve Python.
One thing I would like to see addressed is use-cases where the *calling* syntax *should* use default arguments.[1] In the case of the original example, the empty list, I think that requiring the argument, and simply writing "foo([])" instead of "foo()", is better on two counts: EIBTI, and TOOWTDI. And it's certainly not an expensive adjustment.

In a more complicated case, it seems to me that defining (and naming) a separate function would often be preferable to defining a complicated default, or explicitly calling a function in the actual argument. I.e., rather than

def consume_integers(ints=fibonacci_generator()):
    for i in ints:
        # suite and termination condition

why not

def consume_integers(ints):
    for i in ints:
        # suite and termination condition

def consume_fibonacci():
    consume_integers(fibonacci_generator())

or

def consume_integers_internal(ints):
    for i in ints:
        # suite and termination condition

def consume_integers():
    consume_integers_internal(fibonacci_generator())

depending on how frequent or intuitive the "default" Fibonacci sequence is? IMO, for both above use-cases EIBTI applies as an argument that those are preferable to a complex, dynamically-evaluated default, and for the second TOOWTDI also applies.

Footnotes:
[1] In the particular cases being advocated as support for dynamic evaluation of default arguments, not in general. It is clear to me that having defaults for rarely used option arguments is a good thing, and I think that is a sufficient justification for Python to support default arguments.
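(For completeness: fibonacci_generator is assumed in the snippets above and is not defined in the thread; any infinite integer source fits. A minimal sketch:)

def fibonacci_generator():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b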
Tennessee Leeuwenburg wrote:
For me, Python 3k appears to be a natural place to do this. Python 3 still appears to be regarded as a work-in-progress by most people, and I don't think that it's 'too late' to change for Python 3k.
Sorry, it *is* too late. The developers have been very careful about breaking 3.0 code in 3.1 only with strong justification. 3.1 is in feature freeze as of a few days ago.
Fortunately you're not Guido, and fortunately this isn't going to happen. I recommend you either accept that this behaviour is here to stay, or if you're *particularly* enamoured of late evaluation behaviour of defaults, that you work on some sort of syntax to make it optional.

Thank you for the rest of the email, which was (by and large) well-considered and (mostly) stuck to the points of the matter. I will get to them in proper time when I have been able to add to the argument in a considered way after fully understanding your points.
However, this last section really got under my skin. It seems completely inappropriate to devolve any well-intentioned email discussion into an appalling, self-serving ad hominem attack.
I do not see any attack whatsoever, just advice which you took wrongly.
...use of Fortunately without a backing argument),
'Fortunately' as is clear from the context, was in respect to your expressed casual attitude toward breaking code. Some people have a negative reaction to that. In any case, it is a separate issue from 'default arguments'.
attempt to bully me out of my position (recommend you accept this behaviour is here to stay) are not appreciated.
He recommended that you not beat your head against a brick wall because of a misconception about what is currently socially possible. He then suggested something that *might* be possible. If that advice offends you, so be it. Terry Jan Reedy
It's not the content of the advice (don't push stuff uphill) which got to me at all, it was the tone and manner in which it was conveyed. Much of the email was well-balanced, which I fully acknowledged. Maybe you're just more inclined to overlook a few bits of innuendo, and probably most of the time so am I. However, it's actually not okay, and the implied personal criticism was very clearly present. It wasn't severe, and perhaps my reaction was quite forceful, but it's just not okay to be putting people down.

I don't have a casual attitude towards breaking code, just an open mind towards discussions on their merits. I don't really appreciate the negative tones, and I'm sure that if anyone else were in the firing line, they wouldn't appreciate it either, even if to some extent it's all a bit of a storm in a teacup. Unless someone who is happy to cop a bit of flak stands up and says that's not on, then maintaining a "thick skin" -- i.e. putting up with people putting you down, be it through a clear and direct put-down, or through a more subtle implication -- becomes the norm. It becomes acceptable, perhaps indeed even well-regarded, to take a certain viewpoint and then suggest that anyone who doesn't share it is doing something wrong. Well, nuts to that.

Emails are, as everyone should know, an unclear communication channel. I've found myself on the wrong side of this kind of debate before, and I've heard plenty of stories of people who were put down, pushed out or made to feel stupid -- and for what? There are just so many stories, many of which I have heard first-hand, of people who have felt alienated on online lists where prowess and insight are so highly regarded that they become means by which others are put down. It's that larger problem which people need not to put up with.

However, I'm just about to go offline for 12 hours or so, and I know the US will be waking up to their emails shortly, so I just wanted to take this opportunity before the sun rotates again to say to the list and the original author that I'd really like to avoid a continued shouting contest or make anyone upset. I've obviously ruffled some feathers already, and I guess this email may ruffle some more, but really I just want to make clear that:

(a) It's not okay to put myself or anyone else down, claiming some personal superiority
(b) That attitude is all this email is about. It doesn't need to be any bigger than that.

Regards,
-Tennessee
Tennessee Leeuwenburg schrieb:
I don't have a casual attitude towards breaking code, just an open mind towards discussions on their merits. I don't really appreciate the negative tones, and I'm sure that if anyone else were in the firing line, they wouldn't appreciate it either, even if to some extent it's all a bit of a storm in a teacup. Unless someone who is happy to cop a bit of flak stands up and says that's not on, then maintaining a "thick skin" -- i.e. putting up with people putting you down, be it through a clear and direct put-down, or through a more subtle implication -- becomes the norm. It becomes acceptable, perhaps indeed even well-regarded, to take a certain viewpoint and then suggest that anyone who doesn't share it is doing something wrong.
The problem is that people whose proposal immediately meets negative reactions usually feel put down no matter what exactly you say to them. If there was a polite way of saying "This will not change, please don't waste more of our time with this discussion." that still gets the point across, I would be very grateful.

Georg

--
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out.
On Mon, May 11, 2009 at 12:59 AM, Georg Brandl <g.brandl@gmx.net> wrote:
The problem is that people whose proposal immediately meets negative reactions usually feel put down no matter what exactly you say to them. If there was a polite way of saying "This will not change, please don't waste more of our time with this discussion." that still gets the point across, I would be very grateful.
I understand that. I think there is a good way to do it. First of all, I would recognise that this is the python-ideas list, not the python-dev list, and that this is *exactly* the place to discuss ideas on their merits, and potentially put aside pragmatics to engage in a discussion of design philosophy.

For bad ideas, I would suggest: "Thanks for your contribution. However, this has been discussed quite a lot before, and the groundswell of opinion is most likely that this is not going to be a good addition to Python. However, if you'd like to discuss the idea further, please consider posting it to comp.lang.py."

For good/okay ideas that just won't get up, I would suggest: "Thanks for your contribution. I see your point, but I don't think it's likely to get enough traction amongst the developer community for someone else to implement it. However, if you'd like more feedback on your ideas so that you can develop a PEP or patch, please feel free to have a go. But please don't be disappointed if it doesn't get a lot of support unless you are happy to provide some more justification for your position."

I don't really think anyone on a mailing list needs to waste any more time than they want to -- just ignore the thread. I would definitely avoid things like: "You clearly have no idea what you are talking about" or "If you only knew what I knew, you'd know differently".

It's probably not possible to avoid people with an idea feeling deflated if their ideas are not popular, but on an ideas list such as this, I think that having conversations should be encouraged. Certainly that's what got under my skin. If I was chatting in person, or with friends, or in a meeting, the appropriate thing to do would be to say "Hey, that's a bit rough!" and then probably the attitude would be wound back, or the person would respond with "Oh, that's not what I meant, I just meant this..." and the misunderstanding would be quickly resolved. Unfortunately, email just *sucks* for telling the difference between someone with a chip on their shoulder and someone who is just being helpful and made a bad choice of words.

Cheers,
-T
On Mon, May 11, 2009 at 12:59 AM, Georg Brandl <g.brandl@gmx.net> wrote:
The problem is that people whose proposal immediately meets negative reactions usually feel put down no matter what exactly you say to them. If there was a polite way of saying "This will not change, please don't waste more of our time with this discussion." that still gets the point across, I would be very grateful.
I thought that the way to do that is to say "This proposal is un-Pythonic".[1] Now, a claim that something is "un-Pythonic" is basically the end of discussion, since in the end it's a claim that the BDFL would reject it, not something based entirely on "objective criteria". Even the BDFL sometimes doesn't know what's Pythonic until he sees it! So a bare claim of "un-Pythonic" is borrowing the BDFL's authority, which should be done very cautiously.

IMO, the "problem" arose here because Tennessee went outside of discussing the idea on its merits. Rather, he wrote "the current situation seems un-Pythonic," but didn't point to violations of any of the usual criteria. OTOH, Steven did discuss the idea on its Pythonic merits, showing that it arguably violates at least three of the tenets in the Zen of Python. In turn, Steven could have left well-enough alone and avoided explicitly pointing out that, therefore, Tennessee doesn't seem to understand what "Pythonic" is.

FWIW, early in my participation in Python-Dev I was told, forcefully, that I wasn't qualified to judge Pythonicity. That statement was (and is) true, and being told off was a very helpful experience for me. How and when to say it is another matter, but avoiding it entirely doesn't serve the individual or the community IMO.

Note that this whole argument does not apply to terms like "natural," "intuitive," "readable," etc., only to the pivotal term "Pythonic". The others are matters of individual opinion. AIUI, "Pythonic" is a matter of *the BDFL's* opinion (if it comes down to that, and sometimes it does).

Footnotes:
[1] If it's a fundamental conceptual problem. "This proposal is incompatible" can be used if it's a matter of violating the language definition according to the language reference and any relevant PEPs.
On Sun, May 10, 2009 at 11:23 AM, Steven D'Aprano <steve@pearwood.info>wrote:
On Sun, 10 May 2009 10:19:01 am Tennessee Leeuwenburg wrote:
Hi Pascal, Taking the example of
def foo(bar=[]):
    bar.append(4)
    print(bar)
I'm totally with you in thinking that what is 'natural' is to expect to get a new, empty, list every time.
That's not natural to me. I would be really, really surprised by the behaviour you claim is "natural":
>>> DEFAULT = 3
>>> def func(a=DEFAULT):
...     return a+1
...
>>> func()
4
>>> DEFAULT = 7
>>> func()
8
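(For contrast: under the def-time evaluation Python actually has, the default captured the value 3 when the def ran, so today the tail of that session reads:)

>>> DEFAULT = 7
>>> func()
4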
Good example! If I may translate that back into the example using a list to make sure I've got it right...

default = []

def func(a=default):
    a.append(5)

func()
func()

default will now be [5,5]
For deterministic functions, the same argument list should return the same result each time. By having default arguments be evaluated every time they are required, any function with a default argument becomes non-deterministic. Late evaluation of defaults is, essentially, equivalent to making the default value a global variable. Global variables are rightly Considered Harmful: they should be used with care, if at all.
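Steven's "essentially a global variable" point can be demonstrated with today's Python, since moving the lookup into the body gives exactly the late-binding behaviour:

DEFAULT = 3

def func(a=None):
    if a is None:
        a = DEFAULT   # read at call time, i.e. late binding
    return a + 1

func()       # 4
DEFAULT = 7
func()       # 8 -- same argument list, different result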
If I can just expand on that point somewhat... In the example I gave originally, I had in mind someone designing a function, whereby it could be called either with some pre-initialised term, or otherwise it would use a default value of []. I imagined a surprised designer finding that the default value of [] was a pointer to a specific list, rather than a new empty list each time. e.g.

def foo(bar=[]):
    bar.append(5)
    return bar

The same argument list (i.e. no arguments) would result in a different result being returned every time. On the first call, bar would be [5], then [5,5], then [5,5,5]; yet the arguments passed (i.e. none, use default) would not have changed.

You have come up with another example. I think it is designed to illustrate that a default argument doesn't need to specify a default value for something, but could be a default reference (such as a relatively-global variable). In that case, it is modifying something above its scope. To me, that is what you would expect under both "ways of doing things". I wonder if I am missing your point...

I'm totally with you on the Global Variables Are Bad principle, however. I don't design them in myself, and where I have worked with them, usually they have just caused confusion.
However this isn't what
happens. As far as I'm concerned, that should more or less be the end of the discussion in terms of what should ideally happen.
As far as I'm concerned, what Python does now is the ideal behaviour. Default arguments are part of the function *definition*, not part of the body of the function. The definition of the function happens *once* -- the function isn't recreated each time you call it, so default values shouldn't be recreated either.
I agree that's how you see things, and possibly how many people see things, but I don't accept that it is a more natural way of seeing things. However, what *I* think is more natural is just one person's viewpoint... I totally see the philosophical distinction you are trying to draw, and it certainly does help to clarify why things are the way they are. However, I just don't know that it's the best way they could be.
The responses to the change in behaviour which I see as more natural
are, to summarise, as follows:
-- For all sorts of technical reasons, it's too hard
-- It changes the semantics of the function definition being evaluated at compile time
-- It's not what people are used to
And it's not what many people want.
You only see the people who complain about this feature. For the multitude of people who expect it or like it, they have no reason to say anything (except in response to complaints). When was the last time you saw somebody write to the list to say "Gosh, I really love that Python uses + for addition"? Features that *just work* never or rarely get mentioned.
:) ... well, that's basically true. Of course there are some particular aspects of Python which are frequently mentioned as being wonderful, but I see your point. However, I'm not sure we really know one way or another about what people want then -- either way.
With regards to the second point, it's not like the value of arguments is set at compile time, so I don't really see that this stands up.
I don't see what relevance that has. If the arguments are provided at runtime, then the default value doesn't get used.
I think this is the fundamental difference -- that to me speaks worlds :) ... I think you just have a different internal analogy for programming than I do. That's fine. To me, I don't see that a line of code should not be dynamically evaluated just because it's part of the definition. I just don't see why default values shouldn't be (or be able to be) dynamically evaluated. Personally I think that doing it all the time is more natural, but I certainly don't see why allowing the syntax would be bad. I'd basically do that 100% of the time. I'm not sure I've ever used a default value other than None in a way which I wouldn't want dynamically evaluated.
I don't think it's intuitive,
Why do you think that intuitiveness is more valuable than performance and consistency?
Because I like Python more than C? I'm pretty sure everyone here would agree that, in principle, elegance of design and intuitive syntax are good. Agreeing on what that means might involve some robust discussion, but I think everyone would like the same thing. Well, consistency is pretty hard to do without... :)
Besides, intuitiveness is a fickle thing. Given this pair of functions:
import time

def expensive_calculation():
    time.sleep(60)
    return 1
def useful_function(x=expensive_calculation()):
    return x + 1
I think people would be VERY surprised that calling useful_function() with no arguments would take a minute *every time*, and would complain that this slowness was "unintuitive".
That seems at first like a good point. It is a good point, but I don't happen to side with you on this issue, although I do see that many people might. The code that I write is not essentially performance-bound. It's a lot more design-bound (by which I mean it's very complicated, and anything I can do to simplify it is well worth a bit of a performance hit). However, when the design options are available (setting aside what default behaviour should be), it's almost always possible to design things how you'd like them. e.g.

def speed_critical_function(x=None):
    if x is None:
        time.sleep(60)
    return 1

def handy_simple_function(foo=5, x=[]):    # or maybe: (foo=5, x = new [])
    for i in range(5):
        x.append(i)
    return x

Then, thinking about it a little more (and bringing back a discussion of default behaviour), I don't really see why the implementation of the dynamic function definition would be any slower than using None to indicate it wasn't passed in, followed by explicit default-value setting.
it's just that people become accustomed to it. There is indeed *some sense* in understanding that the evaluation occurs at compile-time, but there is also a lot of sense (and in my opinion, more sense) in understanding the evaluation as happening dynamically when the function is called.
No. The body of the function is executed each time the function is called. The definition of the function is executed *once*, at compile time. Default arguments are part of the definition, not the body, so they too should only be executed once. If you want them executed every time, put them in the body:
SENTINEL = object()

def useful_function(x=SENTINEL):
    if x is SENTINEL:
        x = expensive_calculation()
    return x + 1
I agree that's how things *are* done, but I just don't see why it should be that way, beyond it being what people are used to. It seems like there is no reason why it would be difficult to implement CrazyPython which does things as I suggest. Given that, it also doesn't seem like there is some inherent reason to prefer the design style of RealActualPython over CrazyPython. Except, of course that RealActualPython exists and I can use it right now (thanks developers!), versus CrazyPython which is just an idea.
With regards to the first point, I'm not sure that this is as
significant as all of that, although of course I defer to the language authors here. However, it seems as though it could be no more costly than the lines of code which most frequently follow to initialise these variables.
On the final point, that's only true for some people. For a whole lot of people, they stumble over it and get it wrong. It's one of the most un-Pythonic things which I have to remember about Python when programming -- a real gotcha.
I accept that it is a Gotcha. The trouble is, the alternative behaviour you propose is *also* a Gotcha, but it's a worse Gotcha, because it leads to degraded performance, surprising introduction of global variables where no global variables were expected, and a breakdown of the neat distinction between creating a function and executing a function.
But as for it being un-Pythonic, I'm afraid that if you really think that, your understanding of Pythonic is weak. From the Zen:
The Zen of Python, by Tim Peters
Special cases aren't special enough to break the rules.
Although practicality beats purity.
If the implementation is hard to explain, it's a bad idea.
(1) Assignments outside of the body of a function happen once, at compile time. Default values are outside the body of the function. You want a special case for default values so that they too happen at runtime. That's not special enough to warrant breaking the rules.
Your logic is impeccable :) ... yet, if I may continue to push my wheelbarrow uphill for a moment longer, I would argue that is an implementation detail, not a piece of design philosophy.
(2) The potential performance degradation of re-evaluating default arguments at runtime is great. For practical reasons, it's best to evaluate them once only.
Maybe that's true. I guess I have two things to say on that point. The first is that I'm still not sure that's really true in a problematic way. Anyone wanting efficiency could continue to use sentinel values of None (which obviously don't need to be dynamically evaluated) while other cases would surely be no slower than the initialisation code would be anyway. Is the cost issue really that big a problem?

The other is that while pragmatics is, of course, a critical issue, it's also true that it's well worth implementing more elegant language features if possible. It's always a balance. The fastest languages are always less 'natural', while the more elegant and higher-level languages are somewhat slower. Where a genuine design improvement is found, I think it's worth genuinely considering including that improvement, even if it is not completely pragmatic.
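The cost question is at least measurable for the idioms available today; a rough (and admittedly unscientific) comparison of a def-time default against the None-plus-initialisation workaround:

import timeit

setup = """
def early(x=[]):          # default built once, at def time
    return x

def late(x=None):         # default rebuilt on every call
    if x is None:
        x = []
    return x
"""

print timeit.timeit('early()', setup=setup)
print timeit.timeit('late()', setup=setup)
# The gap is one 'is' test plus one empty-list construction per call --
# small, but it is the kind of per-call overhead a call-time default
# scheme would impose on every list-valued default.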
(3) In order to get the behaviour you want, the Python compiler would need a more complicated implementation which would be hard to explain.
Yes, that's almost certainly true.
I don't see it as changing one way of doing things for another equally valid way of doing things, but changing something that's confusing and unexpected for something which is far more natural and, to me, Pythonic.
I'm sorry, while re-evaluation of default arguments is sometimes useful, it's more often NOT useful. Most default arguments are simple objects like small ints or None. What benefit do you gain from re-evaluating them every single time? Zero benefit. (Not much cost either, for simple cases, but no benefit.)
But for more complex cases, there is great benefit to evaluating default arguments once only, and an easy work-around for those rare cases that you do want re-evaluation.
Small ints and None are global pointers (presumably!) so there is no need to re-evaluate them every time. The list example is particularly relevant (ditto empty dictionary) since I think that would be one of the most common cases for re-evaluation. Presumably a reasonably efficient implementation could be worked out such that dynamic evaluation of the default arguments (and indeed the entire function definition) need only occur where a dynamic default value is included. I agree that the workaround is not that big a deal once you're fully accustomed to How Things Work, but it just seems 'nicer' to allow dynamic defaults. That's all I really wanted to say in the first instance; I didn't think that position would really get anyone's back up. Regards, -Tennessee
On Mon, May 11, 2009, Tennessee Leeuwenburg wrote:
On Sun, May 10, 2009 at 11:23 AM, Steven D'Aprano <steve@pearwood.info>wrote:
(1) Assignments outside of the body of a function happen once, at compile time. Default values are outside the body of the function. You want a special case for default values so that they too happen at runtime. That's not special enough to warrant breaking the rules.
Your logic is impeccable :) ... yet, if I may continue to push my wheelbarrow uphill for a moment longer, I would argue that is an implementation detail, not a piece of design philosophy.
I'm not going to look it up, but in the past, Guido has essentially claimed that this behavior is by design (not so much by itself but in conjunction with other deliberate decisions). I remind you of this: "Programming language design is not a rational science. Most reasoning about it is at best rationalization of gut feelings, and at worst plain wrong." --GvR, python-ideas, 2009-3-1 -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "It is easier to optimize correct code than to correct optimized code." --Bill Harlan
participants (25)
- Aahz
- Arnaud Delobelle
- Carl Johnson
- Chris Rebert
- CTO
- Curt Hagenlocher
- Georg Brandl
- George Sakkis
- Gerald Britton
- Greg Ewing
- Jacob Holm
- Jeremy Banks
- Jim Jewett
- Larry Hastings
- Mike Meyer
- MRAB
- Nick Coghlan
- Oleg Broytmann
- Pascal Chambon
- Scott David Daniels
- spir
- Stephen J. Turnbull
- Steven D'Aprano
- Tennessee Leeuwenburg
- Terry Reedy