If A.M. Kuchling's list of Python Warts is any indication, Python has removed many of the warts it once had. However, the behavior of mutable default argument values is still a frequent stumbling-block for newbies. It is also present on at least 3 different lists of Python's deficiencies ([0][1][2]).
Example of current, unintuitive behavior (snipped from [0]):
>>> def popo(x=[]):
...     x.append(666)
...     print x
...
>>> popo()
[666]
>>> popo()
[666, 666]
>>> popo()
[666, 666, 666]
Whereas a newbie with experience with immutable default argument values would, by analogy, expect:
>>> popo()
[666]
>>> popo()
[666]
>>> popo()
[666]
In scanning [0], [1], [2], and other similar lists, I have found only one mediocre use-case for this behavior: using the default argument value to retain state between calls. However, as [2] comments, this purpose is much better served by decorators, classes, or (though less preferred) global variables. The other uses alluded to are equally esoteric and unpythonic.
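For illustration, the state-retention use-case that [2] recommends moving to classes or closures might look like this in modern Python (a sketch; the names are ours, not from the thread):

```python
# Two standard ways to retain state between calls without leaning on a
# mutable default argument.

class Accumulator:
    """State lives on the instance rather than in a default value."""
    def __init__(self):
        self.items = []

    def __call__(self, x):
        self.items.append(x)
        return list(self.items)

def make_accumulator():
    """State lives in a closure cell instead."""
    items = []
    def accumulate(x):
        items.append(x)
        return list(items)
    return accumulate

acc = Accumulator()
acc(666)
print(acc(666))        # [666, 666]

acc2 = make_accumulator()
acc2(666)
print(acc2(666))       # [666, 666]
```

Either way, the state is explicit and callers cannot accidentally override it through the parameter list.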
To work around this behavior, the following idiom is used:

def popo(x=None):
    if x is None:
        x = []
    x.append(666)
    print x
However, why should the programmer have to write this extra boilerplate code when the current, unusual behavior is only relied on by 1% of Python code?
Therefore, I propose that default arguments be handled as follows in Py3K:
This is fully backwards-compatible with the aforementioned workaround, and removes the need for it, allowing one to write the first, simpler definition of popo().
Comments?
[0] 10 Python pitfalls (http://zephyrfalcon.org/labs/python_pitfalls.html)
[1] Python Gotchas (http://www.ferg.org/projects/python_gotchas.html#contents_item_6)
[2] When Pythons Attack (http://www.onlamp.com/pub/a/python/2004/02/05/learn_python.html?page=2)
There are a few problems here. Deep copy of the originally created default argument can be expensive and would not work in any useful way with non-literals as defaults, such as function calls or subscript lookups or even simple attributes.
If any solution is possible, it would require a way to differentiate between mutable and immutable objects, and evaluate the immutables for every call. This brings on more problems as to when you do the initial evaluation, if you then might do further evaluations depending on the results of the first.
Any solution would be an unjust addition of expense to defaults.
On 1/14/07, Chris Rebert cvrebert@gmail.com wrote:
If A.M. Kuchling's list of Python Warts is any indication, Python has removed many of the warts it once had. However, the behavior of mutable default argument values is still a frequent stumbling-block for newbies. It is also present on at least 3 different lists of Python's deficiencies ([0][1][2]).
Example of current, unintuitive behavior (snipped from [0]):
>>> def popo(x=[]):
...     x.append(666)
...     print x
...
>>> popo()
[666]
>>> popo()
[666, 666]
>>> popo()
[666, 666, 666]
Whereas a newbie with experience with immutable default argument values would, by analogy, expect:
>>> popo()
[666]
>>> popo()
[666]
>>> popo()
[666]
In scanning [0], [1], [2], and other similar lists, I have found only one mediocre use-case for this behavior: using the default argument value to retain state between calls. However, as [2] comments, this purpose is much better served by decorators, classes, or (though less preferred) global variables. The other uses alluded to are equally esoteric and unpythonic.
To work around this behavior, the following idiom is used:

def popo(x=None):
    if x is None:
        x = []
    x.append(666)
    print x
However, why should the programmer have to write this extra boilerplate code when the current, unusual behavior is only relied on by 1% of Python code?
Therefore, I propose that default arguments be handled as follows in Py3K:
This is fully backwards-compatible with the aforementioned workaround, and removes the need for it, allowing one to write the first, simpler definition of popo().
Comments?
[0] 10 Python pitfalls (http://zephyrfalcon.org/labs/python_pitfalls.html)
[1] Python Gotchas (http://www.ferg.org/projects/python_gotchas.html#contents_item_6)
[2] When Pythons Attack (http://www.onlamp.com/pub/a/python/2004/02/05/learn_python.html?page=2)
Python-ideas mailing list Python-ideas@python.org http://mail.python.org/mailman/listinfo/python-ideas
-- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://ironfroggy-code.blogspot.com/
Calvin Spealman wrote:
There are a few problems here. Deep copy of the originally created default argument can be expensive

For immutables, yes, it would make an unnecessary copy, though it seems less likely to have complex immutable objects than complex mutable objects as defaults.
and would not work in any useful way with non-literals as defaults, such as function calls or subscript lookups or even simple attributes.

Good point, I hadn't considered that.
If any solution is possible, it would require a way to differentiate between mutable and immutable objects, and evaluate the immutables for every call.

That doesn't seem to consider the other cases you just mentioned, though I see merit in your idea. How about there be some way to specify that a default argument value be re-evaluated at every call it's required for, while all other arguments have existing semantics?
Hypothetical syntax:
def foo(a, b=4, c=<Bar([q, w, e], 7)>):
    # b's default value is evaluated exactly once, at definition-time
    # c's default value is evaluated every time foo() is called and no
    # value for c is given
Where the <>s indicate these special semantics.
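In current Python, the effect of the proposed <...> markers can be approximated with a decorator that accepts zero-argument factories for the defaults that should be re-evaluated per call. This is an illustrative sketch in modern Python, not existing syntax or a real library helper:

```python
import functools

def call_time_default(**factories):
    """Re-evaluate the given defaults on every call that omits them."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for name, factory in factories.items():
                if name not in kwargs:
                    kwargs[name] = factory()   # fresh value per call
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@call_time_default(c=lambda: [])
def foo(a, b=4, *, c=None):
    c.append(a)
    return c

print(foo(1))   # [1] -- a fresh list each call
print(foo(2))   # [2], not [1, 2]
```

The parameter is made keyword-only here so the wrapper can reliably tell whether the caller supplied it; b keeps the ordinary evaluate-once semantics.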
Any solution would be an unjust addition of expense to defaults.

That sounds a bit like premature optimization to me.
Where the <>s indicate these special semantics.
The syntax bothers me, but launched an idea: How about separating parameter default value issues from instance specific definition time objects?
def foo(a, b=4, c=None):
local d = Bar([2,3,4])
...
The function code can decide if it wants to use c, provided by the caller, or d when no value for c is given. The developer may decide that None is not a good value for 'no value for c', but that's a design decision.
You can do this now with:
def foo(a, b=4, c=None):
...
foo.func_dict['d'] = Bar([2,3,4])
But I would rather see this inside foo(), jamming variables into the func_dict bothers me too :-).
The new keyword would work for classes, but be a functional noop:
class Snorf:
local eggs = 3
spam = 4
Joel
On 1/17/07, Joel Bender jjb5@cornell.edu wrote:
Where the <>s indicate these special semantics.
The syntax bothers me, but launched an idea: How about separating parameter default value issues from instance specific definition time objects?
def foo(a, b=4, c=None):
local d = Bar([2,3,4])
...
The function code can decide if it wants to use c, provided by the caller, or d when no value for c is given. The developer may decide that None is not a good value for 'no value for c', but that's a design decision.
I dont understand how that would be different than doing
c = c if c is not None else Bar([2,3,4])
You can do this now with:
def foo(a, b=4, c=None):
...
foo.func_dict['d'] = Bar([2,3,4])
This would not really work in practice. See this:
>>> def f():
...     print a
...
>>> f.func_dict['a'] = 10
>>> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
NameError: global name 'a' is not defined
But I would rather see this inside foo(), jamming variables into the func_dict bothers me too :-).
The new keyword would work for classes, but be a functional noop:
class Snorf:
local eggs = 3
spam = 4
Joel
Calvin Spealman wrote:
I dont understand how that would be different than doing
c = c if c is not None else Bar([2,3,4])
Because that would be calling Bar(), perhaps creating a new Bar object, every time foo() is called with None for c, which is not what default argument values are about. I'm proposing a way to create function-local singleton objects, removing them from the parameter list.
def foo(x):
local history = []
history.append(x)
Rather than:
def foo(x, history=[]):
history.append(x)
and then hoping that nobody calls foo() with a history parameter.
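A closure achieves the same function-local singleton while keeping history out of the signature entirely. A sketch in modern Python (using return instead of print so the effect is visible):

```python
def make_foo():
    history = []              # private to the closure; no caller can override it
    def foo(x):
        history.append(x)
        return list(history)
    return foo

foo = make_foo()
foo(1)
print(foo(2))                 # [1, 2] -- state persists across calls
```

Unlike the history=[] default, nothing in foo's signature invites callers to tamper with the accumulated state.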
Joel
Joel Bender jjb5@cornell.edu wrote:
Calvin Spealman wrote:
I dont understand how that would be different than doing
c = c if c is not None else Bar([2,3,4])
Because that would be calling Bar(), perhaps creating a new Bar object, every time foo() is called with None for c, which is not what default argument values are about. I'm proposing a way to create function-local singleton objects, removing them from the parameter list. [snip] and then hoping that nobody calls foo() with a history parameter.
With your proposal, you are attempting to fix a "problem" that no one has complained about. -1.
On 1/18/07, Josiah Carlson jcarlson@uci.edu wrote:
Joel Bender jjb5@cornell.edu wrote:
Calvin Spealman wrote:
I dont understand how that would be different than doing
c = c if c is not None else Bar([2,3,4])
But he actually wants a variable that does keep state between calls (like a mutable default arg), but can't be overridden.
With your proposal, you are attempting to fix a "problem" that no one has complained about. -1.
Sure they have, and they've solved it (under different names) in plenty of other languages. In python, the only current solution seems to be turning the function into a class (with self) or at least a closure. People have griped about this.
For What It's Worth, my personal opinion is that having to create an object instead of a function is annoying, but not so bad (or so frequent) that it is worth special syntax.
-jJ
On 1/18/07, Jim Jewett jimjjewett@gmail.com wrote:
On 1/18/07, Josiah Carlson jcarlson@uci.edu wrote:
Joel Bender jjb5@cornell.edu wrote:
Calvin Spealman wrote:
I dont understand how that would be different than doing
c = c if c is not None else Bar([2,3,4])
But he actually wants a variable that does keep state between calls (like a mutable default arg), but can't be overridden.
With your proposal, you are attempting to fix a "problem" that no one has complained about. -1.
Sure they have, and they've solved it (under different names) in plenty of other languages. In python, the only current solution seems to be turning the function into a class (with self) or at least a closure. People have griped about this.
User-defined function attributes is another handy solution.
For What It's Worth, my personal opinion is that having to create an object instead of a function is annoying, but not so bad (or so frequent) that it is worth special syntax.
Function attributes fit the bill really well if writing a class is too much overhead.
George
On 1/18/07, George Sakkis gsakkis@rutgers.edu wrote:
On 1/18/07, Jim Jewett jimjjewett@gmail.com wrote:
But he actually wants a variable that does keep state between calls (like a mutable default arg), but can't be overridden.
For What It's Worth, my personal opinion is that having to create an object instead of a function is annoying, but not so bad (or so frequent) that it is worth special syntax.
Function attributes fit the bill really well if writing a class is too much overhead.
Not really, because Python doesn't have the equivalent of "this". The only way for a function to access its own attributes is to hardcode a name and to assume the name will always refer to that same function object.
In practice, it mostly works, but so does just using a global variable.
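A quick sketch of the rebinding hazard Jim describes (the names are illustrative):

```python
def counter():
    counter.count += 1        # looks up the *global name* 'counter'
    return counter.count

counter.count = 0
print(counter())              # 1 -- works while the name still points here

saved, counter = counter, None    # rebind the module-level name
try:
    saved()                   # the body still looks up 'counter', finds None
    broke = False
except AttributeError:
    broke = True
print(broke)                  # True
```

The function object is intact (saved.count is still 1), but its hardcoded name lookup now resolves to whatever 'counter' was rebound to.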
-jJ
George Sakkis wrote:
Sure they have, and they've solved it (under different names) in plenty of other languages. In python, the only current solution seems to be turning the function into a class (with self) or at least a closure. People have griped about this.
User-defined function attributes is another handy solution.
For What It's Worth, my personal opinion is that having to create an object instead of a function is annoying, but not so bad (or so frequent) that it is worth special syntax.
Function attributes fit the bill really well if writing a class is too much overhead.
To follow up on this, here is a way to get something pretty close to what I wanted. From this...
def foo(x):
local history = []
history.append(x)
To this...
def local(**locals):
def _local(fn):
fn.__dict__.update(locals)
return fn
return _local
@local(history = [])
def foo(x):
foo.history.append(x)
I like this because it keeps history out of the parameter list, and while it's not part of the local namespace, it's readily accessible.
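As a self-contained sketch of the decorator above in action (with the keyword-arguments parameter renamed so it doesn't shadow the locals() builtin, and return used instead of print):

```python
def local(**attrs):
    """Attach the given name/value pairs as function attributes."""
    def _local(fn):
        fn.__dict__.update(attrs)
        return fn
    return _local

@local(history=[])
def foo(x):
    foo.history.append(x)
    return foo.history

foo(1)
print(foo(2))      # [1, 2] -- state survives calls, and 'history' is
                   # not a parameter anyone can pass
```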
Joel
On 5/2/07, Joel Bender jjb5@cornell.edu wrote:
@local(history = [])
def foo(x):
foo.history.append(x)
This assumes that the name "foo" won't be rebound underneath you. That is usually, but not always, true. __this_function__ from PEP 3130 would solve that gotcha.
I like this because it keeps history out of the parameter list, and while it's not part of the local namespace, it's readily accessible.
Those are good things.
-jJ
Chris Rebert cvrebert@gmail.com wrote:
If A.M. Kuchling's list of Python Warts is any indication, Python has removed many of the warts it once had. However, the behavior of mutable default argument values is still a frequent stumbling-block for newbies. It is also present on at least 3 different lists of Python's deficiencies ([0][1][2]).
Example of current, unintuitive behavior (snipped from [0]):
>>> def popo(x=[]):
...     x.append(666)
...     print x
...
>>> popo()
[666]
>>> popo()
[666, 666]
>>> popo()
[666, 666, 666]

[snip]

Comments?
As provided by Calvin Spealman, the above can be fixed with:
def popo(x=None):
x = x if x is not None else []
x.append(666)
print x
I would also mention that forcing users to learn about mutable arguments and procedural programming is not a bad thing. Learning the "gotcha" of mutable default arguments is a very useful lesson, and to remove that lesson, I believe, wouldn't necessarily help new users to Python, or new programmers in general.
Josiah Carlson wrote:
As provided by Calvin Spealman, the above can be fixed with:
def popo(x=None):
x = x if x is not None else []
x.append(666)
print x
I would also mention that forcing users to learn about mutable arguments and procedural programming is not a bad thing. Learning the "gotcha" of mutable default arguments is a very useful lesson, and to remove that lesson, I believe, wouldn't necessarily help new users to Python, or new programmers in general.
First, your 'fix' misses the point: though the proposed feature isn't necessary, and code can be written without using it, it allows mutable default argument values to be expressed more clearly and succinctly than the idiom your 'fix' uses. Second, Python isn't (inherently) about teaching new programmers about programming, and what is good for newbies isn't necessarily good for experienced programmers. And at any rate, the lesson would still exist in the form of having to use the new feature I proposed (in my strawman syntax,
<DefaultArgumentValueThatGetsReEvaluated()>), and also in doing multiplication on 2D lists (e.g. x = [mutable]*42; x[7].mutate(); x[0].mutated == True).
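For reference, the 2D-list lesson mentioned above can be shown concretely (a minimal sketch):

```python
# Sequence multiplication copies references, so every slot aliases the
# same inner list.
x = [[]] * 42
x[7].append('mutated')
print(x[0])            # ['mutated'] -- same object as x[7]
print(x[0] is x[7])    # True

# A comprehension evaluates the element expression once per slot:
y = [[] for _ in range(42)]
y[7].append('mutated')
print(y[0])            # [] -- distinct objects
```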
Comments (from anyone) on my revised proposal (from the 2nd email)? I would particularly appreciate alternate syntax suggestions.
Chris Rebert cvrebert@gmail.com wrote:
Josiah Carlson wrote:
As provided by Calvin Spealman, the above can be fixed with:
def popo(x=None):
x = x if x is not None else []
x.append(666)
print x
I would also mention that forcing users to learn about mutable arguments and procedural programming is not a bad thing. Learning the "gotcha" of mutable default arguments is a very useful lesson, and to remove that lesson, I believe, wouldn't necessarily help new users to Python, or new programmers in general.
Maybe you are taking me a bit too seriously, but hopefully this will add some levity; I'm a poo-poo head. Moving on...
First, your 'fix' misses the point: though the proposed feature isn't necessary, and code can be written without using it, it allows mutable default argument values to be expressed more clearly and succinctly than the idiom your 'fix' uses.
As I stated, it wasn't my fix. And using previously existing syntax that adds 1 line to a function to support a particular desired result, I think, is perfectly reasonable. Had the conditional syntax been available for the last decade, those "gotchas" pages would have said "mutable default arguments are tricky, always use the following, and it will probably be the right thing" and moved on.
Second, Python isn't (inherently) about teaching new programmers about programming, and what is good for newbies isn't necessarily good for experienced programmers.
Indeed, and what may be good for certain experienced programmers, may not be good for other experienced programmers, or for the language in general. And personally, I am not sure that I could be convinced that a syntax to support what can be emulated by a single line is even worthy of addition. In the case of decorators, or even the py3k support for argument annotation, there are certain operations that can be made significantly easier. In this case, I'm not convinced that the extra syntax baggage is worthwhile.
Nevermind that it would be one more incompatible syntax that would make it difficult to write for 2.5/2.6 and 3.x .
I don't like new syntax for something like this, but I think the default
argument values can be fixed with semantic changes (which should not break
the most common current uses):
What I think should happen is compile a function like this
def popo(x=[]):
    x.append(666)
    print x
as if it had read
def popo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = []
    x.append(666)
    print x
This way, every execution of popo gets its own list. Of course,
__default_argument_marker__ is just a way to tell the python runtime that
no argument was provided, it should not be exposed to the language.
If a variable is used in the default argument, it becomes a closure
variable:
d = createMyListOfLists()
n = getDefaultIndex()
def foo(x=d[n]):
    x.append(666)
    print x
this is compiled as if it had read
d = createMyListOfLists()
n = getDefaultIndex()
def foo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = d[n]  # d and n are closure variables
    x.append(666)
    print x
d and n are looked up in foo's parent scope, which in this example is the
global scope. Of course the bytecode compiler should make sure d and n
don't name-clash with any variables used in the body of foo.
When you use variables as default value instead of literals, I think most of the time you intend to have the function do something to the same object the variable is bound to, instead of the function creating its own copy every time it's called. This behaviour still works with these semantics:
>>> a = []
>>> def foo(x=[[],a]):
...     x[0].append(123)
...     x[1].append(123)
...     print x
...
>>> foo()
[[123], [123]]
>>> foo()
[[123], [123, 123]]
>>> foo()
[[123], [123, 123, 123]]
foo is compiled as if it had read:
def foo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = [[],a]  # a is a closure variable
    x[0].append(123)
    x[1].append(123)
    print x
Another difference between this proposal and the current situation is
that it would be possible to change the value of a default argument after
the function is defined. However I don't think that would really be a
problem, and this behaviour is just the same as that of other closure
variables. Besides, this (what I perceive as a) problem with closure
variables is fixable on its own.
Jan
On Mon, 22 Jan 2007 01:49:51 +0100, Josiah Carlson jcarlson@uci.edu
wrote:
Chris Rebert cvrebert@gmail.com wrote:
Josiah Carlson wrote:
As provided by Calvin Spealman, the above can be fixed with:
def popo(x=None):
x = x if x is not None else []
x.append(666)
print x
I would also mention that forcing users to learn about mutable arguments and procedural programming is not a bad thing. Learning the "gotcha" of mutable default arguments is a very useful lesson, and to remove that lesson, I believe, wouldn't necessarily help new users to Python, or new programmers in general.
Maybe you are taking me a bit too seriously, but hopefully this will add some levity; I'm a poo-poo head. Moving on...
First, your 'fix' misses the point: though the proposed feature isn't necessary, and code can be written without using it, it allows mutable default argument values to be expressed more clearly and succinctly than the idiom your 'fix' uses.
As I stated, it wasn't my fix. And using previously existing syntax that adds 1 line to a function to support a particular desired result, I think, is perfectly reasonable. Had the conditional syntax been available for the last decade, those "gotchas" pages would have said "mutable default arguments are tricky, always use the following, and it will probably be the right thing" and moved on.
Second, Python isn't (inherently) about teaching new programmers about programming, and what is good for newbies isn't necessarily good for experienced programmers.
Indeed, and what may be good for certain experienced programmers, may not be good for other experienced programmers, or for the language in general. And personally, I am not sure that I could be convinced that a syntax to support what can be emulated by a single line is even worthy of addition. In the case of decorators, or even the py3k support for argument annotation, there are certain operations that can be made significantly easier. In this case, I'm not convinced that the extra syntax baggage is worthwhile.
Nevermind that it would be one more incompatible syntax that would make it difficult to write for 2.5/2.6 and 3.x .
On 1/24/07, Jan Kanis jan.kanis@phil.uu.nl wrote:
I don't like new syntax for something like this, but I think the default argument values can be fixed with semantic changes (which should not break the most common current uses):
What I think should happen is compile a function like this
def popo(x=[]):
    x.append(666)
    print x
as if it had read
def popo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = []
    x.append(666)
    print x
How is this different from the x=None idiom of today?
def f(inlist=None):
if inlist is None:
inlist=[]
The if (either 2 lines or against PEP8) is a bit ugly, but Calvin pointed out that you can now write it as
def f(inlist=None):
inlist = inlist if (inlist is not None) else []
I see below that you give it slightly different semantics, but I'm not entirely sure how to tell when those different semantics should apply (always? when the variable name is marked with __*__? When a specific non-None singleton appears?), or why you would ever want them.
When you use variables as default value instead of literals, I think most of the time you intend to have the function do something to the same object the variable is bound to, instead of the function creating its own copy every time it's called. This behaviour still works with these semantics:
>>> a = []
>>> def foo(x=[[],a]):
...     x[0].append(123)
...     x[1].append(123)
...     print x
...
>>> foo()
[[123], [123]]
>>> foo()
[[123], [123, 123]]
>>> foo()
[[123], [123, 123, 123]]
So you're saying that x[1] should be persistent because it (also) has a name (as 'a'), but x[0] should be recreated fresh on each call because it doesn't?
-jJ
On Thu, 25 Jan 2007 15:41:54 +0100, Jim Jewett jimjjewett@gmail.com
wrote:
On 1/24/07, Jan Kanis jan.kanis@phil.uu.nl wrote:
I don't like new syntax for something like this, but I think the default argument values can be fixed with semantic changes (which should not break the most common current uses):
What I think should happen is compile a function like this
def popo(x=[]):
    x.append(666)
    print x
as if it had read
def popo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = []
    x.append(666)
    print x
How is this different from the x=None idiom of today?
def f(inlist=None):
if inlist is None:
inlist=[]
The __default_argument_marker__ is not really a part of my proposal. You
can replace it with None everywhere if you want to. The reason I used it
is because using None can clash when the caller passes None in as an explicit
value like this:
def foo(x, y=None):
    y = getAppropriateDefaultValue() if y is None else y
    x.insert(y)
foo(bar, None)
Now if you want to have foo do bar.insert(None), calling foo(bar, None)
won't work.
However, I guess the risk of running into such a case in real code is negligible, and my urge to write correct code won out over writing understandable code. Just pretend I used None everywhere. (And that the compiler can magically distinguish between a default-argument None and a caller-provided None, if you wish.)
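For completeness, the None clash can also be avoided today without compiler magic, using a unique module-level sentinel. This is a standard idiom, sketched in modern Python; the computed default below is an illustrative stand-in for the getAppropriateDefaultValue() call:

```python
# A private object() is identical to nothing the caller could pass, so
# "argument omitted" and "explicit None" stay distinguishable.
_MISSING = object()

def foo(x, y=_MISSING):
    if y is _MISSING:
        y = 'default'          # stand-in for the computed default
    x.append(y)
    return x

print(foo([], None))           # [None] -- an explicit None is honored
print(foo([]))                 # ['default'] -- the default is computed
```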
The if (either 2 lines or against PEP8) is a bit ugly, but Calvin pointed out that you can now write it as
def f(inlist=None):
inlist = inlist if (inlist is not None) else []
I see below that you give it slightly different semantics, but I'm not entirely sure how to tell when those different semantics should apply (always? when the variable name is marked with __*__? When a specific non-None singleton appears?), or why you would ever want them.
Please just ignore the __default_argument_marker__ thing. I hope we agree
that the problem we're trying to solve is that while
def f(inlist=None):
    inlist = inlist if (inlist is not None) else []
works in the current python, it's non-intuitive and ugly, and it would be
nice to have python do 'the right thing' if we can find a nice way to make
it do that.
Oh, and the changed semantics would always be used.
When you use variables as default value instead of literals, I think most of the time you intend to have the function do something to the same object the variable is bound to, instead of the function creating its own copy every time it's called. This behaviour still works with these semantics:
>>> a = []
>>> def foo(x=[[],a]):
...     x[0].append(123)
...     x[1].append(123)
...     print x
...
>>> foo()
[[123], [123]]
>>> foo()
[[123], [123, 123]]
>>> foo()
[[123], [123, 123, 123]]
So you're saying that x[1] should be persistent because it (also) has a name (as 'a'), but x[0] should be recreated fresh on each call because it doesn't?
I think what python does currently can be improved upon. I think that if someone defines a function like def f(x=[]): ... he'll most likely want x to be an empty list every time the function is called. But just having python do an automagic copy(x) or deepcopy(x) is not going to work, because it can be expensive, isn't always necessary, and is sometimes plainly impossible, e.g.: def f(x=sys.stdout): ...
So, sometimes we want a new thing on every call, and sometimes we don't.
And sometimes we want to specify the default value with a literal, and
sometimes with a variable. My assumption is that these two differences
coincide most of the time. I also think my approach is more intuitive to
people new to python and not familiar with this quirk/wart.
You make it sound as if doing something different with a named variable
and a literal is strange, but this is exactly what happens in every normal
python expression:
>>> a = []
>>> b = [[], a]
>>> id(a)
12976416
>>> id(b[0])
12994640
>>> id(b[1])
12976416
>>> # let's execute the b = ... statement again (comparable to 'call a function again')
>>> b = [[], a]
>>> id(b[0])
12934800
>>> id(b[1])
12976416
b[0] gets recreated, while b[1] is not.
So, I think my solution of evaluating the default argument on every call
and letting any variables in the expression be closure variables
accomplishes everything we want:
On Fri, 26 Jan 2007 04:36:33 +0100, Chris Rebert cvrebert@gmail.com
wrote:
So, basically the same as my proposal except without syntax changes and with the <Foo()> default argument value semantics applied to all arguments. I'm okay with this, however some possible performance issues might exist with re-evaluating expensive default arg vals on every call where they're required. This is basically why my proposal required new syntax, so that people could use the old "eval once at definition-time" semantics on expensive default values to get better performance.
[snip]
If the performance issues aren't significant, I'm all for your proposal.
It'd be nice to not to have to add new syntax.
Well, I wasn't thinking about it as basically the same as your proposal,
but thinking about it again I think it is. (I was thinking more along the
lines of having this do what looks intuitive to me, by applying the normal
python language rules in the IMO 'right' way.)
on performance issues:
If you're using the x=None ... if x==None: x = [] trick the object gets
evaluated and recreated on every call anyway, so there's no change.
If you aren't using a literal as default, nothing gets re-evaluated, so no
problem either.
The only time it could become a problem is with code like this:
def foo(x=createExpensiveThing()):
    return x.bar()
If the function modifies x, you'll probably want to use a fresh x every call anyway (using x=None). If you do want x to be persistent, chances
are you already have a reference to it somewhere, but if you don't you'll
have to create one. The reference doesn't get re-evaluated, so there's no
performance issue, only possibly a namespace clutter issue.
If the function doesn't modify x it may give rise to a performance issue,
but that can easily be solved by creating the default before defining the
function and using the variable. Rejecting this proposition because of
this seems like premature optimisation to me.
Another slight performance loss may be the fact that variables used in a
default value will sometimes become closure variables, which are slightly
slower than locals. However, these variables are not used in the function's body, and it is only an issue if we're defining a function inside another function. I think this point is negligible.
"Jan Kanis" jan.kanis@phil.uu.nl wrote:
I hope we agree that the problem we're trying to solve is that while

def f(inlist=None):
    inlist = inlist if (inlist is not None) else []

works in the current python, it's non-intuitive and ugly, and it would be nice to have python do 'the right thing' if we can find a nice way to make it do that.
I'm going to have to disagree on the 'non-intuitive and ugly' claim. We are just going to have to agree to disagree.
On Mon, 29 Jan 2007 08:38:39 +0100, Roman Susi rnd@onego.ru wrote:
This is what incremental dynamic semantics is about. So, the suggestion is good only as a separate feature, but is IMHO wrong if considered in the language design as a whole.
wtf is incremental dynamic semantics, in this context? I did some googling
but all I found referred to techniques related to programming
environments. This proposal is just about a change in the language.
On Sat, 27 Jan 2007 06:30:00 +0100, Josiah Carlson jcarlson@uci.edu
wrote:
"Jan Kanis" jan.kanis@phil.uu.nl wrote:
I hope we agree that the problem we're trying to solve is that while [snip] I'm going to have to disagree on the 'non-intuitive and ugly' claim. We are just going to have to agree to disagree.
On Mon, 29 Jan 2007 08:38:39 +0100, Roman Susi rnd@onego.ru wrote:
Hello!
I'd like to say outright that this is a bad idea which complicates matters more than it provides solutions. Right now it is enough to know that the part from def to ":" is executed at definition time.
Well, it's good to be clear on where the disagreements lie. However I'm
not yet ready to let it rest at that without some more arguments.
As Chris pointed out in his first mail, this 'wart' is mentioned on
several lists of python misfeatures: [0][1][2]. I'd like to add to this
that even the python documentation finds this issue severe enough to issue
an "Important warning"[4].
It seems clear that this behaviour is a gotcha, at least for newbies. This
could be excused if there were a good reason to spend the additional time
learning this behaviour, but some of the links state, and my assumption
is, that there are very few situations where re-evaluating causes a
problem that isn't easily fixable.
The semantics which I'd like to have are even easier than the current
semantics: everything in a function, be it before or after the colon, is
executed when the function is called.
Of course, as Collin Winter pointed out, the burden of proof of showing
that these semantics aren't going to be a problem is still on the pep
proponents.
On the other hand, are there really any good reasons to choose the current
semantics of evaluation at definition time? What I've heard basically
boils down to two arguments:
So, are there any _other_ arguments in favour of the current semantics??
[0] 10 Python pitfalls (http://zephyrfalcon.org/labs/python_pitfalls.html)
[1] Python Gotchas
(http://www.ferg.org/projects/python_gotchas.html#contents_item_6)
[2] When Pythons Attack
(http://www.onlamp.com/pub/a/python/2004/02/05/learn_python.html?page=2)
[4] Python manual - 4. More control flow tools
(http://docs.python.org/tut/node6.html#SECTION006710000000000000000)
On 1/30/07, Jan Kanis jan.kanis@phil.uu.nl wrote:
On the other hand, are there really any good reasons to choose the current semantics of evaluation at definition time?
While I sympathize with the programmer that falls for this common Python gotcha, and would not have minded if Python's semantics were different from the start (though the current behavior is cleaner and more consistent), making such a radical change to such a core part of the language semantics now is a very bad idea for many reasons.
What I've heard basically boils down to two arguments:
The argument here is not "let's not change anything because it's change," but rather "let's not break large amounts of existing code without a very good reason." As has been stated here by others, making obsolete a common two-line idiom is not a compelling enough reason to do so.
Helping out beginning Python programmers, while well-intentioned, doesn't feel like enough of a motivation either. Notice that the main challenge for the novice programmer is not to learn how default arguments work -- novices can learn to recognize and write the idiom easily enough -- but rather to learn how variables and objects work in general.
>>> a = b = ['foo']
>>> c = d = 42
>>> a += ['bar']
>>> c += 1
>>> b
['foo', 'bar']
>>> d
42
At some point in his Python career, a novice is going to have to understand why b "changed" but d didn't. Fixing the default argument "wart" doesn't remove the necessity to understand the nature of mutable objects and variable bindings in Python; it just postpones the problem. This is a fact worth keeping in mind when deciding whether the sweeping change in semantics is worth the costs.
Though it's been decried here as unPythonic, I can't be the only person who uses the idiom def foo(..., cache={}): for making a cache when the function in question does not rise to the level of deserving to be a class object instead. I don't apologize for finding it less ugly than using a global variable.
I know I'm not the only user of the idiom because I didn't invent it -- I learned it from the Python community. And the fact that people have already found usages of the current default argument behavior in the standard library is an argument against the "unPythonic" claim.
I'm reminded of GvR's post on what happened when he made strings non-iterable in a local build (iterable strings being another "wart" that people thought needed fixing): http://mail.python.org/pipermail/python-3000/2006-April/000824.html
So, are there any _other_ arguments in favour of the current semantics??
Yes. First, consistency. What do the three following Python constructs have in common?
1) lambda x=foo(): None
2) (x for x in foo())
3) def bar(x=foo()): pass
Answer: all three evaluate foo() immediately, choosing not to defer the evaluation to when the resulting object is invoked, even though they all reasonably could.
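This immediate evaluation is easy to observe in today's Python (a Python 3 sketch; `foo` here is just an illustrative stand-in):

```python
calls = []

def foo():
    # Record each invocation so we can see *when* foo() runs.
    calls.append("called")
    return [1, 2, 3]

f = lambda x=foo(): x     # default value: foo() runs right here
g = (x for x in foo())    # outermost iterable: foo() runs right here
def bar(x=foo()):         # default value: foo() runs right here
    pass

# foo() has already run three times, before f, g, or bar is ever used:
print(calls)  # ['called', 'called', 'called']
```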
It's especially notable that the recently-added feature (generator expressions) follows existing precedent. This was not accidental, but rather a considered design decision. Two paragraphs from PEP 289 could apply equally well to your proposal:
| Various use cases were proposed for binding all free variables when | the generator is defined. And some proponents felt that the resulting | expressions would be easier to understand and debug if bound | immediately.
| However, Python takes a late binding approach to lambda expressions | and has no precedent for automatic, early binding. It was felt that | introducing a new paradigm would unnecessarily introduce complexity.
In fact, the situation here is worse. PEP 289 is arguing against early binding of free variables as being complex. You're not proposing an early binding, but rather a whole new meaning of the "=" token, "save this expression for conditional evaluation later." It's never meant anything like that before.
Second, a tool can't fix all usages of the old idiom. When things break, they can break in subtle or confusing ways. Consider my module "greeter":
== begin greeter.py ==
import sys

def say_hi(out=sys.stdout):
    print >> out, "Hi!"

del sys  # don't want to leak greeter.sys to the outside world
== end greeter.py ==
Nothing I've done here is strange or unidiomatic, and yet your proposed change breaks it, and it's unclear how an automated tool should fix it. What's worse about the breakage is that it doesn't break when greeter is imported, or even when greeter.say_hi is called with an argument. It might take a while before getting a very surprising error "global name 'sys' is not defined".
Third, the old idiom is less surprising.
def foo(x=None):
    if x is None:
        x = <some_expr>
<some_expr> may take arbitrarily long to complete. It may have side effects. It may throw an exception. It is evaluated inside the function call, but only evaluated when the default value is used (or the function is passed None).
There is nothing surprising about any of that. Now:
def foo(x=<some_expr>): pass
Everything I said before applies. The expression can take a long time, have side effects, throw an exception. It is conditionally evaluated inside the function call.
Only now, all of that is terribly confusing and surprising (IMO).
Greg F
(my response is a bit late, I needed some time to come up with a good
answer to your objections)
On Tue, 30 Jan 2007 16:48:54 +0100, Greg Falcon veloso@verylowsodium.com
wrote:
On 1/30/07, Jan Kanis jan.kanis@phil.uu.nl wrote:
On the other hand, are there really any good
reasons to choose the
current
semantics of evaluation at definition time?
While I sympathize with the programmer that falls for this common Python gotcha, and would not have minded if Python's semantics were different from the start (though the current behavior is cleaner and more consistent), making such a radical change to such a core part of the language semantics now is a very bad idea for many reasons.
It would be a py 3.0 change. Other important stuff is going to change as
well. This part of python is IMO not so central to the core that it
can't change at all. Especially since the overwhelming majority of all
uses of default args have immutable values, their behaviour isn't going
to change anyway (judging by the usage in the std lib).
Things like list comprehensions and generators were a much greater change
to python, drastically changing the way an idiomatic python program is
written. They were added in 2.x because they could be implemented backward-
compatibly. With python 3.0, backward compatibility isn't so important
anymore. The whole reason for python 3.0's existence is to fix backward-
incompatible stuff.
What I've heard basically boils down to two arguments:
The argument here is not "let's not change anything because it's change," but rather "let's not break large amounts of existing code without a very good reason." As has been stated here by others, making obsolete a common two-line idiom is not a compelling enough reason to do so.
py3k is going to break large amounts of code anyway. This pep certainly
won't break most of it. And there's going to be an automatic py2 -> py3
refactoring tool that can catch any possible breakage from this pep as
well.
Helping out beginning Python programmers, while well-intentioned, doesn't feel like enough of a motivation either. Notice that the main challenge for the novice programmer is not to learn how default arguments work -- novices can learn to recognize and write the idiom easily enough -- but rather to learn how variables and objects work in general. [snip] At some point in his Python career, a novice is going to have to understand why b "changed" but d didn't. Fixing the default argument "wart" doesn't remove the necessity to understand the nature of mutable objects and variable bindings in Python; it just postpones the problem. This is a fact worth keeping in mind when deciding whether the sweeping change in semantics is worth the costs.
The change was never intended to prevent newbies from learning about
pythons object model. There are other ways to do that. But keeping a
'wart' because newbies will learn from it seems like really bad reasoning,
language-design wise.
Though it's been decried here as unPythonic, I can't be the only person who uses the idiom def foo(..., cache={}): for making a cache when the function in question does not rise to the level of deserving to be a class object instead. I don't apologize for finding it less ugly than using a global variable.
How often do you use this compared to the x=None idiom?
This idiom is really the only idiom that's going to break.
There are many ways around it; I wouldn't mind an @cache(var={}) decorator
somewhere (perhaps in the stdlib). These kinds of things seem to be exactly
what decorators are good at.
I know I'm not the only user of the idiom because I didn't invent it -- I learned it from the Python community. And the fact that people have already found usages of the current default argument behavior in the standard library is an argument against the "unPythonic" claim.
I'm reminded of GvR's post on what happened when he made strings non-iterable in a local build (iterable strings being another "wart" that people thought needed fixing): http://mail.python.org/pipermail/python-3000/2006-April/000824.html
In that thread, Guido is at first in favour of making strings
non-iterable, one of the arguments being that it sometimes bites people
who expect e.g. a list of strings and get a string. He decides not to make
the change because there appear to be a number of valid use cases that are
hard to change, and the number of people actually getting bitten by it is
actually quite small. (To support that last part, note for example that
none of the 'python problems' pages listed in the pep talk about string
iteration while all talk about default arguments, some with dire warnings
and quite a bit of text.)
In the end, the numbers are going to be important. There seems to be only
a single use case in favour of definition time semantics for default
variables (caching), which isn't very hard to do in a different way.
Though seasoned python programmers don't get bitten by default args all
the time, they have to work around it all the time using =None.
If it turns out that people are actually using caching and other idioms
that require definition time semantics all the time, and the =None idiom
is used only very rarely, I'd be all in favour of rejecting this pep.
So, are there any _other_ arguments in favour of the current semantics??
Yes. First, consistency.
[factoring out the first argument into another email. It's taking me some
effort to get my head around the early/late binding part of the generator
expressions pep, and the way you find an argument in that. As far as I
understand it currently, either you or I do not understand that part of
the pep correctly. I'll try to get this mail out somewhere tomorrow]
Second, a tool can't fix all usages of the old idiom. When things break, they can break in subtle or confusing ways. Consider my module "greeter":
== begin greeter.py ==
import sys

def say_hi(out=sys.stdout):
    print >> out, "Hi!"

del sys  # don't want to leak greeter.sys to the outside world
== end greeter.py ==
Nothing I've done here is strange or unidiomatic, and yet your proposed change breaks it, and it's unclear how an automated tool should fix it.
Sure this can be fixed by a tool:
import sys

@caching(out=sys.stdout)
def say_hi(out):
    print >> out, "Hi!"

del sys
where the function with the 'caching' wrapper checks to see if an argument
for 'out' is provided, or else provides it itself. The caching(out =
sys.stdout) is actually a function _call_, so its sys.stdout argument gets
evaluated immediately.
possible implementation of caching:
def caching(**cachevars):
    def inner(func):
        def wrapper(**argdict):
            for var in cachevars:
                if var not in argdict:
                    argdict[var] = cachevars[var]
            return func(**argdict)
        return wrapper
    return inner
Defining a decorator unfortunately requires three levels of nested
functions, but apart from that the thing is pretty straightforward, and it
only needs to be defined once to use on every occasion of the caching
idiom.
It doesn't currently handle positional vars, but that can be added.
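A self-contained, runnable version of this idea in modern Python might look like the following sketch (keyword-only, as noted above; the decorator name and the sample function are illustrative, not an existing stdlib API):

```python
import functools

def caching(**cachevars):
    """Give the named keyword arguments a definition-time default
    object that is shared across calls (keyword-only sketch)."""
    def inner(func):
        @functools.wraps(func)
        def wrapper(**argdict):
            for var, default in cachevars.items():
                argdict.setdefault(var, default)
            return func(**argdict)
        return wrapper
    return inner

@caching(cache={})
def square(n, cache):
    # 'cache' is the single dict created when @caching(...) ran,
    # so results persist between calls.
    if n not in cache:
        cache[n] = n * n  # stand-in for an expensive computation
    return cache[n]

print(square(n=3))  # 9, computed and stored in the shared cache
print(square(n=3))  # 9, now served from the cache
```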
What's worse about the breakage is that it doesn't break when greeter is imported,
That's true of any function with a bug in it. Do you want to abandon
functions altogether?
or even when greeter.say_hi is called with an argument.
Currently, for people using the x=None idiom, the 'if x is None:
<calculate default value>' check is a branch in the code. That's why you
need to test _all_ possible branches in your unit tests. Analogously, you
need to test all combinations of arguments if you want to catch as many
bugs as possible.
It might take a while before getting a very surprising error "global name 'sys' is not defined".
However, your greeter module actually has a slight bug. What if I do this:
import sys, greeter

sys.stdout = my_output_proxy()
greeter.say_hi()
Now say_hi() still uses the old sys.stdout, which is most likely not what
you want. If greeter were implemented like this:
import sys as _sys

def say_hi(out=_sys.stdout):
    print >> out, "Hi!"
under the proposed semantics, it would all by itself do a late binding of
_sys.stdout, so when I change sys.stdout somewhere else, say_hi uses the
new stdout.
Deleting sys in order not to 'leak' it to any other module is really not
useful. Everybody knows that python does not actually enforce
encapsulation, nor does it provide any kind of security barrier between
modules. So if some other module wants to get at sys, it can get there
anyway; and if you want to indicate that sys isn't exported and greeter's
sys shouldn't be messed around with, the renaming import above does that
just fine.
Third, the old idiom is less surprising.
def foo(x=None):
    if x is None:
        x = <some_expr>
<some_expr> may take arbitrarily long to complete. It may have side effects. It may throw an exception. It is evaluated inside the function call, but only evaluated when the default value is used (or the function is passed None).
There is nothing surprising about any of that. Now:
def foo(x=<some_expr>): pass
Everything I said before applies. The expression can take a long time, have side effects, throw an exception. It is conditionally evaluated inside the function call.
Only now, all of that is terribly confusing and surprising (IMO).
Read the "what's new in python 3.0" (assuming the pep gets incorporated,
of course).
Exception tracebacks and profiler stats will point you at the right line,
and you will figure it out. As you said above, all of it is true under the
current =None idiom, so there are no totally new ways in which a program
can break. If you know the ways current python can break (take too long,
unwanted side effects, exceptions) you will figure it out in the new
version.
Anyway, many python newbies consider it confusing and surprising that an
empty list default value doesn't stay empty, and all other pythoneers have
to work around it a lot of the time. It will be a pretty unique python
programmer whose program breaks in the ways mentioned above because the
default expression is evaluated at call time, wouldn't have broken under
python's current behaviour, and who isn't able to figure out what happened
in a reasonable amount of time. So even if your argument holds, it will
still be a net win to accept the pep.
Greg F
Python-ideas mailing list Python-ideas@python.org http://mail.python.org/mailman/listinfo/python-ideas
Jan Kanis wrote:
On Mon, 29 Jan 2007 08:38:39 +0100, Roman Susi rnd@onego.ru wrote:
This is what incremental dynamic semantics is about. So, the suggestion is good only as separated feature, but is IMHO wrong if considered in the language design as a whole.
wtf is incremental dynamic semantics, in this context? I did some googling but all I found referred to techniques related to programming environments. This proposal is just about a change in the language.
Oh, I thought this is very fundamental and well-known as it is the second line of Guido's definition of Python:
"Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. ..."
As I understand it, it means that statements are read sequencially and the semantics of names depends on the flow control.
The proposal (as it seems to me) wants to make a change to the language that is inconsistent with its natural dynamic semantics. It's like putting an implicit:

if RUN_FIRST_TIME:
    do this
else:
    do that

behind the same line of code (the def statement's first line).
[snip]
Well, it's good to be clear on where the disagreements lie. However I'm not yet ready to let it rest at that without some more arguments.
As Chris pointed out in his first mail, this 'wart' is mentioned on several lists of python misfeatures: [0][1][2]. I'd like to add to this that even the python documentation finds this issue severe enough to issue an "Important warning"[4].
It seems clear that this behaviour is a gotcha, at least for newbies. This could be excused if there is a good reason to spend the additional time learning this behaviour, but some of the links state, and my
But as somebody already said, the alternative is even worse... It's quite a bit easier to mention 3-5 Python warts up front to newbies than to introduce a subtle exception in the semantics and noise words such as "new" which have no other use elsewhere.
assumption is, that there are very few situations where re-evaluating causes a problem and which isn't easily fixable. The semantics which I'd like to have are even easier than the current semantics: everything in a function, be it before or after the colon, is executed when the function is called. Of course, as Collin Winter pointed out, the burden of proof of showing that these semantics aren't going to be a problem is still on the pep proponents.
So, are there any _other_ arguments in favour of the current semantics??
Repeat the mantra:
Python's dynamic semantics will not change
Python's dynamic semantics will not change
Python's dynamic semantics will not change
;-)
Roman
On 1/30/07, Roman Susi rnd@onego.ru wrote:
The proposal (as it seems to me) wants to make a change to the language that is inconsistent with its natural dynamic semantics. It's like putting an implicit:

if RUN_FIRST_TIME:
    do this
else:
    do that

behind the same line of code (the def statement's first line).
No, my proto-PEP has never contained semantics like that anywhere in it.
Well, it's good to be clear on where the disagreements lie. However I'm not yet ready to let it rest at that without some more arguments.
As Chris pointed out in his first mail, this 'wart' is mentioned on several lists of python misfeatures: [0][1][2]. I'd like to add to this that even the python documentation finds this issue severe enough to issue an "Important warning"[4].
It seems clear that this behaviour is a gotcha, at least for newbies. This could be excused if there is a good reason to spend the additional time learning this behaviour, but some of the links state, and my
But as somebody already said, the alternative is even worse... It's quite a bit easier to mention 3-5 Python warts up front to newbies than to introduce a subtle exception in the semantics and noise words such as "new" which have no other use elsewhere.
Sidenote: There could instead be a new keyword 'once' to indicate the old semantics. I think the PEP related to adding a switch statement proposes the same keyword for a similar use.
On Tue, 30 Jan 2007 20:29:34 +0100, Roman Susi rnd@onego.ru wrote:
Jan Kanis wrote:
On Mon, 29 Jan 2007 08:38:39 +0100, Roman Susi rnd@onego.ru wrote:
This is what incremental dynamic semantics is about. So, the suggestion is good only as separated feature, but is IMHO wrong if considered in the language design as a whole.
wtf is incremental dynamic semantics, in this context? I did some googling but all I found referred to techniques related to programming environments. This proposal is just about a change in the language.
Oh, I thought this is very fundamental and well-known as it is the second line of Guido's definition of Python:
"Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. ..."
As I understand it, it means that statements are read sequencially and the semantics of names depends on the flow control.
I never made a link with this description of python and the term
'incremental dynamic semantics'. Guess I learned something new today.
The proposal (as it seems to me) wants to make a change to the language that is inconsistent with its natural dynamic semantics. It's like putting an implicit:

if RUN_FIRST_TIME:
    do this
else:
    do that

behind the same line of code (the def statement's first line).
[snip]
I'm certainly no proponent of doing something like this. Do note that the
discussion is between evaluating default expressions at definition time or
at call time, and the pep entails evaluating default expressions on
/every/ function call, just like the body of the function is evaluated on
every call. There's already other code being evaluated at both def time
and call time, so the proposal does not introduce anything new to python's
evaluation model.
IMO having the default exprs evaluate at call time is in no way against
python's language design.
Well, it's good to be clear on where the disagreements lie. However I'm not yet ready to let it rest at that without some more arguments.
As Chris pointed out in his first mail, this 'wart' is mentioned on several lists of python misfeatures: [0][1][2]. I'd like to add to this that even the python documentation finds this issue severe enough to issue an "Important warning"[4].
It seems clear that this behaviour is a gotcha, at least for newbies. This could be excused if there is a good reason to spend the additional time learning this behaviour, but some of the links state, and my
But as somebody already said, the alternative is even worse... It's quite a bit easier to mention 3-5 Python warts up front to newbies than to introduce a subtle exception in the semantics and noise words such as "new" which have no other use elsewhere.
But it's much better to just eliminate the warts without introducing
subtle exceptions. The pep proposes evaluating default exprs on every
function call, just like the function body is. No new exceptions are
introduced. The fact that newbies often expect default values to be fresh
on every call seems to entail that they won't be surprised much when the
idiom of using default values as caches no longer works, if the pep gets
accepted. Old time pythoneers who know what's new in 3.0 won't be
surprised either. I'm not in favour of introducing new noise words or
other new syntax; I just want to 'fix' python's current semantics.
assumption is, that there are very few situations where re-evaluating causes a problem and which isn't easily fixable. The semantics which I'd like to have are even easier than the current semantics: everything in a function, be it before or after the colon, is executed when the function is called. Of course, as Collin Winter pointed out, the burden of proof of showing that these semantics aren't going to be a problem is still on the pep proponents.
So, are there any _other_ arguments in favour of the current semantics??
Repeat the mantra:
Python's dynamic semantics will not change
Python's dynamic semantics will not change
Python's dynamic semantics will not change
;-)
If anything, the proposal is going to _improve_ python's dynamic,
late-binding nature.
Oh, and it's nice to know that repeating mantras now counts as an 'argument' <wink>
So, basically the same as my proposal except without syntax changes and with the <Foo()> default argument value semantics applied to all arguments. I'm okay with this, however some possible performance issues might exist with re-evaluating expensive default arg vals on every call where they're required. This is basically why my proposal required new syntax, so that people could use the old "eval once at definition-time" semantics on expensive default values to get better performance.
Also, since people (including me) don't really like the <>-syntax, I've come up with some real syntax possibilities for my proposal:
def foo(bar=new baz):
def foo(bar=fresh baz):
def foo(bar=separate baz):
def foo(bar=another baz):
def foo(bar=unique baz):
I personally like 'fresh'. It seems accurate and not too long. I welcome other thoughts on better syntax for this.
If the performance issues aren't significant, I'm all for your proposal. It'd be nice to not to have to add new syntax.
Jan Kanis wrote:
I don't like new syntax for something like this, but I think the default argument values can be fixed with semantic changes (which should not break the most common current uses):
What I think should happen is compile a function like this
def popo(x=[]):
    x.append(666)
    print x
as if it had read
def popo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = []
    x.append(666)
    print x
This way, every execution of popo gets its own list. Of course, __default_argument_marker__ is just a way to tell the python runtime that no argument was provided, it should not be exposed to the language.
If a variable is used in the default argument, it becomes a closure variable:
d = createMyListOfLists() n = getDefaultIndex()
def foo(x=d[n]):
    x.append(666)
    print x
this is compiled as if it had read
d = createMyListOfLists() n = getDefaultIndex()
def foo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = d[n]  # d and n are closure variables
    x.append(666)
    print x
d and n are looked up in foo's parent scope, which in this example is the global scope. Of course the bytecode compiler should make sure d and n don't name-clash with any variables used in the body of foo.
When you use a variable as a default value instead of a literal, I think most of the time you intend to have the function do something to the same object the variable is bound to, instead of the function creating its own copy every time it's called. This behaviour still works with these semantics:
>>> a = []
>>> def foo(x=[[], a]):
...     x[0].append(123)
...     x[1].append(123)
...     print x
...
>>> foo()
[[123], [123]]
>>> foo()
[[123], [123, 123]]
>>> foo()
[[123], [123, 123, 123]]
foo is compiled as if it had read:
def foo(x=__default_argument_marker__):
    if x == __default_argument_marker__:
        x = [[], a]  # a is a closure variable
    x[0].append(123)
    x[1].append(123)
    print x
Another difference between this proposal and the current situation is that it would become possible to change the value of a default argument after the function is defined. However, I don't think that would really be a problem, and this behaviour is just the same as that of other closure variables. Besides, this (what I perceive as a) problem with closure variables is fixable on its own.
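These call-time semantics can be emulated in today's Python with a private sentinel object (a Python 3 sketch; `_MISSING` plays the role of `__default_argument_marker__`, and `d` and `n` are illustrative stand-ins for the lookups above):

```python
_MISSING = object()  # stands in for __default_argument_marker__

d = [[0], [1], [2]]  # illustrative stand-in for createMyListOfLists()
n = 1                # illustrative stand-in for getDefaultIndex()

def foo(x=_MISSING):
    if x is _MISSING:
        x = d[n]  # d and n are looked up at call time, like closure variables
    x.append(666)
    return x

print(foo())  # [1, 666] -- uses the current value of d[n]
n = 2
print(foo())  # [2, 666] -- a later rebinding of n is picked up
```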
Jan
On Mon, 22 Jan 2007 01:49:51 +0100, Josiah Carlson jcarlson@uci.edu wrote:
Chris Rebert cvrebert@gmail.com wrote:
Josiah Carlson wrote:
As provided by Calvin Spealman, the above can be fixed with:
def popo(x=None):
    x = x if x is not None else []
    x.append(666)
    print x
I would also mention that forcing users to learn about mutable arguments and procedural programming is not a bad thing. Learning the "gotcha" of mutable default arguments is a very useful lesson, and to remove that lesson, I believe, wouldn't necessarily help new users to Python, or new programmers in general.
Maybe you are taking me a bit too seriously, but hopefully this will add some levity; I'm a poo-poo head. Moving on...
First, your 'fix' misses the point: though the proposed feature isn't necessary, and code can be written without using it, it allows mutable default argument values to be expressed more clearly and succinctly than the idiom your 'fix' uses.
As I stated, it wasn't my fix. And using previously existing syntax that adds 1 line to a function to support a particular desired result, I think, is perfectly reasonable. Had the conditional syntax been available for the last decade, those "gotchas" pages would have said "mutable default arguments are tricky, always use the following, and it will probably be the right thing" and moved on.
Second, Python isn't (inherently) about teaching new programmers about programming, and what is good for newbies isn't necessarily good for experienced programmers.
Indeed, and what may be good for certain experienced programmers, may not be good for other experienced programmers, or for the language in general. And personally, I am not sure that I could be convinced that a syntax to support what can be emulated by a single line is even worthy of addition. In the case of decorators, or even the py3k support for argument annotation, there are certain operations that can be made significantly easier. In this case, I'm not convinced that the extra syntax baggage is worthwhile.
Nevermind that it would be one more incompatible syntax that would make it difficult to write for 2.5/2.6 and 3.x .