
On Sat, 16 May 2009 01:51:31 am spir wrote:
> - requires people to learn one more feature (so newbies will still be
> confused that def f(x=[]) doesn't behave as they expect).
>
> That's the relevant drawback for me. A solution that does not solve
> the issue. A new syntactic pattern to allow call time evaluation of
> defaults is a (costly) solution for people who don't need it.
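(For anyone who hasn't been bitten yet, the confusion spir refers to looks like this at the interactive prompt: the list default is created once, when the def statement runs, and the same list object is reused on every call.)

>>> def f(x=[]):
...     x.append(1)
...     return x
...
>>> f()
[1]
>>> f()  # same list as before, not a fresh empty one
[1, 1]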
There is no solution to the problem of newbies' confusion. The standard behaviour will remain in Python 2.x and almost certainly Python 3.x. The earliest it could change is Python 3.3: it could be introduced with a "from __future__ import defaults" in 3.2 and become standard in 3.3.
(It almost certainly will never be the standard behaviour, but if it did, that would be the earliest it could happen.)
And even if it did change, newbies would then be surprised and upset that def f(x=y) doesn't behave as they expect. Here's the current behaviour:
>>> y = result_of_some_complex_calculation()  # => 11
>>> def f(x=y):
...     return x+1
...
>>> f()
12
>>> y = 45
>>> f()
12
Given the proposed behaviour, that second call to f() would surprisingly return 46, or worse, raise a NameError if y is no longer in scope.
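For what it's worth, the usual sentinel idiom already gives you call-time semantics today, and it reproduces exactly that behaviour (a quick sketch; the lookup of y moves inside the function body, so it sees rebindings and can fail with NameError):

>>> def f(x=None):
...     if x is None:
...         x = y  # looked up at each call, not at def time
...     return x + 1
...
>>> y = 11
>>> f()
12
>>> y = 45
>>> f()
46
>>> del y
>>> f()
Traceback (most recent call last):
  ...
NameError: name 'y' is not defined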
The real problem is that people don't have a consistent expectation for default arguments. No matter what behaviour Python uses, people will be caught out by it sometimes.