The proposal to add bind to the function definition is silly, since we
can do the equivalent of def f(bind i, …) already using
>>> class bind(object):
...     def __init__(self, *args, **kwargs):
...         self.args, self.kwargs = args, kwargs
...     def __call__(self, f):
...         def inner(*args, **kwargs):
...             return f(self, *args, **kwargs)
...         return inner
>>> l = []
>>> for i in range(10):
...     @bind(i)
...     def my_func(bound_vars, *args, **kwargs):
...         return bound_vars.args[0]
...     l.append(my_func)
>>> [f() for f in l]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Just put the bind decorator into functools and the problem is solved.
This is better than the (UGLY!) default values hack, since in this
case it is impossible for your caller to accidentally overwrite the
value you wanted bound (or at least not without some stackframe
manipulation, at which point you get what you deserve).
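For contrast, here is the default-value hack referred to above (the names funcs and f are mine, purely for illustration). The bound value is exposed as an ordinary keyword argument, so a caller can override it by accident:

```python
funcs = []
for i in range(3):
    def f(i=i):          # the default-value hack: bind i at definition time
        return i
    funcs.append(f)

print([f() for f in funcs])   # [0, 1, 2] -- the bound values
print(funcs[0](42))           # 42 -- a caller silently overrode the binding
```

With the bind decorator above, f() takes no parameter that shadows the bound value, so this accident cannot happen.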
I also don't like Guido's proposed var and new keywords. With all due
respect, Python already has a perfectly good tool for adding new
scopes: functions. Just use a decorator like this map_maker to make an
imap of the for-loop you wanted to have a separate scope in.
>>> class map_maker(object):
...     def __init__(self, f):
...         self.f = f
...     def __call__(self, seq):
...         return (self.f(item) for item in seq)
>>> a = 1
>>> @map_maker
... def my_map(a):
...     print("Look ma, the letter a equals", a)
>>> list(my_map(range(10)))
Look ma, the letter a equals 0
Look ma, the letter a equals 1
Look ma, the letter a equals 2
Look ma, the letter a equals 3
Look ma, the letter a equals 4
Look ma, the letter a equals 5
Look ma, the letter a equals 6
Look ma, the letter a equals 7
Look ma, the letter a equals 8
Look ma, the letter a equals 9
[None, None, None, None, None, None, None, None, None, None]
>>> a
1
So, my proposal is that the API of the bind decorator be cleaned up
considerably and then added to functools. The map_maker API seems to
be good enough and could also go into functools.
To all of you who participated in this (not so fruitful) discussion,
I sincerely thank you for your precious time and thought. Although we
failed to produce anything very useful for later Python users to
settle the issue of colons, I do appreciate your effort and patience.
If any bad emotions were generated during the process, please do not
take it personally; that's not the purpose. After all, we all want to
make our beloved language better and better.
On 8-Feb-09, at 5:48 AM, spir wrote:
> Le Sun, 8 Feb 2009 03:17:25 -0330,
> Riobard Zhan <yaogzhan(a)gmail.com> a écrit :
> Hello Riobard,
> You know that I rather agree with you on the point that colon would
> rather be optional, that the purpose is similar to that of semi-
> colons, that the opponents' arguments are not convincing at all (I
> bet from the form of these arguments, that if colons were optional
> in python, most of them would fight against a proposition to make
> them obligatory ;-).
> Still, this debate goes nowhere now, and you just kill your own
> credibility. It's time to stop. Not only python will not change on
> that point, but the discussion does not bring any more clue to help
> understand the whys and hows of syntax/semantics.
> Have you had a look at cobra? http://cobra-language.com/
> la vida e estranya
On 6-Feb-09, at 8:41 PM, Mike Meyer wrote:
> That this consistency - ignoring trailing separators in list
> structures - can be misunderstood to be an optional ending separator
> in the degenerate case of a single statement is a good indication of
> why consistency isn't a trump property.
This is a very strange view of consistency to me. How many different
kinds of list separators do we have? I can only think of semicolons
and commas. I don't think semicolons are anything like commas. Non-
trailing semicolons can be omitted, while non-trailing commas cannot,
even if you put each item of [1,2,3] in separate lines.
If you don't think consistency counts (at least in this case), I
cannot argue with you---that's waaay off topic.
On 8-Feb-09, at 4:43 AM, Mike Meyer wrote:
>> Wait a minute... What do you mean by a "list" of statements? Is this
>> one list of length 2, or two lists of length 1?
>> a = 1
>> b = 2
> Two lists of length one. Each list is terminated by the new line.
If you think the above code is composed of two lists of length 1
instead of one list of length 2, then I guess we have completely
different views of how to group things into a list. That probably
explains why your definition feels so strange to me.
I'm not going to argue with you on this. I might never see the point
of thinking of the following code
a = 1; b = 2
c = 3
as a list of length 2 plus a list of length 1, instead of a list of
length 3. If I take your approach, I might treat
a = 1
b = 2
as two lists, too.
Nick Coghlan wrote:
> One important question to ask yourself is whether the semantics you want
> may make more sense as a new generator method (as happened with the
> addition of send() and throw()) rather than as new syntax.
> def f():
>     c = g()
>     yield *c
>     print c.result()
That turns one line into three and makes it impossible
to embed it in an expression. It's a very poor substitute
for what I have in mind.
> In particular, the return value of 'yield *' would likely still be
> needed for send() in the case where the subgenerator has already
> terminated, so the only sensible destination for the sent value is the
> generator that invoked 'yield *'.
The effect I'm after is the same as what would happen if
the subgenerator were yielding directly to the caller of
the outer generator. Since, except for the first send(), it's
only possible to send() something to a generator when it's
suspended in a yield, anything sent to the outer generator
after the subgenerator terminates would have to appear as
the return value of some later (ordinary) yield in the
outer generator itself or another subgenerator.
The full expansion, taking sends into account, of
result = yield *g()
would be something like
_g = g()
try:
    _v = yield _g.next()
    while True:
        _v = yield _g.send(_v)
except StopIteration, _e:
    result = _e.return_value
I think I've got that right. While it may look like the
last value assigned to _v gets lost, that's not actually
the case, because the last next() or send() call before
_g terminates never returns, raising StopIteration instead.
(Here I'm assuming the return value is passed back as an
argument to the StopIteration exception, something that I
think got proposed at one point but never adopted. The
return value could alternatively be attached to the generator
object.)
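For comparison, this is essentially the mechanism Python later adopted in PEP 380 (Python 3.3+): a return in a subgenerator is carried back on the StopIteration, and becomes the value of the yield from expression in the outer generator:

```python
def sub():
    yield 1
    yield 2
    return "done"            # carried back on the StopIteration

def outer():
    result = yield from sub()   # the modern spelling of 'yield *'
    yield result

print(list(outer()))   # [1, 2, 'done']
```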
On Sun, 8 Feb 2009 03:17:25 -0330
Riobard Zhan <yaogzhan(a)gmail.com> wrote:
> On 6-Feb-09, at 8:41 PM, Mike Meyer wrote:
>> That this consistency - ignoring trailing separators in list
>> structures - can be misunderstood to be an optional ending separator
>> in the degenerate case of a single statement is a good indication of
>> why consistency isn't a trump property.
> This is a very strange view of consistency to me. How many different
> kinds of list separators do we have? I can only think of semicolons
> and commas. I don't think semicolons are anything like commas. Non-
> trailing semicolons can be omitted, while non-trailing commas cannot,
> even if you put each item of [1,2,3] in separate lines.
Oops, I think I missed a clause. The last sentence should be "Non-
trailing semicolons can be omitted [if you put each statement in its
own line], while non-trailing commas cannot, even if you put each item
of [1,2,3] in separate lines."
On 8-Feb-09, at 3:34 AM, Mike Meyer wrote:
> You still don't understand the semantics of semicolons. Non-trailing
> semicolons are required, and can *not* be omitted. Try it and see:
> bhuda$ python
> Python 2.6 (r26:66714, Nov 11 2008, 07:45:20)
> [GCC 4.2.1 20070719 [FreeBSD]] on freebsd7
> Type "help", "copyright", "credits" or "license" for more information.
>>>> a = 1; b = 2
>>>> a = 1 b = 2
> File "<stdin>", line 1
> a = 1 b = 2
> SyntaxError: invalid syntax
> Only *trailing* semicolons - the one following the last statement in
> the list - can be omitted. Just like lists in list literals, in tuple
> literals (modulo zero & one element tuples), in dictionary literals,
> and as arguments to certain types of functions.
I'm really confused by your words. Here is a list of statements.
a = 1; # non-trailing semicolon of the list of statements
b = 2; # trailing semicolon of the list of statements
Both semicolons can be omitted.
Wait a minute... What do you mean by a "list" of statements? Is this
one list of length 2, or two lists of length 1?
a = 1
b = 2
[Redirecting to python-ideas]
2009/2/8 Bruce Leban <bruce(a)leapyear.org>:
> On Sat, Feb 7, 2009 at 11:16 PM, Arnaud Delobelle <arnodel(a)googlemail.com>
>> 2009/2/8 Bruce Leban <bruce(a)leapyear.org>:
>> > There *is* something in Python related to this that I find obviously
>> > different and that's local and global variables. I would prefer that
>> > all global variables have to be included in a global declaration. I
>> > dislike the fact that an assignment to a variable changes other
>> > references to that same name from local to global references. This
>> > sort of feels like "spooky action at a distance" to me.
>> IMHO it would be very tiresome to have to declare all global functions
>> and builtins used in a function as global. E.g.
>>
>> def foo(x):
>>     return x + 2
>>
>> def bar(x):
>>     global str, foo, int
>>     return str(foo(int(x)))
> I don't want it for functions, just for variables. I realize that those may
> be the same on some level but I don't think of them that way when I'm
> writing code.
That's impossible. Functions are Python objects which are bound to
variables at runtime. At compile time (when it has to be decided
which variable is local and which is global), there is no way to know
if a variable will be bound to a function or to another object.
Worse, many Python objects are callable without being functions.
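A small illustration of that last point (the class name Adder is mine): any instance with a __call__ method is callable without being a function, so no syntactic rule could separate "function variables" from "data variables":

```python
import types

class Adder(object):
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

add2 = Adder(2)
print(callable(add2))                        # True
print(isinstance(add2, types.FunctionType))  # False -- not a function
print(add2(40))                              # 42
```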
I wonder why there is no difference in syntax between binding and rebinding. Obviously, the semantics are not at all the same, for humans as well as for the interpreter:
* Binding: create a name, bind a value to it.
* Rebinding: change the value bound to the name.
I see several advantages to this distinction and no drawback. The first advantage, which imo is worthwhile enough on its own, is to let syntax match semantics, as the distinction *makes sense*.
A nice side-effect would be to allow detection of typographic or (human ;-) memory errors:
* When an error wrongly and silently creates a new name instead of raising a NameError exception. A distinct syntax for rebinding would prevent that.
* When an error wrongly and silently rebinds an existing name instead of raising a NameError exception. A distinct syntax for (first) binding would prevent that.
No need, I guess, to insist on the fact that such errors sometimes lead to long and difficult debugging, precisely because they are silent. This is because "a=1" is always a valid instruction, as there is no distinction between binding and rebinding.
I suspect a further advantage may be to get rid of "global" and "nonlocal" declarations -- which, as I see it, do not at all fit the python way. I may be wrong on that; still, it seems such declarations are necessary only because the above distinction is lacking. My rationale on this is:
* It is very common and helpful to allow a local variable to be named identically to one in an external scope.
* There is no binding/rebinding distinction in python syntax.
* So that whenever a name appears on the left side of an assignment, inside a non-global scope, there is no way to know whether the programmer intends to create a local name or to access a possibly existing external name.
* To resolve this ambiguity, python adopts the rule of creating a local name.
* Thus, it becomes impossible to rebind an external name from a local scope. Which is still useful in rather rare, but relevant, use cases.
* So that 'global', and later 'nonlocal', declarations had to be introduced in python.
(I tried to be as clear and step-by-step as I can so that this reasoning can easily be refuted if ever it holds errors I cannot see.)
It seems that if the second step did not hold, then there would be no reason for such declarations. Imagine that rebinding is spelt using ':='. Then, from a non-global scope:
* a=1 causes creation of a local name
* a:=1 rebinds a local name if it exists, or rebinds an external name if it exists (searching step-by-step up to module-level scope), or else raises NameError.
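The two proposed rules can be sketched in today's Python with a toy namespace class (the names Scope, bind and rebind are mine, purely for illustration):

```python
class Scope(object):
    """Toy namespace that distinguishes first binding from rebinding."""
    def __init__(self, parent=None):
        self.parent = parent
        self.names = {}

    def bind(self, name, value):
        # 'a = 1': always creates the name in the local scope
        self.names[name] = value

    def rebind(self, name, value):
        # 'a := 1': rebind locally if present, else walk outward,
        # else raise NameError -- no 'global'/'nonlocal' needed
        scope = self
        while scope is not None:
            if name in scope.names:
                scope.names[name] = value
                return
            scope = scope.parent
        raise NameError("no existing binding for %r" % name)

module = Scope()
module.bind("a", 1)
inner = Scope(module)
inner.rebind("a", 2)       # reaches the outer 'a'
print(module.names["a"])   # 2
```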
There may be reasons why such a behaviour is not the best a programmer would expect: I wait for your comments.
Obviously, for the sake of compatibility, this is more a base for discussion, if you find the topic interesting, than for a proposal for python 9000...
PS: As a reference to the thread on the sign ':' at the end of block headlines, the syntactic format I would actually enjoy is:
* binding name : value
* rebinding name :: value
la vida e estranya
On Sat, Feb 7, 2009 at 9:24 PM, <glyph(a)divmod.com> wrote:
> For what it's worth, I don't care if this is added. I can continue typing
> that stanza. I just know that if it *is* added, I'd find it a lot easier to
> read "yield from foo()" than "yield *foo()".
Similarly conditioned +1.
I've been re-examining from the ground up the whole state of affairs
in writing a debugger. One of the challenges of a debugger or any
source-code analysis tool is verifying that the source code that the
tool is reporting on corresponds to the compiled object under
execution. (For debuggers, this problem becomes more likely to occur
when you are debugging on a computer that isn't the same as the
computer where the code is running.)
For this, it would be useful to have a cryptographic hash like a SHA1
in the compiled object, but hopefully accessible via the module object
where the file path is stored.
I understand that there is an mtime timestamp in the .pyc, but this is
not as reliable as a cryptographic hash such as SHA1.
There seems to be some confusion in thinking the only use case for
this is in remote debugging where source code may be on a different
computer than where the code is running, but I do not believe this is
so. Here are two other situations which come up.
First is a code coverage tool like coverage.py which checks coverage
over several runs. Let's say the source code is erased and checked out
again, or edited and temporarily changed several times but in the end
the file stays the same. A SHA1 hash will show that the file hasn't
changed; mtime won't.
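A minimal sketch of the hash in question (the function name source_hash is mine): unlike mtime, it depends only on the file's bytes.

```python
import hashlib
import os
import tempfile

def source_hash(path):
    """SHA-1 of a source file's bytes: identical content gives an
    identical hash, no matter when the file was checked out or touched."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

# Demonstration: changing only the mtime leaves the hash untouched.
with tempfile.NamedTemporaryFile("wb", suffix=".py", delete=False) as f:
    f.write(b"x = 1\n")
    path = f.name
h1 = source_hash(path)
os.utime(path, (0, 0))           # reset atime/mtime to the epoch
assert source_hash(path) == h1   # content hash is unchanged
```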
A second, more contrived example is in some sort of secure
environment. Let's say I am using the compiled Python code (say, for
an embedded device) and someone offers me what's purported to be the
source code. How can I easily verify that this is correct?
In theory, I suppose if I have enough information about the version of
Python and the platform, I can compile the purported source, ignoring
some bits of information (like the mtime ;-) in the compiled
object. But one would have to be careful about getting compilers and
platforms the same, or understand how this changes compilation.
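One way to sidestep the mtime entirely, sketched under the assumption that compiler version and platform are held constant: fingerprint the marshalled code object rather than the .pyc file (the name code_fingerprint is mine; real .pyc header layouts vary by Python version).

```python
import hashlib
import marshal

def code_fingerprint(source, filename):
    """Hash of the compiled code object itself, ignoring any .pyc
    header fields such as the mtime."""
    code = compile(source, filename, "exec")
    return hashlib.sha1(marshal.dumps(code)).hexdigest()

src = "def f(x):\n    return x + 1\n"
# Two compilations of identical source give identical fingerprints,
# regardless of when they happen.
print(code_fingerprint(src, "m.py") == code_fingerprint(src, "m.py"))
```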