I think it would be a good idea if Python tracebacks could be translated
into languages other than English - and it would set a good example.
For example, using French as my default locale language, instead of

    >>> 1/0
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ZeroDivisionError: integer division or modulo by zero

I might get something like

    >>> 1/0
    Suivi d'erreur (appel le plus récent en dernier) :
      Fichier "<stdin>", à la ligne 1, dans <module>
    ZeroDivisionError: division entière ou modulo par zéro
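One way to picture how this could work, as a rough sketch of my own (the "python-tracebacks" catalog name and the helper are invented for illustration, not an existing mechanism): route the fixed traceback strings through gettext, so translators could supply standard .mo catalogs. With no catalog installed, gettext falls back to the English originals.

```python
# Sketch only: translate traceback boilerplate via gettext.  The domain
# "python-tracebacks" is hypothetical; with no catalog installed the
# English strings pass through unchanged.
import gettext
import traceback

_ = gettext.translation("python-tracebacks", fallback=True).gettext

def format_translated_tb(exc):
    lines = [_("Traceback (most recent call last):")]
    for frame in traceback.extract_tb(exc.__traceback__):
        lines.append(_('  File "%s", line %d, in %s')
                     % (frame.filename, frame.lineno, frame.name))
    lines.append("%s: %s" % (type(exc).__name__, _(str(exc))))
    return "\n".join(lines)

try:
    1 / 0
except ZeroDivisionError as e:
    print(format_translated_tb(e))
```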
André
Here's an updated version of the PEP reflecting my
recent suggestions on how to eliminate 'codef'.
PEP: XXX
Title: Cofunctions
Version: $Revision$
Last-Modified: $Date$
Author: Gregory Ewing <greg.ewing(a)canterbury.ac.nz>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 13-Feb-2009
Python-Version: 3.x
Post-History:
Abstract
========
A syntax is proposed for defining and calling a special type of generator
called a 'cofunction'. It is designed to provide a streamlined way of
writing generator-based coroutines, and allow the early detection of
certain kinds of error that are easily made when writing such code, which
otherwise tend to cause hard-to-diagnose symptoms.
This proposal builds on the 'yield from' mechanism described in PEP 380,
and describes some of the semantics of cofunctions in terms of it. However,
it would be possible to define and implement cofunctions independently of
PEP 380 if so desired.
Specification
=============
Cofunction definitions
----------------------
A cofunction is a special kind of generator, distinguished by the presence
of the keyword ``cocall`` (defined below) at least once in its body. It may
also contain ``yield`` and/or ``yield from`` expressions, which behave as
they do in other generators.
From the outside, the distinguishing feature of a cofunction is that it cannot
be called the same way as an ordinary function. An exception is raised if an
ordinary call to a cofunction is attempted.
Cocalls
-------
Calls from one cofunction to another are made by marking the call with
a new keyword ``cocall``. The expression
::

    cocall f(*args, **kwds)
is evaluated by first checking whether the object ``f`` implements
a ``__cocall__`` method. If it does, the cocall expression is
equivalent to
::

    yield from f.__cocall__(*args, **kwds)
except that the object returned by ``__cocall__`` is expected to be an
iterator, so the step of calling ``iter()`` on it is skipped.
If ``f`` does not have a ``__cocall__`` method, or the ``__cocall__``
method returns ``NotImplemented``, then the cocall expression is
treated as an ordinary call, and the ``__call__`` method of ``f``
is invoked.
Objects which implement ``__cocall__`` are expected to return an object
obeying the iterator protocol. Cofunctions respond to ``__cocall__`` the
same way as ordinary generator functions respond to ``__call__``, i.e. by
returning a generator-iterator.
Certain objects that wrap other callable objects, notably bound methods,
will be given ``__cocall__`` implementations that delegate to the underlying
object.
Grammar
-------
The full syntax of a cocall expression is described by the following
grammar lines:
::

    atom: cocall | <existing alternatives for atom>
    cocall: 'cocall' atom cotrailer* '(' [arglist] ')'
    cotrailer: '[' subscriptlist ']' | '.' NAME
Note that this syntax allows cocalls to methods and elements of sequences
or mappings to be expressed naturally. For example, the following are valid:
::

    y = cocall self.foo(x)
    y = cocall funcdict[key](x)
    y = cocall a.b.c[i].d(x)
Also note that the final calling parentheses are mandatory, so that for example
the following is invalid syntax:
::

    y = cocall f    # INVALID
New builtins, attributes and C API functions
--------------------------------------------
To facilitate interfacing cofunctions with non-coroutine code, there will
be a built-in function ``costart`` whose definition is equivalent to
::

    def costart(obj, *args, **kwds):
        try:
            m = obj.__cocall__
        except AttributeError:
            result = NotImplemented
        else:
            result = m(*args, **kwds)
        if result is NotImplemented:
            raise TypeError("Object does not support cocall")
        return result
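To illustrate the intended protocol, here is a runnable sketch (not part of the specification) pairing the ``costart`` definition above with a hand-written ``__cocall__``; the ``Greeter`` class is invented purely for illustration.

```python
# Runnable sketch: the costart builtin proposed above, driven by an object
# that implements __cocall__ by hand (Greeter is invented for illustration).
def costart(obj, *args, **kwds):
    try:
        m = obj.__cocall__
    except AttributeError:
        result = NotImplemented
    else:
        result = m(*args, **kwds)
    if result is NotImplemented:
        raise TypeError("Object does not support cocall")
    return result

class Greeter:
    def __cocall__(self, name):
        # Behaves like a cofunction: returns a generator-iterator.
        yield                        # one suspension point
        return "hello " + name

it = costart(Greeter(), "world")
next(it)                             # run to the suspension point
try:
    next(it)
except StopIteration as e:
    print(e.value)                   # -> hello world
```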
There will also be a corresponding C API function
::

    PyObject *PyObject_CoCall(PyObject *obj, PyObject *args, PyObject *kwds)
It is left unspecified for now whether a cofunction is a distinct type
of object or, like a generator function, is simply a specially-marked
function instance. If the latter, a read-only boolean attribute
``__iscofunction__`` should be provided to allow testing whether a given
function object is a cofunction.
Motivation and Rationale
========================
The ``yield from`` syntax is reasonably self-explanatory when used for the
purpose of delegating part of the work of a generator to another function. It
can also be used to good effect in the implementation of generator-based
coroutines, but it reads somewhat awkwardly when used for that purpose, and
tends to obscure the true intent of the code.
Furthermore, using generators as coroutines is somewhat error-prone. If one
forgets to use ``yield from`` when it should have been used, or uses it when it
shouldn't have, the symptoms that result can be extremely obscure and confusing.
Finally, sometimes there is a need for a function to be a coroutine even though
it does not yield anything, and in these cases it is necessary to resort to
kludges such as ``if 0: yield`` to force it to be a generator.
The ``cocall`` construct addresses the first issue by making the syntax directly
reflect the intent, that is, that the function being called forms part of a
coroutine.
The second issue is addressed by making it impossible to mix coroutine and
non-coroutine code in ways that don't make sense. If the rules are violated, an
exception is raised that points out exactly what and where the problem is.
Lastly, the need for dummy yields is eliminated by making it possible for a
cofunction to call both cofunctions and ordinary functions with the same syntax,
so that an ordinary function can be used in place of a cofunction that yields
zero times.
Record of Discussion
====================
An earlier version of this proposal required a special keyword ``codef`` to be
used in place of ``def`` when defining a cofunction, and disallowed calling an
ordinary function using ``cocall``. However, it became evident that these
features were not necessary, and the ``codef`` keyword was dropped in the
interests of minimising the number of new keywords required.
The use of a decorator instead of ``codef`` was also suggested, but the current
proposal makes this unnecessary as well.
It has been questioned whether some combination of decorators and functions
could be used instead of a dedicated ``cocall`` syntax. While this might be
possible, to achieve equivalent error-detecting power it would be necessary
to write cofunction calls as something like
::

    yield from cocall(f)(args)
making them even more verbose and inelegant than an unadorned ``yield from``.
It is also not clear whether it is possible to achieve all of the benefits of
the cocall syntax using this kind of approach.
Prototype Implementation
========================
An implementation of an earlier version of this proposal in the form of patches
to Python 3.1.2 can be found here:
http://www.cosc.canterbury.ac.nz/greg.ewing/python/generators/cofunctions.h…
If this version of the proposal is received favourably, the implementation will
be updated to match.
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
Hello,
Python 3 has removed callable() under the justification that it's not
very useful and that duck typing (EAFP) should be used instead. However,
it has since been felt by many people that it was an annoying loss;
there are situations where you truly want to know whether something is a
callable without actually calling it (for example when writing
sophisticated decorators, or simply when you want to inform the user
of an API misuse).
The substitute of writing `isinstance(x, collections.Callable)` is
not good: 1) because it's wordier, and 2) because collections is really not
an intuitive place to look for a Callable ABC.
So, I would advocate bringing back the callable() builtin, which was
easy to use, helpful and semantically sane.
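The two spellings side by side (using collections.abc, where the Callable ABC now lives):

```python
# callable() vs. the ABC-based spelling -- both answer the same question,
# but one is clearly shorter and more discoverable.
from collections.abc import Callable

print(callable(len))                 # True
print(isinstance(len, Callable))     # True
print(callable(42))                  # False
```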
Regards
Antoine.
Disclaimer: this is a currently half-baked idea that needs some
discussion here if it is going to turn into something a bit more
coherent :)
On and off, I've been pondering the problem of the way implementation
details (like the real file structures of the multiprocessing and
unittest packages, or whether or not an interpreter uses the pure
Python or the C accelerated version of various modules) leak out into
the world via the __module__ attribute on various components. This
mostly comes up when discussing pickle compatibility between 2.x and
3.x, but it can show up in various guises whenever you start relying
on dynamic introspection.
As I see it, there are three basic ways of dealing with the problem:
1. Allow objects to lie about their source module
This is likely a terrible idea, since a function's global namespace
reference would disagree with its module reference. I suspect much
weirdness would result.
2. A pickle-specific module alias registry, since that is where the
problem comes up most often
A possible approach, but not necessarily a good one (since it isn't
really a pickle-specific problem).
3. An inspect-based module alias registry
That is, an additional query API (get_canonical_module_name?) in the
inspect module that translates from the implementation detail module
name to the "preferred" module name. The implementation could be as
simple as a "__canonical__" attribute in the module namespace.
I actually quite like option 3, with various things (such as pydoc)
updated to show *both* names when they're different. That way people
will know where to find official documentation for objects from
pseudo-packages and acceleration modules (i.e. under the canonical
name), without hiding where the actual implementation came from.
Pickle *generation* could then be updated to only send canonical
module names during normal operation, reducing the exposure of
implementation details like pseudo-packages and acceleration modules.
Whether or not runpy should set __canonical__ on the main module would
be an open question (probably not, *unless* runpy was also updated to
add the main module to sys.modules under its real name as well as
__main__).
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Hello all,
I often use python as a calculator or for simple operations using -c. It
would be enormously useful if python -c "..." would output on stdout the
result of the last evaluated expression in the same way that the interactive
interpreter does.
The following outputs nothing:
    python -c "12 / 4.1"
So you always have to prefix expressions with print to see the result.
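The proposed behaviour could be emulated in a few lines (the run_c helper and its exact semantics are my assumption, not an existing API): execute the source and, if the last statement is a bare expression, echo its repr the way the interactive interpreter does.

```python
# Hypothetical helper sketching what "python -c" could do: print the value
# of a trailing expression, like the REPL, and stay silent otherwise.
import ast

def run_c(source):
    tree = ast.parse(source)
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        # Split off the trailing expression so its value can be printed.
        last = ast.Expression(tree.body.pop().value)
        ns = {}
        exec(compile(tree, "<cmdline>", "exec"), ns)
        value = eval(compile(last, "<cmdline>", "eval"), ns)
        if value is not None:
            print(repr(value))
    else:
        exec(compile(tree, "<cmdline>", "exec"), {})

run_c("6 / 2")        # -> 3.0
run_c("12 / 4.1")     # prints the result, as the REPL would
run_c("x = 1")        # assignment: prints nothing, as today
```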
Michael
--
http://www.voidspace.org.uk/
May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
[Changed subject *and* list]
> 2010/12/31 Maciej Fijalkowski <fijall(a)gmail.com>
>> How do you know that range is a builtin you're thinking
>> about and not some other object?
On Fri, Dec 31, 2010 at 7:02 AM, Cesare Di Mauro
<cesare.di.mauro(a)gmail.com> wrote:
> By a special opcode which could do this work. ]:-)
That can't be the answer, because then the question would become "how
does the compiler know it can use the special opcode". This particular
issue (generating special opcodes for certain builtins) has actually
been discussed many times before. Alas, given Python's extremely
dynamic promises it is very hard to do it in a way that is
*guaranteed* not to change the semantics. For example, I could have
replaced builtins['range'] with something else; or I could have
inserted a variable named 'range' into the module's __dict__. (Note
that I am not talking about just creating a global variable named
'range' in the module; those the compiler could recognize. I am
talking about interceptions that a compiler cannot see, assuming it
compiles each module independently, i.e. without whole-program
optimizations.)
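The kind of interception in question takes only a few lines, which is exactly why a per-module compiler cannot rule it out:

```python
# Rebinding a builtin changes the behaviour of already-compiled code in
# every module, because 'range' is looked up at call time.
import builtins

def count_to(n):
    return list(range(n))      # global lookup falls through to builtins

print(count_to(3))             # [0, 1, 2]

original = builtins.range
builtins.range = lambda n: ["intercepted"]   # replace the builtin
try:
    print(count_to(3))         # ['intercepted']
finally:
    builtins.range = original  # always restore the real builtin
```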
Now, *in practice* such manipulations are rare (with the possible
exception of people replacing open() with something providing hooks
for e.g. a virtual filesystem) and there is probably some benefit to
be had. (I expect that the biggest benefit might well be from
replacing len() with an opcode.) I have in the past proposed to change
the official semantics of the language subtly to allow such
optimizations (i.e. recognizing builtins and replacing them with
dedicated opcodes). There should also be a simple way to disable them,
e.g. by setting "len = len" at the top of a module, one would be
signalling that len() is not to be replaced by an opcode. But it
remains messy and nobody has really gotten very far with implementing
this. It is certainly not "low-hanging fruit" to do it properly.
I should also refer people interested in this subject to at least
three PEPs that were written about this topic: PEP 266, PEP 267 and
PEP 280. All three have been deferred, since nobody was bold enough to
implement at least one of them well enough to be able to tell if it
was even worth the trouble. I haven't read any of them in a long
time, and they may well be outdated by current JIT technology. I just
want to warn folks that it's not such a simple matter to replace "for
i in range(....):" with a special opcode.
(FWIW, optimizing "x[i] = i" would be much simpler -- I don't really
care about the argument that a debugger might interfere. But again,
apart from the simplest cases, it requires a sophisticated parser to
determine that it is really safe to do so.)
--
--Guido van Rossum (python.org/~guido)
After learning a bit of Ocaml I started to like its pattern matching features.
Since then I want to have a "match" statement in Python. I wonder if anybody
else would like this too.
ML style pattern matching is syntactic sugar, that combines "if" statements
with tuple unpacking, access to object attributes, and assignments. It is a
compact, yet very readable syntax for algorithms, that would otherwise require
nested "if" statements. It is especially useful for writing interpreters, and
processing complex trees.
Instead of a specification in BNF, here is a function written with the
proposed pattern matching syntax. It demonstrates the features that I find
most important. The comments and the print statements explain what is done.
Proposed Syntax
---------------
def foo(x):
    match x with
    | 1 ->                      # Equality
        print("x is equal to 1")
    | a:int ->                  # Type check
        print("x has type int: %s" % a)
    | (a, b) ->                 # Tuple unpacking
        print("x is a tuple with length 2: (%s, %s)" % (a, b))
    | {| a, b |} ->             # Attribute existence and access
        print("x is an object with attributes 'a' and 'b'.")
        print("a=%s, b=%s" % (a, b))
    # Additional condition
    | (a, b, c) with a > b ->
        print("x is a tuple with length 3: (%s, %s, %s)" % (a, b, c))
        print("The first element is greater than the second element.")
    # Complex case
    | {| c:int, d=1 |}:Foo ->
        print("x has type Foo")
        print("x is an object with attributes 'c' and 'd'.")
        print("'c' has type 'int', 'd' is equal to 1.")
        print("c=%s, d=%s" % (c, d))
    # Default case
    | _ ->
        print("x can be anything")
Equivalent Current Python
-------------------------
The first four cases could be handled more simply, but handling all cases
in the same way leads, IMHO, to simpler code overall.
def foo(x):
    while True:
        # Equality
        if x == 1:
            print("x is equal to 1")
            break
        # Type check
        if isinstance(x, int):
            a = x
            print("x is an integer: %s" % a)
            break
        # Tuple unpacking
        if isinstance(x, tuple) and len(x) == 2:
            a, b = x
            print("x is a tuple with length 2: (%s, %s)" % (a, b))
            break
        # Attribute existence testing and access
        if hasattr(x, "a") and hasattr(x, "b"):
            a, b = x.a, x.b
            print("x is an object with attributes 'a' and 'b'.")
            print("a=%s, b=%s" % (a, b))
            break
        # Additional condition
        if isinstance(x, tuple) and len(x) == 3:
            a, b, c = x
            if a > b:
                print("x is a tuple with length 3: (%s, %s, %s)" % (a, b, c))
                print("The first element is greater than the second "
                      "element.")
                break
        # Complex case
        if isinstance(x, Foo) and hasattr(x, "c") and hasattr(x, "d"):
            c, d = x.c, x.d
            if isinstance(c, int) and d == 1:
                print("x has type Foo")
                print("x is an object with attributes 'c' and 'd'.")
                print("'c' has type 'int', 'd' is equal to 1.")
                print("c=%s, d=%s" % (c, d))
                break
        # Default case
        print("x can be anything")
        break
Additional Code to Run Function "foo"
-------------------------------------
class Bar(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

class Foo(object):
    def __init__(self, c, d):
        self.c = c
        self.d = d

foo(1)           # Equality
foo(2)           # Type check
foo((1, 2))      # Tuple unpacking
foo(Bar(1, 2))   # Attribute existence testing and access
foo((2, 1, 3))   # Additional condition
foo(Foo(2, 1))   # Complex case
foo("hello")     # Default case
I left out dict and set, because I'm not sure how they should be handled. I
think list should be handled like tuples. Probably there should be a universal
matching syntax for all sequences, similarly to the already existing syntax:
a, b, *c = s
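That existing syntax already works uniformly across sequence types, which hints at how the sequence case of "match" could behave:

```python
# Extended iterable unpacking works on any sequence type:
a, b, *c = [1, 2, 3, 4, 5]
print(a, b, c)        # 1 2 [3, 4, 5]

x, *middle, y = "hello"
print(x, middle, y)   # h ['e', 'l', 'l'] o
```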
I don't really like the "->" digraph at the end of each match case. A colon
would be much more consistent, but I use colons already for type checking
(a:int).
I generally think that Python should acquire more features from functional
languages. In analogy to "RPython" it should ultimately lead to "MLPython", a
subset of the Python language that can be type checked and reasoned about by
external tools, similarly to what is possible with Ocaml.
Eike.
> Date: Sat, 18 Dec 2010 12:45:49 +0100
> From: spir <denis.spir(a)gmail.com>
> To: python-ideas(a)python.org
> Subject: Re: [Python-ideas] ML Style Pattern Matching for Python
> Message-ID: <20101218124549.35a0d1c9@o>
> Content-Type: text/plain; charset=UTF-8
>
> On Sat, 18 Dec 2010 12:23:45 +0100
> Eike Welk <eike.welk(a)gmx.net> wrote:
>
>
...
>
> I want composite object literal notation as well. But certainly not {| a=1, b=2 |}. Rather (a=1, b=2) or (a:1, b:2).
> Untyped case would create an instance of Object (but since, as of now, they can't have attrs, another modification would be needed), or of a new FreeObject subtype of Object.
>
...
>
> Denis
"(a=1, b=2) or (a:1, b:2)"
These look to me like they should produce a named-tuple instance.
I usually just use:
class Simple(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

Simple(a=1, b=2)
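For comparison, collections.namedtuple already produces named-tuple instances from keyword arguments, at the cost of declaring the type up front (the Point type here is just an example):

```python
# A named-tuple spelling of the same idea: attribute access plus tuple
# behaviour, but the type must be declared before use.
from collections import namedtuple

Point = namedtuple("Point", "a b")
p = Point(a=1, b=2)
print(p.a, p.b)     # 1 2
print(tuple(p))     # (1, 2)
```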
-Mark
> Date: Tue, 14 Dec 2010 13:19:24 +0000
> From: Michael Foord <fuzzyman(a)voidspace.org.uk>
> To: "Gregory P. Smith" <greg(a)krypto.org>
> Cc: Python-Ideas <python-ideas(a)python.org>
> Subject: Re: [Python-ideas] replace boolean methods on builtin types
> with properties [py4k?]
> Message-ID:
> <AANLkTi=yEZ2i8j0CFF0V_P82S=WjULBFCq2gamO_xGu8(a)mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> > On 13 December 2010 22:35, Gregory P. Smith <greg(a)krypto.org> wrote:
> >
> > A hack could be done today to have these behave as both functions and
> > properties by having the property return a callable bool derived class that
> > returns itself. But that seems a bit too cool yet gross...
> >
> >
> bool derived class... good luck with that. :-)
>
> Michael
Well, it doesn't sound that odd to me.
This is one of the (few) things I miss from Matlab.
Not that parentheses are optional for function calls, but that 'true', 'false', 'Inf' and 'NaN' act both
as their respective values and as functions. Like 'ones' and 'zeros', they can be called with a size
tuple to get an nd-array of that value.
-Mark