> Guido van Rossum wrote:
>> I think that SPARK syntax and everything else that people have
>> traditionally added to docstring markup that isn't strictly speaking
>> documentation (even some extreme cases of doctest usage) ought to be
>> considered as candidates for attribute-ification.
> Where do method attribute type signatures and DBC fit in?
> As a decorator, or in the docstring?
While it will still be *possible* to abuse the docstring, it
*should* go as a decorator. Not every machine-readable class
invariant is useful documentation to someone who isn't already
debugging that code in particular.
> I'm concerned that the funcattrs(a="xyz" .. ) sample tossed
> around here will be rather ugly for specifying DBC strings.
I agree that it would be better to create a DBC class.
If there is a DBC attribute on a code object, then DBC-aware
tools will look to that object for the DBC conditions. Whether
you express them as strings or predicate functions or ... is your choice.
As a specific case, a debugger could use object.__DBC__.__call__
instead of object.__call__. (If you wanted to enforce the
DBC checks at all times, then your decorator could just return
a new object that checks and delegates, rather than an annotated
version of the original.)
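
For concreteness, something along these lines would do (the class names
and the pre/post arguments are invented for illustration, not part of
any proposal):

    class DBC:
        """Sketch only.  Used as a decorator: records the conditions and
        attaches a checking callable as func.__DBC__, so a DBC-aware tool
        (a debugger, say) can call obj.__DBC__(...) instead of obj(...).
        The function itself is returned unchanged."""
        def __init__(self, pre=None, post=None):    # predicates here
            self.pre, self.post = pre, post

        def __call__(self, func):
            def checked(*args, **kwds):
                if self.pre is not None:
                    assert self.pre(*args, **kwds), "precondition violated"
                result = func(*args, **kwds)
                if self.post is not None:
                    assert self.post(result), "postcondition violated"
                return result
            func.__DBC__ = checked
            return func

    class EnforcingDBC(DBC):
        """Enforce the checks at all times: return the checking wrapper
        itself instead of the annotated original."""
        def __call__(self, func):
            DBC.__call__(self, func)
            return func.__DBC__
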
> Finally, I don't have a need to access DBC annotations at
> runtime once my module is distributed. I would not want to
> pay the memory cost overhead of loading DBC information or
> attribute type signatures at runtime.
Then define a release-DBC class whose constructor is pass, and
whose decoration is to return the object unchanged (without a
DBC attribute). Use that in your released code. Whether to
strip DBC info entirely or just throw it away on loading is up to you.
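
Something like this (a sketch again, reusing the invented names above):

    class ReleaseDBC(DBC):
        """Same interface as DBC, but discards the conditions and returns
        the object untouched, with no __DBC__ attribute."""
        def __init__(self, pre=None, post=None):
            pass                        # throw the conditions away
        def __call__(self, func):
            return func

Bind whatever name your sources use for the decorator (DBC, EnforcingDBC,
...) to ReleaseDBC before shipping, and the decorations cost nothing.
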
> However another person at PyCon poo-poo'd my concern over
> bloating .pyc files and subsequent memory use. As a compromise
> I suggested that "annotation" information could go into the
> .pyc, but be loaded "on demand" at runtime.
Changing the format of .pyc is beyond the scope of PEP 318.
If you want a special case for DBC, you can write it. Make
your DBC class save the annotations for example.py to
example.dbc, and retrieve them on demand. You may have to
go through a rewrite/reload step to get them out of the
.pyc without removing them from the .py, but you can do it.
If on-demand is not required, you could probably change the
compiler to ignore any attribute registered as ignorable
during optimization, instead of just __doc__.
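
For example, a toy sketch (the file layout, pickle format, and helper
names are all made up for illustration):

    import os, pickle

    class SideFileDBC:
        """Record the conditions in a .dbc side file at decoration time;
        nothing stays on the function object or in memory."""
        def __init__(self, path, **conditions):     # e.g. pre="x > 0"
            self.path = path                         # e.g. "example.dbc"
            self.conditions = conditions

        def __call__(self, func):
            store = {}
            if os.path.exists(self.path):
                store = pickle.load(open(self.path, 'rb'))
            store[func.__name__] = self.conditions
            pickle.dump(store, open(self.path, 'wb'))
            return func                              # function is untouched

    def load_conditions(path, name):
        """Fetch the conditions on demand (e.g. from a DBC-aware tool)."""
        return pickle.load(open(path, 'rb')).get(name)
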
>>> But if you're moving to a wider precision, surely there is an even
>>> better decimal approximation to the IEEE-rounded "1.1" than
>>> 1.1000000000000001 (with even more digits), so isn't the preceding
>>> paragraph a justification for using that approximation instead?
>> Like Ping, you're picturing typing in "1.1" by hand, so that you
>> *know* decimal 1.1 on-the-nose is the number you "really want".
> No, I don't think so. I said ``the IEEE-rounded "1.1"'', by which I
> mean the IEEE floating-point number that is closest to
> (infinite-precision) 1.1.
Oops -- got it.
> Let's call that number X. Now, of course X is a rational number, and
> one that can be exactly represented on any machine with at least as
> many bits in its floating-point representation as the machine that
> computed X.
> On the original machine, converting 1.1 to floating-point yields
> exactly X, as does converting 1.1000000000000001.
> You claim that on a machine with more precision than the original
> machine, converting 1.1000000000000001 to floating-point will yield a
> value closer to X than converting 1.1 to floating-point will yield.
> I agree with you. However, I claim that there is probably another
> decimal number, with even more digits, that when converted to
> floating-point on that machine will yield even a closer approximation
> to X, so isn't your line of reasoning an argument for using that
> decimal number instead?
It is, but it wasn't practical. 754 requires that float->string done to 17
significant digits, then back to float again, will reproduce the original
float exactly. It doesn't require perfect rounding (there are different
accuracy requirements over different parts of the domain -- it's
complicated), and it doesn't require that a conforming float->string
operation be able to produce more than 17 meaningful digits. For example,
on Windows under 2.3.3:
>>> print "%.50f" % 1.1
1.10000000000000010000000000000000000000000000000000
It's fine by the 754 std that all digits beyond the 17th are 0. It would
also be fine if all digits beyond the 17th were 1, 8, or chosen at random.
So long as Python relies on the platform C, it can't assume more than that
is available. Well, it can't even assume that much, relying on C89, but
almost all Python fp behavior is inherited from C, and as a "quality of
implementation" issue I believed vendors would, over time, at least try to
pay lip service to 754. That prediction was a good one, actually.
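
To see the 17-digit guarantee from Python itself (true on any box with
754 doubles):

>>> 1.1 == 1.1000000000000001      # both literals round to the same double
True
>>> float("%.17g" % 1.1) == 1.1    # 17 significant digits round-trip exactly
True
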
> Here's another way to look at it. Suppose I want to convert 2**-30 to
> decimal. On a 64-bit machine, I can represent that value to 17
> significant digits as 9.31322574615478516e-10. However, I can also
> represent it exactly as 9.3132257461547851625e-10.
On Windows (among others), not unless you write your own float->string
routines to get those "extra" digits.
>>> print "%.50g" % (2**-30)
BTW, it's actually easy to write perfect-rounding float<->string routines in
Python. The drawback is (lack of) speed.
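
Here is the easy half as a sketch (illustrative code, not anything in the
library): every finite binary float is an integer times a power of two, so
its exact decimal expansion can be produced with integer arithmetic alone;
correctly rounded output to N digits then just means rounding that string.

    import math

    def exact_decimal(x):
        """Exact (and slow) decimal expansion of the binary float x."""
        m, e = math.frexp(abs(x))      # abs(x) == m * 2**e, 0.5 <= m < 1
        num = int(m * 2 ** 53)         # the 53-bit significand, exactly
        e -= 53
        den = 1
        if e >= 0:
            num *= 2 ** e
        else:
            den = 2 ** -e              # x == num / den exactly
        whole, rem = divmod(num, den)
        digits = ''
        while rem:                     # terminates: den is a power of two
            rem *= 10
            d, rem = divmod(rem, den)
            digits += str(d)
        sign = ''
        if x < 0:
            sign = '-'
        return '%s%d.%s' % (sign, whole, digits or '0')

>>> exact_decimal(2**-30)
'0.000000000931322574615478515625'
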
> If you are arguing that I can get a better approximation on a machine
> with more precision if I write the first of these representations,
> doesn't that argument suggest that the second of these representations
> is better still?
Yes. The difference is that no standard requires that C be able to produce
the latter, and you only suggest David Gay's code because you haven't tried
to maintain it <wink -- but it is a cross-platform mess>.
> Remember that every binary floating-point number has an exact decimal
> representation (though the reverse, of course, is not true).
Phillip J. Eby:
> 'as' is not currently a keyword, and making it so would break
> any programs that use it as a name.
It has been documented as a pseudo-keyword for some time, with
the warning that it may become a real keyword later.
> Really, of the current "before the def" proposals, I think I
> like Samuele's:
> approach the best. The '*' seems to say "special note, pay attention".
I agree that it says special note; it just doesn't say *why*
the list is special. We happen to know that it secretly means
"After you evaluate this list, keep a reference, then
call each member on the function or class I'm about to define,"
but a keyword (even as) makes it more explicit why this list
is special. Of course,
as: d1, d2, d3
also works; if you put a keyword at the front, then the '*' becomes
redundant. And that was one of the bakeoff options, except
that Guido used "decorate:" instead of "as:" in his example.
> 'goto' example: breaking out from a deeply nested loop:
> goto .end
mark that space--^
Shall I file a bug about the Python lexer, which allows a space between a
class and an attribute? Or is that a feature?
I have to say I favor the "last before colon" approach, but if it has
to be before the def, then I think it should have a keyword, and if
you don't want to introduce a new keyword, then it looks like "is" is
the only reasonable candidate. And if you do have a keyword, you
don't need the square brackets. So you have
is: author("Guido"), signature(int, result=None)
I saw both of the following use cases mentioned on the list, and they
seemed interesting, so I went ahead and wrote up implementations:
A decorator that can be used to incrementally construct
properties. For example, the following code constructs a
property 'x' from an fget, an fset, and an fdel function:
>>> class A:
... def x(self) [property_getter]:
... return self.__x
... def x(self, val) [property_setter]:
... self.__x = val
... def x(self) [property_deleter]:
... del self.__x
In particular, this decorator checks if a property named
'func' is defined in the enclosing frame. If so, then it
updates that property's fget to be 'func'. If not, then
it creates a new property whose fget is 'func'. The
property's docstring is taken from the first decorated
property function to define a docstring.
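
Roughly, the frame-inspection trick looks like this (a simplified sketch,
not the actual implementation linked below; property_setter and
property_deleter would be analogous, filling in fset and fdel):

    import sys

    def property_getter(func):
        """Merge func into a property of the same name in the class body
        currently being executed, or start a new one."""
        frame = sys._getframe(1)               # the enclosing class suite
        old = frame.f_locals.get(func.__name__)
        if isinstance(old, property):
            return property(func, old.fset, old.fdel,
                            old.__doc__ or func.__doc__)
        return property(func, doc=func.__doc__)
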
A decorator-generator that can be used to incrementally construct
a generic function that delegates to individual functions based on
the type signature of the arguments. For example, the following
code defines a generic function that uses two different actual
functions, depending on whether its argument is a string or an int:
>>> def f(x) [generic(int)]:
... print x, 'is an int'
>>> def f(x) [generic(str)]:
... print x, 'is a string'
In particular, the decorator returned by this function checks if a
Generic function-delegation object with the same name as the
decorated function is defined in the enclosing frame. If so, then
it registers the function with that generic object, under
'type_signature'. If not, then it creates a new Generic
function-delegation object, and registers the function.
Full code is at <http://www.cis.upenn.edu/~edloper/pydecorators.html>.
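
In rough outline, the same frame trick drives the generic case too (again
a simplified sketch of the idea, not the code at that URL; dispatch here
is on exact argument types only):

    import sys

    class Generic:
        """A callable that delegates on the types of its arguments."""
        def __init__(self, name):
            self.__name__ = name
            self.registry = {}
        def register(self, signature, func):
            self.registry[signature] = func
        def __call__(self, *args):
            return self.registry[tuple(map(type, args))](*args)

    def generic(*signature):
        def decorate(func):
            obj = sys._getframe(1).f_locals.get(func.__name__)
            if not isinstance(obj, Generic):
                obj = Generic(func.__name__)
            obj.register(signature, func)
            return obj
        return decorate
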
But I still don't feel like I have a good handle on what "acceptable"
uses of decorators are. So, for each of these 2 use cases, is it...
- an excellent example that should be included in the stdlib
- perfectly acceptable, and belongs in the python cookbook
- somewhat hackish, but ok under the right circumstances
- an abomination of nature
(If one of the first 3, then maybe they should be added to the pep?)
Thanks for the feedback.
>> Typically the suite (things indented after the colon) are all
>> of the same type.
> Funny, but this isn't the case with Python anywhere else; I can have
> functions, class and variable definitions inside any other suite.
I phrased that badly.
The statements within a suite are all interchangeable from the *suite's*
perspective. If they are not, then a second header clause (e.g. "else:") is added.
If a particular expression is special, it comes before the ":" within
the header. (x, y in "for x in y:")
Docstrings are arguably an exception, but they get away with it because
they evaluate to themselves, without a side-effect. If you remove their
special status, programs don't change their behavior. (Which is why
optimized code can just remove them.)
Decorators behave differently from other statements, and this difference
should be visible in the syntax.
Everything in Guido's bakeoff meets this requirement. Option 2 *might*
cause confusion with normal lists, which is why he also listed 2.a.
(2) Individually fine, but the tie between the list and def is weak.
[d1, d2, d3]
def func (args):
(2a) Also warns you that the list is strange, but doesn't say how.
*[d1, d2, d3]
def func (args):
(2b) Explicitly relates the list to the def, but does add a keyword.
On the other hand, adding this keyword makes it slightly less
difficult to extend the syntax again, if that ever becomes necessary.
[d1, d2, d3] of
def func (args):
I apologize for not changing my digest subjects in earlier messages.
On Wed, 31 Mar 2004, Phillip J. Eby wrote (Re: PEP 318: Decorators last
before colon):
> Ugly or not, *all* of the proposed syntaxes that don't involve creating an
> extra suite have the practical effect of improving the semantic readability
> of decorator use over today's Python. In addition, they also have the
> practical effect of making decorator use more convenient, since the two
> extra repetitions of the function or class name go away.
> I suspect this is part of why there is such disagreement on the subject of
> decorators: people who make heavy use of them today know exactly what
> problems today's syntax has from both the readability and writability
> viewpoints. Whereas people who do not use them don't get why limiting
> their use to fixed subsets, or single decorators, or any number of other
> ideas just negates the usefulness of having a syntax in the first
> place. Or, like Michel, they don't see a point to having a syntax at all.
I'd like to clarify that I'm not against a new syntax (I'd be more than
a bit hypocritical having written two PEPs proposing new syntax!). My
original comment was in fact that the line-preceding-def form was ugly,
not that there should be no decorator syntax at all.
My second comment is that I wonder whether any decoration syntax has been
thought out in light of other objects being decorated, like classes,
functions, or interfaces. I just feel like there's more thought to be
applied here on decoration syntax in general. When we take decoration
to the next level, will this syntax suffice? Will it be abused for
reasons not considered, like run-time type checking? I'm not positive
but it seems this is what Bob I. wants it for.
And have we considered all the options? I'm not making any proposals,
but perhaps decoration can be bundled into a separate object that a
class refers to, similar to interfaces. This could allow a lot more
flexibility and still look like Python. Or perhaps this separate
object could be mixed in (blech). But you see my point, the upshot is
you could apply different decorations at different times, decorators
would be separately managed and it wouldn't look like a shark fin duct
taped to a VW bug.
But having said that, I concede: I know that some people make a lot of
redundant finger movements working around the lack of decorations.
Phillip is right that I don't use decorators (I have used classmethod
quite a bit, but that's all) and that he does, so his needs outweigh
mine. So I cave. See below for my vote.
Later, Barry said:
> So I hacked up python-mode a bit to support the syntax coloring of
> Guido's previously MFS (most favored syntax). I wanted to see if the
> concerns about visual obscurity were real or not. Then I rewrote a few
> methods of mine that would benefit from decorators (with some
> elaboration). A screen shot of my XEmacs buffer is here:
Mmm.. I like your colors, it reminds me of a scheme Ken M. shared with
me once. I've long lost it, along with his handy pop-to-shell script.
Later, Guido said:
> 3) Prefix suite (could use a different keyword than 'decorate'):
> def foo(cls, arg1, arg2):
I don't really like it, but it's the only one that looks like Python to
me. Otherwise, I vote last-before-colon. Line-preceding-def would be
my last choice.
Greg Ewing wrote:
#- > Decimal floating-point has almost all the pitfalls of binary
#- > floating-point, yet I do not see anyone arguing against decimal
#- > floating-point on the basis that it makes the pitfalls less
#- > apparent.
#- But they're not the pitfalls at issue here. The pitfalls at
#- issue are the ones due to binary floating point behaving
#- *differently* from decimal floating point.
#- Most people's mental model of arithmetic, including floating
#- point, works in decimal. They can reason about it based on their
#- experience with decimal arithmetic.
This is a problem for future, out-of-this-world portability. What happens
when an alien with, say, twelve fingers on each hand uses Python? The
standard fp will probably still be the binary one, so they can write a
Twodecimal.py. (Forgive my English.)