Greg Ewing wrote:
#- > Decimal floating-point has almost all the pitfalls of binary
#- > floating-point, yet I do not see anyone arguing against decimal
#- > floating-point on the basis that it makes the pitfalls less
#- > apparent.
#- But they're not the pitfalls at issue here. The pitfalls at
#- issue are the ones due to binary floating point behaving
#- *differently* from decimal floating point.
#- Most people's mental model of arithmetic, including floating
#- point, works in decimal. They can reason about it based on
This is only a problem for future, out-of-this-world portability.
What happens when an alien with, say, twelve fingers on each hand uses
Python? The standard floating point will probably still be the binary one,
so they can write themselves a Twodecimal.py (pardon my English).
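For what it's worth, here is a quick illustration of the difference Greg is
pointing at (a small sketch, assuming the decimal module is available):
binary floating point cannot represent 0.1 exactly, while decimal floating
point behaves the way people reason on paper:

  from decimal import Decimal

  # Binary floating point: 0.1 has no exact binary representation, so the
  # result is a tiny nonzero value rather than zero.
  print(repr(0.1 + 0.1 + 0.1 - 0.3))

  # Decimal floating point: 0.1 is represented exactly, so this prints 0.0.
  print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3"))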
> If you spell decorators this way:
> def func():
> then what will happen when you type [decorator] at an interactive
> interpreter prompt?
> >>> [decorator]
I've been waiting and reading all the threads before chiming in ...
At first the decorator-list-on-the-line-before-def spelling had some
appeal, although I definitely didn't like it as much as:
def func() [decorator]:
because it split the function definition up into two bits - I lose the association with the def.
However, as these threads have progressed, it's become obvious to me that there are too many ways in which this is a special case for me to be happy with it. I've seen lots of objections brought up based on existing behaviours with the construct, and each time it's been waved away as not being common enough to worry about. These are often actual use cases that experienced developers have - documentation tools, editor configuration, etc.
Well, it is worrying me. Each special case is one more thing that adds a burden to using Python. The fact that whitespace and comments are allowed between the decorator and function definition is particularly worrying to me - it can quite easily mask errors in code - particularly newbie errors:
[a] # oops - I copied this from an interactive session and forgot to modify it.
# This is a simple function
def func (args):
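To make the concern concrete, here is a small runnable sketch (the names are
made up for illustration). Today the stray list is a legal expression
statement that silently does nothing, which is exactly why the mistake is
easy to miss; under the proposed line-before-def spelling, the leftover [a]
would instead be applied as a decorator and quietly change what func is:

  a = staticmethod   # hypothetical leftover from the interactive session

  [a]  # today: builds a one-element list and immediately discards it

  # This is a simple function
  def func(args):
      return args

  print(func(1))  # still an ordinary function today; prints 1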
I am hoping to send this summary out on the first day of PyCon (Wed.).
Hopefully I wasn't too tired when I wrote this on the airplane.
Still looking for a summer job or internship programming. If you know of
one, please let me know.
Ever since I first had to type Martin v. L|o_diaeresis|wis' name, I have
had issues with Unicode in the summary. When I realized there was a
problem I thought it was Vim changing my Unicode in some way since I would
notice problems when I reopened the file in TextEdit, OS X's included text
editor that I have always used for writing the summaries (and no, I am not
about to use Vim to do this nor Emacs; spoiled by real-time spelling and
it is just the way I do it). Well, I was wrong. Should have known Vim
was not the issue.
Turned out that TextEdit was opening the text files later assuming the
wrong character encoding. When I forced it to open all files as UTF-8 I
no longer had problems. This also explains the weird MIME-quoted issues I
had earlier that Aahz pointed out to me since I was just copying from
TextEdit into Thunderbird_, my email client, without realizing TextEdit
was not reading the text properly. So I thought I finally solved my
problem. Ha! Not quite.
Turned out to be a slight issue on the generation of the email based on
the tool chain for how we maintain the python.org web site. This is in no
way the web team's fault since I have unique requirements for the
Summaries. But without having to do some recoding of ht2html_ in order to
specify the text encoding, I wasn't sure how I should handle this.
Docutils to the rescue. Turns out that there is a directive_ for handling
Unicode specifically. That is why you see ``|o_diaeresis|`` in the reST
version of the summary but the HTML version shows |o_diaeresis| (for
people reading this in reST, I realize it looks the same for you; just
look at the HTML to see it). For those of you wondering what the text is
meant to represent, it is an "o" with a diaeresis. In simple terms it is
an "o" with two dots on top of it going horizontally and it represented by
0x00F6 in Unicode (while a program in OS X may have helped cause this
headache, the OS's extensive Unicode support, specifically its Special
Characters menu option and bonus Unicode info is very well done and a
great help with getting the Unicode-specific info for the character).
I am planning to consistently do this in the Summaries. While it might
make certain names harder to read for the people who read this in reST, it
doesn't butcher people's names regardless of which version you read, and I
consider that the higher priority.
And here is a question for people who read the Summaries on a regular
basis: would you get any benefit in having new summaries announced in the
`python.org RSS feed`_? Since this entails one extra, small step in each
summary I am asking people to email me to let me know if this would in any
way make their lives easier. So please let me know if knowing when a new
summary is out by way of the RSS feed would be beneficial to you or if
just finding out from `comp.lang.python`_ or `comp.lang.python.announce`_ is
enough.
I actually wrote this entire summary either in the airport or on the
flight to DC for PyCon (thank goodness for emergency aisles; my 6'6" frame
would be in much more pain than it is otherwise) and thus on Spring Break!
I am hoping to use this as a turning point in doing the Summaries on a
semi-monthly basis again. We will see if Spring quarter lets me stick to
that (expecting a lighter load with less stress next quarter).
.. _Thunderbird: http://www.mozilla.org/projects/thunderbird/
.. _ht2html: http://ht2html.sf.net/
.. _directive: http://docutils.sourceforge.net/spec/rst/directives.html
.. _python.org RSS feed: http://www.python.org/channews.rdf
.. _PyCon: http://www.pycon.org/
PEP 309 ("Partial Function Application") has been rewritten.
PEP 318 got a ton of discussion, to the point of warranting its own
summary: `PEP 318 and the discussion that will never end`_.
PEP 327, which is the spec for the Decimal module in the CVS sandbox, got
some changes as well.
- `PEP 309 re-written
- `Changes to PEP 327: Decimal data type
Playing willy-nilly with stack frames for speed
A patch to clean up the allocation and remove the freelist (a cache of
currently unused stack frames kept around for reuse) was proposed. Of course
it would have been applied immediately if there wasn't a catch: recursive
functions slowed down by around 20%.
A way to get around this was proposed, but it would clutter up the code
which was being simplified in the first place. Guido said he would rather
have that than have recursive calls take a hit.
Then a flurry of posts came about, discussing other ways to try to speed up
frame handling.
- `scary frame speed hacks
- `reusing frames
PEP 318 and the discussion that will never end
Just looking at the number of contributing threads to this summary should
give you an indication of how talked about this PEP became. In case you
don't remember the discussion `last time`_, this PEP covers
function/method(/class?) decorators: having this::
def foo() [decorate, me]: pass
be equivalent to::
def foo(): pass
foo = decorate(me(foo))
What most of the discussion came down to was syntax and the order of
application. As of this moment it has come down to either the syntax used
above or putting the brackets between the function/method name and the
parameters. Guido spoke up and said he liked the latter syntax (which is
used by Quixote_). People, though, pointed out that while that syntax
works fine for a single decorator, a long list of them starts to separate
the parameter tuple farther and farther from the function/method name. There
was at least one other syntax proposal but it was shot down quickly.
Order of application has not been settled either. Some want the order to
be like in the example: the way you see it is how it would look if you
wrote out the code yourself. Others, though, want the reverse order:
``me(decorate(foo))``, which would apply the decorators in the order in
which they are listed in the brackets.
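To make the difference concrete, here is a small sketch (the decorator names
are made up) of how the two orderings behave::

    # Two trivial decorators that record the order in which they run.
    def decorate(f):
        f.applied = getattr(f, "applied", []) + ["decorate"]
        return f

    def me(f):
        f.applied = getattr(f, "applied", []) + ["me"]
        return f

    def foo():
        pass

    foo = decorate(me(foo))   # the order used in the example above
    print(foo.applied)        # ['me', 'decorate'] -- 'me' ran first

    # The other camp wants me(decorate(foo)) instead, so that the decorators
    # run in the order they are listed in the brackets.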
In the end it was agreed the PEP needed to be thoroughly rewritten which
has not occurred yet.
.. _last time:
.. _Quixote: http://www.mems-exchange.org/software/quixote/
- `Pep 318 - new syntax for wrappers
- `new syntax (esp for wrappers)
- `PEP 318 - function/method/class decoration
- `(Specific syntax of) PEP 318 - function/method/class
- `PEP 318 - generality of list; restrictions on elements
- `PEP 318 needs a rewrite
- `Python-Dev Digest, Vol 8, Issue 20
- `PEP 318 trial balloon (wrappers)
- `funcdef grammar production
Compiler optimization flags for the core
The topic of compiler flags that are optimal for Python came up when
Raymond Hettinger announced his new LIST_APPEND opcode (discussed later in
`Optimizing: Raymond Hettinger's personal crack`_). This stemmed from the
fact that the bytecode has not been touched in a while. This generated a
short discussion on the magic that is caches and how the eval loop always
throws a fit when it gets played with. One suggestion was to rework some
opcodes to use other opcodes instead in order to remove the original
opcodes entirely from the eval loop. But it was pointed out it would be
better to just factor out the C code to functions so that they are just
brought into the cache less often instead of incurring the overhead of
more loops through the eval loop.
This then led AM Kuchling to state that he was planning to give a
lightning talk at PyCon_ about various compiler optimization flags he had
tried out on Python. It looks like compiling Python/ceval.c with -Os
(optimize for space) and everything else with -O3 gave the best results
under gcc 3. This sparked the idea of more architecture-dependent
compiler optimizations which would be set when 'configure' was run and
detected the hardware of the system.
In the end no code was changed in terms of the compiler optimizations.
- `New opcode to simplifiy/speedup list comprehensions
- `Who cares about the performance of these opcodes?
Take using Python as a calculator to a whole new level
I remember once there was a thread on `comp.lang.python`_ about how to
tell when you had an addiction to Python. One of the points was when you
start to use Python as your calculator (something I admit to openly; using
the 'sum' built-in is wonderful for quick addition when I would have used
a Scheme interpreter). Well, Raymond Hettinger had the idea of adding a
'calculator' module that would provide "'pretty good' implementations of
things found on low to mid-range calculators like my personal favorite,
the hp32sII student scientific calculator". He then listed a bunch of
functionality the HP calculator has that he would like to see as a module.
Beyond sparking some waxing about calculators, and the HP 32sII especially
(I used a TI-82 back in high school and junior college so I won't even
bother summarizing the nostalgic daydreaming about HP calculators), the
discussion focused mainly on what functionality to provide and the
accuracy of the calculations. The former topic focused on what would be
reasonable and relatively easy to implement without requiring a
mathematician to write it in order for it to be correct or fast.
The topic of accuracy, though, was not as clear-cut. First there was the
question of whether using the in-development Decimal module would be the
smart thing to do. The consensus was to use Decimal, since binary
floating-point, even with IEEE 754 in place, is not accurate enough for
something that wants to be as accurate as an actual calculator. Then
discussion turned to the precision of the calculations. It seemed
important to keep the working precision above the expected output
precision, so that rounding errors and the like would be kept to a minimum.
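As a rough sketch of that idea (the precision numbers here are made up, not
anything decided on the list), the decimal module already lets you compute
with extra guard digits and round only at the end::

    from decimal import Decimal, getcontext, localcontext

    getcontext().prec = 10          # the "calculator display" precision

    def divide(a, b):
        # Carry five extra guard digits internally, then round on return.
        with localcontext() as ctx:
            ctx.prec += 5
            result = Decimal(a) / Decimal(b)
        return +result              # unary plus rounds to the outer precision

    print(divide(1, 3))             # 0.3333333333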
Raymond is going to write a PEP outlining the module.
- `calculator module
dateutil module proposed
Gustavo Niemeyer offered to integrate his dateutil_ module into the
stdlib. Discussion transpired over how it should tie into datetime and
whether all of it or only some of its functionality should be brought in.
As of right now the discussion is still going on.
.. _dateutil: https://moin.conectiva.com.br/DateUtil
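For those unfamiliar with the module, here is a small sketch of the kind of
thing dateutil adds on top of datetime (assuming the third-party dateutil
package is installed)::

    from datetime import datetime
    from dateutil.relativedelta import relativedelta

    start = datetime(2004, 3, 31)
    # "One month and three days later" -- something a plain timedelta cannot
    # express directly, because month lengths vary.
    print(start + relativedelta(months=+1, days=+3))   # 2004-05-03 00:00:00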
Optimizing: Raymond Hettinger's personal crack
Raymond Hettinger, the speed nut that he is, added a new opcode to Python
to speed up list comprehensions by around 35%. But his addiction didn't
stop there.
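If you want to see the opcode for yourself, one quick way (exact output
varies by CPython version, so treat this as a sketch) is to disassemble a
list comprehension::

    import dis

    # On a reasonably recent CPython, LIST_APPEND shows up in the bytecode
    # that builds the comprehension's result list.
    dis.dis("[x * 2 for x in range(10)]")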
Being the dealer of his own drug of choice, Raymond got his next fix by
improving iteration over dictionaries (this is, of course, after all of
his work on the list internals). As always, thanks goes to Raymond for
putting in the work to make sure the CPython interpreter beats the Parrot_
interpreter by that much more come `OSCON 2004`_ and the `Pie-thon`_
challenge.
And, at Hye-Shik Chang's request, Raymond listed off his list of things to
do to feed his addiction so he doesn't go into withdrawals any time in the
future. Most of them are nice and involved tasks that would make great
projects for anyone wanting to pitch in.
.. _Parrot: http://www.parrotcode.org/
.. _OSCON 2004: http://conferences.oreillynet.com/os2004/
.. _Pie-thon: http://www.hole.fi/jajvirta/weblog/20040108T2001.html
- `New opcode to simplifiy/speedup list comprehensions
- `Joys of Optimization
A simple issue I have with:
[classmethod, logged, debug]
Is "How do you type this into Idle?" I realize this is not the most
important of considerations, but access to experimentation is going to
be vital. You can always force with:
>>> if True:
...     [classmethod, logged, debug]
but I wonder if we want to go that route.
-Scott David Daniels
On Wed, 2004-03-31 at 12:52, python-dev-request(a)python.org wrote:
> Message: 6
> Date: Wed, 31 Mar 2004 12:49:14 -0800
> From: Guido van Rossum <guido(a)python.org>
> Subject: Re: [Python-Dev] PEP 318: Decorators last before colon
> To: "Phillip J. Eby" <pje(a)telecommunity.com>
> Cc: python-dev(a)python.org
> Message-ID: <200403312049.i2VKnEi14444(a)guido.python.org>
> > There appears to be a strong correlation between people who have
> > specific use cases for decorators, and the people who want the
> > last-before-colon syntax. Whereas, people who have few use cases
> > (or don't like decorators at all) appear to favor syntaxes that move
> > decorators earlier. Whether that means the "earlier" syntaxes are
> > better or worse, I don't know. <0.5 wink>
> Maybe the practitioners are so eager to have something usable that
> they aren't swayed as much by esthetics.
The current system with no syntax is equally usable; what's gained
functionally? This argument was and is often used against interface
syntax; it's a strong argument. In the case of interfaces the argument
was against one nine letter keyword that could not possibly be confused
with a list or anything else. This syntax is riskier.
> The human brain is a lot more flexible in picking up patterns than the
> Python parser; as shown many times in this discussion, most people
> have no clue about the actual syntax accepted by the Python parser,
> and simply copy (and generalize!) patterns they see in examples.
Or like me, they just do what Emacs tells 'em, which does argue for
decorators of any or no syntax. Javadoc coming before a method is
another precedent in favor of something coming before, but those precedents
have no run-time effect.
> > It also seems to be working against the AST branch a bit, in that I
> > would expect the decorator expressions to be part of the function
> > definition node, rather than in an unrelated statement.
Another discussion point occurred to me regarding interfaces and
projects that use them heavily like Zope, Twisted, PEAK etc. Has the
decorator syntax as proposed been evaluated in the light of these
interfaces (and any future native language support for them), whose
methods have no body to interpose between the definition and decorators
as they exist now? I've seen the "Large Body" argument used several
times in defense of the decorator syntax coming before or above the def.
Ka-Ping Yee wrote:
> This discussion is turning in a direction that alarms me.
> Putting the [decorator] on a separate line before the function
> changes the stakes entirely. It sets aside real functional issues
> in favour of aesthetics. Being enamoured with the way this syntax
> *looks* does not justify functionally *breaking* other things in
> the implementation to achieve it.
> Consider the arguments in favour:
> 1. Decorators appear first.
> 2. Function signature and body remain an intact unit.
> 3. Decorators can be placed close to the function name.
> These are all aesthetic arguments: they boil down to "the appearance
> is more logical". Similar arguments have been made about the other
> Consider the arguments against:
> 1. Previously valid code has new semantics.
> 2. Parse tree doesn't reflect semantics.
> 3. Inconsistent interactive and non-interactive behaviour. [*]
> 4. Decorators can be arbitrarily far away from the function name.
Agreed, although I won't use such alarmist phrases. ;)
Notice that the form:

  using:
      decorate
      me
  def foo(arg1, arg2, ...):
      ...

gives all of the benefits you mentioned without incurring any of the
"arguments against", except possibly #4. The PEP itself gives the
"arguments against" this form as:
1. "The function definition is not nested within the using: block making
it impossible to tell which objects following the block will be decorated."
Somewhat of an oblique argument, to which the simple answer is: there
must be a def right after it, in exactly the same manner that a try:
must be followed by an except: or finally:
2. An argument which only applies if foo is inside the decorate block. I
don't advocate that.
4. "Finally, it would require the introduction of a new keyword."
Yup. Not a bad thing for such a powerful tool, IMO.
>Well, it is worrying me. Each special case is one more thing that adds a
>burden to using Python. The fact that whitespace and comments are allowed
>between the decorator and function definition is particularly worrying to
>me - it can quite easily mask errors in code - particularly newbie errors:
> [a] # oops - I copied this from an interactive session and forgot to
> modify it.
> # This is a simple function
> def func (args):
I'm not invested in this in any way, but variations like +[...] are
syntactically valid today and yet, unlike a plain [...], correspond to
run-time errors. Forms that would be outright syntax errors today are
even better, but they are really ugly. I could live with the +[...] form,
as I can live with
  def f(...) [...]:
it's really a matter of "practicality beats purity" and a judgement call.
The limits of the parser aren't helping either in this case.
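A tiny runnable sketch of the distinction being drawn here (the decorator
name is made up): a bare list before a def is legal and silently ignored
today, whereas a +[...] in the same position is still valid syntax but blows
up at run time, so stale code cannot silently change meaning:

  def logged(f):       # hypothetical decorator
      return f

  [logged]             # legal today: builds a list and throws it away, no error

  try:
      +[logged]        # also legal syntax, but unary plus on a list is a TypeError
  except TypeError as exc:
      print("run-time error, as expected:", exc)

  def f():
      pass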