From mwh at python.net  Mon Mar  1 05:52:45 2004
From: mwh at python.net (Michael Hudson)
Date: Mon Mar  1 06:15:15 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <4040DA26.3090508@stackless.com> (Christian Tismer's message of
	"Sat, 28 Feb 2004 19:12:54 +0100")
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>
	<2msmgvhza5.fsf@starship.python.net> <4040DA26.3090508@stackless.com>
Message-ID: <2mn070hm9u.fsf@starship.python.net>

Christian Tismer <tismer@stackless.com> writes:

> p.s.: I believe some automatic source analysis and rewrite might pay
> off in other areas as well. Grepping through the sources, there are
> still very many similar patterns of PyArg_ParseTupleXXX calls, which
> could be replaced by less general, optimized versions.  This would
> even *not* cause code bloat, since all those calling sequences would
> be smaller than now.

Well, yes.  C sucks seriously for things like this, though.  It's
frankly embarrassing that *every* time, say, ''.split() is called, some
silly string is being parsed.  Unclear what to do about this (except
PyPy, I guess).
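
A rough Python-level analogy (the real cost is in C, inside
PyArg_ParseTuple; the sketch below only illustrates the general
"spec string parsed on every call" pattern):

import timeit

def area_each_time(w, h):
    # the expression string is re-parsed and re-compiled on every call
    return eval("w * h", {"w": w, "h": h})

_code = compile("w * h", "<expr>", "eval")

def area_precompiled(w, h):
    # the string was parsed exactly once, at import time
    return eval(_code, {"w": w, "h": h})

if __name__ == "__main__":
    setup = "from __main__ import area_each_time, area_precompiled"
    for name in ("area_each_time", "area_precompiled"):
        timer = timeit.Timer("%s(3, 4)" % name, setup)
        best = min(timer.repeat(3, 100000))
        print("%s: %.3fs per 100000 calls" % (name, best))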

Cheers,
mwh

-- 
  I have a feeling that any simple problem can be made arbitrarily
  difficult by imposing a suitably heavy administrative process
  around the development.       -- Joe Armstrong, comp.lang.functional

From bob at redivi.com  Mon Mar  1 07:01:08 2004
From: bob at redivi.com (Bob Ippolito)
Date: Mon Mar  1 06:57:47 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <2mn070hm9u.fsf@starship.python.net>
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>
	<2msmgvhza5.fsf@starship.python.net>
	<4040DA26.3090508@stackless.com>
	<2mn070hm9u.fsf@starship.python.net>
Message-ID: <1FD3E44B-6B78-11D8-93F5-000A95686CD8@redivi.com>


On Mar 1, 2004, at 5:52 AM, Michael Hudson wrote:

> Christian Tismer <tismer@stackless.com> writes:
>
>> p.s.: I believe some automatic source analysis and rewrite might pay
>> off in other areas as well. Grepping through the sources, there are
>> still very many similar patterns of PyArg_ParseTupleXXX calls, which
>> could be replaced by less general, optimized versions.  This would
>> even *not* cause code bloat, since all those calling sequences would
>> be smaller than now.
>
> Well, yes.  C sucks seriously for things like this, though.  It's
> frankly embarrassing that *every* time, say, ''.split() is called, some
> silly string is being parsed.  Unclear what to do about this (except
> PyPy, I guess).

Surely there are other reasonable options.  For example, we could start 
using something like Pyrex that could be modified to generate whatever 
gnarly C code needs to happen for optimal runtime performance with 
minimal input ugliness :)

-bob


From mwh at python.net  Mon Mar  1 07:15:46 2004
From: mwh at python.net (Michael Hudson)
Date: Mon Mar  1 07:15:52 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <1FD3E44B-6B78-11D8-93F5-000A95686CD8@redivi.com> (Bob
	Ippolito's message of "Mon, 1 Mar 2004 07:01:08 -0500")
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>
	<2msmgvhza5.fsf@starship.python.net> <4040DA26.3090508@stackless.com>
	<2mn070hm9u.fsf@starship.python.net>
	<1FD3E44B-6B78-11D8-93F5-000A95686CD8@redivi.com>
Message-ID: <2mad30hifh.fsf@starship.python.net>

Bob Ippolito <bob@redivi.com> writes:

> On Mar 1, 2004, at 5:52 AM, Michael Hudson wrote:
>
>> Well, yes.  C sucks seriously for things like this, though.  It's
>> frankly embarrassing that *every* time, say, ''.split() is called, some
>> silly string is being parsed.  Unclear what to do about this (except
>> PyPy, I guess).
>
> Surely there's other reasonable options.  For example, we could start
> using something like Pyrex that could be modified to generate whatever
> gnarly C code needs to happen for optimal runtime performance with
> minimal input ugliness :)

Hmm, yes, I hadn't thought of Pyrex.

It also hadn't occurred to me that Pyrex might be desirable because the
result might be more efficient, but I guess that's not so surprising.

Cheers,
mwh

-- 
  Guido (like us!) is a bit schizophrenic here: he wants to be a
  benevolent dictator, but also wants to treat people like
  grownups. This probably worked better before Python got a large
  American audience <0.9 wink>.             -- Tim Peters, 10 Feb 2000

From tismer at stackless.com  Mon Mar  1 08:04:37 2004
From: tismer at stackless.com (Christian Tismer)
Date: Mon Mar  1 08:04:41 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <2mn070hm9u.fsf@starship.python.net>
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>	<2msmgvhza5.fsf@starship.python.net>
	<4040DA26.3090508@stackless.com>
	<2mn070hm9u.fsf@starship.python.net>
Message-ID: <404334E5.5000202@stackless.com>

Michael Hudson wrote:

> Christian Tismer <tismer@stackless.com> writes:
> 
> 
>>p.s.: I believe some automatic source analysis and rewrite might pay
>>off in other areas as well. Grepping through the sources, there are
>>still very many similar patterns of PyArg_ParseTupleXXX calls, which
>>could be replaced by less general, optimized versions.  This would
>>even *not* cause code bloat, since all those calling sequences would
>>be smaller than now.
> 
> 
> Well, yes.  C sucks seriously for things like this, though.  It's
> frankly embarrassing that *every* time, say, ''.split() is called, some
> silly string is being parsed.  Unclear what to do about this (except
> PyPy, I guess).

Why not a switch over the arg tuple size?
Or do you mean that this happens in so many different places that
it would be tedious to change by hand?
Sure, we should not try to morph Python into PyPy in the first
place.
But maybe with some runtime analysis using a modified version
of the PyArg_Parse... functions, we could see how often each
format string appears, and how often each optional path
is used, as a hint for hand optimisation.
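
A minimal sketch of the static half of that tally (the source path and
the regular expression below are just assumptions, and multi-line format
strings are missed; the runtime variant would need the modified
PyArg_Parse... functions):

import os
import re

# PyArg_ParseTupleAndKeywords takes its format string as the third
# argument, so it would need a second pattern; this only covers
# PyArg_ParseTuple, and misses format strings split across C lines.
CALL_RE = re.compile(r'PyArg_ParseTuple\s*\(\s*\w+\s*,\s*"([^"]*)"')

def tally_formats(root):
    counts = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".c"):
                continue
            with open(os.path.join(dirpath, name)) as fp:
                text = fp.read()
            for fmt in CALL_RE.findall(text):
                counts[fmt] = counts.get(fmt, 0) + 1
    return counts

if __name__ == "__main__":
    counts = tally_formats("Modules")  # assumed path into a CPython checkout
    pairs = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    for fmt, n in pairs[:20]:
        print("%6d  %r" % (n, fmt))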

I could do that with Stackless, btw. For Win32, I have a few
introspection tools which can do C stack analysis from
a debug build and find out what is being called. They were meant
for later C stack synthesis, which never worked, but this would
give them some use again.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From jeremy at alum.mit.edu  Mon Mar  1 09:53:11 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon Mar  1 09:53:36 2004
Subject: [Python-Dev] How to debug pyexpat SIGSEGV with GDB?
In-Reply-To: <403F84D3.15067.13B6DF33@localhost>
References: <403F84D3.15067.13B6DF33@localhost>
Message-ID: <1078152791.29097.18.camel@localhost.localdomain>

On Fri, 2004-02-27 at 18:00, Brad Clements wrote:
> So, how can I figure out where in the Python source the function call is coming from 
> using gdb? I'm sure it involves "print" and some casts.. I couldn't find a howto on 
> python.org

First, make sure that the code from Misc/gdbinit is in your .gdbinit
file.  Get the stack trace in gdb and move up/down until you get to an
eval_frame() frame.  Then call the function pyframe.  It will print the
filename, function name, and line number of the current frame.  The
lineno usually points to the first line of the function.

Jeremy



From skip at pobox.com  Mon Mar  1 10:09:07 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  1 10:09:20 2004
Subject: [Python-Dev] How to debug pyexpat SIGSEGV with GDB?
In-Reply-To: <1078152791.29097.18.camel@localhost.localdomain>
References: <403F84D3.15067.13B6DF33@localhost>
	<1078152791.29097.18.camel@localhost.localdomain>
Message-ID: <16451.21011.985387.708415@montanaro.dyndns.org>


    >> So, how can I figure out where in the Python source the function call
    >> is coming from using gdb? I'm sure it involves "print" and some
    >> casts.. I couldn't find a howto on python.org

    Jeremy> First, make sure that the code from Misc/gdbinit is in your
    Jeremy> .gdbinit file.  Get the stack trace in gdb and move up/down
    Jeremy> until you get to an eval_frame() frame.  Then call the function
    Jeremy> pyframe.  It will print the filename, function name, and line
    Jeremy> number of the current frame.  The lineno usually points to the
    Jeremy> first line of the function.

I have this in my .gdbinit file:

    define ppystack
        while $pc < Py_Main || $pc > Py_GetArgcArgv
            if $pc > eval_frame && $pc < PyEval_EvalCodeEx
                set $__fn = PyString_AsString(co->co_filename)
                set $__n = PyString_AsString(co->co_name)
                printf "%s (%d): %s\n",  $__fn, f->f_lineno, $__n
            end
            up-silently 1
        end
        select-frame 0
    end

Skip

From jeremy at alum.mit.edu  Mon Mar  1 10:16:14 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon Mar  1 10:16:41 2004
Subject: [Python-Dev] How to debug pyexpat SIGSEGV with GDB?
In-Reply-To: <16451.21011.985387.708415@montanaro.dyndns.org>
References: <403F84D3.15067.13B6DF33@localhost>
	<1078152791.29097.18.camel@localhost.localdomain>
	<16451.21011.985387.708415@montanaro.dyndns.org>
Message-ID: <1078154173.29097.28.camel@localhost.localdomain>

On Mon, 2004-03-01 at 10:09, Skip Montanaro wrote:
>     >> So, how can I figure out where in the Python source the function call
>     >> is coming from using gdb? I'm sure it involves "print" and some
>     >> casts.. I couldn't find a howto on python.org
> 
>     Jeremy> First, make sure that the code from Misc/gdbinit is in your
>     Jeremy> .gdbinit file.  Get the stack trace in gdb and move up/down
>     Jeremy> until you get to an eval_frame() frame.  Then call the function
>     Jeremy> pyframe.  It will print the filename, function name, and line
>     Jeremy> number of the current frame.  The lineno usually points to the
>     Jeremy> first line of the function.
> 
> I have this in my .gdbinit file:
> 
>     define ppystack
>         while $pc < Py_Main || $pc > Py_GetArgcArgv
>             if $pc > eval_frame && $pc < PyEval_EvalCodeEx
>                 set $__fn = PyString_AsString(co->co_filename)
>                 set $__n = PyString_AsString(co->co_name)
>                 printf "%s (%d): %s\n",  $__fn, f->f_lineno, $__n
>             end
>             up-silently 1
>         end
>         select-frame 0
>     end

That's nice!  I never learned how to write real programs in gdb.
You should add a copy to gdbinit.

Jeremy



From barry at python.org  Mon Mar  1 10:25:36 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar  1 10:25:42 2004
Subject: [Python-Dev] How to debug pyexpat SIGSEGV with GDB?
In-Reply-To: <1078154173.29097.28.camel@localhost.localdomain>
References: <403F84D3.15067.13B6DF33@localhost>
	<1078152791.29097.18.camel@localhost.localdomain>
	<16451.21011.985387.708415@montanaro.dyndns.org>
	<1078154173.29097.28.camel@localhost.localdomain>
Message-ID: <1078154735.31390.23.camel@anthem.wooz.org>

On Mon, 2004-03-01 at 10:16, Jeremy Hylton wrote:

> That's nice!  I never learned how to write real programs in gdb.
> You should add a copy to gdbinit.

Yes Skip, please do!

-Barry



From skip at pobox.com  Mon Mar  1 10:52:27 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  1 10:52:36 2004
Subject: [Python-Dev] How to debug pyexpat SIGSEGV with GDB?
In-Reply-To: <1078154173.29097.28.camel@localhost.localdomain>
References: <403F84D3.15067.13B6DF33@localhost>
	<1078152791.29097.18.camel@localhost.localdomain>
	<16451.21011.985387.708415@montanaro.dyndns.org>
	<1078154173.29097.28.camel@localhost.localdomain>
Message-ID: <16451.23611.63088.548890@montanaro.dyndns.org>


    >> I have this in my .gdbinit file:
    ...
    Jeremy> That's nice!  I never learned how to write real programs in gdb.
    Jeremy> You should add a copy to gdbinit.

Done.  I renamed it simply "pystack" and added a short comment describing
its while and if tests as well as flag comments to the relevant files
alerting people to the dependency of pystack on those files.

Skip




From tismer at stackless.com  Mon Mar  1 14:23:34 2004
From: tismer at stackless.com (Christian Tismer)
Date: Mon Mar  1 14:26:09 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <20040301170809.GA10717@vicky.ecs.soton.ac.uk>
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>
	<2msmgvhza5.fsf@starship.python.net>
	<4040DA26.3090508@stackless.com>
	<20040301170809.GA10717@vicky.ecs.soton.ac.uk>
Message-ID: <40438DB6.2060008@stackless.com>

Armin Rigo wrote:

> Hi Christian!
> 
> On Sat, Feb 28, 2004 at 07:12:54PM +0100, Christian Tismer wrote:
> 
>>p.s.: I believe some automatic source analysis and rewrite might
>>pay off in other areas as well. Grepping through the sources,
>>there are still very many similar patterns of PyArg_ParseTupleXXX
>>calls, which could be replaced by less general, optimized versions.
> 
> 
> I tried that some time ago, when python-dev first talked about METH_O.  I did
> a script that looks for these PyArg_ParseTuple(args, "xxx", ...) calls and
> replace them with some in-line equivalent, e.g. "PyTuple_GET_SIZE(args) == 2
> && PyTuple_GET_ITEM(args, 0)->ob_type == etc. && etc.".
> 
> It didn't pay out.  The METH_O optimization was a much better one.  It seems
> that PyArg_ParseTuple() isn't just called often enough in practice now.
> 
> BTW this remark also applies to the proposals for METH_STACK to avoid the
> tuple creation altogether.  What amount of time is really spent there?

Maybe a big profiling run could clarify if and where something
can be saved. It is also a relative measure:
If the function that I call takes up lots of computation time,
I won't care too much about parameter processing.
On the other hand, a candidate like len() is so cheap to compute
that it really would pay off (but that one is already taken, sure).

For METH_STACK, I'm not sure. Does it add complication, or
does it make things even clearer?
From my Stackless POV, throwing parameters into the callee and
not having to care about the arg tuple is much more
"continuation-like". I *would* have a problem with the current style
if the callee didn't incref its parameters; since it does anyway,
I'm allowed to drop the arg tuple early. Fortunately there is some
redundancy of refcounts here.

I guess it gives a little improvement, not only by saving the tuple
creation but also that many decrefs, though probably not so much.
A consistent implementation throughout the whole source seems to
be quite a lot of effort. Maybe a relative measurement for some common
cases could give some figures?

ciao -  chris
-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From scav at blueyonder.co.uk  Mon Mar  1 17:09:41 2004
From: scav at blueyonder.co.uk (Peter Harris)
Date: Mon Mar  1 17:09:02 2004
Subject: [Python-Dev] PEP 309 re-written
Message-ID: <4043B4A5.9030007@blueyonder.co.uk>

Hi

The latest version of PEP 309 has been published in the usual place. I 
hope much of the woolliness of the early versions has been sheared 
off, leaving more ..um..
[metaphor panic!] .. mutton?

I have settled on calling the whole thing "partial function application",
broadly including object methods, classes and other callable objects.  I
want to name the constructor partial(), because that name will do for now
and doesn't do violence to accepted terminology the way curry() or
closure() would.

I really want to keep the name the same from now until the PEP is ready 
for review: this little proposal doesn't deserve as many name changes as 
the Mozilla project after all.

Not sure what sort of feedback the PEP needs before review, so I'm open 
to any comments about the wording of the proposal and the usefulness or 
otherwise of its intent.

If anyone can think of any really elegant hacks that are naturally 
expressed by partial function application I'd like to see them, and with 
permission add them to the PEP.  I feel that it's more likely to get 
accepted if there is a better example than my cheesy Tkinter callbacks!
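
For reference, a minimal pure-Python sketch of the proposed partial()
constructor (illustrative only; the final semantics are whatever the PEP
ends up specifying):

class partial(object):
    """Bind some arguments of fn now; supply the rest at call time."""

    def __init__(self, fn, *args, **kwargs):
        self.fn, self.args, self.kwargs = fn, args, kwargs

    def __call__(self, *args, **kwargs):
        merged = self.kwargs.copy()
        merged.update(kwargs)
        return self.fn(*(self.args + args), **merged)

# A couple of illustrative uses (the Tkinter line is only a sketch of
# the cheesy-callback case mentioned above):
#     Button(frame, text="Beep", command=partial(beep, volume=11))
from_binary = partial(int, base=2)
assert from_binary("10010") == 18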

Peter Harris


From paul at prescod.net  Mon Mar  1 23:14:40 2004
From: paul at prescod.net (Paul Prescod)
Date: Mon Mar  1 23:17:31 2004
Subject: [Python-Dev] Idea for a fast calling convention
In-Reply-To: <2mn070hm9u.fsf@starship.python.net>
References: <5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<007b01c3fd0a$88f406a0$e1b52c81@oemcomputer>
	<5.1.1.6.0.20040227170742.01ee7910@telecommunity.com>
	<5.1.0.14.0.20040228103037.038218c0@mail.telecommunity.com>
	<2msmgvhza5.fsf@starship.python.net> <4040DA26.3090508@stackless.com>
	<2mn070hm9u.fsf@starship.python.net>
Message-ID: <40440A30.70204@prescod.net>

Michael Hudson wrote:

>...
> 
> Well, yes.  C sucks seriously for things like this, though.  It's
> frankly embarrassing that *every* time, say, ''.split() is called, some
> silly string is being parsed.  Unclear what to do about this (except
> PyPy, I guess).

Or Pyrex.

Obviously either PyPy or Pyrex takes quite a bit of code rewriting. Fun 
rewriting, but rewriting nevertheless. If we're going to require 
rewriting, isn't there a more short-term way to simply eliminate the 
requirement to use PyArg_ParseTuple in most cases?

If functions declared their type signatures in a new-fangled PyMethodDef 
then Python could optimize away common cases at module-initialization or 
type-initialization time. I don't understand why we usually (as opposed 
to occasionally) want to declare our type signature only when the 
function is run rather than earlier when the runtime could do useful 
stuff with it.

Putting aside performance for a second: introspection on C functions is 
pretty weak for the same reason.

  Paul Prescod



From bac at OCF.Berkeley.EDU  Tue Mar  2 00:24:05 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Tue Mar  2 00:24:17 2004
Subject: [Python-Dev] Boundary checks on arguments to time.strftime()
In-Reply-To: <403A9650.80007@ocf.berkeley.edu>
References: <004a01c3f9df$c500e7e0$f72dc797@oemcomputer>	<200402231553.i1NFruD03217@guido.python.org>
	<403A9650.80007@ocf.berkeley.edu>
Message-ID: <40441A75.7070703@ocf.berkeley.edu>

Brett C. wrote:

> OK, so I will raise TypeError and check everything for time.strftime(). 
>  asctime() is the only iffy thing.  I say either leave it be or force 
> the month value to January if it is out of the proper range.  Any 
> opinions on one or the other?  I am leaning towards the former personally.
> 

OK, added the checks and checked them in.  I left asctime() alone.

-Brett

From vse at srasys.co.in  Tue Mar  2 07:40:11 2004
From: vse at srasys.co.in (V.Selvam)
Date: Tue Mar  2 02:10:31 2004
Subject: [Python-Dev] python installation in solaris machine
Message-ID: <006101c40053$810858f0$0fa8a8c0@selvam>

Skipped content of type multipart/alternative
-------------- next part --------------
sh-2.03# ls
Setup               config.o            makexp_aix          signalmodule.c
Setup.config        cryptmodule.c       mathmodule.c        signalmodule.o
Setup.config.in     cstubs              md5.h               socketmodule.c
Setup.dist          dbmmodule.c         md5c.c              sre.h
Setup.local         dlmodule.c          md5module.c         sre_constants.h
_codecsmodule.c     errnomodule.c       mmapmodule.c        stropmodule.c
_curses_panel.c     errnomodule.o       mpzmodule.c         structmodule.c
_cursesmodule.c     fcntlmodule.c       newmodule.c         sunaudiodev.c
_hotshot.c          flmodule.c          newmodule.o         svmodule.c
_localemodule.c     fmmodule.c          nismodule.c         symtablemodule.c
_sre.c              fpectlmodule.c      operator.c          symtablemodule.o
_sre.o              fpetestmodule.c     parsermodule.c      syslogmodule.c
_testcapimodule.c   gcmodule.c          pcre-int.h          tclNotify.c
_tkinter.c          gcmodule.o          pcre.h              termios.c
_weakref.c          gdbmmodule.c        pcremodule.c        testcapi_long.h
addrinfo.h          getaddrinfo.c       posixmodule.c       threadmodule.c
almodule.c          getbuildinfo.c      posixmodule.o       threadmodule.o
ar_beos             getbuildinfo.o      puremodule.c        timemodule.c
arraymodule.c       getnameinfo.c       pwdmodule.c         timing.h
audioop.c           getpath.c           pyexpat.c           timingmodule.c
binascii.c          getpath.o           pypcre.c            tkappinit.c
bsddbmodule.c       glmodule.c          python.c            unicodedata.c
cPickle.c           grpmodule.c         python.o            unicodedata_db.h
cStringIO.c         imageop.c           readline.c          unicodename_db.h
ccpython.cc         imgfile.c           regexmodule.c       xreadlinesmodule.c
cdmodule.c          ld_so_aix           regexpr.c           xxmodule.c
cgen.py             ld_so_beos          regexpr.h           xxsubtype.c
cgensupport.c       libpython2.2.a      resource.c          xxsubtype.o
cgensupport.h       license.terms       rgbimgmodule.c      yuv.h
clmodule.c          linuxaudiodev.c     rotormodule.c       yuvconvert.c
cmathmodule.c       main.c              selectmodule.c      zlibmodule.c
config.c            main.o              sgimodule.c
config.c.in         makesetup           shamodule.c
bash-2.03# cd .. 
bash-2.03# ls
Demo             Makefile.pre     Parser           config.log
Doc              Makefile.pre.in  Python           config.status
Grammar          Misc             README           configure
Include          Modules          RISCOS           configure.in
LICENSE          Objects          Tools            install-sh
Lib              PC               acconfig.h       pyconfig.h
Mac              PCbuild          buildno          pyconfig.h.in
Makefile         PLAN.txt         config.cache     setup.py
bash-2.03# /usr/ccs/bin/make 
rm -f libpython2.2.a
ar cr libpython2.2.a Modules/getbuildinfo.o
sh: ar: not found
*** Error code 1
make: Fatal error: Command failed for target `libpython2.2.a'
bash-2.03# /usr/ccs/bin/make clean
find . -name '*.o' -exec rm -f {} ';'
find . -name '*.s[ol]' -exec rm -f {} ';'
find . -name '*.py[co]' -exec rm -f {} ';'
bash-2.03# /usr/ccs/bin/make 
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Modules/python.o Modules/python.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/acceler.o Parser/acceler.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/grammar1.o Parser/grammar1.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/listnode.o Parser/listnode.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/node.o Parser/node.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/parser.o Parser/parser.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/parsetok.o Parser/parsetok.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/tokenizer.o Parser/tokenizer.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/bitset.o Parser/bitset.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/metagrammar.o Parser/metagrammar.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/myreadline.o Parser/myreadline.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/abstract.o Objects/abstract.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/bufferobject.o Objects/bufferobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/cellobject.o Objects/cellobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/classobject.o Objects/classobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/cobject.o Objects/cobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/complexobject.o Objects/complexobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/descrobject.o Objects/descrobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/fileobject.o Objects/fileobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/floatobject.o Objects/floatobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/frameobject.o Objects/frameobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/funcobject.o Objects/funcobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/intobject.o Objects/intobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/iterobject.o Objects/iterobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/listobject.o Objects/listobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/longobject.o Objects/longobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/dictobject.o Objects/dictobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/methodobject.o Objects/methodobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/moduleobject.o Objects/moduleobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/object.o Objects/object.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/rangeobject.o Objects/rangeobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/sliceobject.o Objects/sliceobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/stringobject.o Objects/stringobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/structseq.o Objects/structseq.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/tupleobject.o Objects/tupleobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/typeobject.o Objects/typeobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/weakrefobject.o Objects/weakrefobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/unicodeobject.o Objects/unicodeobject.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Objects/unicodectype.o Objects/unicodectype.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/bltinmodule.o Python/bltinmodule.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/exceptions.o Python/exceptions.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/ceval.o Python/ceval.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/mysnprintf.o Python/mysnprintf.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/firstsets.o Parser/firstsets.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/grammar.o Parser/grammar.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/pgen.o Parser/pgen.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/printgrammar.o Parser/printgrammar.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Parser/pgenmain.o Parser/pgenmain.c
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes  Parser/acceler.o  Parser/grammar1.o  Parser/listnode.o  Parser/node.o  Parser/parser.o  Parser/parsetok.o  Parser/tokenizer.o  Parser/bitset.o  Parser/metagrammar.o Python/mysnprintf.o  Parser/firstsets.o  Parser/grammar.o  Parser/pgen.o  Parser/printgrammar.o  Parser/pgenmain.o -lsocket -lnsl -ldl  -lpthread -lthread -o Parser/pgen
Parser/pgen ./Grammar/Grammar ./Include/graminit.h ./Python/graminit.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/compile.o Python/compile.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/codecs.o Python/codecs.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/errors.o Python/errors.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/frozen.o Python/frozen.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/frozenmain.o Python/frozenmain.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/future.o Python/future.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getargs.o Python/getargs.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getcompiler.o Python/getcompiler.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getcopyright.o Python/getcopyright.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getmtime.o Python/getmtime.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H -DPLATFORM='"sunos5"' -o Python/getplatform.o ./Python/getplatform.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getversion.o Python/getversion.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/graminit.o Python/graminit.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/import.o Python/import.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H -I. -o Python/importdl.o ./Python/importdl.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/marshal.o Python/marshal.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/modsupport.o Python/modsupport.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/mystrtoul.o Python/mystrtoul.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/pyfpe.o Python/pyfpe.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/pystate.o Python/pystate.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/pythonrun.o Python/pythonrun.c
Python/pythonrun.c: In function `initsigs':
Python/pythonrun.c:1348: warning: function declaration isn't a prototype
Python/pythonrun.c: In function `PyOS_getsig':
Python/pythonrun.c:1448: warning: function declaration isn't a prototype
Python/pythonrun.c: In function `PyOS_setsig':
Python/pythonrun.c:1470: warning: function declaration isn't a prototype
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/structmember.o Python/structmember.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/symtable.o Python/symtable.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/sysmodule.o Python/sysmodule.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/traceback.o Python/traceback.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/getopt.o Python/getopt.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/dynload_shlib.o Python/dynload_shlib.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Python/thread.o Python/thread.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Modules/config.o Modules/config.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -DPYTHONPATH='":plat-sunos5:lib-tk"' \
 -DPREFIX='"/usr/local"' \
 -DEXEC_PREFIX='"/usr/local"' \
 -DVERSION='"2.2"' \
 -DVPATH='""' \
 -o Modules/getpath.o ./Modules/getpath.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Modules/main.o Modules/main.c
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -o Modules/gcmodule.o Modules/gcmodule.c
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/threadmodule.c -o Modules/threadmodule.o
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/signalmodule.c -o Modules/signalmodule.o
./Modules/signalmodule.c:89: warning: function declaration isn't a prototype
./Modules/signalmodule.c: In function `signal_signal':
./Modules/signalmodule.c:213: warning: function declaration isn't a prototype
./Modules/signalmodule.c:215: warning: function declaration isn't a prototype
./Modules/signalmodule.c:226: warning: function declaration isn't a prototype
./Modules/signalmodule.c: In function `initsignal':
./Modules/signalmodule.c:333: warning: function declaration isn't a prototype
./Modules/signalmodule.c:337: warning: function declaration isn't a prototype
./Modules/signalmodule.c:356: warning: function declaration isn't a prototype
./Modules/signalmodule.c:358: warning: function declaration isn't a prototype
./Modules/signalmodule.c: In function `finisignal':
./Modules/signalmodule.c:562: warning: function declaration isn't a prototype
./Modules/signalmodule.c:570: warning: function declaration isn't a prototype
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/posixmodule.c -o Modules/posixmodule.o
./Modules/posixmodule.c: In function `posix_fstatvfs':
./Modules/posixmodule.c:4393: warning: passing arg 2 of `fstatvfs64' from incompatible pointer type
./Modules/posixmodule.c: In function `posix_statvfs':
./Modules/posixmodule.c:4420: warning: passing arg 2 of `statvfs64' from incompatible pointer type
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/errnomodule.c -o Modules/errnomodule.o
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/_sre.c -o Modules/_sre.o
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/newmodule.c -o Modules/newmodule.o
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/symtablemodule.c -o Modules/symtablemodule.o
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H   -c ./Modules/xxsubtype.c -o Modules/xxsubtype.o
if test -f buildno; then \
 expr `cat buildno` + 1 >buildno1; \
 mv -f buildno1 buildno; \
else echo 1 >buildno; fi
gcc -c -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DHAVE_CONFIG_H  -DBUILD=`cat buildno` -o Modules/getbuildinfo.o ./Modules/getbuildinfo.c
rm -f libpython2.2.a
ar cr libpython2.2.a Modules/getbuildinfo.o
sh: ar: not found
*** Error code 1
make: Fatal error: Command failed for target `libpython2.2.a'


From lists at andreas-jung.com  Tue Mar  2 02:22:22 2004
From: lists at andreas-jung.com (Andreas Jung)
Date: Tue Mar  2 02:22:28 2004
Subject: [Python-Dev] python installation in solaris machine
In-Reply-To: <006101c40053$810858f0$0fa8a8c0@selvam>
References: <006101c40053$810858f0$0fa8a8c0@selvam>
Message-ID: <2147483647.1078215742@[192.168.0.102]>



--On Tuesday, 2 March 2004 18:10 +0530 "V.Selvam" <vse@srasys.co.in> 
wrote:

> Hi,
> Im newb to python. I tried to install Python in my sun Solaris

Please address your problem to the python@python.org list.
This list is not for general Python questions.

> sh: ar: not found
> *** Error code 1

The error message says it all. Your system is either badly installed
or your PATH is broken (go and search for the 'ar' command, which is
needed to build the corresponding library).

-aj


From aahz at pythoncraft.com  Tue Mar  2 03:38:20 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar  2 03:38:24 2004
Subject: [Python-Dev] python installation in solaris machine
In-Reply-To: <2147483647.1078215742@[192.168.0.102]>
References: <006101c40053$810858f0$0fa8a8c0@selvam>
	<2147483647.1078215742@[192.168.0.102]>
Message-ID: <20040302083820.GA21429@panix.com>

On Tue, Mar 02, 2004, Andreas Jung wrote:
> --On Tuesday, 2 March 2004 18:10 +0530 "V.Selvam" <vse@srasys.co.in> 
> wrote:
>>
>>Im newb to python. I tried to install Python in my sun Solaris
> 
> Please address your problem to the python@python.org list.
> This list is not for general Python questions.

Close, but no cigar.  The correct address is python-list@python.org, and
is often better accessed as the newsgroup comp.lang.python.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From arigo at tunes.org  Mon Mar  1 11:12:15 2004
From: arigo at tunes.org (Armin Rigo)
Date: Tue Mar  2 04:56:40 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
Message-ID: <20040301161215.GA3963@vicky.ecs.soton.ac.uk>

Hello,

We are about to apply a patch that both speeds up frame allocation and makes it 
simpler by removing the frame freelist (patch #876206).

The drawback is that recursive functions would be slower now.  The two 
alternatives that we have are thus:


(1) Ignore the recursive function problem and apply the patch.

def f(n, m):
    if n > 0 and m > 0:
        return f(n-1, m) + f(n, m-1)
    else:
        return 1
f(11, 11)

Takes 3.26s instead of just 2.64s.  This is a major speed hit (20%).  On the
other hand, recursive functions are not so common in Python, and the patch
simplifies the C source interestingly.


(2) Don't remove the frame freelist but combine it with the new patch.

This would give the best of both worlds performance-wise, but the frame
allocation code becomes convoluted.  It would amount to adding a third way to
allocate a frame, and all three ways give a frame with different guaranteed
invariants (i.e. different sets of fields that we know to be already correct
and thus don't have to be initialized).

The question is whether (1) or (2) is better.
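
(For reference, measurements of this kind can be reproduced roughly with
a harness like the one below; the exact methodology behind the figures
above isn't shown, so this is only an assumption, and absolute times will
of course vary by machine.)

import time

def f(n, m):
    if n > 0 and m > 0:
        return f(n-1, m) + f(n, m-1)
    else:
        return 1

best = None
for _ in range(3):
    start = time.time()
    f(11, 11)
    elapsed = time.time() - start
    if best is None or elapsed < best:
        best = elapsed
print("f(11, 11): best of 3 runs = %.2fs" % best)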


Armin


From andreas at andreas-jung.com  Tue Mar  2 02:20:24 2004
From: andreas at andreas-jung.com (Andreas Jung)
Date: Tue Mar  2 09:13:59 2004
Subject: [Python-Dev] python installation in solaris machine
In-Reply-To: <006101c40053$810858f0$0fa8a8c0@selvam>
References: <006101c40053$810858f0$0fa8a8c0@selvam>
Message-ID: <2147483647.1078215624@[192.168.0.102]>



--On Tuesday, 2 March 2004 18:10 +0530 "V.Selvam" <vse@srasys.co.in> 
wrote:

> Hi,
> Im newb to python. I tried to install Python in my sun Solaris

Please address your problem to the python@python.org list.
This list is not for general Python questions.

> sh: ar: not found
> *** Error code 1

The error message says it all. Your system is either badly installed
or your PATH is broken (go and search for the 'ar' command, which is
needed to build the corresponding library).

-aj

From skip at pobox.com  Tue Mar  2 09:50:04 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar  2 09:50:19 2004
Subject: [Python-Dev] python installation in solaris machine
In-Reply-To: <006101c40053$810858f0$0fa8a8c0@selvam>
References: <006101c40053$810858f0$0fa8a8c0@selvam>
Message-ID: <16452.40732.497958.411244@montanaro.dyndns.org>


    VS> ar cr libpython2.2.a Modules/getbuildinfo.o
    VS> sh: ar: not found

As others have pointed out, c.l.py (aka python-list@python.org) is the
correct place for this sort of question.  You need /usr/ccs/bin in your
PATH.

Skip

From allison at sumeru.stanford.EDU  Tue Mar  2 10:58:52 2004
From: allison at sumeru.stanford.EDU (Dennis Allison)
Date: Tue Mar  2 10:59:00 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040301161215.GA3963@vicky.ecs.soton.ac.uk>
Message-ID: <Pine.LNX.4.10.10403020756010.2543-100000@sumeru.stanford.EDU>

I would be cautious about anything that slows recursive functions because
almost any interesting data structure traversal is recursive.  

On Mon, 1 Mar 2004, Armin Rigo wrote:

> Hello,
> 
> We are about to apply a patch that both speeds up frame allocation and make it 
> simpler by removing the frame freelist (patch #876206).
> 
> The drawback is that recursive functions would be slower now.  The two 
> alternatives that we have are thus:
> 
> 
> (1) Ignore the recursive function problem and apply the patch.
> 
> def f(n, m):
>     if n > 0 and m > 0:
>         return f(n-1, m) + f(n, m-1)
>     else:
>         return 1
> f(11, 11)
> 
> Takes 3.26s instead of just 2.64s.  This is a major speed hit (20%).  On the
> other hand, recursive functions are not so common in Python, and the patch
> simplifies the C source interestingly.
> 
> 
> (2) Don't remove the frame freelist but combine it with the new patch.
> 
> This would give the best of both worlds performance-wise, but the frame
> allocation code becomes convoluted.  It would amount to add a third way to
> allocate a frame, and all three ways give a frame with different guaranteed
> invariants (i.e. different sets of fields that we know to be already correct
> and thus don't have to initialize).
> 
> The question is whether (1) or (2) is better.
> 
> 
> Armin
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/allison%40sumeru.stanford.edu
> 


From guido at python.org  Tue Mar  2 11:00:09 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar  2 11:00:21 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: Your message of "Mon, 01 Mar 2004 16:12:15 GMT."
	<20040301161215.GA3963@vicky.ecs.soton.ac.uk> 
References: <20040301161215.GA3963@vicky.ecs.soton.ac.uk> 
Message-ID: <200403021600.i22G09a31576@guido.python.org>

> We are about to apply a patch that both speeds up frame allocation
> and make it simpler by removing the frame freelist (patch #876206).
> 
> The drawback is that recursive functions would be slower now.

How come?  Is it because of the deep nesting in your example, or
because of association of frames with function objects?

(Sorry, no time to read the patch.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From python at rcn.com  Tue Mar  2 11:42:48 2004
From: python at rcn.com (Raymond Hettinger)
Date: Tue Mar  2 11:44:38 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <200403021600.i22G09a31576@guido.python.org>
Message-ID: <005801c40075$67101f60$7027c797@oemcomputer>

> > The drawback is that recursive functions would be slower now.
> 
> How come?

With the current freelist approach, there can be a large pool of
pre-made frames available for each level of recursion.  On the
plus side, this means fewer calls to malloc().  On the minus side,
the pooled frames are generic to any code block and take a long
time to initialize.

The patch eliminates the freelist in favor of keeping a single
pre-made frame for each code block.  In addition to saving 
a call to malloc(), the advantage is that the pre-made frame
is custom to the code block and only half of the fields need to
be updated each time the code block is called.  

This is a nice net win for normal code blocks.  However, recursive
functions use up their one pre-made frame on the first level of
recursion.  On subsequent calls, they have to call malloc() resulting
in a net slowdown.
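
A rough pure-Python analogy of the two allocation strategies (the real
patch works on C frame objects; this only models why recursion defeats a
single cached frame per code block):

class Frame(object):
    def __init__(self, code):
        self.code = code        # stands in for the expensive generic init

_slot = {}                      # at most one pre-made frame per code block

def get_frame_single_slot(code):
    frame = _slot.pop(code, None)
    if frame is None:
        return Frame(code)      # slot empty (e.g. a recursive call): allocate
    return frame                # cheap: only some fields need refreshing

def release_frame_single_slot(frame):
    _slot.setdefault(frame.code, frame)

_freelist = []                  # shared pool, generic to any code block

def get_frame_freelist(code):
    if _freelist:
        frame = _freelist.pop()
        frame.code = code       # generic frame: the full (slower) re-init
        return frame
    return Frame(code)

def release_frame_freelist(frame):
    _freelist.append(frame)

# Two simultaneous activations of the same code block, i.e. recursion:
f1 = get_frame_single_slot("f")     # nothing cached yet: allocate
release_frame_single_slot(f1)
f2 = get_frame_single_slot("f")     # reuses the cached frame, cheaply
f3 = get_frame_single_slot("f")     # recursive call: slot empty, allocate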

As is, the patch is slim and elegant.  However, it could be built
out to have both a code-block-specific pre-made frame and a freelist.
The patch would then be much larger and somewhat ugly, but it would
avoid the speed loss for recursive functions.

Armin's quick and dirty timings for the current patch show a 10%
speedup for non-recursive functions and a 20% slowdown of 
recursive functions.

The question is whether to clutter the patch in order to save the
20% on recursive functions.


Raymond Hettinger


From guido at python.org  Tue Mar  2 11:54:00 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar  2 11:54:05 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: Your message of "Tue, 02 Mar 2004 11:42:48 EST."
	<005801c40075$67101f60$7027c797@oemcomputer> 
References: <005801c40075$67101f60$7027c797@oemcomputer> 
Message-ID: <200403021654.i22Gs0131675@guido.python.org>

> > > The drawback is that recursive functions would be slower now.
> > 
> > How come?
> 
> With the current freelist approach, there can be a large pool of
> pre-made frames available for each level of recursion.  On the
> plus side, this means fewer calls to malloc().  On the minus side,
> the pooled frames are generic to any code block and take a long
> time to initialize.
> 
> The patch eliminates the freelist in favor of keeping a single
> pre-made frame for each code block.  In addition to saving 
> a call to malloc(), the advantage is that the pre-made frame
> is custom to the code block and only half of fields need to
> be updated each time the code block is called.  
> 
> This is a nice net win for normal code blocks.  However, recursive
> functions use up their one pre-made frame on the first level of
> recursion.  On subsequent calls, they have to call malloc() resulting
> in a net slowdown.
> 
> As is, the patch is slim and elegant.  However, it could be built
> out to have both a code-block-specific pre-made frame and a freelist.
> The patch would then be much larger and somewhat ugly, but it would
> avoid the speed loss for recursive functions.
> 
> Armin's quick and dirty timings for the current patch show a 10%
> speedup for non-recursive functions and a 20% slowdown of 
> recursive functions.
> 
> The question is whether to clutter the patch in order to save the
> 20% on recursive functions.

Yes, I think it's worth the extra effort.  The last thing we need is
feeding the meme that recursion is a feature that should be avoided.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Tue Mar  2 11:57:44 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar  2 11:57:52 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <005801c40075$67101f60$7027c797@oemcomputer>
References: <200403021600.i22G09a31576@guido.python.org>
	<005801c40075$67101f60$7027c797@oemcomputer>
Message-ID: <16452.48392.280847.742883@montanaro.dyndns.org>


    Raymond> As is, the patch is slim and elegant.  However, it could be
    Raymond> built out to have both a code-block-specific pre-made frame
    Raymond> and a freelist.

Why not a list of pre-made frames for each code block (default length 1)?

Skip

From jcarlson at uci.edu  Tue Mar  2 11:56:07 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar  2 11:59:22 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <Pine.LNX.4.10.10403020756010.2543-100000@sumeru.stanford.EDU>
References: <20040301161215.GA3963@vicky.ecs.soton.ac.uk>
	<Pine.LNX.4.10.10403020756010.2543-100000@sumeru.stanford.EDU>
Message-ID: <20040302084051.1E50.JCARLSON@uci.edu>

> I would be cautious about anything that slows recursive functions because
> almost any interesting data structure traversal is recursive.  

Any recursion can be made iterative.  Sure, it can be a pain to do, but
it is always a good exercise.  See this thread for a generic and involved
example:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&safe=off&th=d04dcc083dc6d219&rnum=1

hierCodeTreePanel.new_heirarchy in the PyPE source code (pype.sourceforge.net)
also has an example.
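
A minimal illustration of that transformation on a toy tree (the Node
class and walk functions below are made up for the example):

class Node(object):
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def walk_recursive(node, visit):
    visit(node)
    for child in node.children:
        walk_recursive(child, visit)

def walk_iterative(root, visit):
    stack = [root]
    while stack:
        node = stack.pop()
        visit(node)
        # reversed() keeps the same left-to-right visit order as above
        stack.extend(reversed(node.children))

tree = Node("a", [Node("b", [Node("c")]), Node("d")])
for walk in (walk_recursive, walk_iterative):
    order = []
    walk(tree, lambda n: order.append(n.name))
    assert order == ["a", "b", "c", "d"]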


Whether the patch should be applied, I don't know; a 20% slowdown is pretty
hefty.  Is this a result of the ideas for finding a fast calling
convention discussed recently?  If so, then it doesn't seem to be all that
fast (10% faster for standard functions, 20% slower on recursive ones).
Any ideas on the speed of alternating recursions (each level alternates
between some small set of functions)?

 - Josiah


From jeremy at alum.mit.edu  Tue Mar  2 12:08:33 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar  2 12:09:03 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <005801c40075$67101f60$7027c797@oemcomputer>
References: <005801c40075$67101f60$7027c797@oemcomputer>
Message-ID: <1078247313.29097.134.camel@localhost.localdomain>

On Tue, 2004-03-02 at 11:42, Raymond Hettinger wrote:
> The patch eliminates the freelist in favor of keeping a single
> pre-made frame for each code block.  In addition to saving 
> a call to malloc(), the advantage is that the pre-made frame
> is custom to the code block and only half of fields need to
> be updated each time the code block is called.  
> 
> This is a nice net win for normal code blocks.  However, recursive
> functions use up their one pre-made frame on the first level of
> recursion.  On subsequent calls, they have to call malloc() resulting
> in a net slowdown.

Would this effect also obtain in a multi-threaded program if two
different threads called the same function concurrently?

Jeremy



From bob at redivi.com  Tue Mar  2 12:17:44 2004
From: bob at redivi.com (Bob Ippolito)
Date: Tue Mar  2 12:14:29 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <200403021654.i22Gs0131675@guido.python.org>
References: <005801c40075$67101f60$7027c797@oemcomputer>
	<200403021654.i22Gs0131675@guido.python.org>
Message-ID: <8513D5DE-6C6D-11D8-8169-000A95686CD8@redivi.com>

On Mar 2, 2004, at 11:54 AM, Guido van Rossum wrote:

>>>> The drawback is that recursive functions would be slower now.
>>>
>>> How come?
>>
>> With the current freelist approach, there can be a large pool of
>> pre-made frames available for each level of recursion.  On the
>> plus side, this means fewer calls to malloc().  On the minus side,
>> the pooled frames are generic to any code block and take a long
>> time to initialize.
>>
>> The patch eliminates the freelist in favor of keeping a single
>> pre-made frame for each code block.  In addition to saving
>> a call to malloc(), the advantage is that the pre-made frame
>> is custom to the code block and only half of fields need to
>> be updated each time the code block is called.
>>
>> This is a nice net win for normal code blocks.  However, recursive
>> functions use up their one pre-made frame on the first level of
>> recursion.  On subsequent calls, they have to call malloc() resulting
>> in a net slowdown.
>>
>> As is, the patch is slim and elegant.  However, it could be built
>> out to have both a code-block-specific pre-made frame and a freelist.
>> The patch would then be much larger and somewhat ugly, but it would
>> avoid the speed loss for recursive functions.
>>
>> Armin's quick and dirty timings for the current patch show a 10%
>> speedup for non-recursive functions and a 20% slowdown of
>> recursive functions.
>>
>> The question is whether to clutter the patch in order to save the
>> 20% on recursive functions.
>
> Yes, I think it's worth the extra effort.  The last thing we need is
> feeding the meme that recursion is a feature that should be avoided.

Well it's already true to some small extent because of the recursion 
depth limit.  Though, this has only been a problem for me once, and I 
rewrote it as a large ugly iterative version.

-bob


From jeremy at alum.mit.edu  Tue Mar  2 12:22:57 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar  2 12:23:27 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <8513D5DE-6C6D-11D8-8169-000A95686CD8@redivi.com>
References: <005801c40075$67101f60$7027c797@oemcomputer>
	<200403021654.i22Gs0131675@guido.python.org>
	<8513D5DE-6C6D-11D8-8169-000A95686CD8@redivi.com>
Message-ID: <1078248177.29097.141.camel@localhost.localdomain>

On Tue, 2004-03-02 at 12:17, Bob Ippolito wrote:
> Well it's already true to some small extent because of the recursion 
> depth limit.  Though, this has only been a problem for me once, and I 
> rewrote it as a large ugly iterative version.

Don't know the particulars, but I thought it worth mentioning that the
recursion limit is set fairly conservatively.  In many cases, it's
possible to boost the recursion limit.  I don't know if it's ever
practical, because if something goes wrong you get a segfault.

Different kinds of recursive Python calls generate different numbers of
C stack frames.  So if you have a recursive __repr__(), it will consume
more C stack than a simple Python function.

Python's recursion limit is set to 1000 in 2.3.  On my box (RH9) the
first crash I get is with a recursion limit of 5800.  It blows up on the
repr case.  For simple recursive functions, I can push it all the way up
to 21,500.
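
A small probe along those lines (the numbers are machine- and
build-dependent, and raising the limit too far really can crash the
process):

import sys

def depth(n=0):
    try:
        return depth(n + 1)
    except RuntimeError:        # "maximum recursion depth exceeded"
        return n

print("recursion limit: %d" % sys.getrecursionlimit())
print("plain Python frames reached: %d" % depth())

sys.setrecursionlimit(5000)     # boosting it is allowed, but at your own risk
print("frames reached after raising the limit: %d" % depth())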

Jeremy



From tim.one at comcast.net  Tue Mar  2 12:23:29 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar  2 12:23:33 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <16452.48392.280847.742883@montanaro.dyndns.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>

[Skip]
> Why not a list of pre-made frames for each code block (default length 1)?

Or, IOW, per-code-block frame freelists (plural).  Recursive generators have
gotten quite popular in my code <wink>.

BTW, if a gazillion distinct functions run when starting up a large app, do
we hang on to the memory for their gazillion distinct frames forever?
Recycling from a common frame pool has memory benefits in cases other than
just recursion.  Experiment:  run test.py from a Zope 2 or Zope 3 checkout,
and look at highwater memory consumption with and without the patch.


From allison at sumeru.stanford.EDU  Tue Mar  2 12:24:10 2004
From: allison at sumeru.stanford.EDU (Dennis Allison)
Date: Tue Mar  2 12:24:18 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040302084051.1E50.JCARLSON@uci.edu>
Message-ID: <Pine.LNX.4.10.10403020914110.2543-100000@sumeru.stanford.EDU>


Josiah,

Since "any recursiton can be made iterative", would you support removing
recursion from Python (-:

Recursion is a natural way to program many things, particularly in Python.  
I believe it's a bad design choice to penalize recursive functions.  On
the other hand, specializing the code for non-recursive functions is
appealing--so, keep the free list and add the pre-built frame and bite the
bullet on complexity per Guido's comment.


On Tue, 2 Mar 2004, Josiah Carlson wrote:

> > I would be cautious about anything that slows recursive functions because
> > almost any interesting data structure traversal is recursive.  
> 
> Any recursion can be made iterative.  Sure, it can be a pain to do, but
> it is always a good exercise.  See this thread for a generic and involved
> example:
> http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&safe=off&th=d04dcc083dc6d219&rnum=1
> 
> hierCodeTreePanel.new_heirarchy in the PyPE source code (pype.sourceforge.net)
> also has an example.


From mwh at python.net  Tue Mar  2 12:26:56 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar  2 12:27:00 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <1078248177.29097.141.camel@localhost.localdomain> (Jeremy
	Hylton's message of "Tue, 02 Mar 2004 12:22:57 -0500")
References: <005801c40075$67101f60$7027c797@oemcomputer>
	<200403021654.i22Gs0131675@guido.python.org>
	<8513D5DE-6C6D-11D8-8169-000A95686CD8@redivi.com>
	<1078248177.29097.141.camel@localhost.localdomain>
Message-ID: <2moerfcg7z.fsf@starship.python.net>

Jeremy Hylton <jeremy@alum.mit.edu> writes:

> On Tue, 2004-03-02 at 12:17, Bob Ippolito wrote:
>> Well it's already true to some small extent because of the recursion 
>> depth limit.  Though, this has only been a problem for me once, and I 
>> rewrote it as a large ugly iterative version.
>
> Don't know the particulars, but I thought it worth mentioning that the
> recursion limit is set fairly conservatively.  In many cases, it's
> possible to boost the recursion limit.  I don't know if it's ever
> practical, because if something goes wrong you get a segfault.
>
> Different kinds of recursive Python calls generate different numbers of
> C stack frames.  So if you have a recursive __repr__(), it will consume
> more C stack than a simple Python function.
>
> Python's recursion limit is set to 1000 in 2.3.  On my box (RH9) the
> first crash I get is with a recursion limit of 5800.  It blows up on the
> repr case.  For simple recursive functions, I can push it all the way up
> to 21,500.

But list_sort pushes quite a large object onto the stack.  So if you
arrange for that to appear on the stack repeatedly, you get a crash
much earlier.

Python's recursion limit is a nasty hack.  Unfortunately, all the
alternatives are worse :-( (except some variant of stackless, I guess,
but that probably doesn't help the list_sort business).

Cheers,
mwh

-- 
  I have a cat, so I know that when she digs her very sharp claws into
  my chest or stomach it's really a sign of affection, but I don't see
  any reason for programming languages to show affection with pain.
                                        -- Erik Naggum, comp.lang.lisp

From skip at pobox.com  Tue Mar  2 12:35:00 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar  2 12:35:12 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
References: <16452.48392.280847.742883@montanaro.dyndns.org>
	<LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
Message-ID: <16452.50628.620206.348577@montanaro.dyndns.org>


    Tim> [Skip]
    >> Why not a list of pre-made frames for each code block (default length
    >> 1)?

    Tim> Or, IOW, per-code-block frame freelists (plural).  Recursive
    Tim> generators have gotten quite popular in my code <wink>.

Yeah.  I don't know how to handle the memory release issues, but it seems
cleaner to me to do things just one way instead of having both a single
per-code block frame *and* a general-purpose frame free list.

Skip

From tjreedy at udel.edu  Tue Mar  2 14:37:45 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue Mar  2 14:37:48 2004
Subject: [Python-Dev] Re: [ python-Patches-876206 ] scary frame speed hacks
References: <20040301161215.GA3963@vicky.ecs.soton.ac.uk>
Message-ID: <c22nq7$ucb$1@sea.gmane.org>


"Armin Rigo" <arigo@tunes.org> wrote in message
news:20040301161215.GA3963@vicky.ecs.soton.ac.uk...
> Hello,
>
> Takes 3.26s instead of just 2.64s.  This is a major speed hit (20%).

Yes.  People will notice.  I feel safe in predicting that an avoidable 20%
slowdown will generate 'strong' objections, especially from devotees of the
functional/recursive style.

> On the other hand, recursive functions are not so common in Python,

I think the frequency varies considerably depending on the problem domain
and the inductional persuasion of the programmer: iterationist vs.
recursionist vs. inbetweenist.

While Python, with its 'for' statements, encourages the iterative form of
linear induction, branching recursion is usually much clearer than the
iterative equivalent, and Python is meant to encourage readable code.
Since branching recursion usually generates a wide but shallow (virtual)
tree of argument structures, the depth limit is usually not a problem for
such cases.

> (2) Don't remove the frame freelist but combine it with the new patch.
> This would give the best of both worlds performance-wise, but the frame
> allocation code becomes convoluted.

I have sometimes thought that maybe there should be a means of 'declaring',
or perhaps 'detecting' a function to be recursive, with two ramifications:

1. Recursive functions would be faster if the definition name of the
function, when used for recursive calls within the function, could be coded
as a local rather than a global, and perhaps even a local constant, so that
the code was hard-coded to call itself.  (I am aware that the separation of
code and function might make this a bit tricky.)

2. Multiple execution frames per function are only needed for recursion
(which is why old Fortran, with only one per, forbade recursion, even
indirectly).  So non-recursive functions could perhaps be sped up -- as you
are proposing to do.

(Automated recursion to iteration transformation is a more pie-in-the-sky
third possibility.)

If function decorators are added, perhaps there could be a builtin
recursive() decorator that would manipulate the function and maybe the code
for faster running.
To justify asking people to run around revising their code, this should
preferably be faster than at present, and not just as fast.
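
As a rough illustration of the name-binding half of this (not the decorator
itself -- the closure below merely turns the recursive-call lookup from a
global into a cell variable, which is the general direction such a
recursive() decorator would push further):

    def make_fact():
        def fact(n):
            # the recursive call to 'fact' is a closure-cell load here,
            # not a global-name lookup
            if n <= 1:
                return 1
            return n * fact(n - 1)
        return fact

    fact = make_fact()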

Terry J. Reedy




From python at rcn.com  Tue Mar  2 14:42:08 2004
From: python at rcn.com (Raymond Hettinger)
Date: Tue Mar  2 14:43:57 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
Message-ID: <002701c4008e$737c52a0$ebad2c81@oemcomputer>

> [Skip]
> > Why not a list of pre-made frames for each code block (default length 1)?
>
> Or, IOW, per-code-block frame freelists (plural).  Recursive generators
> have gotten quite popular in my code <wink>.
>
> BTW, if a gazillion distinct functions run when starting up a large app,
> do we hang on to the memory for their gazillion distinct frames forever?
> Recycling from a common frame pool has memory benefits in cases other
> than just recursion.  Experiment:  run test.py from a Zope 2 or Zope 3
> checkout, and look at highwater memory consumption with and without the
> patch.

We should look at keeping the freelist; when a code block needs a frame,
it can request the one that it last used, if available.  Roughly (with
freelist as a dict keyed by code-object id):

def getblock(idnum):
    # prefer the frame this code block used last time
    if idnum in freelist:
        return freelist.pop(idnum)
    # otherwise reuse any cached frame
    if freelist:
        return freelist.popitem()[1]
    return makeNewFrame()


Raymond


From pje at telecommunity.com  Tue Mar  2 15:03:40 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar  2 15:03:44 2004
Subject: [Python-Dev] Re: [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <c22nq7$ucb$1@sea.gmane.org>
References: <20040301161215.GA3963@vicky.ecs.soton.ac.uk>
Message-ID: <5.1.1.6.0.20040302145805.02b894c0@telecommunity.com>

At 02:37 PM 3/2/04 -0500, Terry Reedy wrote:

>"Armin Rigo" <arigo@tunes.org> wrote in message
>news:20040301161215.GA3963@vicky.ecs.soton.ac.uk...
> > Hello,
> >
> > Takes 3.26s instead of just 2.64s.  This is a major speed hit (20%).
>
>Yes.  People will notice.  I feel safe in predicting that an avoidable 20%
>slowdown will generate 'strong' objections, especially from devotees of the
>functional/recursive style.
>
> > On the other hand, recursive functions are not so common in Python,
>
>I think the frequency varies considerably depending on the problem domain
>and the inductional persuasion of the programmer: iterationist vs.
>recursionist vs. inbetweenist.
>
>While Python, with its 'for' statements, encourages the iterative form of
>linear induction, branching recursion is usually much clearer than the
>iterative equivalent, and Python is meant to encourage readable code.
>Since branching recursion usually generates a wide but shallow (virtual)
>tree of argument structures, the depth limit is usually not a problem for
>such cases.

It's not just recursive functions at issue.  It's also:

* a method that might call the same method on another instance (possibly 
indirectly via several other function or method calls)

* a generator function that might have more than one outstanding instance

* a function or method called from multiple threads

ALL of these scenarios involve multiple, simultaneously-active frames for a 
given code object, and most are non-obvious at the point of inspection.
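
A tiny illustration of the generator case -- two simultaneously-active
frames for a single code object (toy example):

    def countdown(n):
        while n:
            yield n
            n -= 1

    a = countdown(3)
    b = countdown(3)     # second outstanding instance of the same code object
    a.next(); b.next()   # both frames are now alive and suspended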


From greg at cosc.canterbury.ac.nz  Tue Mar  2 19:29:00 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  2 19:30:40 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
Message-ID: <200403030029.i230T0129393@oma.cosc.canterbury.ac.nz>

Tim Peters <tim.one@comcast.net>:

> BTW, if a gazillion distinct functions run when starting up a large app, do
> we hang on to the memory for their gazillion distinct frames forever?

If you have a gazillion distinct functions in memory at
once, you've got a gazillion code objects, plus associated
function objects, arg name tuples, etc... Adding a stack
frame to each of these probably won't make a huge difference,
relatively speaking.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Tue Mar  2 19:21:44 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  2 19:31:31 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <005801c40075$67101f60$7027c797@oemcomputer>
Message-ID: <200403030021.i230Lim29377@oma.cosc.canterbury.ac.nz>

Raymond Hettinger <python@rcn.com>:

> The patch eliminates the freelist in favor of keeping a single
> pre-made frame for each code block.  In addition to saving 
> a call to malloc(), the advantage is that the pre-made frame
> is custom to the code block and only half of fields need to
> be updated each time the code block is called.  

How about keeping a list of pre-made frames for each
code block? That would give you the best of both while
still only having two initialisation cases.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tismer at stackless.com  Tue Mar  2 19:34:48 2004
From: tismer at stackless.com (Christian Tismer)
Date: Tue Mar  2 19:35:14 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
Message-ID: <40452828.1060706@stackless.com>

Tim Peters wrote:

> [Skip]
> 
>>Why not a list of pre-made frames for each code block (default length 1)?
> 
> 
> Or, IOW, per-code-block frame freelists (plural).  Recursive generators have
> gotten quite popular in my code <wink>.

This is exactly what I was going to propose.
A freelist per code object would be almost as
efficient as the current freelist.

But the drawback... (well, Tim is first here, too :)

> BTW, if a gazillion distinct functions run when starting up a large app, do
> we hang on to the memory for their gazillion distinct frames forever?
> Recycling from a common frame pool has memory benefits in cases other than
> just recursion.  Experiment:  run test.py from a Zope 2 or Zope 3 checkout,
> and look at highwater memory consumption with and without the patch.

What I used in Stackless was an array of cached objects
which is indexed by object size, only for small sizes of course.
When the total number of cached zombies reaches some watermark,
I clear the whole cache.

My proposal for this frame caching is as follows:
Keep a small array of cached pre-initialized frames,
but index it by a very simple hash function rather than by size.
Allocation then takes the address (== id()) of the code object
modulo some prime (the array size), indexes the cache, and checks
whether the cached frame there belongs to the identical code object.
If it does, take it; otherwise malloc a new frame.
On deallocation, insert the frame into its proper slot.
Test the total number of cached frames, and if it reaches the
watermark, clear the whole cache.
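
A rough Python model of the idea (made-up names, with a toy Zombie class
standing in for PyFrameObject; the real thing would of course be C inside
the interpreter):

    CACHE_SIZE = 251            # some prime
    WATERMARK = 200

    class Zombie:
        """Stand-in for a pre-initialized PyFrameObject."""
        def __init__(self, code):
            self.f_code = code

    cache = [None] * CACHE_SIZE
    cached = 0                  # number of frames currently in the cache

    def alloc_frame(code):
        global cached
        slot = id(code) % CACHE_SIZE
        frame = cache[slot]
        if frame is not None and frame.f_code is code:
            cache[slot] = None
            cached -= 1
            return frame        # hit: reuse the pre-initialized frame
        return Zombie(code)     # miss: "malloc" a fresh frame

    def free_frame(frame):
        global cached
        if cached >= WATERMARK:
            for i in range(CACHE_SIZE):     # watermark reached: clear all
                cache[i] = None
            cached = 0
        slot = id(frame.f_code) % CACHE_SIZE
        if cache[slot] is None:
            cached += 1
        cache[slot] = frame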

cheers - chris
-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From pje at telecommunity.com  Tue Mar  2 20:31:39 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar  2 20:26:53 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed
  hacks
In-Reply-To: <200403030029.i230T0129393@oma.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
Message-ID: <5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>

At 01:29 PM 3/3/04 +1300, Greg Ewing wrote:
>Tim Peters <tim.one@comcast.net>:
>
> > BTW, if a gazillion distinct functions run when starting up a large app, do
> > we hang on to the memory for their gazillion distinct frames forever?
>
>If you have a gazillion distinct functions in memory at
>once, you've got a gazillion code objects, plus associated
>function objects, arg name tuples, etc... Adding a stack
>frame to each of these probably won't make a huge difference,
>relatively speaking.

Frames are over three times larger than a function and a code object put 
together:

Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.8 -- press F1 for help
 >>> import types
 >>> types.FrameType.__basicsize__
336
 >>> types.FunctionType.__basicsize__
40
 >>> types.CodeType.__basicsize__
64

Of course, this doesn't include the bytecode length, but bytecode is quite 
compact.


From greg at cosc.canterbury.ac.nz  Tue Mar  2 21:03:19 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  2 21:04:02 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>
Message-ID: <200403030203.i2323J101697@oma.cosc.canterbury.ac.nz>

"Phillip J. Eby" <pje@telecommunity.com>:

> Frames are over three times larger than a function and a code object put 
> together:

Yow! I hadn't realised that.

Do frames *really* need to be that big...?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From neal at metaslash.com  Tue Mar  2 21:15:41 2004
From: neal at metaslash.com (Neal Norwitz)
Date: Tue Mar  2 21:15:48 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <200403030203.i2323J101697@oma.cosc.canterbury.ac.nz>
References: <5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>
	<200403030203.i2323J101697@oma.cosc.canterbury.ac.nz>
Message-ID: <20040303021541.GB469@epoch.metaslash.com>

On Wed, Mar 03, 2004 at 03:03:19PM +1300, Greg Ewing wrote:
> "Phillip J. Eby" <pje@telecommunity.com>:
> 
> > Frames are over three times larger than a function and a code object put 
> > together:
> 
> Yow! I hadn't realised that.
> 
> Do frames *really* need to be that big...?

In Include/frameobject.h, b_type and b_level in PyTryBlock can be
combined into a single 32-bit value, rather than two.  There
is a bit more processing to pull the values apart.  IIRC,
there was a very small, but measurable performance hit.
You can also decrease CO_MAXBLOCKS.  I was able to drop the
size to under 256 bytes.  But perf was still a bit off.

My goal was to get the frame size small enough for PyMalloc.
I don't think I ever tried to change the allocs.  But since
I never got it faster, I dropped it.

Neal

From tim.one at comcast.net  Tue Mar  2 21:20:16 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar  2 21:20:22 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <200403030029.i230T0129393@oma.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEPJEAB.tim.one@comcast.net>

[Greg Ewing]
> If you have a gazillion distinct functions in memory at
> once, you've got a gazillion code objects, plus associated
> function objects, arg name tuples, etc... Adding a stack
> frame to each of these probably won't make a huge difference,
> relatively speaking.

These aren't C frames -- PyFrameObject is an extraordinarily large struct (>
350 bytes baseline, for a bare frame with no locals, no cells, and no eval
stack).


From tismer at stackless.com  Tue Mar  2 21:22:51 2004
From: tismer at stackless.com (Christian Tismer)
Date: Tue Mar  2 21:23:04 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040303021541.GB469@epoch.metaslash.com>
References: <5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>	<200403030203.i2323J101697@oma.cosc.canterbury.ac.nz>
	<20040303021541.GB469@epoch.metaslash.com>
Message-ID: <4045417B.3040906@stackless.com>

Neal Norwitz wrote:

> On Wed, Mar 03, 2004 at 03:03:19PM +1300, Greg Ewing wrote:
> 
>>"Phillip J. Eby" <pje@telecommunity.com>:
>>
>>
>>>Frames are over three times larger than a function and a code object put 
>>>together:
>>
>>Yow! I hadn't realised that.
>>
>>Do frames *really* need to be that big...?
> 
> 
> In Include/frameobject.h, b_type and b_level can be combined to
> a single 32-bit value, rather than two in PyTryBlock.  There
> is a bit more processing to pull the values apart.  IIRC,
> there was a very small, but measurable performance hit.
> You can also decrease CO_MAXBLOCKS.  I was able to drop the
> size to under 256 bytes.  But perf was still a bit off.

I think that, instead of or in addition to folding block items,
the maximum size of the blockstack *could* be computed
at compile time, instead of using a fixed 20-level-deep structure.
It would complicate things a little more, again, but the memory
savings would be great.

ciao - chris

p.s.:
For a small but simple saving: f_tstate can be completely dropped.
Did it for Stackless, already. Maybe I'll submit a patch. :-)

-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From tim.one at comcast.net  Tue Mar  2 22:45:46 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar  2 22:45:54 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040303021541.GB469@epoch.metaslash.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFFJEAB.tim.one@comcast.net>

[Neal Norwitz]
> In Include/frameobject.h, b_type and b_level can be combined to
> a single 32-bit value, rather than two in PyTryBlock.  There
> is a bit more processing to pull the values apart.  IIRC,
> there was a very small, but measurable performance hit.
> You can also decrease CO_MAXBLOCKS.  I was able to drop the
> size to under 256 bytes.  But perf was still a bit off.

PyTryBlock is indeed the frameobject memory pig, but it shouldn't need to
be -- the size needed varies by code object, but is fixed per code object,
and is usually very small (IIRC, it's the max nesting depth in the body of
the code, counting only nested loops and "try" structures (not counting
nested def, class, or "if" structures)).  So it should be possible, e.g., to
shove it off to the variable-size tail of a frame, and allocate only exactly
as much as a given code object needs.  For that matter, we wouldn't need an
arbitrary upper bound (currently 20) on nesting depth then either.

> My goal was to drop the frame size small enough for PyMalloc.
> I don't think I ever tried to change the allocs.  But since
> I never got it faster, I dropped it.

Note that since frames participate in cyclic gc, each needs another 12
(Linux) or 16 (Windows) bytes for the gc header too.  That's why I said ">
350" at the start when everyone else was quoting basicsize as if the latter
had something to do with reality <wink>.


From bac at OCF.Berkeley.EDU  Tue Mar  2 23:50:15 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Tue Mar  2 23:58:54 2004
Subject: [Python-Dev] second opinion for proposed fix for bug #754449
Message-ID: <40456407.3060505@ocf.berkeley.edu>

So http://python.org/sf/754449 is a bug report about threading: an 
exception is raised when the interpreter is being shut down, because 
teardown sets a function in the threading module to None, and the 
shutdown code for threading.Thread then tries to execute that now-dead 
function (what a mouthful).

Basically the exception stems from a call to threading.currentThread in 
the _Condition class (which is just Condition) which, according to the 
comment along with the call, is just for its side-effect.  Well, I 
looked and I can't find the need for the side-effect.  I took out the 
code and ran the regression tests and nothing failed because of it.

Since the code says the call is for the explicit side-effect I just 
wanted another pair of eyes to quickly scan the code in threading.py and 
either agree with me or tell me I am blind and point out where the 
side-effect is needed for Condition instances.  The offending 
currentThread calls can be found on lines 196 and 238 (can also just 
search for "side-effect" to find them).

-Brett

From aahz at pythoncraft.com  Wed Mar  3 04:58:23 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar  3 04:58:37 2004
Subject: [Python-Dev] second opinion for proposed fix for bug #754449
In-Reply-To: <40456407.3060505@ocf.berkeley.edu>
References: <40456407.3060505@ocf.berkeley.edu>
Message-ID: <20040303095823.GA20148@panix.com>

On Tue, Mar 02, 2004, Brett C. wrote:
> 
> Basically the exception stems from a call to threading.currentThread in 
> the _Condition class (which is just Condition) which, according to the 
> comment along with the call, is just for its side-effect.  Well, I 
> looked and I can't find the need for the side-effect.  I took out the 
> code and ran the regression tests and nothing failed because of it.
> 
> Since the code says the call is for the explicit side-effect I just 
> wanted another pair of eyes to quickly scan the code in threading.py and 
> either agree with me or tell me I am blind and point out where the 
> side-effect is needed for Condition instances.  The offending 
> currentThread calls can be found on lines 196 and 238 (can also just 
> search for "side-effect" to find them).

The only generic side-effect I can see from calling currentThread()
(after drilling down to all the .h files for OS-specific stuff) is that
if the current thread isn't a thread created by threading, it calls
_DummyThread().  Dunno whether this is actually needed, no brainpower to
figure that out.  Do the regression tests make sure that threading works
with threads created by ``thread``?
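
A quick manual check of that fallback (just a sketch, not a proper
regression test):

    import thread, threading, time

    def worker():
        # for a thread the threading module didn't create,
        # currentThread() falls back to a dummy thread object
        print threading.currentThread()

    thread.start_new_thread(worker, ())
    time.sleep(1)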
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From guido at python.org  Wed Mar  3 10:04:02 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar  3 10:04:09 2004
Subject: [Python-Dev] second opinion for proposed fix for bug #754449
In-Reply-To: Your message of "Tue, 02 Mar 2004 20:50:15 PST."
	<40456407.3060505@ocf.berkeley.edu> 
References: <40456407.3060505@ocf.berkeley.edu> 
Message-ID: <200403031504.i23F42401410@guido.python.org>

> So http://python.org/sf/754449 is a bug report about threading and an 
> exception being raised when the interpreter is being shut down, setting 
> a function in threading to None because of teardown, and then trying to 
> execute that now killed function in the shutdown code for 
> threading.Thread (what a mouthful).

Ouch.

> Basically the exception stems from a call to threading.currentThread in 
> the _Condition class (which is just Condition) which, according to the 
> comment along with the call, is just for its side-effect.  Well, I 
> looked and I can't find the need for the side-effect.  I took out the 
> code and ran the regression tests and nothing failed because of it.
> 
> Since the code says the call is for the explicit side-effect I just 
> wanted another pair of eyes to quickly scan the code in threading.py and 
> either agree with me or tell me I am blind and point out where the 
> side-effect is needed for Condition instances.  The offending 
> currentThread calls can be found on lines 196 and 238 (can also just 
> search for "side-effect" to find them).

This is my code.  It was long ago.  I think the intended side effect
is creating a dummy thread *IF* the calling thread was not created by
the threading module but by thread.start_new_thread().  The current
thread in turn is only relevant when the Condition's lock is an RLock.
But the only thing that is ever done with that is comparing it with
the outcome of another call to currentThread().  So I *THINK* that the
side effect isn't particularly needed.

(But the threading module references other globals, so I'm not sure
that ripping out the currentThread() calls is really enough, except in
this particular app's case.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jim.jewett at eds.com  Wed Mar  3 10:20:08 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Wed Mar  3 10:20:19 2004
Subject: [Python-Dev] Pep 318 - new syntax for wrappers
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D360@USAHM010.amer.corp.eds.com>

I think everyone agrees that decorators are useful, and the 
current idiom is awkward; the only real argument is over how 
to improve it.  Proponents want something easy to type; others 
want something explicit.  
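
(For reference, the current idiom under discussion is the post-hoc
rebinding -- a toy example:

    class C:
        def frobnicate(cls, x):
            return x * 2
        frobnicate = classmethod(frobnicate)   # the "decoration" happens here

where the proposals below fold that last line into the 'def' itself.)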


(1)  Eventually, there will be another good idea.

Therefore, we should define a general extension syntax, 
rather than a special case for this particular extension.

    def func(args) [wrapper1, wrapper2]:

uses up the syntax forever.

    def func(args) mods [wrapper1, wrapper2]:

allows later extensions like

    def func(args) alias [wrapper1, wrapper2]:
    def func(args) compiler_hints [wrapper1, wrapper2]:
    def func(args) my_domain_special_use [wrapper1, wrapper2]:


(2)  Wrappers are not essential to understanding functions, 
and should not pretend to be.  

In most languages, classifiers are always relevant.  Defaults
might save you from *writing* "int", "public", or "synchronized", 
but the classification still happens.  If you want your code 
to be robust, you need to at least understand the possible
classifiers, and when to use them.  This means you need to
learn something about each of them soon after "Hello World".

In python, these extensions really are optional.  The syntax 
should make it clear that newbies can ignore the whole 
construction, and do not need to know any specific wrappers.

    def func(args) [wrapper1, wrapper2]:

suggests that the extensions are part of basic function syntax.
Lack of documentation for wrapper1 and wrapper2 (which may not 
be part of the core language) presents an extra barrier.  So
does the fact that this is a new meaning for [], at least under
the current patch.  People can learn to ignore it -- but only if
they have already committed to python.  Each new barrier to 
entry makes a newbie more likely to give up before starting.

    def func(args) decorated_by [wrapper1, wrapper2]:

looks optional.  Shorter keywords are OK too, if they're still
clear; these were the shortest suggestions:

    def func(args) mods [wrapper1, wrapper2]:  #MODifierS
    def func(args) as [wrapper1, wrapper2]:    #!but import as=alias
    def func(args) X [wrapper1, wrapper2]:     #x=Transform

Also note that using a keyword (such as "mods", "as", or "X") 
makes it easier to replace literal "[ ]" with a more general 
(sequence) expression later.

-jJ

From skip at pobox.com  Wed Mar  3 10:38:30 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar  3 10:38:41 2004
Subject: [Python-Dev] Pep 318 - new syntax for wrappers
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D360@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D360@USAHM010.amer.corp.eds.com>
Message-ID: <16453.64502.610729.377892@montanaro.dyndns.org>


    Jim> (1)  Eventually, there will be another good idea.

    Jim> Therefore, we should define a general extension syntax, rather than
    Jim> a special case for this particular extension.

    Jim>     def func(args) [wrapper1, wrapper2]:

    Jim> uses up the syntax forever.

    Jim>     def func(args) mods [wrapper1, wrapper2]:

    Jim> allows later extensions like

    Jim>     def func(args) alias [wrapper1, wrapper2]:
    Jim>     def func(args) compiler_hints [wrapper1, wrapper2]:
    Jim>     def func(args) my_domain_special_use [wrapper1, wrapper2]:

I don't think the second version is needed.  Consider the compiler_hints
extension:

    def func (args) [hint1, hint2]:
        pass

    def hint1(func):
        func.hint1 = True
        return func

    def hint2(func):
        # ... mess with func.func_code here ...
        return func

There's no requirement that the decorations have to return a different
function.  They can return the same function and just operate through side
effects on the input function.

Skip

From arigo at tunes.org  Wed Mar  3 11:22:35 2004
From: arigo at tunes.org (Armin Rigo)
Date: Wed Mar  3 11:25:24 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <2moerfcg7z.fsf@starship.python.net>
References: <005801c40075$67101f60$7027c797@oemcomputer>
	<200403021654.i22Gs0131675@guido.python.org>
	<8513D5DE-6C6D-11D8-8169-000A95686CD8@redivi.com>
	<1078248177.29097.141.camel@localhost.localdomain>
	<2moerfcg7z.fsf@starship.python.net>
Message-ID: <20040303162235.GA9808@vicky.ecs.soton.ac.uk>

Hello Michael,

On Tue, Mar 02, 2004 at 05:26:56PM +0000, Michael Hudson wrote:
> But list_sort pushes quite a large object onto the stack.  So if you
> arrange for that too appear on the stack repeatedly, you get a crash
> much earlier.
> 
> Python's recursion limit is a nasty hack.  Unfortunately, all the
> alternatives are worse :-( (except some variant of stackless, I guess,
> but that probably doesn't help the list_sort business).

What about limiting the byte size of the C stack, instead of the recursion
level?  You can estimate the current size of the stack in a C program fairly
easily (although it is a clear hack, I expect it to work on any platform): get
the difference between a pointer to some local variable and a (previously
saved) pointer to a local variable in some C frame near the bottom.


Armin


From mwh at python.net  Wed Mar  3 13:27:40 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar  3 14:15:37 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>
	(Phillip J. Eby's message of "Tue, 02 Mar 2004 20:31:39 -0500")
References: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
	<5.1.0.14.0.20040302202921.020d8020@mail.telecommunity.com>
Message-ID: <2my8qhbxb7.fsf@starship.python.net>

"Phillip J. Eby" <pje@telecommunity.com> writes:

> Of course, this doesn't include the bytecode length, but bytecode is
> quite compact.

The median length of bytecode strings in Lib/*.py (not counting module
top level code, I think) was 52 bytes (~36 instructions) a short while
back, if memory serves correctly.

I can't remember why I calculated this...
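
If anyone wants to recompute it, a rough sketch (untested; it digs nested
code objects out of co_consts):

    import glob, os

    def all_codes(co):
        yield co
        for const in co.co_consts:
            if hasattr(const, 'co_code'):
                for sub in all_codes(const):
                    yield sub

    lengths = []
    for path in glob.glob(os.path.join(os.path.dirname(os.__file__), '*.py')):
        try:
            top = compile(open(path).read() + '\n', path, 'exec')
        except SyntaxError:
            continue
        for co in all_codes(top):
            if co is not top:               # skip module top-level code
                lengths.append(len(co.co_code))
    lengths.sort()
    print "median bytecode length:", lengths[len(lengths) // 2]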

Cheers,
mwh

-- 
  That one is easily explained away as massively intricate
  conspiracy, though.            -- Chris Klein, alt.sysadmin.recovery

From jim.jewett at eds.com  Wed Mar  3 14:39:37 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Wed Mar  3 14:43:05 2004
Subject: [Python-Dev] Pep 318 - new syntax (esp for wrappers)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D365@USAHM010.amer.corp.eds.com>

Bob Ippolito

> ANYTHING that transforms a function in 
> ANY way can be represented by this decorator syntax.

"can" != "should".

The transforms *can* be represented in the current 
syntax, but we agree that the current idiom should 
be improved.  Different changes may be preferable 
for other patterns.

>> (2)  Wrappers are not essential to understanding 
>> functions, and should not pretend to be. ...

>> In most languages, classifiers are always relevant.
>> {int, public, synchronized, ...}

> [In python, these wrappers] are not essential to
> understanding how to make your own functions or methods,

exactly.

> Since when do newbies read the "language lawyer" documentation?

Depends on the newbie, and the definition of "language lawyer".

Most programmers want a manual or reference when they're first
learning, and will turn to it when they get confused.  (Hey, 
I got a syntax error!  What's up?  Hey, this function just 
changed its signature!)  Then they'll see the full syntax.  

> No tutorials or existing code ... use this syntax yet, so newbies 
> are obviously not going to see it.

There will always be more newbies -- even a year after this
enters the language.

-jJ

From bob at redivi.com  Wed Mar  3 14:59:26 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar  3 14:56:18 2004
Subject: [Python-Dev] Pep 318 - new syntax (esp for wrappers)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D365@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D365@USAHM010.amer.corp.eds.com>
Message-ID: <466B23E6-6D4D-11D8-8126-000A95686CD8@redivi.com>

On Mar 3, 2004, at 2:39 PM, Jewett, Jim J wrote:

> Bob Ippolito
>
>> ANYTHING that transforms a function in
>> ANY way can be represented by this decorator syntax.
>
> "can" != "should".

Eh.  "can" doesn't IMPLY "should", but that doesn't mean they're always 
unequal.  In other words, I can't think of an 
explicit-function-transform that I would want to represent in a 
different way, unless I wanted a copy of the original hanging around.

> The transforms *can* be represented in the current
> syntax, but we agree that the current idiom should
> be improved.  Different changes may be preferable
> for other patterns.

Like what?

>>> (2)  Wrappers are not essential to understanding
>>> functions, and should not pretend to be. ...
>
>>> In most languages, classifiers are always relevant.
>>> {int, public, synchronized, ...}
>
>> [In python, these wrappers] are not essential to
>> understanding how to make your own functions or methods,
>
> exactly.
>
>> Since when do newbies read the "language lawyer" documentation?
>
> Depends on the newbie, and the definition of "language lawyer".

from http://www.python.org/doc/2.3.3/

	Language Reference
	(for language lawyers)

It's a 'little' dense for a newbie.

> Most programmers want a manual or reference when they're first
> learning, and will turn to it when they get confused.  (Hey,
> I got a syntax error!  What's up?  Hey, this function just
> changed its signature!)  Then they'll see the full syntax.

And they'll also see that they haven't needed it yet, and it's 
presumably not in the beginner tutorials, so they don't need to learn 
it just yet.  I don't see how it's any different than say, using 
__radd__, the yield keyword, the exec statement, or some of the more 
advanced exception handling constructs.  From what I've seen, beginners 
don't really even notice that these things exist, and that's good.  I 
don't see extended function syntax being any different.

>> No tutorials or existing code ... use this syntax yet, so newbies
>> are obviously not going to see it.
>
> There will always be more newbies -- even a year after this
> enters the language.

That doesn't mean the tutorials and code that newbies are going to be 
looking at are going to start using this syntax a year from when this 
enters the language.

-bob


From bob at redivi.com  Wed Mar  3 13:55:08 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar  3 15:01:24 2004
Subject: [Python-Dev] Pep 318 - new syntax for wrappers
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D360@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D360@USAHM010.amer.corp.eds.com>
Message-ID: <4AF65864-6D44-11D8-8126-000A95686CD8@redivi.com>

On Mar 3, 2004, at 10:20 AM, Jewett, Jim J wrote:

> I think everyone agrees that decorators are useful, and the
> current idiom is awkward; the only real argument is over how
> to improve it.  Proponents want something easy to type; others
> want something explicit.
>
>
> (1)  Eventually, there will be another good idea.
>
> Therefore, we should define a general extension syntax,
> rather than a special case for this particular extension.
>
>     def func(args) [wrapper1, wrapper2]:
>
> uses up the syntax forever.

I agree with Skip on this one, ANYTHING that transforms a function in 
ANY way can be represented by this decorator syntax.

The only thing we can't change is the fact that Python will compile the 
code block once before we get ahold of it, and we have no control over 
the Python compiler/grammar while it is being compiled.  These might be 
more relevant for something like PyPy though, but there's nothing that 
would really stop you from throwing keywords *inside* the [wrapper] 
block if you need to let the compiler know about something if/when that 
ever happens.

> (2)  Wrappers are not essential to understanding functions,
> and should not pretend to be.

They are not essential to understanding how to make your own functions 
or methods, but they are essential to understanding other people's code 
that uses them:

	def foo(args) [classmethod]:
		pass

is quite different than what would happen otherwise.  The default 
behavior of code and class blocks is fine for most people, but once you 
start doing a lot of metamagic and turn python into a domain specific 
language, they become very relevant.

> In most languages, classifiers are always relevant.  Defaults
> might save you from *writing* "int", "public", or "synchronized",
> but the classification still happens.  If you want your code
> to be robust, you need to at least understand the possible
> classifiers, and when to use them.  This means you need to
> learn something about each of them soon after "Hello World".
>
> In python, these extensions really are optional.  The syntax
> should make it clear that newbies can ignore the whole
> construction, and do not need to know any specific wrappers.
>
>     def func(args) [wrapper1, wrapper2]:
>
> suggests that the extensions are part of basic function syntax.
> Lack of documentation for wrapper1 and wrapper2 (which may not
> be part of the core language) presents an extra barrier.  So
> does the fact that this is a new meaning for [], at least under
> the current patch.  People can learn to ignore it -- but only if
> they have already committed to python.  Each new barrier to
> entry makes a newbie more likely to give up before starting.

Since when do newbies read the "language lawyer" documentation?  They 
read the tutorials and existing code.  No tutorials or existing code 
(on top of the regular python compiler) use this syntax yet, so newbies 
are obviously not going to see it.

-bob
From skip at pobox.com  Wed Mar  3 15:32:31 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar  3 15:33:23 2004
Subject: [Python-Dev] importing from files with currently invalid names
Message-ID: <16454.16607.477490.775348@montanaro.dyndns.org>

I just responded to and closed a bug report which asked why

    import report-manager

or

    import "report-manager"

doesn't work.  It got me thinking: with the import ... as ... form, Python
could support importing from non-identifier file names by giving some
variant of the file name as a string literal:

    import "report-manager" as report_manager

    import "report-manager.py" as report_manager

    import "/etc/site/parameters" as report_manager

Is that extra flexibility possibly worthwhile?  I know Guido's not keen on
polluting the import statement with path information (the third case), but
the others seem like they might be useful.

Adding this support shouldn't break anything since the language grammar
doesn't currently support string literals as the module name.
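
(For comparison, the closest spelling available today is probably the imp
module, e.g.:

    import imp
    report_manager = imp.load_source("report_manager", "report-manager.py")

which works, but is clumsier than a plain import statement.)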

Skip

From tismer at stackless.com  Wed Mar  3 16:29:30 2004
From: tismer at stackless.com (Christian Tismer)
Date: Wed Mar  3 17:15:38 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040303184056.GA21282@vicky.ecs.soton.ac.uk>
References: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
	<40452828.1060706@stackless.com>
	<20040303170629.GB9808@vicky.ecs.soton.ac.uk>
	<404622D7.2080401@stackless.com>
	<20040303184056.GA21282@vicky.ecs.soton.ac.uk>
Message-ID: <40464E3A.8040203@stackless.com>

Armin Rigo wrote:

> Hello Christian,
> 
> On Wed, Mar 03, 2004 at 07:24:23PM +0100, Christian Tismer wrote:
> 
>>What's bad about my proposal?
>>In short: No frame cache in the code object.
>>Have a small array of chains of cached frames.
>>Index this by the address of the code object modulo some
>>prime which is the array size.
> 
> 
> Oh, I missed the part about chaining cached frames.  Read-reading your
> proposal, it seems that there can be at most one cached frame per hash value.  
> This also raises the question of how bad hash collisions could degrade
> performance if you're unlucky.  I was trying to work around these two problems
> based on your proposal but couldn't get something clean...

How so?
For every hash value, I get a slot in the array.
That slot has a chain of all cached frames for that hash value.
On hash collision:
- we could walk through the chain in that slot, freeing frames
   until we find one with a matching code object, and create a
   new frame if we don't find one
- we could instead assign a unique number to each code object
   and let that array grow. Variants are possible, of course.

Completely different approach without an extra array:
Put all code objects into a doubly linked list.
Maintain a chain of zombie frames for every code object.
 From time to time, walk through all the code objects
and clear their caches.

Again different approach:
Maintain a chain of zombie frames for every code object.
Furthermore, make code objects garbage-collected objects;
this is probably not so expensive?
Then make code objects clear their zombie cache on every
n'th (for some reasonable n) call of inquiry.
This saves us from maintaining an extra linked list,
but might be a little bit slower, since there is more gc work.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From bac at OCF.Berkeley.EDU  Wed Mar  3 17:32:18 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Wed Mar  3 17:32:57 2004
Subject: [Python-Dev] second opinion for proposed fix for bug #754449
In-Reply-To: <200403031504.i23F42401410@guido.python.org>
References: <40456407.3060505@ocf.berkeley.edu>
	<200403031504.i23F42401410@guido.python.org>
Message-ID: <40465CF2.1020709@ocf.berkeley.edu>

Guido van Rossum wrote:

<SNIP>
>>Basically the exception stems from a call to threading.currentThread in 
>>the _Condition class (which is just Condition) which, according to the 
>>comment along with the call, is just for its side-effect.  Well, I 
>>looked and I can't find the need for the side-effect.  I took out the 
>>code and ran the regression tests and nothing failed because of it.
>>
>>Since the code says the call is for the explicit side-effect I just 
>>wanted another pair of eyes to quickly scan the code in threading.py and 
>>either agree with me or tell me I am blind and point out where the 
>>side-effect is needed for Condition instances.  The offending 
>>currentThread calls can be found on lines 196 and 238 (can also just 
>>search for "side-effect" to find them).
> 
> 
> This is my code.  It was long ago.  I think the intended side effect
> is creating a dummy thread *IF* the calling thread was not created by
> the threading module but by thread.start_new_thread().  The current
> thread in turn is only relevant when the Condition's lock is an RLock.
> But the only thing that is ever done with that is comparing it with
> the outcome of another call to currentThread().

Right.  And RLock makes the call itself so it seems RLock on its own 
will trigger the side-effect when it needs it.

>  So I *THINK* that the
> side effect isn't particularly needed.
> 

OK.  That's good enough for me.

> (But the threading module references other globals, so I'm not sure
> that ripping out the currentThread() calls is really enough, except in
> this particular app's case.)
> 

Yeah.  To force this problem (couldn't trigger it) I set everything to 
None in threading and then tried this.  It triggered the right error, 
but the function registered with atexit also threw a fit.  But that is a 
forced error since atexit will do its thing before teardown, right?

Regardless, I will try to go through the code and see if there are any 
other points during the shutdown the reference a global and see if there 
is a way to solve it cleanly.  Otherwise I guess people just need to 
shutdown there threading instances properly.

-Brett

From tismer at stackless.com  Wed Mar  3 13:24:23 2004
From: tismer at stackless.com (Christian Tismer)
Date: Wed Mar  3 17:48:16 2004
Subject: [Python-Dev] [ python-Patches-876206 ] scary frame speed hacks
In-Reply-To: <20040303170629.GB9808@vicky.ecs.soton.ac.uk>
References: <LNBBLJKPBEHFEDALKOLCEEBGJEAB.tim.one@comcast.net>
	<40452828.1060706@stackless.com>
	<20040303170629.GB9808@vicky.ecs.soton.ac.uk>
Message-ID: <404622D7.2080401@stackless.com>

Armin Rigo wrote:

> Hello,
> 
> In summary, we need to:
> 
> - cache possibly more than one frame per code object;
> - free old frames for code objects that haven't been run for a while.
> 
> At this point I need more thinking.  If someone comes up with an elegant way
> to do both (as opposed, say, to abusing a lot of fields to put PyFrameObject
> structures into two linked lists at the same time), he'd be welcome :-)

What's bad about my proposal?
In short: No frame cache in the code object.
Have a small array of chains of cached frames.
Index this by the address of the code object modulo some
prime which is the array size.

-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From guido at python.org  Wed Mar  3 18:37:47 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar  3 18:37:55 2004
Subject: [Python-Dev] second opinion for proposed fix for bug #754449
In-Reply-To: Your message of "Wed, 03 Mar 2004 14:32:18 PST."
	<40465CF2.1020709@ocf.berkeley.edu> 
References: <40456407.3060505@ocf.berkeley.edu>
	<200403031504.i23F42401410@guido.python.org> 
	<40465CF2.1020709@ocf.berkeley.edu> 
Message-ID: <200403032337.i23Nbmq02270@guido.python.org>

> Yeah.  To force this problem (couldn't trigger it) I set everything to 
> None in threading and then tried this.  It triggered the right error, 
> but the function registered with atexit also threw a fit.  But that is a 
> forced error since atexit will do its thing before teardown, right?

It better. :-)

> Regardless, I will try to go through the code and see if there are any 
> other points during the shutdown the reference a global and see if there 
> is a way to solve it cleanly.  Otherwise I guess people just need to 
> shutdown there threading instances properly.

Note that None is also a global... :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jim.jewett at eds.com  Thu Mar  4 13:34:19 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar  4 13:35:04 2004
Subject: [Python-Dev] reusing frames
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D373@USAHM010.amer.corp.eds.com>

What are the savings from already having (today) "a frame",
as opposed to (proposed) "a frame for this code"?

If much of the work is to create "a frame", then it might
be bad to reserve frames for only the function that created
them.  (i.e., is reinitializing a generic frame twice still
faster than creating a new frame from scratch?)

If it does make sense to create more frames, is there any
reason they couldn't be tied to the code object by a weak
reference?  The reuse could still happen if there was no
garbage collection in between, but code used in initialization
won't keep its frames around forever.  

I suppose further tuning could protect frames from gc level < n, 
or "free" them back to a generic frame pool instead of a generic 
memory pool.

-jJ


From gerrit at nl.linux.org  Fri Mar  5 09:51:17 2004
From: gerrit at nl.linux.org (Gerrit)
Date: Fri Mar  5 09:51:23 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src README, 1.179,
	1.180
In-Reply-To: <E1AzGOB-0002g8-IC@sc8-pr-cvs1.sourceforge.net>
References: <E1AzGOB-0002g8-IC@sc8-pr-cvs1.sourceforge.net>
Message-ID: <20040305145117.GA4760@nl.linux.org>

montanaro@users.sourceforge.net wrote:
> ***************
> *** 1,4 ****
> ! This is Python version 2.3 release candidate 2
> ! ==============================================
>   
>   Copyright (c) 2001, 2002, 2003 Python Software Foundation.
> --- 1,4 ----
> ! This is Python version 2.4 alpha 1
> ! ==================================
>   
>   Copyright (c) 2001, 2002, 2003 Python Software Foundation.

Shouldn't this include 2004 as well?

Gerrit.

-- 
Weather in Twenthe, Netherlands 05/03 14:25 UTC:
	6.0°C Broken clouds mostly cloudy wind 4.5 m/s E (57 m above NAP)
-- 
Asperger's Syndrome - a personal approach:
	http://people.nl.linux.org/~gerrit/english/

From skip at pobox.com  Fri Mar  5 11:22:48 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar  5 11:23:15 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src README, 1.179, 
	1.180
In-Reply-To: <20040305145117.GA4760@nl.linux.org>
References: <E1AzGOB-0002g8-IC@sc8-pr-cvs1.sourceforge.net>
	<20040305145117.GA4760@nl.linux.org>
Message-ID: <16456.43352.349122.591933@montanaro.dyndns.org>


    >> Copyright (c) 2001, 2002, 2003 Python Software Foundation.

    gerrit> Shouldn't this include 2004 as well?

There are probably plenty of copyright dates which need updating.  I noticed
the "2.3 release candidate 2" and knew that couldn't possibly be correct
anymore so made that single change.

Skip

From bob at redivi.com  Fri Mar  5 15:23:00 2004
From: bob at redivi.com (Bob Ippolito)
Date: Fri Mar  5 15:19:33 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
Message-ID: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>

I don't want this topic to be forgotten about, again, because it's 
extremely important to the practicality of domain specific uses of 
Python (such as PyObjC, ctypes, etc.) but can also have positive 
effects on the rest of Python (for example, people will actually use 
descriptors where appropriate, metaclasses will be abused less often, 
etc.).

What's good:
	It has the possibility to bring some of the alternate python compilers 
back to Python-compatible syntax (Quixote, Pyrex, etc.)
	It's the only reasonable way of transforming the result of a "def" 
block (adding metadata, such as interface compliance, or changing 
behavior, as in staticmethod).
	When extended to class definitions, it can put an end to a lot of 
__metaclass__ and stack introspection abuse.
	It doesn't break anything, because you don't have to use the extended 
syntax.

What's bad:
	A syntax has not been definitively chosen.  The PEP itself is too 
vague on this point.

I propose that we just go with the syntax we *already have an 
implementation for*.  I don't see anything wrong with it, I believe it 
is the most popular, and I personally don't like new keywords.  I have 
also committed to doing whatever needs to be done to get this into 
Python 2.4 (writing documentation, examples, etc.), but I obviously 
can't do that until we have decided which syntax wins (though this one 
is my preference, for practical and aesthetic reasons).  I don't have 
infinite free time, so the sooner the decision the better.  In other 
words, I think it is *extremely* counter productive to keep bringing up 
the topic of other ways to spell this.  I would like to focus this 
thread on, specifically, whether *this syntax* should become part of Python. 
  It is already clear that the idea of function/method/class decoration 
is worthy.  When you give your -1,+0,+1 etc, please do it ONLY for this 
syntax, and do NOT vote based upon the *idea* if and only if combined 
with another syntax (without brackets, brackets on the left, keywords, 
ampersands, dollar signs, I don't care - please - don't bring them up).

This proposed new syntax is:

	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite

	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':' suite

Entirely too trivial examples are as follows:

	def addArgumentsTogether(*args) [staticmethod]:
		return reduce(operator.add, args)

	class ClassUsingDocTests(object) [UseDocTests]:
		"""
		This could potentially modify the class instance to use doc tests,
		contracts, or what have you.. without adding a nasty __metaclass__,
		that can cause problems when subclassing, etc.
		"""
		pass

Practical domain specific examples for PyObjC:

	def doOpenFileService_userData_error_(self, pboard, data) [doServiceSelector]:
		# this is a PyObjC example; in this case, doServiceSelector would be a
		# decorator that does:
		#
		# doOpenFileService_userData_error_ = objc.selector(
		#     doOpenFileService_userData_error_, signature="v@:@@o^@")
		#
		# this wrapping is ABSOLUTELY NECESSARY since this selector is called from
		# ObjC and uses an NSObject ** that must be set if an error occurs.  Since
		# this is a custom-named selector and doesn't come from a protocol, PyObjC
		# can't possibly know what selector to use by the name alone (without
		# heuristics)

	#
	# the accessor wrappings are also ABSOLUTELY NECESSARY to participate in
	# Key Value Coding, because they take non-object arguments and are called by ObjC.
	#
	def objectInFooAtIndex_(self, index) [objc.accessor]:
		return self.myFoo[index]
	
	def insertObject_inFooAtIndex_(self, obj, index) [objc.accessor]:
		self.myFoo.insert(index, obj)

I can create practical examples for other domain specific Python 
frameworks (PyProtocols, PEAK, ctypes, etc.) if that is necessary to 
convince people.

Please, "import this" before you decide to -1 or -0.. specifically:
... Simple is better than complex.
... Flat is better than nested.
... Although practicality beats purity.
... There should be one-- and preferably only one --obvious way to do 
it.
... Now is better than never.

I would love to see a decision by PyCon.  I will dedicate some of my 
sprint time on documentation, tests, implementation tweaks, etc. for 
this new syntax if it is accepted.

-bob


From barry at python.org  Fri Mar  5 15:31:11 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar  5 15:31:17 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <1078518671.18592.75.camel@anthem.wooz.org>

On Fri, 2004-03-05 at 15:23, Bob Ippolito wrote:
> I don't want this topic to be forgotten about, again

It's probably time to ask the BDFL for a pronouncement on the PEP. 
Maybe at Pycon, if not before.

-Barry



From skip at pobox.com  Fri Mar  5 17:09:26 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar  5 17:09:39 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <16456.64150.141426.323665@montanaro.dyndns.org>


    Bob> I propose that we just go with the syntax we *already have an
    Bob> implementation for*.  

+1.

Skip

From pf_moore at yahoo.co.uk  Fri Mar  5 17:12:31 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Fri Mar  5 17:12:39 2004
Subject: [Python-Dev] Re: PEP 318 - function/method/class decoration
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <fzcneyeo.fsf@yahoo.co.uk>

Bob Ippolito <bob@redivi.com> writes:

> I propose that we just go with the syntax we *already have an
> implementation for*.

+1

Paul
-- 
This signature intentionally left blank


From fdrake at acm.org  Fri Mar  5 17:16:22 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri Mar  5 17:16:48 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <200403051716.22394.fdrake@acm.org>

On Friday 05 March 2004 03:23 pm, Bob Ippolito wrote:
 > I propose that we just go with the syntax we *already have an
 > implementation for*.  I don't see anything wrong with it, I believe it

+1 from me as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From pedronis at bluewin.ch  Fri Mar  5 17:29:49 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Fri Mar  5 17:25:51 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <5.2.1.1.0.20040305232647.02ce9d90@pop.bluewin.ch>

At 15:23 05.03.2004 -0500, Bob Ippolito wrote:
>I propose that we just go with the syntax we *already have an 
>implementation for*.

+1 


From mcherm at mcherm.com  Fri Mar  5 17:31:36 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar  5 17:31:38 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
Message-ID: <1078525896.4048ffc8275a4@mcherm.com>

> I would like to focus this 
> thread specifically on whether *this syntax* should become part of Python.

Well, you get my +1, but I also want to protest that I think you may
have rigged the vote. As you say,

> It is already clear that the idea of function/method/class decoration 
> is worthy.

Your reason for voting on just this syntax is:

> I propose that we just go with the syntax we *already have an 
> implementation for*.  I don't see anything wrong with it, I believe it 
> is the most popular, and I personally don't like new keywords.

If getting an implementation were a problem, that would be convincing,
but there have been volunteers to implement whatever syntax is chosen.
I think the choice of syntax should be based on which syntax is
better, not which happened to be implemented in the proof-of-concept.
So I'll second Barry's call:

> It's probably time to ask the BDFL for a pronouncement on the PEP.

Oh, and by the way... I LIKE the syntax as implemented, although I still 
have a nagging desire for a keyword.

-- Michael Chermside


From raymond.hettinger at verizon.net  Sat Mar  6 08:27:02 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Sat Mar  6 08:28:59 2004
Subject: [Python-Dev] New opcode to simplifiy/speedup list comprehensions
Message-ID: <000001c4037e$b68441a0$e841fea9@oemcomputer>

If there are no objections, I would like to add an opcode for calling
PyList_Append().  This simplifies the generated code for list
comprehensions and reduces the overhead on each pass, improving the
timings by about 35% on [i for i in itertools.repeat(None, 500)].

The patch is amazingly brief and clear:
	www.python.org/sf/910929
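
For anyone who wants to reproduce the measurement, something along these 
lines should do -- a rough sketch using timeit; exact numbers will of course 
vary by machine and build:

    import timeit
    t = timeit.Timer("[i for i in itertools.repeat(None, 500)]",
                     "import itertools")
    print min(t.repeat(3, 10000))   # best of 3 runs of 10000 loops, in seconds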


Raymond Hettinger




##### Current disassembly of a list comprehension
>>> from dis import dis
>>> def f(x):
	y = [i+i for i in x]

>>> dis(f)
  2           0 BUILD_LIST               0
              3 DUP_TOP             
              4 LOAD_ATTR                0 (append)    <-- delete this line
              7 STORE_FAST               3 (_[1])
             10 LOAD_FAST                0 (x)
             13 GET_ITER            
        >>   14 FOR_ITER                20 (to 37)
             17 STORE_FAST               2 (i)
             20 LOAD_FAST                3 (_[1])
             23 LOAD_FAST                2 (i)
             26 LOAD_FAST                2 (i)
             29 BINARY_ADD          
             30 CALL_FUNCTION            1              --> replace with LIST_APPEND
             33 POP_TOP                                 <-- delete this line
             34 JUMP_ABSOLUTE           14
        >>   37 DELETE_FAST              3 (_[1])
             40 STORE_FAST               1 (y)
             43 LOAD_CONST               0 (None)
             46 RETURN_VALUE        


##### Proposed disassembly of a list comprehension
  2           0 BUILD_LIST               0
              3 DUP_TOP             
              4 STORE_FAST               3 (_[1])
              7 LOAD_FAST                0 (x)
             10 GET_ITER            
        >>   11 FOR_ITER                17 (to 31)
             14 STORE_FAST               2 (i)
             17 LOAD_FAST                3 (_[1])
             20 LOAD_FAST                2 (i)
             23 LOAD_FAST                2 (i)
             26 BINARY_ADD          
             27 LIST_APPEND         
             28 JUMP_ABSOLUTE           11
        >>   31 DELETE_FAST              3 (_[1])
             34 STORE_FAST               1 (y)
             37 LOAD_CONST               0 (None)
             40 RETURN_VALUE


From python at rcn.com  Sat Mar  6 08:53:56 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar  6 08:55:49 2004
Subject: [Python-Dev] New opcode to simplifiy/speedup list comprehensions
In-Reply-To: <000001c4037e$b68441a0$e841fea9@oemcomputer>
Message-ID: <000701c40382$7833a9a0$e841fea9@oemcomputer>

> >>> def f(x):
> 	y = [i+i for i in x]
> 
> >>> dis(f)
>   2           0 BUILD_LIST               0
>               3 DUP_TOP
>               4 LOAD_ATTR                0 (append)    <-- delete this line
>               7 STORE_FAST               3 (_[1])
>              10 LOAD_FAST                0 (x)
>              13 GET_ITER
>         >>   14 FOR_ITER                20 (to 37)
>              17 STORE_FAST               2 (i)
>              20 LOAD_FAST                3 (_[1])
>              23 LOAD_FAST                2 (i)
>              26 LOAD_FAST                2 (i)
>              29 BINARY_ADD
>              30 CALL_FUNCTION            1              --> replace with LIST_APPEND
>              33 POP_TOP                                 <-- delete this line
>              34 JUMP_ABSOLUTE           14
>         >>   37 DELETE_FAST              3 (_[1])


P.S.  The patch also removes the DELETE_FAST at position 37


>              40 STORE_FAST               1 (y)
>              43 LOAD_CONST               0 (None)
>              46 RETURN_VALUE


Raymond Hettinger


From guido at python.org  Sat Mar  6 09:22:13 2004
From: guido at python.org (Guido van Rossum)
Date: Sat Mar  6 09:22:18 2004
Subject: [Python-Dev] New opcode to simplifiy/speedup list comprehensions
In-Reply-To: Your message of "Sat, 06 Mar 2004 08:27:02 EST."
	<000001c4037e$b68441a0$e841fea9@oemcomputer> 
References: <000001c4037e$b68441a0$e841fea9@oemcomputer> 
Message-ID: <200403061422.i26EMDv08205@guido.python.org>

> If there are no objections, I would like to add an opcode for calling
> PyList_Append().

+1

--Guido van Rossum (home page: http://www.python.org/~guido/)

From aahz at pythoncraft.com  Sat Mar  6 09:41:10 2004
From: aahz at pythoncraft.com (Aahz)
Date: Sat Mar  6 09:41:33 2004
Subject: [Python-Dev] New opcode to simplifiy/speedup list comprehensions
In-Reply-To: <000001c4037e$b68441a0$e841fea9@oemcomputer>
References: <000001c4037e$b68441a0$e841fea9@oemcomputer>
Message-ID: <20040306144109.GA10539@panix.com>

On Sat, Mar 06, 2004, Raymond Hettinger wrote:
>
> If there are no objections, I would like to add an opcode for calling
> PyList_Append().  This simplifies the generated code for list
> comprehensions and reduces the overhead on each pass, improving the
> timings by about 35% on [i for i in itertools.repeat(None, 500)].

Not objections per se, but every time ceval.c gets mucked with, it has a
tendency to change overall speed.  Have you checked Python benchmarks on
at least two platforms?
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From pete at shinners.org  Sat Mar  6 11:17:41 2004
From: pete at shinners.org (Pete Shinners)
Date: Sat Mar  6 11:17:47 2004
Subject: [Python-Dev] Re: PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <c2ctj5$bln$1@sea.gmane.org>

Bob Ippolito wrote:
> I would love to see a decision by PyCon.  I will dedicate some of my 
> sprint time on documentation, tests, implementation tweaks, etc. for 
> this new syntax if it is accepted.

I'm +1 for the function syntax, but -0 for the class syntax.

I don't believe class decorators have been fully explored, and don't they 
overlap directly with metaclasses?  Then again, it's nice to have one way 
of doing things.



From bh at intevation.de  Sat Mar  6 12:17:34 2004
From: bh at intevation.de (Bernhard Herzog)
Date: Sat Mar  6 12:17:40 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com> (Bob
	Ippolito's message of "Fri, 5 Mar 2004 15:23:00 -0500")
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <s9z3c8lopxt.fsf@salmakis.intevation.de>

Bob Ippolito <bob@redivi.com> writes:

> When you give your -1,+0,+1 etc, please do it
> ONLY for this syntax, and do NOT vote based upon the *idea* if and only
> if combined with another syntax

+1

> This proposed new syntax is:
>
> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
>
> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':' suite


Why are the decorators an exprlist while the base classes are a
testlist?

  Bernhard

-- 
Intevation GmbH                                 http://intevation.de/
Skencil                                http://sketch.sourceforge.net/
Thuban                                  http://thuban.intevation.org/

From bob at redivi.com  Sat Mar  6 12:39:36 2004
From: bob at redivi.com (Bob Ippolito)
Date: Sat Mar  6 12:36:12 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <s9z3c8lopxt.fsf@salmakis.intevation.de>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
	<s9z3c8lopxt.fsf@salmakis.intevation.de>
Message-ID: <3C8E80AB-6F95-11D8-8E4E-000A95686CD8@redivi.com>


On Mar 6, 2004, at 12:17 PM, Bernhard Herzog wrote:

> Bob Ippolito <bob@redivi.com> writes:
>
>> This proposed new syntax is:
>>
>> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
>>
>> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':' 
>> suite
>
> Why are the decorators an exprlist while the base classes are a
> testlist?

The testlist is the list of base classes..  In both cases, the 
decorators are an '[' exprlist ']'

-bob


From bh at intevation.de  Sat Mar  6 12:42:40 2004
From: bh at intevation.de (Bernhard Herzog)
Date: Sat Mar  6 12:42:46 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <3C8E80AB-6F95-11D8-8E4E-000A95686CD8@redivi.com> (Bob
	Ippolito's message of "Sat, 6 Mar 2004 12:39:36 -0500")
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
	<s9z3c8lopxt.fsf@salmakis.intevation.de>
	<3C8E80AB-6F95-11D8-8E4E-000A95686CD8@redivi.com>
Message-ID: <s9zvflhna7j.fsf@salmakis.intevation.de>

Bob Ippolito <bob@redivi.com> writes:

> On Mar 6, 2004, at 12:17 PM, Bernhard Herzog wrote:
>
>> Bob Ippolito <bob@redivi.com> writes:
>>
>>> This proposed new syntax is:
>>>
>>> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
>>>
>>> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':'
>>> suite
>>
>> Why are the decorators an exprlist while the base classes are a
>> testlist?
>
> The testlist is the list of base classes..  In both cases, the
> decorators are an '[' exprlist ']'

That much was obvious enough :).  What I meant was: Why are the
decorators an exprlist and not a testlist?  The base classes are a
testlist, and the elements of a list literal (listmaker in the Grammar) are
effectively a testlist too, so it's not obvious why the decorators
should not also be a testlist.

   Bernhard

-- 
Intevation GmbH                                 http://intevation.de/
Skencil                                http://sketch.sourceforge.net/
Thuban                                  http://thuban.intevation.org/

From bob at redivi.com  Sat Mar  6 12:53:04 2004
From: bob at redivi.com (Bob Ippolito)
Date: Sat Mar  6 12:50:39 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <s9zvflhna7j.fsf@salmakis.intevation.de>
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
	<s9z3c8lopxt.fsf@salmakis.intevation.de>
	<3C8E80AB-6F95-11D8-8E4E-000A95686CD8@redivi.com>
	<s9zvflhna7j.fsf@salmakis.intevation.de>
Message-ID: <1E6963A9-6F97-11D8-8E4E-000A95686CD8@redivi.com>


On Mar 6, 2004, at 12:42 PM, Bernhard Herzog wrote:

> Bob Ippolito <bob@redivi.com> writes:
>
>> On Mar 6, 2004, at 12:17 PM, Bernhard Herzog wrote:
>>
>>> Bob Ippolito <bob@redivi.com> writes:
>>>
>>>> This proposed new syntax is:
>>>>
>>>> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
>>>>
>>>> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':'
>>>> suite
>>>
>>> Why are the decorators an exprlist while the base classes are a
>>> testlist?
>>
>> The testlist is the list of base classes..  In both cases, the
>> decorators are an '[' exprlist ']'
>
> That much was obvious enough :).  What I meant was: Why are the
> decorators an exprlist and not a testlist?  The base classes are a test
> list and the elements of a list-literal that (listmaker in Grammar) are
> effectively a testlist too, so it's not obvious why the decorators
> should not also be a testlist.

Oh, sorry.. haven't had my coffee yet.

In that case, I'm not sure.  This is mwh's implementation, I'm sure 
there was a reason.

-bob


From bac at OCF.Berkeley.EDU  Sat Mar  6 15:47:53 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Sat Mar  6 15:48:04 2004
Subject: [Python-Dev] New opcode to simplifiy/speedup list comprehensions
In-Reply-To: <000001c4037e$b68441a0$e841fea9@oemcomputer>
References: <000001c4037e$b68441a0$e841fea9@oemcomputer>
Message-ID: <404A38F9.3040009@ocf.berkeley.edu>

Raymond Hettinger wrote:

> If there are no objections, I would like to add an opcode for calling
> PyList_Append().  This simplifies the generated code for list
> comprehensions and reduces the overhead on each pass, improving the
> timings by about 35% on [i for i in itertools.repeat(None, 500)].
> 
> The patch is amazingly brief and clear:
> 	www.python.org/sf/910929
> 

Ah, nuts!  That was going to be part of my thesis!  =)  I was going to go 
along the lines of "Type-specific Opcodes for Python Based on Static 
Type Inferencing of Local Variables".  At this rate the first four 
words won't be needed.  =)

Joking aside, I am +1 since I was planning on adding that myself as long 
as my thesis pans out and actually is a gain.

Oh, and beware the evil cache monkey and his notorious way of making the 
eval loop not fit in the CPU cache when a single thing is added to it 
(or at least I remember a comment along those lines last time people 
tried fiddling with the eval loop).

-Brett

From pete at shinners.org  Sat Mar  6 15:52:13 2004
From: pete at shinners.org (Pete Shinners)
Date: Sat Mar  6 15:52:19 2004
Subject: [Python-Dev] Bug 911080, string split oddness
Message-ID: <c2ddlu$bl7$1@sea.gmane.org>

I just filed Bug #911080. Sorry if this has been discussed before; at a 
minimum I believe there should be a documentation fix.

http://sourceforge.net/tracker/index.php?func=detail&aid=911080&group_id=5470&atid=105470


Basically this documents the difference between
     "a b  c".split()
     "a b  c".split(" ")

This is a quirky little difference in behavior that I've seen confuse 
newbies.  Unfortunately there are needs for both styles of splitting.

In a world where backwards-incompatible changes aren't a big deal, I would think 
split() would always work like the first version, and splitfields() would 
change to work like the second version. But that would break just about 
everything, so I suppose split either needs another optional argument or a 
different split method. (splitgroup? splitseq?)

The only workaround is to use regular expression splitting. But that's not a 
direction I like to point newbies towards too quickly.
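
For anyone who hasn't tripped over it yet, the difference in question looks 
like this in the interpreter:

    >>> "a b  c".split()
    ['a', 'b', 'c']
    >>> "a b  c".split(" ")
    ['a', 'b', '', 'c']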


From jepler at unpythonic.net  Sat Mar  6 18:21:21 2004
From: jepler at unpythonic.net (Jeff Epler)
Date: Sat Mar  6 18:22:16 2004
Subject: [Python-Dev] Bug 911080, string split oddness
In-Reply-To: <c2ddlu$bl7$1@sea.gmane.org>
References: <c2ddlu$bl7$1@sea.gmane.org>
Message-ID: <20040306232119.GA7662@unpythonic.net>

I glanced at the bug report.  Do you have a suggested wording?
I think the current version documentation intends to explain the
no-argument form with this sentence:
    If sep is not specified or None, any whitespace string is a separator.
    [http://python.org/doc/current/lib/string-methods.html#l2h-197]

As for splitfields(), maybe it should just be marked for deprecation.  It
doesn't exist as a string method, for one thing, just in the string
module.  Here's the current implementation (in string.py):
    splitfields = split
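
If it were deprecated, the change might look something like this (purely a 
sketch, written standalone rather than inside string.py, not an actual patch):

    import warnings

    def splitfields(s, sep=None, maxsplit=-1):
        """Deprecated alias for split()."""
        warnings.warn("splitfields() is deprecated; use split() instead",
                      DeprecationWarning, stacklevel=2)
        return s.split(sep, maxsplit)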

Jeff

From pete at shinners.org  Sun Mar  7 01:12:12 2004
From: pete at shinners.org (Pete Shinners)
Date: Sun Mar  7 01:12:35 2004
Subject: [Python-Dev] Re: Bug 911080, string split oddness
In-Reply-To: <20040306232119.GA7662@unpythonic.net>
References: <c2ddlu$bl7$1@sea.gmane.org> <20040306232119.GA7662@unpythonic.net>
Message-ID: <c2eeft$jjs$1@sea.gmane.org>

Jeff Epler wrote:
> I think the current version documentation intends to explain the
> no-argument form with this sentence:
>     If sep is not specified or None, any whitespace string is a separator.
>     [http://python.org/doc/current/lib/string-methods.html#l2h-197]

I'm having a hard time making this clear. By reading the documentation I 
would guess that "split()" would be equivalent to 
"split(string.whitespace)". In fact they work very differently.

With no separator argument, runs of consecutive separators are grouped. The 
problem comes when the string being split has multiple separators next to 
each other. Making the two behave the same looks more like this:

   mystr.split()
   filter(None, mystr.split(string.whitespace))


From jepler at unpythonic.net  Sun Mar  7 09:20:33 2004
From: jepler at unpythonic.net (Jeff Epler)
Date: Sun Mar  7 09:21:24 2004
Subject: [Python-Dev] Re: Bug 911080, string split oddness
In-Reply-To: <c2eeft$jjs$1@sea.gmane.org>
References: <c2ddlu$bl7$1@sea.gmane.org> <20040306232119.GA7662@unpythonic.net>
	<c2eeft$jjs$1@sea.gmane.org>
Message-ID: <20040307142032.GA11072@unpythonic.net>

How about:
"If sep is not specified or None, then any sequence of whitespace is a
separator."

Jeff

From tismer at stackless.com  Sun Mar  7 11:22:34 2004
From: tismer at stackless.com (Christian Tismer)
Date: Sun Mar  7 11:22:39 2004
Subject: [Python-Dev] Re: Bug 911080, string split oddness
In-Reply-To: <20040307142032.GA11072@unpythonic.net>
References: <c2ddlu$bl7$1@sea.gmane.org>
	<20040306232119.GA7662@unpythonic.net>	<c2eeft$jjs$1@sea.gmane.org>
	<20040307142032.GA11072@unpythonic.net>
Message-ID: <404B4C4A.9010802@stackless.com>

Jeff Epler wrote:

> How about:
> "If sep is not specified or None, then any sequence of whitespace is a
> separator."

Yes. But one thing I was always missing is that in this
mode, I have no way to specify what counts as whitespace.
I would like to add a way to spell this out.
Do you see any sane syntax for it?

Or would it be better to treat the current behavior as
a special case to be deprecated, and instead have an extra method
like "splitspan" or something?

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From guido at python.org  Sun Mar  7 11:56:33 2004
From: guido at python.org (Guido van Rossum)
Date: Sun Mar  7 11:56:41 2004
Subject: [Python-Dev] Re: Bug 911080, string split oddness
In-Reply-To: Your message of "Sun, 07 Mar 2004 17:22:34 +0100."
	<404B4C4A.9010802@stackless.com> 
References: <c2ddlu$bl7$1@sea.gmane.org> <20040306232119.GA7662@unpythonic.net>
	<c2eeft$jjs$1@sea.gmane.org>
	<20040307142032.GA11072@unpythonic.net> 
	<404B4C4A.9010802@stackless.com> 
Message-ID: <200403071656.i27GuX118490@guido.python.org>

> > How about:
> > "If sep is not specified or None, then any sequence of whitespace is a
> > separator."
> 
> Yes. But one thing I was always missing is that in this
> mode, I have no way to specify what counts as whitespace.
> I would like to add a way to spell this out.
> Do you see any sane syntax for it?

Use the re module.
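
For example, something along these lines lets you say exactly which 
characters count as whitespace:

    >>> import re
    >>> re.split(r'[ \t]+', 'a b\tc  d')
    ['a', 'b', 'c', 'd']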

> Or would it be better to treat the current behavior as
> a special case to be deprecated, and instead have an extra method
> like "splitspan" or something?

No way.  The current way is just right.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tdelaney at avaya.com  Sun Mar  7 17:28:26 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar  7 17:28:34 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE013E7154@au3010avexu1.global.avaya.com>

As I'm sure was obvious from my previous post, +1.

Tim Delaney

From greg at cosc.canterbury.ac.nz  Sun Mar  7 18:27:04 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar  7 18:27:22 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
Message-ID: <200403072327.i27NR4a22841@oma.cosc.canterbury.ac.nz>

> This proposed new syntax is:
> 
> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
> 
> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':' suite

+1

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar  7 18:29:17 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar  7 18:29:42 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <1078525896.4048ffc8275a4@mcherm.com>
Message-ID: <200403072329.i27NTH822852@oma.cosc.canterbury.ac.nz>

> I think the choice of syntax should be based on which syntax is
> better

For what it's worth, the +1 I just gave is because I
do like this particular syntax.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar  7 19:03:53 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar  7 19:04:01 2004
Subject: [Python-Dev] Bug 911080, string split oddness
In-Reply-To: <20040306232119.GA7662@unpythonic.net>
Message-ID: <200403080003.i2803qh23002@oma.cosc.canterbury.ac.nz>

Jeff Epler <jepler@unpythonic.net>:

> I think the current version documentation intends to explain the
> no-argument form with this sentence:
>     If sep is not specified or None, any whitespace string is a separator.
>     [http://python.org/doc/current/lib/string-methods.html#l2h-197]

I'm inclined to agree with Pete Shinners that the above
explanation is perhaps a little too compressed.

Maybe "any string of consecutive whitespace characters is a
separator". Perhaps with an explicit note that this is NOT
equivalent to .split(" \t\n") and the reason why.

Pete Shinners <pete@shinners.org>:

> I suppose split either needs another optional argument or a 
> different split method. (splitgroup? splitseq?)

splitany? (splits on "any" of the chars in the arg)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From kbk at shore.net  Sun Mar  7 22:02:12 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Sun Mar  7 22:02:19 2004
Subject: [Python-Dev] Weekly Python Bug/Patch Summary
Message-ID: <200403080302.i2832CBW020041@hydra.localdomain>


Patch / Bug Summary
___________________

Patches :  250 open ( +7) /  2322 closed ( +3) /  2572 total (+10)
Bugs    :  733 open ( +5) /  3914 closed (+10) /  4647 total (+15)
RFE     :  131 open ( +1) /   119 closed ( +0) /   250 total ( +1)

New / Reopened Patches
______________________

move test() in SimpleDialog.py  (2004-03-06)
       http://python.org/sf/911176  opened by  tox-the-wanderer 

dict.update should take a 2-tuple sequence like dict.__init_  (2004-02-25)
CLOSED http://python.org/sf/904720  reopened by  rhettinger

support CVS version of tcl/tk ("8.5")  (2004-02-27)
       http://python.org/sf/905863  opened by  Jeff Epler 

Fix typos in pystate.h comments  (2004-02-28)
       http://python.org/sf/906501  opened by  Greg Chapman 

Syntax-related improvements to IDLE  (2004-02-29)
       http://python.org/sf/906702  opened by  Noam Raphael 

Improvements to cStringIO.writelines()  (2004-03-01)
       http://python.org/sf/907403  opened by  Raymond Hettinger 

WinSock 2 support on Win32 w/ MSVC++ 6  (2004-03-02)
       http://python.org/sf/908631  opened by  Jeff Connelly 

asyncore fixes and improvements  (2004-03-03)
       http://python.org/sf/909005  opened by  Alexey Klimkin 

_ssl.c compatibility  (2004-03-03)
       http://python.org/sf/909007  opened by  Samuel Nicolary 

Optimize list comprehensions  (2004-03-06)
CLOSED http://python.org/sf/910929  opened by  Raymond Hettinger 

robot.txt must be robots.txt  (2004-03-08)
       http://python.org/sf/911431  opened by  George Yoshida 

Patches Closed
______________

dict.update should take a 2-tuple sequence like dict.__init_  (2004-02-25)
       http://python.org/sf/904720  closed by  rhettinger

dict.update should take a 2-tuple sequence like dict.__init_  (2004-02-25)
       http://python.org/sf/904720  closed by  nnorwitz

FreeBSD new pthread problem with system scope  (2004-02-23)
       http://python.org/sf/902444  closed by  perky

Optimize list comprehensions  (2004-03-06)
       http://python.org/sf/910929  closed by  rhettinger

New / Reopened Bugs
___________________

str.join() intercepts TypeError raised by iterator  (2004-02-26)
       http://python.org/sf/905389  opened by  Lenard Lindstrom 

BuildApplet needs to get more BuildApplication features  (2004-02-27)
       http://python.org/sf/905737  opened by  Jack Jansen 

Build fails on XP  (2004-02-28)
       http://python.org/sf/906405  opened by  Con 

Docs for os, popen2 omit list argument to popen2() et al  (2004-03-01)
       http://python.org/sf/907457  opened by  Michael Hoffman 

"\textttNone" in Index  (2004-03-01)
CLOSED http://python.org/sf/907575  opened by  Heiko Selber 

pdb should hash stdout  (2004-03-02)
       http://python.org/sf/908316  opened by  Miki Tebeka 

default index for __getslice__ is not sys.maxint  (2004-03-02)
       http://python.org/sf/908441  opened by  Matthias Drochner 

rexec.r_eval() does not work like eval()  (2004-03-03)
       http://python.org/sf/908936  opened by  Philippe Fremy 

Can Not import a module with - (DASH) in filename  (2004-03-03)
CLOSED http://python.org/sf/909226  opened by  Benjamin Schollnick 

bug in idna-encoding-module  (2004-03-03)
       http://python.org/sf/909230  opened by  Rumpeltux 

segmentation fault  (2004-03-03)
       http://python.org/sf/909295  opened by  Chuck Mason 

Embedded Python Interpreter in MFC apps leaks  (2004-03-03)
       http://python.org/sf/909308  opened by  David 

copy.copy fails for array.array  (2004-03-06)
       http://python.org/sf/910986  opened by  Glenn Parker 

string str.split() behaviour inconsistency  (2004-03-06)
CLOSED http://python.org/sf/911080  opened by  Pete Shinners 

Bugs Closed
___________

test_threading  (2003-08-19)
       http://python.org/sf/791542  closed by  bcannon

test_strptime failures on FC2-test  (2004-02-15)
       http://python.org/sf/897817  closed by  bcannon

time.strftime crashes python  (2004-02-15)
       http://python.org/sf/897625  closed by  bcannon

logging.StreamHandler encodes log message in UTF-8  (2003-11-03)
       http://python.org/sf/835353  closed by  vsajip

"\textttNone" in Index  (2004-03-01)
       http://python.org/sf/907575  closed by  akuchling

Platform module not documented  (2004-01-09)
       http://python.org/sf/873652  closed by  bcannon

Can Not import a module with - (DASH) in filename  (2004-03-03)
       http://python.org/sf/909226  closed by  montanaro

string str.split() behaviour inconsistency  (2004-03-06)
       http://python.org/sf/911080  closed by  sjoerd

New / Reopened RFE
__________________

int object need more informative error message  (2004-02-29)
       http://python.org/sf/906746  opened by  George Yoshida 


From sjoerd at acm.org  Mon Mar  8 03:04:52 2004
From: sjoerd at acm.org (Sjoerd Mullender)
Date: Mon Mar  8 03:04:59 2004
Subject: [Python-Dev] Bug 911080, string split oddness
In-Reply-To: <200403080003.i2803qh23002@oma.cosc.canterbury.ac.nz>
References: <200403080003.i2803qh23002@oma.cosc.canterbury.ac.nz>
Message-ID: <404C2924.7040304@acm.org>

Greg Ewing wrote:
> Jeff Epler <jepler@unpythonic.net>:
> 
> 
>>I think the current version documentation intends to explain the
>>no-argument form with this sentence:
>>    If sep is not specified or None, any whitespace string is a separator.
>>    [http://python.org/doc/current/lib/string-methods.html#l2h-197]
> 
> 
> I'm inclined to agree with Pete Shinners that the above
> explanation is perhaps a little too compressed.
> 
> Maybe "any string of consecutive whitespace characters is a
> separator". Perhaps with an explicit note that this is NOT
> equivalent to .split(" \t\n") and the reason why.
> 
> Pete Shinners <pete@shinners.org>:

See the existing bug report #901654 which is specifically about the 
split documentation.  Please add to that.

-- 
Sjoerd Mullender <sjoerd@acm.org>

From mwh at python.net  Mon Mar  8 07:44:49 2004
From: mwh at python.net (Michael Hudson)
Date: Mon Mar  8 07:44:53 2004
Subject: [Python-Dev] PEP 318 - function/method/class decoration
In-Reply-To: <1E6963A9-6F97-11D8-8E4E-000A95686CD8@redivi.com> (Bob
	Ippolito's message of "Sat, 6 Mar 2004 12:53:04 -0500")
References: <E5AB40C2-6EE2-11D8-90DC-000A95686CD8@redivi.com>
	<s9z3c8lopxt.fsf@salmakis.intevation.de>
	<3C8E80AB-6F95-11D8-8E4E-000A95686CD8@redivi.com>
	<s9zvflhna7j.fsf@salmakis.intevation.de>
	<1E6963A9-6F97-11D8-8E4E-000A95686CD8@redivi.com>
Message-ID: <2m8yib1pa6.fsf@starship.python.net>

Bob Ippolito <bob@redivi.com> writes:

> On Mar 6, 2004, at 12:42 PM, Bernhard Herzog wrote:
>
>> Bob Ippolito <bob@redivi.com> writes:
>>
>>> On Mar 6, 2004, at 12:17 PM, Bernhard Herzog wrote:
>>>
>>>> Bob Ippolito <bob@redivi.com> writes:
>>>>
>>>>> This proposed new syntax is:
>>>>>
>>>>> 	funcdef: 'def' NAME parameters ['[' exprlist ']' ] ':' suite
>>>>>
>>>>> 	classdef: 'class' NAME ['(' testlist ')'] ['[' exprlist ']'] ':'
>>>>> suite
>>>>
>>>> Why are the decorators an exprlist while the base classes are a
>>>> testlist?
>>>
>>> The testlist is the list of base classes..  In both cases, the
>>> decorators are an '[' exprlist ']'
>>
>> That much was obvious enough :).  What I meant was: Why are the
>> decorators an exprlist and not a testlist?  The base classes are a
>> testlist, and the elements of a list literal (listmaker in the Grammar) are
>> effectively a testlist too, so it's not obvious why the decorators
>> should not also be a testlist.
>
> Oh, sorry.. haven't had my coffee yet.
>
> In that case, I'm not sure.  This is mwh's implementation, I'm sure
> there was a reason.

I wouldn't be :-)

Cheers,
mwh

-- 
  In the 1950s and 60s there was a regular brain drain of young
  Australians from the cities to London, but it was because of 
  money, culture and opportunity, not spiders.
           -- Al Grant, ucam.chat, from Owen Dunn's review of the year

From jim.jewett at eds.com  Mon Mar  8 13:21:19 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Mon Mar  8 13:22:06 2004
Subject: [Python-Dev] (Specific syntax of) PEP 318 - function/method/class
	decoration
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D37F@USAHM010.amer.corp.eds.com>

Bob Ippolito:

> I would like to focus this thread specifically on whether *this syntax* 
> should become part of Python. 

	def name(args)[something_else]:

-1.  With more explicit and extensible syntax, I would 
be +1, but with *this* syntax, I'm -1.

Why are you so insistent on the bare syntax?  The only
arguments I have seen are:

(1)  We already have it.  

The author himself said that the patch isn't final, and 
he can change the syntax.  Even some semantic changes 
(testlist vs exprlist) may be OK.

(2)  It is shorter to type.  

Adding "as" or "mods" is fairly small compared to the names 
of the wrappers themselves, even in your examples.

(3) New keywords are bad.  

Not as bad as overloading an old one with a new meaning.  

There has already been some confusion over how the elements 
of the "list" will be restricted -- confusion that hasn't 
been cleared up.  If it isn't a "normal" list, right from
the start, then this should be marked.

> Please, "import this" before you decide to -1 or -0.. specifically:

You forgot a few

Explicit is better than implicit.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although never is often better than *right* now.
Namespaces are one honking great idea -- let's do more of those!

-jJ

From pje at telecommunity.com  Mon Mar  8 13:48:09 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar  8 13:43:04 2004
Subject: [Python-Dev] (Specific syntax of) PEP 318 -
	function/method/class decoration
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D37F@USAHM010.amer.cor
	p.eds.com>
Message-ID: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>

At 01:21 PM 3/8/04 -0500, Jewett, Jim J wrote:
>Bob Ippolito:
>
> > I would like to focus this thread to, specifically, if *this syntax*
> > should become part of Python.
>
>         def name(args)[something_else]:
>
>-1.  With more explicit and extensible syntax, I would
>be +1, but with *this* syntax, I'm -1.
>
>Why are you so insistent on the bare syntax?  The only
>arguments I have seen are:
>
>(1)  We already have it.
>
>The author himself said that the patch isn't final, and
>he can change the syntax.  Even some semantic changes
>(testlist vs exprlist) may be OK.
>
>(2)  It is shorter to type.
>
>Adding "as" or "mods" is fairly small compared to the names
>of the wrappers themselves, even in your examples.
>
>(3) New keywords are bad.
>
>Not as bad as overloading an old one with a new meaning.

(4) C# uses the [] to designate "attributes" of functions, classes, and 
parameters.

The positioning is different, though.  C# puts the bracketed list of 
"attributes" at the *beginning* of the declarations, so if we were really 
following C#'s lead, the syntax would actually be:

class [interface] Foo:

    def [classmethod] foo(cls):
        ...

or, even closer to C#:

[interface] class Foo:

    [classmethod] def foo(cls):
        ...

but that's really ugly.

Anyway, I seem to recall that the bracketed syntax was originally inspired 
by C#'s attribute mechanism, but you'd have to ask the original author to 
be sure.


From guido at python.org  Mon Mar  8 13:53:32 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar  8 13:53:38 2004
Subject: [Python-Dev] PEP 318 needs a rewrite
In-Reply-To: Your message of "Mon, 08 Mar 2004 13:48:09 EST."
	<5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com> 
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com> 
Message-ID: <200403081853.i28IrW931576@guido.python.org>

Instead of arguing in circles, can someone (or a small group) rewrite
the PEP to contain a summary of the arguments for and against various
forms?

I've been asked by a few folks to pronounce on the PEP, but as it
stands, the PEP doesn't adequately summarize the discussion, so I
can't pronounce.

I haven't been able to follow the discussion here in much detail, but
I've sampled bits and pieces, and I happen to like the Quixote form

  def <name> [<attributes>] (<arguments>): <body>

As a human reader, I like seeing the attributes before the arguments
because some attributes (like classmethod and staticmethod) affect the
interpretation of the argument list.  Also, I suspect that long
argument lists are much more common than long lists of attributes; I
don't like finding attributes hiding after the argument list.
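
(A tiny illustration of that point in today's spelling -- you can't tell how 
to read 'cls' until the last line; the names here are made up:)

    class Config(object):
        def fromlist(cls, pairs):             # is 'cls' an instance or a class?
            obj = cls()
            obj.pairs = list(pairs)
            return obj
        fromlist = classmethod(fromlist)      # only this line tells you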

I don't like having the attributes before the name, because that makes
it harder to find the name.  So from this perspective the Quixote form
is optimal IMO.

(Feel free to rewrite the PEP to match this opinion. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From barry at python.org  Mon Mar  8 13:57:12 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar  8 13:57:24 2004
Subject: [Python-Dev] (Specific syntax of) PEP 318 -
	function/method/class decoration
In-Reply-To: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
Message-ID: <1078772231.5337.151.camel@anthem.wooz.org>

On Mon, 2004-03-08 at 13:48, Phillip J. Eby wrote:

> The positioning is different, though.  C# puts the bracketed list of 
> "attributes" at the *beginning* of the declarations, so if we were really 
> following C#'s lead, the syntax would actually be:
> 
> class [interface] Foo:
> 
>     def [classmethod] foo(cls):
>         ...

After rewriting a bunch of my existing code to use the various proposed
syntaxes in PEP 318, this is the one I ended up preferring.

But again, I think all the issues, opinions, etc. are covered in the
PEP, so it doesn't make much sense to continue to debate the issue. 
Hopefully Guido will pronounce on it soon.

-Barry



From jim.jewett at eds.com  Mon Mar  8 14:21:34 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Mon Mar  8 14:22:05 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on elements
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>

Given that some syntax will be chosen, what are the
restrictions on the wrappers?

Right now, the variable portion of the syntax is:

	'[' wrapper (, wrapper)* ']'

What else should be accepted, to avoid surprises?

	Variables that evaluate to a list?
	Expressions that evaluate to a list?
	List comprehensions?
	Single wrappers?

What are the restrictions on list items?

	Only a function?
	Only a code object?  (I assume so)
	Must the code return the original function?
	Must the code return a code object?  

In other words, which of the following examples
should work, which should fail, and which should
depend on whatever is easiest?

1.	# fail, integer not a callable?  (Don't try to slice the args!)
	def name(args)[0]:

2.	# fail, as w[0] is itself a (non-callable) list?
	w = [wrapper]
	def name(args) [w]:

3.	# fail, not a list?
	def name(args) classmethod:

4.	# ??? lambda would be OK, except for the second ":"
	def name(args)[lambda x: x]:

5.	# should succeed?
	def name(args)[x for x in decorators if x in active]:	

6.	# ??? wrapper does not return a callable.
	y = lambda x: None
	def name(args) [y]:

7.	# a list, but no '[]'.  either surprises.
	def name(args) list((wrap1, wrap2)):

8.	# a list, but no '[]'.  either surprises.
	w = [wrapper]
	def name(args) w:		

9.	# a list, but no '[]'.  either surprises.
	def w(): return [y,y,y]
	def name(args) w():

Do the wrappers have to be defined when the definition starts?
Just defined before the module finishes its definition?
Before the definition is executed?  In other words, are these OK?

def outer():
    # w not defined at module load, but defined before any call
    def inner(x)[w]:
        print x
    return inner

But if that is OK, then which w would be used in the next example?
w is redefined "later", but before inner is actually called.

def w(fn): 
    print "outside w"
    return fn

def outer():
    # w not defined yet, but defined later in this function
    def inner(x)[w]:
        print x
    def w(fn):
        return fn
    inner(5)

How about this?  There is no guarantee that othermod
will finish loading first, though I suppose that 
already causes problems today.

import othermod
def fn()[othermod.wrap]:
    pass

-jJ

From barry at python.org  Mon Mar  8 14:36:40 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar  8 14:36:52 2004
Subject: [Python-Dev] PEP 318 needs a rewrite
In-Reply-To: <200403081853.i28IrW931576@guido.python.org>
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
	<200403081853.i28IrW931576@guido.python.org>
Message-ID: <1078774599.5337.164.camel@anthem.wooz.org>

On Mon, 2004-03-08 at 13:53, Guido van Rossum wrote:

> I haven't been able to follow the discussion here in much detail, but
> I've sampled bits and pieces, and I happen to like the Quixote form
> 
>   def <name> [<attributes>] (<arguments>): <body>
> 
> As a human reader, I like seeing the attributes before the arguments
> because some attributes (like classmethod and staticmethod) affect the
> interpretation of the argument list.  Also, I suspect that long
> argument lists are much more common than long lists of attributes; I
> don't like finding attributes hiding after the argument list.
> 
> I don't like having the attributes before the name, because that makes
> it harder to find the name.  So from this perspective the Quixote form
> is optimal IMO.

Cool.  I just tried rewriting my sample code to use this form, and it's
not bad.  I even hacked out a quick patch to python-mode.el to support
it. :)

http://sourceforge.net/tracker/index.php?func=detail&aid=912205&group_id=86916&atid=581351

-Barry



From kbk at shore.net  Mon Mar  8 14:52:14 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Mon Mar  8 14:52:20 2004
Subject: [Python-Dev] PEP 318 needs a rewrite
In-Reply-To: <200403081853.i28IrW931576@guido.python.org> (Guido van
	Rossum's message of "Mon, 08 Mar 2004 10:53:32 -0800")
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
	<200403081853.i28IrW931576@guido.python.org>
Message-ID: <87ishf86c1.fsf@hydra.localdomain>

Guido van Rossum <guido@python.org> writes:

> I don't like having the attributes before the name, because that makes
> it harder to find the name.  So from this perspective the Quixote form
> is optimal IMO.

+1

-- 
KBK

From pje at telecommunity.com  Mon Mar  8 15:01:49 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar  8 14:56:42 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com>
Message-ID: <5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>

At 02:21 PM 3/8/04 -0500, Jewett, Jim J wrote:
>Given that some syntax will be chosen, what are the
>restrictions on the wrappers?
>
>Right now, the variable portion of the syntax is:
>
>         '[' wrapper (, wrapper)* ']'
>
>What else should be accepted, to avoid surprises?
>
>         Variables that evaluate to a list?
>         Expressions that evaluate to a list?
>         List comprehensions?
>         Single wrappers?

None of the above.  It's a list constant, period.


>What are the restrictions on list items?
>
>         Only a function?
>         Only a code object?  (I assume so)
>         Must the code return the original function?
>         Must the code return a code object?

Any callable object.  May return anything.



>In other words, which of the following examples
>should work, which should fail, and which should
>depend on whatever is easiest?
>
>1.      # fail, integer not a callable?  (Don't try to slice the args!)
>         def name(args)[0]:
>
>2.      # fail, as w[0] is itself a (non-callable) list?
>         w = [wrapper]
>         def name(args) [w]:
>
>3.      # fail, not a list?
>         def name(args) classmethod:

This one's a syntax error.


>4.      # ??? lambda would be OK, except for the second ":"
>         def name(args)[lambda x: x]:

This should work.

>5.      # should succeed?
>         def name(args)[x for x in decorators if x in active]:

Should fail.


>6.      # ??? wrapper does not return a callable.
>         y = lambda x: None
>         def name(args) [y]:

Should succeed.


>7.      # a list, but no '[]'.  either surprises.
>         def name(args) list((wrap1, wrap2)):

Syntax error.


>8.      # a list, but no '[]'.  either surprises.
>         w = [wrapper]
>         def name(args) w:

Syntax error.


>9.      # a list, but no '[]'.  either surprises.
>         def w(): return [y,y,y]
>         def name(args) w():

Syntax error.



>Do the wrappers have to be defined when the definition starts?
>Just defined before the module finishes its definition?
>Before the definition is executed?  In other words, are these OK?

The expressions are evaluated after the function or class object is 
created, but *before* it is bound to the name defined in the def or class 
statement.
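
A sketch of that ordering in today's spelling (w here is just a stand-in 
wrapper, not anything from the patch):

    def w(func):
        print "wrapping", func.__name__
        return func

    def inner(x):
        print x
    inner = w(inner)    # applied right after the function object is created;
                        # under the proposal this happens before the name
                        # 'inner' is ever bound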


>def outer():
>     # w not defined at module load, but defined before any call
>     def inner(x)[w]:
>         print x
>     return inner

This is fine.


>But if that is OK, then which w would be used in the next example?
>w is redefined "later", but before inner is actually called.
>
>def w(fn):
>     print "outside w"
>     return fn
>
>def outer():
>     # w not defined yet, but defined later in this function
>     def inner(x)[w]:
>         print x
>     def w(fn):
>         return fn
>     inner(5)

You're going to get an UnboundLocalError for this code, because 'w' is 
defined in 'outer', and therefore obscures the global 'w'.

Also, remember that wrapping takes place when the function object is about 
to be assigned to the name 'inner', so the execution of 'inner' has nothing 
to do with anything.


>How about this?  There is no guarantee that othermod
>will finish loading first, though I suppose that
>already causes problems today.
>
>import othermod
>def fn()[othermod.wrap]:
>     pass

What are you talking about?  The only way that's not guaranteed is if 
othermod is importing the current module, and if wrap isn't yet defined, 
then an AttributeError will occur here.


From skip at pobox.com  Mon Mar  8 15:05:12 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  8 15:05:28 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
Message-ID: <16460.53752.49097.660449@montanaro.dyndns.org>


    >> What are the restrictions on list items?
    >> 
    >> Only a function?
    >> Only a code object?  (I assume so)
    >> Must the code return the original function?
    >> Must the code return a code object?

    Phillip> Any callable object.  May return anything.

Must take a single argument, which itself must be a callable, right?

Skip

From fdrake at acm.org  Mon Mar  8 15:14:26 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon Mar  8 15:14:39 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.53752.49097.660449@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
Message-ID: <200403081514.26363.fdrake@acm.org>

On Monday 08 March 2004 03:05 pm, Skip Montanaro wrote:
 > Must take a single argument, which itself must be a callable, right?

If I write:

    def foo() [w1, w2]:
        pass

I'd expect w2() to be passed whatever w1() returns, regardless of whether it's 
callable.  It should raise an exception if it gets something it can't handle.
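
In today's spelling that expectation amounts to something like this (w1 and 
w2 are hypothetical wrappers, purely for illustration):

    def w1(func):
        return [func]        # deliberately returns something non-callable

    def w2(obj):
        return obj * 2       # happily accepts whatever w1 produced

    def foo():
        pass
    foo = w2(w1(foo))        # w2 is passed w1's result; 'foo' ends up a 2-item list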



  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From theller at python.net  Mon Mar  8 15:27:36 2004
From: theller at python.net (Thomas Heller)
Date: Mon Mar  8 15:27:56 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib/logging
 handlers.py, 1.11, 1.12
In-Reply-To: <E1B0O4O-0007bp-Pd@sc8-pr-cvs1.sourceforge.net>
	(vsajip@users.sourceforge.net's
	message of "Mon, 08 Mar 2004 08:57:36 -0800")
References: <E1B0O4O-0007bp-Pd@sc8-pr-cvs1.sourceforge.net>
Message-ID: <n06rnkxz.fsf@python.net>

vsajip@users.sourceforge.net writes:

> Update of /cvsroot/python/python/dist/src/Lib/logging
> In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv29178
>
> Modified Files:
> 	handlers.py 
> Log Message:
> Removed spurious import statement
>
> Index: handlers.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/logging/handlers.py,v
> retrieving revision 1.11
> retrieving revision 1.12
> diff -C2 -d -r1.11 -r1.12
> *** handlers.py	28 Feb 2004 16:07:46 -0000	1.11
> --- handlers.py	8 Mar 2004 16:57:19 -0000	1.12
> ***************
> *** 30,35 ****
>   import sys, logging, socket, types, os, string, cPickle, struct, time
>   
> - from SocketServer import ThreadingTCPServer, StreamRequestHandler
> - 
>   #
>   # Some constants...

Should be fixed in the 2.3 maintainance branch also, imo.


From skip at pobox.com  Mon Mar  8 15:27:54 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  8 15:28:12 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403081514.26363.fdrake@acm.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
Message-ID: <16460.55114.935938.893587@montanaro.dyndns.org>


    Fred> On Monday 08 March 2004 03:05 pm, Skip Montanaro wrote:

    >> Must take a single argument, which itself must be a callable, right?

    Fred> If I write:

    Fred>     def foo() [w1, w2]:
    Fred>         pass

    Fred> I'd expect w2() to be passed whatever w1() returns, regardless of
    Fred> whether it's callable.  It should raise an exception if it gets
    Fred> something it can't handle.

Yes.  I was thinking of the case where we wanted it to return something
useful which could be bound to the name "foo".  I suppose if you've had too
much caffeine you could dream up a case where w1() returns an AST based on
the original foo and w2() does something with it to cook up a new object,
but I suspect that would be pretty rare.

Skip

From aahz at pythoncraft.com  Mon Mar  8 15:32:13 2004
From: aahz at pythoncraft.com (Aahz)
Date: Mon Mar  8 15:32:18 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403081514.26363.fdrake@acm.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
Message-ID: <20040308203213.GA14819@panix.com>

On Mon, Mar 08, 2004, Fred L. Drake, Jr. wrote:
> On Monday 08 March 2004 03:05 pm, Skip Montanaro wrote:
>>
>> Must take a single argument, which itself must be a callable, right?
> 
> If I write:
> 
>     def foo() [w1, w2]:
>         pass
> 
> I'd expect w2() to be passed whatever w1() returns, regardless of
> whether it's callable.  It should raise an exception if it gets
> something it can't handle.

No, that's not right.  If

    def foo() [w1, w2]: pass

is valid, this must also always be valid:

    def foo() [w2]: pass

I'm not sure to what extent we can/should enforce this, but I'm -1 on
any proposal for which this isn't the documented behavior.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From tdelaney at avaya.com  Mon Mar  8 15:35:13 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Mon Mar  8 15:35:20 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com>

> From: Skip Montanaro
> 
> Yes.  I was thinking of the case where we wanted it to return something
> useful which could be bound to the name "foo".  I suppose if you've had too
> much caffeine you could dream up a case where w1() returns an AST based on
> the original foo and w2() does something with it to cook up a new object,
> but I suspect that would be pretty rare.

You could create some lovely obfuscated code ...

def decorator (func):
    return 1

def func [decorator] (a, b):
    return 2

print func(3, 4)

...
TypeError

:)

BTW, I have to say that looking at it, I don't like the decorator before the arguments. I'm immediately trying to read it as:

    func (decorator) ...

It's something I think I could eventually get used to, but I don't find it very readable. I guess there aren't enough languages that do something like that.

Tim Delaney

From fdrake at acm.org  Mon Mar  8 15:45:30 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon Mar  8 15:45:43 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.55114.935938.893587@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com> <200403081514.26363.fdrake@acm.org>
	<16460.55114.935938.893587@montanaro.dyndns.org>
Message-ID: <200403081545.30355.fdrake@acm.org>

On Monday 08 March 2004 03:05 pm, Skip Montanaro wrote:
 > Must take a single argument, which itself must be a callable,
 > right?

I wrote:
 > I'd expect w2() to be passed whatever w1() returns, regardless of
 > whether it's callable.  It should raise an exception if it gets
 > something it can't handle.

On Monday 08 March 2004 03:27 pm, Skip Montanaro wrote:
 > Yes.  I was thinking of the case where we wanted it to return something
 > useful which could be bound to the name "foo".  I suppose if you've had

And I certainly expect that's the typical case; I was mostly reacting to your 
use of the word "must" rather than the idea.  When I read "must", that tells 
me someone is going to check that in the mechanism rather than just passing 
it on.

 > too much caffeine you could dream up a case where w1() returns an AST
 > based on the original foo and w2() does something with it to cook up a new
 > object, but I suspect that would be pretty rare.

Defining foo to some useful object doesn't imply that it's callable, or that 
it can't be further transformed in useful ways.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From skip at pobox.com  Mon Mar  8 15:46:37 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  8 15:46:47 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040308203213.GA14819@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
Message-ID: <16460.56237.524168.961669@montanaro.dyndns.org>


    >> I'd expect w2() to be passed whatever w1() returns, regardless of
    >> whether it's callable.  It should raise an exception if it gets
    >> something it can't handle.

    aahz> No, that's not right.  If

    aahz>     def foo() [w1, w2]: pass

    aahz> is valid, this must also always be valid:

    aahz>     def foo() [w2]: pass

Can you explain why this must be the case?  I agree that coupling between w1
and w2 should be discouraged (see my ast example).

    aahz> I'm not sure to what extent we can/should enforce this, but I'm -1
    aahz> on any proposal for which this isn't the documented behavior.

I guess this is an area where PEP 318 should be fleshed out a bit.  I don't
see any reason it shouldn't be expanded to include semantics as well as
syntax.  That might require a title change, but I don't think the semantics
should be left unspecified, nor do I think the syntax and semantics should
reside in separate PEPs.

Skip

From pf_moore at yahoo.co.uk  Mon Mar  8 15:50:38 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Mon Mar  8 15:50:44 2004
Subject: [Python-Dev] Re: PEP 318 - generality of list;
	restrictions on elements
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.cor
	p.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
Message-ID: <wu5vxdup.fsf@yahoo.co.uk>

Skip Montanaro <skip@pobox.com> writes:

>     >> What are the restrictions on list items?
>     >> 
>     >> Only a function?
>     >> Only a code object?  (I assume so)
>     >> Must the code return the original function?
>     >> Must the code return a code object?
>
>     Phillip> Any callable object.  May return anything.
>
> Must take a single argument, which itself must be a callable, right?

I work from the equivalence of

    def f [a, b, c] (args):
        ...

and

    def f(args):
        ...
    f = a(f)
    f = b(f)
    f = c(f)

Hence, all wrappers *will be called with* a single argument. That
argument may not be a callable (if a returns a non-callable, that's
what b will get) and the return value need not be a callable. Such
uses could be considered strange, however...
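
(A minimal sketch of that strange-but-legal case, spelled out with
today's manual wrapping and the a/b wrappers from above:)

    def a(f):
        return "not a callable"   # b() receives this string

    def b(obj):
        return (obj, len(obj))    # and may itself return anything

    def f(args):
        pass
    f = a(f)
    f = b(f)    # f is now ('not a callable', 14)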

Paul.
-- 
This signature intentionally left blank


From skip at pobox.com  Mon Mar  8 15:53:24 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  8 15:54:01 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com>
Message-ID: <16460.56644.548695.710081@montanaro.dyndns.org>


    Tim> BTW, I have to say that looking at it, I don't like the decorator
    Tim> before the arguments. I'm immediately trying to read it as:

    Tim>     func (decorator) ...

    Tim> It's something I think I could eventually get used to, but I don't
    Tim> find it very readable. I guess there's not enough languages that do
    Tim> something like that.

Perhaps the Quixote folks can shed some light on their decision about
decorator syntax when they created PTL.  I suspect they gave some
consideration to various alternatives before settling on decorator-before-
the-arg.  Andrew?  Neil?  Greg?

Skip

From goodger at python.org  Mon Mar  8 15:54:20 2004
From: goodger at python.org (David Goodger)
Date: Mon Mar  8 15:54:19 2004
Subject: [Python-Dev] Re: PEP 318 needs a rewrite
In-Reply-To: <200403081853.i28IrW931576@guido.python.org>
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
	<200403081853.i28IrW931576@guido.python.org>
Message-ID: <404CDD7C.7000706@python.org>

Guido van Rossum wrote:
> Instead of arguing in circles, can someone (or a small group) rewrite
> the PEP to contain a summary of the arguments for and against various
> forms?

Kevin Smith, the original PEP author, doesn't frequent python-dev much,
and has indicated that he'd be happy to step down as primary author.
Who would like to build on Kevin's work?

-- David Goodger (wearing the PEP Editor hat)

From aahz at pythoncraft.com  Mon Mar  8 16:00:01 2004
From: aahz at pythoncraft.com (Aahz)
Date: Mon Mar  8 16:01:17 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.56237.524168.961669@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
Message-ID: <20040308210000.GA20554@panix.com>

On Mon, Mar 08, 2004, Skip Montanaro wrote:
>
> 
>     >> I'd expect w2() to be passed whatever w1() returns, regardless of
>     >> whether it's callable.  It should raise an exception if it gets
>     >> something it can't handle.
> 
>     aahz> No, that's not right.  If
> 
>     aahz>     def foo() [w1, w2]: pass
> 
>     aahz> is valid, this must also always be valid:
> 
>     aahz>     def foo() [w2]: pass
> 
> Can you explain why this must be the case?  I agree that coupling between w1
> and w2 should be discouraged (see my ast example).

Principle of least surprise, essentially.  There are already going to be
enough obscure uses for this; let's try to keep the completely whacky out
of it.  You'll have to come up with an awfully convincing use case to
change my mind.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From skip at pobox.com  Mon Mar  8 16:05:34 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar  8 16:05:43 2004
Subject: [Python-Dev] Re: PEP 318 needs a rewrite
In-Reply-To: <404CDD7C.7000706@python.org>
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
	<200403081853.i28IrW931576@guido.python.org>
	<404CDD7C.7000706@python.org>
Message-ID: <16460.57374.62020.348959@montanaro.dyndns.org>

    David> Guido van Rossum wrote:
    >> Instead of arguing in circles, can someone (or a small group) rewrite
    >> the PEP to contain a summary of the arguments for and against various
    >> forms?

    David> Kevin Smith, the original PEP author, doesn't frequent python-dev
    David> much, and has indicated that he'd be happy to step down as
    David> primary author.  Who would like to build on Kevin's work?

I'd be happy to help, though I'd much prefer to do it as part of a small
group than alone.

Skip


From barry at python.org  Mon Mar  8 16:20:33 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar  8 16:20:47 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <16460.56237.524168.961669@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org> <20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
Message-ID: <1078780832.5337.186.camel@anthem.wooz.org>

On Mon, 2004-03-08 at 15:46, Skip Montanaro wrote:

> I guess this is an area where PEP 318 should be fleshed out a bit.  I don't
> see any reason it shouldn't be expanded to include semantics as well as
> syntax.  That might require a title change, but I don't think the semantics
> should be left unspecified, nor do I think the syntax and semantics should
> reside in separate PEPs.

I agree.  FWIW, I think the list of things inside the square brackets
should be simple identifiers, i.e. the names of callables that have been
defined elsewhere.  I'd prefer to keep them simple, meaning no lambdas
or list comprehensions.  But that's just me.

-Barry



From jepler at unpythonic.net  Mon Mar  8 16:29:54 2004
From: jepler at unpythonic.net (Jeff Epler)
Date: Mon Mar  8 16:29:58 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
Message-ID: <20040308212953.GE12662@unpythonic.net>

In the suggested syntaxes, the modifiers are not a list.  There's no
list here:
    >>> def f() [wrapper]: pass
just like there's no tuple* in either of these (or the above):
    >>> def f(x, y): pass
    >>> class C(A, B): pass
and you can't just say
    >>> s1 = (x, y)
    >>> def f s1: pass
or
    >>> s2 = (A, B)
    >>> class C s2: pass

I don't think there'll be any trouble becoming accustomed to this new
syntax, with something "a lot like a list literal (but not quite)".

Each of the specific questions you posed is answered by the PEP's new
grammar rule.  As I understand it, the items must be
callable, they may return anything, and they must take the output from
the inner modifier function (a function object in the case of the
earliest-called modifier).  For instance, the following would be
permitted (but probably not encouraged):

    >>> import math
    >>> def call(f): return f()
    >>> def pi() [call]:
    ...     return math.atan(1) * 4
    >>> pi
    3.1415926535897931


Jeff

From bac at OCF.Berkeley.EDU  Mon Mar  8 16:31:53 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Mon Mar  8 16:32:03 2004
Subject: [Python-Dev] Status of patch 764217, fix for tkFond.Font(name=)?
In-Reply-To: <rowen-6693EE.11382625022004@sea.gmane.org>
References: <rowen-6693EE.11382625022004@sea.gmane.org>
Message-ID: <404CE649.9080106@ocf.berkeley.edu>

Russell E. Owen wrote:

> I submitted patch 764217 a while ago and it got some initial
> interest, but after applying the requested changes I've not heard
> anything for quite a while.
> 
> The summary of the patch is: tkFont.Font(name=xxx) crashes if a font by 
> the specified name already exists. This is a problem for several 
> reasons, the main one being that it makes life really tough if you want 
> to create a new tkFont.Font object for a given Tcl named font.
> 
> Is there anything I can do to get this moving again? I don't want to be 
> pushy, but I was certainly hoping to get it into Python 2.4.
> 

Nothing beyond what you have done; send an email here.  Now you just 
need to hope that it catches a developer's eye and motivates them enough 
to look over the patch and apply it (unfortunately I am not that 
developer; I don't know anything about Tkinter).

-Brett

From fdrake at acm.org  Mon Mar  8 16:34:37 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon Mar  8 16:34:49 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040308203213.GA14819@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
Message-ID: <200403081634.37574.fdrake@acm.org>

On Monday 08 March 2004 03:32 pm, Aahz wrote:
 > No, that's not right.  If
 >
 >     def foo() [w1, w2]: pass
 >
 > is valid, this must also always be valid:
 >
 >     def foo() [w2]: pass

Perhaps it should also be valid, but "must" is pretty strong.  This is still 
Python, and the general "consenting adults" philosophy shouldn't be 
abandoned.

 > I'm not sure to what extent we can/should enforce this, but I'm -1 on
 > any proposal for which this isn't the documented behavior.

I think we're on shaky ground if we require any sort of equivalence here, 
simply because it might make no sense at all for specific decorators to be 
stacked out of order or in unusual combinations.  I'm quite happy for the PEP 
and the final documentation to make recommendations, but hard requirements of 
this sort are difficult to tolerate given the difficulty of even defining 
"validity".

As an (admittedly trivial) example, I'd be quite happy for:

    class Color [valuemap]:
        red = rgb(255, 0, 0)
        blue  = rgb(0, 0, 255)
        green = rgb(0, 255, 0)

to cause the name Color to be bound to a non-callable object.  Why must the 
decorators be required to return callables?  It will not make sense in all 
circumstances when a decorator is being used.
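
(To make the example concrete, a hypothetical sketch of what such a
'valuemap' decorator could do if written by hand today; both 'valuemap'
and 'rgb' are invented for illustration:)

    def rgb(r, g, b):
        return (r, g, b)

    def valuemap(klass):
        # Collect the class attributes into a plain dict, which is
        # clearly not a callable result.
        return dict([(name, value)
                     for name, value in vars(klass).items()
                     if not name.startswith('_')])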


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From fdrake at acm.org  Mon Mar  8 16:35:43 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon Mar  8 16:35:58 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.56237.524168.961669@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
Message-ID: <200403081635.43187.fdrake@acm.org>

On Monday 08 March 2004 03:46 pm, Skip Montanaro wrote:
 > I guess this is an area where PEP 318 should be fleshed out a bit.  I
 > don't see any reason it shouldn't be expanded to include semantics as well
 > as syntax.  That might require a title change, but I don't think the
 > semantics should be left unspecified, nor do I think the syntax and
 > semantics should reside in separate PEPs.

Agreed; the semantics should be included in the PEP.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From fdrake at acm.org  Mon Mar  8 16:37:42 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon Mar  8 16:38:04 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040308210000.GA20554@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
Message-ID: <200403081637.42008.fdrake@acm.org>

On Monday 08 March 2004 04:00 pm, Aahz wrote:
 > Principle of least surprise, essentially.  There are already going to be
 > enough obscure uses for this; let's try to keep the completely whacky out
 > of it.  You'll have to come up with an awfully convincing use case to
 > change my mind.

I'd be very surprised if the interpreter cared that a decorator returned a 
callable; why should it care?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From guido at python.org  Mon Mar  8 16:43:12 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar  8 16:43:19 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: Your message of "Mon, 08 Mar 2004 16:20:33 EST."
	<1078780832.5337.186.camel@anthem.wooz.org> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org> 
	<1078780832.5337.186.camel@anthem.wooz.org> 
Message-ID: <200403082143.i28LhCQ32119@guido.python.org>

> I agree.  FWIW, I think the list of things inside the square brackets
> should be simple identifiers, i.e. the names of callables that have been
> defined elsewhere.  I'd prefer to keep them simple, meaning no lambdas
> or list comprehensions.  But that's just me.

Actually, allowing things that look like function calls here would be
very useful, e.g. to set C#-like attributes.  (Or so a little birdie
tells me.)
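
(A rough sketch of how such call-style entries could be spelled even
today, as a decorator factory; the names here are purely illustrative:)

    def attributes(**attrs):
        def decorate(func):
            for name, value in attrs.items():
                setattr(func, name, value)
            return func
        return decorate

    def foo(arg):
        pass
    foo = attributes(author="me", version="1.0")(foo)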

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jim.jewett at eds.com  Mon Mar  8 16:43:23 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Mon Mar  8 16:44:04 2004
Subject: [Python-Dev] PEP 318 needs a rewrite
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D381@USAHM010.amer.corp.eds.com>

Guido van Rossum wrote:

> Instead of arguing in circles, can someone (or a
> small group) rewrite the PEP to contain a summary
> of the arguments for and against various forms?

I volunteer to help, or to do it, whichever Kevin (or
others) prefer.

I suspect that any draft will trigger a number of
small objections and clarifications.  To avoid 
recluttering python-dev, I want to iron out as many 
of them as possible before the next PEP posting.  
Please email me if you want to see (or write) the 
interim versions.

-jJ

From bac at OCF.Berkeley.EDU  Mon Mar  8 17:10:41 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Mon Mar  8 17:10:50 2004
Subject: [Python-Dev] python-dev Summary for 2004-02-01 through 2004-02-29
	[rough draft]
Message-ID: <404CEF61.7040809@ocf.berkeley.edu>

OK, usual deal: read, correct, reply with flattery and expensive gifts.  =)

Hope to get this out rather quickly (Wednesday night, perhaps?) so I can 
start on the next summary in hopes of getting back on to my semi-monthly 
habit.

I also left out the header since I figure most of you don't bother to 
read that part every time.  If you actually do, just say so and I will 
make sure to continue to include it when I send out the rough draft.


------------------------------------

=====================
Summary Announcements
=====================
To continue my slight abuse of this space: I am still looking for a 
summer job or internship programming somewhere (does not have to be 
Python).  If you happen to have something at your company or know of 
something somewhere that you think might work for me please email me at 
brett@python.org .  Thanks.

OK, on from pimping myself out for the summer to pimping PyCon_ (or at 
least I think that is how the youngsters these days would phrase that 
sentence =).  You can still register for the conference.  The talks have 
been chosen and scheduled; more info at http://pycon.org/dc2004/talks/ . 
  Talks look really great and cover a huge gamut of areas; bigger 
variety than last year in my opinion.

There is going to be a Stackless sprint in Berlin March 10-14.  See the 
announcement email at 
http://mail.python.org/pipermail/python-dev/2004-February/042829.html .

.. _PyCon: http://www.pycon.org/


=========
Summaries
=========
---------------------------------
Python 2.3 branch open for fixin'
---------------------------------
Anthony Baxter, perpetual release manager for the 2.3 branch, announced 
that CVS commits for 2.3 were open again.  He mentioned he thought 
another release around May or June would probably happen unless a severe 
bug came up that required immediate patching and release.

Contributing threads:
   - `release23-maint is reopened for business. 
<http://mail.python.org/pipermail/python-dev/2004-February/042400.html>`__


--------------------------------------------------------
Early Spring cleaning for what platforms Python supports
--------------------------------------------------------
Skip Montanaro cleaned up the configure.in script for Python and in the 
process removed old support for unneeded OSs as stated in `PEP 11`_.  So 
SunOS 4 support is gone.  There was discussion of making Python 2.4 not 
support libc 5, since Python does not compile with it.  The suggestion 
was made to have configure.in detect libc 5 and, if it is found, refuse 
to continue.

Skip also removed the option to compile without universal newline 
support, along with antiquated pthread variants from 1997 and before.

.. _PEP 11: http://python.org/peps/pep-0011.html

Contributing threads:
   - ` zapped several unsupported bits... 
<http://mail.python.org/pipermail/python-dev/2004-February/042427.html>`__
   - `PEP-0011 up-to-date 
<http://mail.python.org/pipermail/python-dev/2004-February/042435.html>`__


--------------------------------------------------
Compiling in profiling support for the interpreter
--------------------------------------------------
Jeremy Hylton discovered his old way of turning on gprof_ profiling 
support when compiling Python no longer worked.  Hye-Shik Chang said he 
got it working by compiling using ``CC="cc -pg" LDFLAGS="-pg" 
./configure``.  Martin v. Löwis suggested running configure with `` 
--without-cxx`` to get around the problem.

.. _gprof: http://www.cs.utah.edu/dept/old/texinfo/as/gprof_toc.html

Contributing threads:
   - `building python with profiling support 
<http://mail.python.org/pipermail/python-dev/2004-February/042431.html>`__


-----------------------------------
Python and C89 play nicely together
-----------------------------------
The question came up of what version of Standard C Python requires.  
The answer is that Python needs only C89; neither C89 with Amendment 1 
(also called C95) nor C99 is required for Python to run (which, in this 
case, meant wchar.h is not required).

Contributing threads:
   - `*grumble* wchar.h 
<http://mail.python.org/pipermail/python-dev/2004-February/042438.html>`__
   - `Another big ANSI-fication change... 
<http://mail.python.org/pipermail/python-dev/2004-February/042480.html>`__


---------------------------------------
Growing lists just got that much faster
---------------------------------------
Raymond Hettinger (along with the help of various other people on his 
initial patch) managed to come up with a way to speed up allocating 
space for list items (either from growth of an existing list or 
creation of a new list).  The speed-up is great and at worst has a hit 
of 4 wasted bytes on platforms that use 8-byte alignment.

While this was being discussed, though, the possibility of 3rd party 
code messing with the internal values of list items came up.  While no 
specific use-case was found, it was agreed that code outside of the 
Python core should not bypass the API; there is no guarantee the 
internals won't change.

Contributing threads:
   - `Optimization of the Year 
<http://mail.python.org/pipermail/python-dev/2004-February/042466.html>`__


---------------------------------------------------
Do we really need mutability for exceptions groups?
---------------------------------------------------
The question was raised as to why ``except (TypeError, ValueError):`` is 
acceptable while ``except [TypeError, ValueError]:`` is not (in other 
words, why tuples may be used to group exceptions for an 'except' 
statement while lists are not allowed).  The question was whether it had 
something to do with a tuple's immutability.

Guido said it all had to do with visually grouping the exceptions and 
nothing to do with what the underlying container was.  If he had it to 
do over again he would rather change it so that ``except TypeError, 
ValueError:`` did the same thing as above.  That change would alleviate 
the common error of expecting that behavior from the current syntax but 
instead getting the exception instance bound to ValueError.  But the 
change is not planned for any time in the future since Guido has no 
proposal on how to change the syntax to handle the assignment of the 
exception instance to a local variable.
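
A quick illustration of the trap in current syntax::

    try:
        raise TypeError("boom")
    except TypeError, ValueError:
        # ValueError is now bound to the TypeError instance, not to the
        # builtin exception; exactly the mistake described above.
        print ValueError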

Contributing threads:
   - `Syntax for "except" 
<http://mail.python.org/pipermail/python-dev/2004-February/042473.html>`__


---------------------------
No, you can't subclass Bool
---------------------------
François Pinard discovered that you can't subclass the bool type.  This 
led to the question of "why?"  To this he received the answer that since 
bool only has two instances, True and False, it shouldn't be allowed to 
be subclassed, since that would suggest more instances of bool could 
exist.

Contributing threads:
   - `bool does not want to be subclassed? 
<http://mail.python.org/pipermail/python-dev/2004-February/042535.html>`__


------------------------------------------------
Plea for function/method syntax sugar (PEP 318)
------------------------------------------------
Bob Ippolito brought up Michael Hudson's function/method syntactic sugar 
to essentially implement `PEP 318`_ (Michael's patch can be found at 
http://starship.python.net/crew/mwh/hacks/meth-syntax-sugar-3.diff) as 
something he **really** wanted for PyObjC_.

In case you don't know, the syntax is ``def fxn() [fun, stuff, 
here]: pass``, which ends up being the same as::

   def fxn(): pass
   fxn = here(stuff(fun(fxn)))

Common use cases are for defining classmethods and staticmethods.
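In current Python the classmethod case reads, for example::

    class C(object):
        def create(cls):
            return cls()
        create = classmethod(create)   # what the proposed sugar replaces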

The discussion then drifted into how this syntax could even be used 
with classes and whether it could somehow supplant some metaclass uses.  
The talk seemed to lead to it not really being that great a use case.

The order of application also came up.  It was suggested that the order 
be the reverse of that shown above so that it reads the way it would be 
written by hand instead of in application order.

Using 'as' instead of brackets was brought up again; ``def fxn() as 
fun`` instead of ``def fxn() [fun]``.  An argument for 'as' (or any 
other alternative) is that using brackets here overloads their existing 
uses in Python, which some feel is unpythonic.  An explicit argument 
for the brackets, though, is that they are cleaner for multi-line use. 
There was also the issue of how one would look up 'as' in the docs: 
would someone naturally look under 'as' or under 'def'?

.. _PEP 318: http://python.org/peps/pep-0318.html
.. _PyObjC: http://pyobjc.sourceforge.net/

Contributing threads:
   - `Plea for function/method syntax sugar 
<http://mail.python.org/pipermail/python-dev/2004-February/042605.html>`__
   - `new syntax for wrapping 
<http://mail.python.org/pipermail/python-dev/2004-February/042646.html>`__
   - `PEP 318: What Java does 
<http://mail.python.org/pipermail/python-dev/2004-February/042750.html>`__
   - `other uses for "as" 
<http://mail.python.org/pipermail/python-dev/2004-February/042780.html>`__
   - `new syntax for wrapping (PEP 318) - Timothy's summary 
<http://mail.python.org/pipermail/python-dev/2004-February/042824.html>`__
   - `[UPDATED] PEP 318 - Function/Method Decorator Syntax 
<http://mail.python.org/pipermail/python-dev/2004-February/042851.html>`__


--------------------------------------
Building Python with the free .NET SDK
--------------------------------------
Go to the email to see Garth's latest tool and instructions on building 
Python using Microsoft's free .NET SDK.

Contributing threads:
   - `Compiling python with the free .NET SDK 
<http://mail.python.org/pipermail/python-dev/2004-February/042595.html>`__


-------------------------------------
PEP 326 finished (but still rejected)
-------------------------------------
`PEP 326`_ ("A Case for Top and Bottom Values") has its final 
implementation linked from the PEP.  It has been rejected, but the PEP's 
authors were nice enough to go ahead and finish the code for anyone who 
might want to use it anyway.

.. _PEP 326: http://python.org/peps/pep-0326.html

Contributing threads:
   - `PEP 326 
<http://mail.python.org/pipermail/python-dev/2004-February/042672.html>`__


------------------------------------------------
time.strftime() now checks its argument's bounds
------------------------------------------------
Because of a possible crash from using negative values in the time tuple 
when passed to time.strftime(), it was decided to add bounds checks on all 
values in the time tuple.  This will break existing code that naively 
set all fields to 0 in a passed-in time tuple and thus did not set the 
values within the proper bounds (year, month, day, and day of year 
should not be 0).
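
For example, with the new checks in place::

    import time

    # time.strftime("%Y-%m-%d", (2004, 0, 0, 0, 0, 0, 0, 0, 0))
    # The naively zero-filled tuple above is now rejected (month and day
    # of 0 are out of range), while a real time tuple still works:
    print time.strftime("%Y-%m-%d", time.localtime())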

Contributing threads:
   - `Boundary checks on arguments to time.strftime() 
<http://mail.python.org/pipermail/python-dev/2004-February/042675.html>`__


--------------------------------------------
OpenVMS throws a fit over universal newlines
--------------------------------------------
For Python 2.4 the option to compile without universal newline support 
was removed.  Well, it turns out that OpenVMS_ doesn't like this. 
Apparently the OS's filesystem is not stream-oriented but record-based, 
and this leads to there being no newlines in a text file 
unless it is opened as a plain file.

`Bug #903339`_ has been opened to deal with this.

.. _Bug #903339: http://python.org/sf/903339

.. _OpenVMS: http://www.openvms.org/


--------------------------------------------
Forth-style argument passing for C functions
--------------------------------------------
Raymond Hettinger came up with the idea of adding a new function flag 
called METH_STACK that would cause a C function to be called with a 
pointer to the execution stack and the number of arguments it is being 
passed.  This would allow the function to pop off its arguments on its 
own and bypass the expense of putting them into a tuple.

The overall idea did not seem to win out.  But it does turn out that 
Michael Hudson has a patch that implements a similar idea at 
http://python.org/sf/876193 .  There was also some discussion of whether 
more specialized argument-passing flags like METH_O should be considered.

Contributing threads:
   - `Idea for a fast calling convention 
<http://mail.python.org/pipermail/python-dev/2004-February/042807.html>`__

From shane at zope.com  Mon Mar  8 17:34:18 2004
From: shane at zope.com (Shane Hathaway)
Date: Mon Mar  8 17:34:23 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403081634.37574.fdrake@acm.org>
References: <20040308203213.GA14819@panix.com>
	<200403081634.37574.fdrake@acm.org>
Message-ID: <404CF4EA.5090003@zope.com>

Fred L. Drake, Jr. wrote:
> As an (admittedly trivial) example, I'd be quite happy for:
> 
>     class Color [valuemap]:
>         red = rgb(255, 0, 0)
>         blue  = rgb(0, 0, 255)
>         green = rgb(0, 255, 0)

Ooh, what about this:

     def singleton(klass):
         return klass()

     class MyThing [singleton]:
         ...

That would be splendid IMHO.
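
(For comparison, a sketch of the same effect spelled out with today's
manual idiom:)

     def singleton(klass):
         return klass()

     class MyThing:
         def greet(self):
             return "hello"

     MyThing = singleton(MyThing)   # the name is now bound to the instance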

However, since MyThing instances aren't necessarily callable, I think my 
code breaks the rule Aahz is proposing.  The 'singleton' decorator has 
to come last in the decorator list.  Also note that the 'singleton' 
function is not a decorator in the design pattern sense, since the 
output (an instance) does not implement the same interface as the input 
(a class).

Shane

From tjreedy at udel.edu  Mon Mar  8 17:34:26 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon Mar  8 17:34:31 2004
Subject: [Python-Dev] Re: PEP 318 - generality of list;
	restrictions on elements
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<20040308212953.GE12662@unpythonic.net>
Message-ID: <c2isdg$196$1@sea.gmane.org>


"Jeff Epler" <jepler@unpythonic.net> wrote in message
news:20040308212953.GE12662@unpythonic.net...
> In the suggested syntaxes, the modifiers are not a list.  There's no
> list here:
>     >>> def f() [wrapper]: pass
> just like there's no tuple* in either of these (or the above):
>     >>> def f(x, y): pass
>     >>> class C(A, B): pass
> and you can't just say
>     >>> s1 = (x, y)
>     >>> def f s1: pass
> or
>     >>> s2 = (A, B)
>     >>> class C s2: pass

There is also no tuple here: print a,b,c,  (nor any right shift in print >>
fff).

tjr




From amk at amk.ca  Mon Mar  8 17:44:35 2004
From: amk at amk.ca (A.M. Kuchling)
Date: Mon Mar  8 17:58:28 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.56644.548695.710081@montanaro.dyndns.org>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com>
	<16460.56644.548695.710081@montanaro.dyndns.org>
Message-ID: <20040308224435.GA17531@rogue.amk.ca>

On Mon, Mar 08, 2004 at 02:53:24PM -0600, Skip Montanaro wrote:
> Perhaps the Quixote folks can shed some light on their decision about
> decorator syntax when they created PTL.  I suspect they gave some

At the time it looked like Python was going to go with
decorator-before-args.  For example, see page #7 of Guido's Python10 keynote
at http://www.python.org/doc/essays/ppt/ ; it includes examples such as 

def foo [static] (arg1, arg2): ...

(And personally I do, in fact, prefer the decorator-first syntax, because in
the case of a long argument list that spans multiple lines, a trailing
decorator would seem kind of lost.)

--amk

From greg at cosc.canterbury.ac.nz  Mon Mar  8 18:07:36 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar  8 18:07:51 2004
Subject: [Python-Dev] (Specific syntax of) PEP 318 -	function/method/class
	decoration
In-Reply-To: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
Message-ID: <200403082307.i28N7ZG07668@oma.cosc.canterbury.ac.nz>

"Phillip J. Eby" <pje@telecommunity.com>:

> The positioning is different, though.  C# puts the bracketed list of 
> "attributes" at the *beginning* of the declarations, so if we were really 
> following C#'s lead, the syntax would actually be:
> 
> class [interface] Foo:

I don't think we should be following the C style of putting variable
amounts of stuff in front of the name being declared.  It pushes the
names into unpredictable positions and makes it hard to visually scan
down a list of declarations looking for something.

The proposed syntax has things in just the right order, IMO.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar  8 18:19:26 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar  8 18:19:49 2004
Subject: [Python-Dev] PEP 318 needs a rewrite
In-Reply-To: <200403081853.i28IrW931576@guido.python.org>
Message-ID: <200403082319.i28NJQV07731@oma.cosc.canterbury.ac.nz>

Guido:

> As a human reader, I like seeing the attributes before the arguments
> because some attributes (like classmethod and staticmethod) affect the
> interpretation of the argument list.

On the other hand, this weakens the syntactic parallels between
function definitions and function calls. Somehow, a def which doesn't
have an open paren immediately after the function name doesn't look
function-definition-ish to me.

With suitable argument naming conventions, the argument list itself
usually gives enough cues to make it clear if it's a classmethod or
staticmethod:

  def foo(self, x, y, z): 
    # self => it's a normal method

  def bar(cls, x, y, z) [classmethod]:
    # cls => it's a class method

  def baz(x, y, z) [staticmethod]:
    # neither self nor cls => it's a static method

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From nas at arctrix.com  Mon Mar  8 18:10:12 2004
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon Mar  8 18:23:10 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16460.56644.548695.710081@montanaro.dyndns.org>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com>
	<16460.56644.548695.710081@montanaro.dyndns.org>
Message-ID: <20040308231012.GA21741@mems-exchange.org>

On Mon, Mar 08, 2004 at 02:53:24PM -0600, Skip Montanaro wrote:
> Perhaps the Quixote folks can shed some light on their decision
> about decorator syntax when they created PTL.

We saw that Guido had used it on one of his slides (last PyCon?).

  Neil

From greg at cosc.canterbury.ac.nz  Mon Mar  8 18:52:39 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar  8 18:52:58 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
Message-ID: <200403082352.i28Nqd307788@oma.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> Right now, the variable portion of the syntax is:
> 
> 	'[' wrapper (, wrapper)* ']'
> 
> What else should be accepted, to avoid surprises?

Whatever else happens, I think the square brackets should always be
present. They do *not* denote list construction; they are the
syntactic elements which say "here is a parenthetical note concerning
qualities which this function is to have".

> What are the restrictions on list items?
> 
> 	Only a function?
> 	Only a code object?  (I assume so)

Any callable object should be acceptable as a wrapper. I don't
see any need to restrict them to functions.

To allow for dynamic insertion of a sequence of wrappers, I can
think of two possibilities:

(A) Allow a list or tuple of further items to be used as an item.

(B) Allow a '*' before an item to mean that the item is to
be treated as a sequence of items to be iterated over.

I think I prefer (B) because it eliminates any need for type
testing, and allows any iterable to be used as the items sequence
rather than being restricted to certain sequence types.

> 	Must the code return the original function?
> 	Must the code return a code object?

The wrapper can return any object it likes, as long as it makes sense
to insert it into the class dict as the result of the def, or pass it
on to further wrappers. (In most cases it probably *won't* return the
original function -- that's the whole point of wrapping!)

> 1.	# fail, integer not a callable?  (Don't try to slice the args!)
> 	def name(args)[0]:

Yes, fail.


> 2.	# fail, as w[0] is itself a (non-callable) list?
> 	w = [wrapper]
> 	def name(args) [w]:

Under suggestion (A) above, recurse into the list. Under (B),
fail (item non-callable).

> 3.	# fail, not a list?
> 	def name(args) classmethod:

Syntax error.

> 4.	# ??? lambda would be OK, except for the second ":"
> 	def name(args)[lambda x: x]:

Accepted. I can't see any reason why the second ':' should be
an issue, any more than the multiple ':' in nested lambdas
cause any problem.

> 5.	# should succeed?
> 	def name(args)[x for x in decorators if x in active]:	

On the understanding that there is a generator expression
between the [...]:

  Under (A): Fail, item is not callable and is not a list or tuple
             (it's a generator-iterator).

  Under (B): Fail, item is not callable. This would succeed, however:

     def name(args)[*x for x in decorators if x in active]:

> 6.	# ??? wrapper does not return a callable.
> 	y = lambda x: None
> 	def name(args) [y]:

Succeeds, but leaves 'name' defined as 'None' in the target
namespace, which probably isn't very useful.

> 7.	# a list, but no '[]'.  either surprises.
> 	def name(args) list((wrap1, wrap2)):

Syntax error.

> 8.	# a list, but no '[]'.  either surprises.
> 	w = [wrapper]
> 	def name(args) w:		

Syntax error.

> 9.	# a list, but no '[]'.  either surprises.
> 	def w(): return [y,y,y]
> 	def name(args) w():

Syntax error.

> Do the wrappers have to be defined when the definition starts?

The wrapper expressions are evaluated when the def is executed (same
as with default argument values).

> def outer():
>     # w not defined at module load, but defined before any call
>     def inner(x)[w]:
>         print x
>     return inner

Okay.

But if that is OK, then which w would be used in the next example?

> def w(fn): 
>     print "outside w"
>     return fn
> 
> def outer():
>     # w not defined yet, but defined in this
>     def inner(x)[w]:
>         print x
>     def w(fn):
>         return fn
>     inner(5)

The w in def inner(x)[w]: refers to the local w, which is not
bound when the inner def is executed, so a NameError results
when outer() is called.

> How about this?  There is no guarantee that othermod
> will finish loading first, though I suppose that 
> already causes problems today.
> 
> import othermod
> def fn()[othermod.wrap]:
>     pass

You take your chances. Okay if othermod gets as far as
binding othermod.wrap by the time def fn() is executed
(which presumably it will as long as nothing goes wrong).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Mar  8 18:57:34 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar  8 18:57:56 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on	elements
In-Reply-To: <16460.53752.49097.660449@montanaro.dyndns.org>
Message-ID: <200403082357.i28NvYH07835@oma.cosc.canterbury.ac.nz>

Skip Montanaro <skip@pobox.com>:

>     Phillip> Any callable object.  May return anything.
> 
> Must take a single argument, which itself must be a callable, right?

Or a descriptor, returned by another wrapper.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar  8 19:17:59 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar  8 19:18:11 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on	elements
In-Reply-To: <1078780832.5337.186.camel@anthem.wooz.org>
Message-ID: <200403090017.i290Hxs08042@oma.cosc.canterbury.ac.nz>

Barry Warsaw <barry@python.org>, Mon, 08 Mar 2004 16:20:33 -0500:

> I agree.  FWIW, I think the list of things inside the square brackets
> should be simple identifiers

That would be too restrictive, I think. I can imagine users
like the PyObjC people wanting to have parameterized wrappers
to express things like external function signatures.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From gmccaughan at synaptics-uk.com  Tue Mar  9 04:42:42 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Tue Mar  9 04:42:49 2004
Subject: [Python-Dev] (Specific syntax of) PEP 318 - function/method/class
	decoration
In-Reply-To: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
References: <5.1.0.14.0.20040308133911.026e85a0@mail.telecommunity.com>
Message-ID: <200403090942.42159.gmccaughan@synaptics-uk.com>

On Monday 2004-03-08 18:48, Phillip J. Eby wrote:

> Anyway, I seem to recall that the bracketed syntax was originally inspired
> by C#'s attribute mechanism, but you'd have to ask the original author to
> be sure.

I think I was the first to propose the bracketed syntax
(in <slrna40k88.2h9o.Gareth.McCaughan@g.local>, after Tim Peters
pointed out a problem with a different syntax I'd proposed for
the same purpose in <slrna3va8t.2a7p.Gareth.McCaughan@g.local>).
The proposal certainly wasn't in the least inspired by C#'s
attribute mechanism; I'd never even seen it then. For what
it's worth, I still have a strong preference for the original
"def f(args) [decorators]:" syntax, despite Guido's preference
for "def f [decorators] (args):".

The problem Tim pointed out was that the first-proposed syntax

    def staticmethod f(args): ...

would confuse over-simple ad hoc parsers (in text editors,
code munging tools, etc.) into thinking that the above was
defining a function or method called "staticmethod".
A similar, but (much?) less severe, problem may attend
the syntax that just moves the decorator-spec to before
the argument list. I suspect this isn't serious enough
to count as a real objection.
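
(A concrete example of the sort of ad hoc parser that would be fooled;
the regex below is just an illustration:)

    import re
    def_name = re.compile(r'^\s*def\s+(\w+)')
    print def_name.match("def staticmethod f(args): ...").group(1)
    # prints "staticmethod" rather than "f"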

-- 
g



From bh at intevation.de  Tue Mar  9 05:01:21 2004
From: bh at intevation.de (Bernhard Herzog)
Date: Tue Mar  9 05:01:27 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <200403082352.i28Nqd307788@oma.cosc.canterbury.ac.nz> (Greg
	Ewing's message of "Tue, 09 Mar 2004 12:52:39 +1300 (NZDT)")
References: <200403082352.i28Nqd307788@oma.cosc.canterbury.ac.nz>
Message-ID: <s9z1xo22vbi.fsf@salmakis.intevation.de>

Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

> To allow for dynamic insertion of a sequence of wrappers, I can
> think of two possibilities:
>
> (A) Allow a list or tuple of further items to be used as an item.
>
> (B) Allow a '*' before an item to mean that the item is to
> be treated as a sequence of items to be iterated over.

(C) Define a function that takes a sequence of decorators and returns a
decorator that applies all decorators.

E.g.

def decorators(decs):
    def apply_decorators(f):
        for d in decs:
            f = d(f)
        return f
    return apply_decorators

def f(x) [decorators([foo, bar])]:
    pass


This scheme would have the advantage that it doesn't require any
additional syntax.

   Bernhard

-- 
Intevation GmbH                                 http://intevation.de/
Skencil                                http://sketch.sourceforge.net/
Thuban                                  http://thuban.intevation.org/

From andersjm at dancontrol.dk  Tue Mar  9 05:36:25 2004
From: andersjm at dancontrol.dk (Anders J. Munch)
Date: Tue Mar  9 05:36:32 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
References: <338366A6D2E2CA4C9DAEAE652E12A1DE0145F532@au3010avexu1.global.avaya.com><16460.56644.548695.710081@montanaro.dyndns.org>
	<20040308224435.GA17531@rogue.amk.ca>
Message-ID: <008601c405c2$5ff0d6f0$f901a8c0@hirsevej.dk>

"A.M. Kuchling" <amk@amk.ca> wrote:
> 
> def foo [static] (arg1, arg2): ...
> 
> (And personally I do, in fact, prefer the decorator-first syntax, because in
> the case of a long argument list that spans multiple lines, a trailing
> decorator would seem kind of lost.)

You think of the decorator expression as a simple expression, usually
just a single identifier, right?

Given that the decorator expression can be an arbitrary Python
expression, it _will_ be used as such. For example:

def foo(arg1, arg2) as release(
        version="1.0",
        author="me",
        status="Well I wrote it so it works, m'kay?",
        warning="You might want to use the Foo class instead"):

Reading code is pattern-matching. Placing the decorator in the middle
of the function-header breaks the usual function-header pattern. The
suffix notations allow the reader to pattern-match the classic
function header first using the skills acquired with unadorned code.

This is also why I believe the 'as' version is better than the bracket
version: In the the 'as' version the classic function header stands by
itself, so you can mentally parse that separately, whereas the bracket
version binds more closely with the argument list to form a single,
more complex pattern.  If the decorator is restricted to a single name
brackets are fine, but I don't think they will be.

If brackets win I think a space between the argument list and the
opening bracket should be recommended style.

- Anders



From mwh at python.net  Tue Mar  9 05:41:07 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar  9 05:41:10 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <1078780832.5337.186.camel@anthem.wooz.org> (Barry Warsaw's
	message of "Mon, 08 Mar 2004 16:20:33 -0500")
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org> <20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<1078780832.5337.186.camel@anthem.wooz.org>
Message-ID: <2mn06qz4jg.fsf@starship.python.net>

Barry Warsaw <barry@python.org> writes:

> On Mon, 2004-03-08 at 15:46, Skip Montanaro wrote:
>
>> I guess this is an area where PEP 318 should be fleshed out a bit.  I don't
>> see any reason it shouldn't be expanded to include semantics as well as
>> syntax.  That might require a title change, but I don't think the semantics
>> should be left unspecified, nor do I think the syntax and semantics should
>> reside in separate PEPs.
>
> I agree.  FWIW, I think the list of things inside the square brackets
> should be simple identifiers, i.e. the names of callables that have been
> defined elsewhere.  

I think this is a bad idea, as Skip says.

You can do this today:

class C(random.choice([list, object, dict])):
    pass

Doesn't mean people do, though.

> I'd prefer to keep them simple, meaning no lambdas or list
> comprehensions.  But that's just me.

I'm opposed to arbitrary restrictions.

Cheers,
mwh

-- 
     ARTHUR:  Why are there three of you?
  LINTILLAS:  Why is there only one of you?
     ARTHUR:  Er... Could I have notice of that question?
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 11

From jepler at unpythonic.net  Tue Mar  9 08:38:01 2004
From: jepler at unpythonic.net (Jeff Epler)
Date: Tue Mar  9 08:38:54 2004
Subject: [Python-Dev] Who cares about the performance of these opcodes?
Message-ID: <20040309133801.GG12662@unpythonic.net>

Recently it was proposed to make a new LIST_APPEND opcode, and several
contributors pointed out that adding opcodes to Python is always a dicey
business because it may hurt performance for obscure reasons, possibly
related to the size of that 'switch' statement.

To that end, I notice that there are several opcodes which could easily
be converted into function calls.  In my code, these are not typically
performance-critical opcodes (with approximate ceval.c line count):
    BUILD_CLASS             # 9 lines
    MAKE_FUNCTION           # 20 lines
    MAKE_CLOSURE            # 35 lines

    PRINT_EXPR              # 21 lines
    PRINT_ITEM              # 47 lines
    PRINT_ITEM_TO           # 2 lines + fallthrough
    PRINT_NEWLINE           # 12 lines
    PRINT_NEWLINE_TO        # 2 lines + fallthrough

Instead, each of these would be available in the code objects co_consts
when necessary.  For example, instead of
    LOAD_CONST               1 (<code object g at 0x40165ea0, file "<stdin>", line 2>)
    MAKE_FUNCTION            0
    STORE_FAST               0 (g)
you'd have
    LOAD_CONST               1 (type 'function')
    LOAD_CONST               2 (<code object g>)
    LOAD_GLOBALS                                 # new opcode, or call globals()
    LOAD_CONST               1 ("g")
    CALL_FUNCTION            3

Performance for these specific operations will certainly benchmark worse,
but maybe getting rid of something like 150 lines from ceval.c would
help other things by magic.  The new LOAD_GLOBALS opcode would be less
than 10 lines.
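
(For reference, the existing sequence is easy to inspect with the dis
module; note that LOAD_GLOBALS above is only a proposal and does not
exist today:)

    import dis

    def outer():
        def g():
            pass
        return g

    dis.dis(outer)
    # The output shows the LOAD_CONST (<code object g>), MAKE_FUNCTION,
    # STORE_FAST sequence quoted above.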

No, I don't have a patch.  I assume each and every one of these opcodes
has a staunch defender who will now come to its aid, and save me the
trouble.

Jeff

From pje at telecommunity.com  Tue Mar  9 08:59:52 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar  9 08:54:34 2004
Subject: [Python-Dev] Who cares about the performance of these
  opcodes?
In-Reply-To: <20040309133801.GG12662@unpythonic.net>
Message-ID: <5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>

At 07:38 AM 3/9/04 -0600, Jeff Epler wrote:
>Recently it was proposed to make a new LIST_APPEND opcode, and several
>contributors pointed out that adding opcodes to Python is always a dicey
>business because it may hurt performance for obscure reasons, possibly
>related to the size of that 'switch' statement.
>
>To that end, I notice that there are several opcodes which could easily
>be converted into function calls.  In my code, these are not typically
>performance-critical opcodes (with approximate ceval.c line count):
>     BUILD_CLASS             # 9 lines
>     MAKE_FUNCTION           # 20 lines
>     MAKE_CLOSURE            # 35 lines
>
>     PRINT_EXPR              # 21 lines
>     PRINT_ITEM              # 47 lines
>     PRINT_ITEM_TO           # 2 lines + fallthrough
>     PRINT_NEWLINE           # 12 lines
>     PRINT_NEWLINE_TO        # 2 lines + fallthrough
>
>Instead, each of these would be available in the code objects co_consts
>when necessary.  For example, instead of
>     LOAD_CONST               1 (<code object g at 0x40165ea0, file 
> "<stdin>", line 2>)
>     MAKE_FUNCTION            0
>     STORE_FAST               0 (g)
>you'd have
>     LOAD_CONST               1 (type 'function')
>     LOAD_CONST               2 (<code object g>)
>     LOAD_GLOBALS                                 # new opcode, or call 
> globals()
>     LOAD_CONST               1 ("g")
>     CALL_FUNCTION            3
>
>Performance for these specific operations will certainly benchmark worse,
>but maybe getting rid of something like 150 lines from ceval.c would
>help other things by magic.  The new LOAD_GLOBALS opcode would be less
>than 10 lines.
>
>No, I don't have a patch.  I assume each and every one of these opcodes
>has a staunch defender who will now come to its aid, and save me the
>trouble.

If the goal is to remove lines from the switch statement, just move the 
code of lesser-used opcodes into a C function.  There's no need to 
eliminate the opcodes themselves.

I personally don't think it'll help much, if the goal is to reduce cache 
misses.  After all, the code is all still there.  But, it should not do as 
badly as the approach you're suggesting, because for your case you'll not 
only have the C-level calls, but also more bytecodes being interpreted.

Hm.  Makes me wonder, actually, if a hand-written eval loop in assembly 
code might not kick some serious butt.  Or maybe a bytecode-to-assembly 
translator, writing loads in-line and using registers as the stack, calling 
functions where necessary.  Ah, if only I were a teenager again, with 
little need to sleep, and unlimited time to hack...  :)


From mwh at python.net  Tue Mar  9 09:57:34 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar  9 09:57:38 2004
Subject: [Python-Dev] Who cares about the performance of these opcodes?
In-Reply-To: <5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
	(Phillip J. Eby's message of "Tue, 09 Mar 2004 08:59:52 -0500")
References: <5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
Message-ID: <2mfzciyso1.fsf@starship.python.net>

"Phillip J. Eby" <pje@telecommunity.com> writes:

> Hm.  Makes me wonder, actually, if a hand-written eval loop in
> assembly code might not kick some serious butt.

Not enough to be worth it, I'd hazard.

> Or maybe a bytecode-to-assembly translator, writing loads in-line
> and using registers as the stack, calling functions where necessary.
> Ah, if only I were a teenager again, with little need to sleep, and
> unlimited time to hack...  :)

This exists already, though.  I'll let you guess the name.

Cheers,
mwh

-- 
  how am I expected to quit smoking if I have to deal with NT
  every day                                                -- Ben Raia

From Andreas.Ames at tenovis.com  Tue Mar  9 06:44:22 2004
From: Andreas.Ames at tenovis.com (Ames Andreas (MPA/DF))
Date: Tue Mar  9 10:04:02 2004
Subject: [Python-Dev] Question about patch #841454
Message-ID: <788E231C269961418F38D3E360D1652526C9EC@tndefr-ws00021.tenovis.corp.lan>

Hi all,

recently I've submitted patch #841454.  It was my first patch against
Python and I'm still interested in building Python on mingw.

As there was essentially no reaction, I wonder if there is something
wrong with its content or its presentation.  If so, please let me know
how I should improve it so that it is more likely to be accepted.


TIA,

andreas


P.s.:  I'm not subscribed, so please cc me.

From skip at pobox.com  Tue Mar  9 11:58:56 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar  9 11:59:04 2004
Subject: [Python-Dev] Question about patch #841454
In-Reply-To: <788E231C269961418F38D3E360D1652526C9EC@tndefr-ws00021.tenovis.corp.lan>
References: <788E231C269961418F38D3E360D1652526C9EC@tndefr-ws00021.tenovis.corp.lan>
Message-ID: <16461.63440.659282.896492@montanaro.dyndns.org>


    Andreas> recently I've submitted patch #841454.  It was my first patch
    Andreas> against python and I'm still interested in building python on
    Andreas> mingw.

    Andreas> As there was essentially no reaction I wonder if there is
    Andreas> something wrong with it's content or it's outfit.  If so,
    Andreas> please let me know how I should improve it, so that it gets
    Andreas> more acceptable.

I don't think there's necessarily anything wrong with it.  More likely you
are encountering the fact that a) apparently few, if any, people use mingw
with Python, b) few, if any, people have ever needed to cross-compile
Python.  I'm not sure what mingw and cross-compilation have in common.  Does
using mingw require cross-compilation?

Finally, taking a look at your patch, it seems pretty big.  If mingw doesn't
necessarily imply cross-compilation, could your patch be split into two
separate patches, one for each task?

Skip



From oussoren at cistron.nl  Tue Mar  9 04:37:30 2004
From: oussoren at cistron.nl (Ronald Oussoren)
Date: Tue Mar  9 12:00:47 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <1078780832.5337.186.camel@anthem.wooz.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<5.1.0.14.0.20040308145445.0260e500@mail.telecommunity.com>
	<16460.53752.49097.660449@montanaro.dyndns.org>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<1078780832.5337.186.camel@anthem.wooz.org>
Message-ID: <62C92219-71AD-11D8-AFCE-0003931CFE24@cistron.nl>


On 8-mrt-04, at 22:20, Barry Warsaw wrote:

> On Mon, 2004-03-08 at 15:46, Skip Montanaro wrote:
>
>> I guess this is an area where PEP 318 should be fleshed out a bit.  I 
>> don't
>> see any reason it shouldn't be expanded to include semantics as well 
>> as
>> syntax.  That might require a title change, but I don't think the 
>> semantics
>> should be left unspecified, nor do I think the syntax and semantics 
>> should
>> reside in separate PEPs.
>
> I agree.  FWIW, I think the list of things inside the square brackets
> should be simple identifiers, i.e. the names of callables that have 
> been
> defined elsewhere.  I'd prefer to keep them simple, meaning no lambdas
> or list comprehensions.  But that's just me.

I would prefer allowing function calls as well. One of the reasons 
for using function wrappers is adding type information to functions 
(PyObjC and ctypes), like so:

	def myMethod_(self) [signature("i@:i")]:
		pass

Ronald
--
X|support bv            http://www.xsupport.nl/
T:  +31 610271479       F:  +31 204416173


From amk at amk.ca  Tue Mar  9 12:20:31 2004
From: amk at amk.ca (A.M. Kuchling)
Date: Tue Mar  9 12:27:34 2004
Subject: [Python-Dev] Re: Who cares about the performance of these opcodes?
References: <20040309133801.GG12662@unpythonic.net>
	<5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
Message-ID: <opr4lught5qmnqtt@news.gmane.org>

On Tue, 09 Mar 2004 08:59:52 -0500, Phillip J. Eby <pje@telecommunity.com> 
wrote:
> I personally don't think it'll help much, if the goal is to reduce cache 
> misses.  After all, the code is all still there.  But, it should not do

For a planned PyCon lightning talk, I'm benchmarking various combinations
of optimizer options.  One interesting result: CVS Python gets 25997
pystones on my machine when compiled with -O3 (the default), but 26707
when compiled with gcc's -Os flag.  -Os optimizes for size, running the
subset of the -O2 optimizations that don't increase code size.

The test script is http://www.amk.ca/files/misc/python-opt-benchmark.sh,
should anyone want to run it.  I'm now trying to figure out if -mregparm
makes any significant difference to Python's performance.
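
(For anyone who'd rather poke at this from inside the interpreter than
run the shell script, roughly the same number comes from the standard
pystone module; a quick illustrative snippet:)

    from test import pystone
    # Runs the standard benchmark and reports pystones/second.
    benchtime, stones = pystone.pystones()
    print "pystone time %.2fs -> %.0f pystones/second" % (benchtime, stones)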

--amk


From greg at cosc.canterbury.ac.nz  Tue Mar  9 17:26:44 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  9 17:28:30 2004
Subject: [Python-Dev] Who cares about the performance of these opcodes?
In-Reply-To: <5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
Message-ID: <200403092226.i29MQid19201@oma.cosc.canterbury.ac.nz>

"Phillip J. Eby" <pje@telecommunity.com>:

> If the goal is to remove lines from the switch statement, just move the 
> code of lesser-used opcodes into a C function.  There's no need to 
> eliminate the opcodes themselves.

Naively, one might think that, if the code is rarely used, even if
it's inside the switch, it'll rarely be in the cache and thus wouldn't
make much difference.

But I suspect the caching issues are more subtle than just the total
amount of code in the switch statement, and the only way to be sure
is to measure.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Tue Mar  9 17:30:18 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  9 17:30:36 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <62C92219-71AD-11D8-AFCE-0003931CFE24@cistron.nl>
Message-ID: <200403092230.i29MUIF19209@oma.cosc.canterbury.ac.nz>

Ronald Oussoren <oussoren@cistron.nl>:

> I would prefer allowing function calls as well. One of the reasons 
> for using function wrappers is adding type information to functions

And as soon as you allow function calls, you can have
arbitrarily complicated expressions as their parameters.
So there's no point in having any restrictions at all,
then.
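
(To illustrate: a decorator built by a call already drags in whatever
expressions you like through its arguments.  A rough sketch in today's
syntax -- the names below are made up for the example:)

    import threading

    def synchronized(lock):
        # Decorator factory: 'lock' can be any expression whatsoever.
        def decorate(func):
            def wrapper(*args, **kwds):
                lock.acquire()
                try:
                    return func(*args, **kwds)
                finally:
                    lock.release()
            return wrapper
        return decorate

    _registry = {}
    _registry_lock = threading.Lock()

    def update_registry(key, value):
        _registry[key] = value

    # spelled today; the proposal would put the same expression in brackets
    update_registry = synchronized(_registry_lock)(update_registry)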

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From pinard at iro.umontreal.ca  Tue Mar  9 17:19:06 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Tue Mar  9 18:00:23 2004
Subject: [Python-Dev] Global __slots__ pondering
Message-ID: <20040309221906.GA7754@titan.progiciels-bpi.ca>

Hi, people.

For a few days now, I've been working on a Python rewrite of a difficult
and complex C application, written by a "power" programmer years ago who
left the company before I came in.  The rewrite is going extremely well;
Python's legibility and facilities are such that I could improve the angle
of attack on the problem, making it appear much, much easier.

The C code mallocs a lot, and the Python rewrite is memory intensive.
Since it handles hundreds of thousands of objects, I thought it was
a good case for using `__slots__' and indeed, the memory savings are
interesting.  Deriving from built-in types for a good amount of objects
(for a limited amount of types), whenever appropriate, is also welcome.

In one module, sub-classing is frequent, and if I understand correctly
how it works, a `__dict__' is created in sub-types unless `__slots__ = ()'
is explicitly included in the sub-type, so I decided to spare that
`__dict__' whenever possible in that particular module.

So, just after the global `__metaclass__ = type' at the beginning of the
module, I added a global `__slots__ = ()', with the clear intent of
overriding that `__slots__' assignment only for those particular types
needing slots.  In this module of the application, I do not mind having
to restrain all types so none uses a `__dict__'.

But it does not seem to work that way.  Could it be considered that,
just as `__metaclass__' may have a global value, `__slots__' might have
one too?  One objection I see is that, while `__metaclass__' may be
redefined to the classic class metatype for a specific type, a global
`__slots__' could not be arbitrarily overridden in a specific type.
Nevertheless, in the precise case I have, it would have been a bit useful.
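
(Schematically, the situation is this -- a minimal sketch, names
illustrative:)

    __metaclass__ = type   # a module-level default that class statements do use
    __slots__ = ()         # no effect: __slots__ is only honoured when it
                           # appears in the class body itself

    class Base:
        __slots__ = ('x',)

    class Derived(Base):
        pass               # still grows a __dict__ unless it too sets __slots__ = ()

    print hasattr(Derived(), '__dict__')   # prints True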

I do not mind much having to repeat `__slots__' in each and every class
and sub-class of this module, yet I wanted to share the above thoughts
with Python developers, in case they would like to remove this
difference.  It would be less surprising if `__metaclass__' and
`__slots__' could equally be implied globally.  Or maybe not? :-)

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From greg at cosc.canterbury.ac.nz  Tue Mar  9 18:51:49 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar  9 18:51:59 2004
Subject: [Python-Dev] Global __slots__ pondering
In-Reply-To: <20040309221906.GA7754@titan.progiciels-bpi.ca>
Message-ID: <200403092351.i29NpnI19660@oma.cosc.canterbury.ac.nz>

François Pinard:

> It would be less surprising if `__metaclass__' and
> `__slots__' could equally be implied globally.  Or maybe? :-)

I think Guido regards __slots__ as something of a hack that you should
only use if you have a really good reason. As such it's probably good
that you have to make the explicit decision for each class, rather
than inheriting it from somewhere.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From andymac at bullseye.apana.org.au  Tue Mar  9 17:07:04 2004
From: andymac at bullseye.apana.org.au (Andrew MacIntyre)
Date: Tue Mar  9 19:14:55 2004
Subject: [Python-Dev] Re: Who cares about the performance of these opcodes?
In-Reply-To: <opr4lught5qmnqtt@news.gmane.org>
References: <20040309133801.GG12662@unpythonic.net>
	<5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
	<opr4lught5qmnqtt@news.gmane.org>
Message-ID: <20040310084716.D58261@bullseye.apana.org.au>

On Tue, 9 Mar 2004, A.M. Kuchling wrote:

> For a planned PyCon lightning talk, I'm benchmarking various combinations
> of optimizer options.
> One interesting result: CVS Python gets 25997 pystones on my machine when
> compiled with
> -O3 (the default), but 26707 when compiled with gcc's -Os flag.  -Os
> optimizes for size,
> running the subset of the -O2 optimizations that don't increase code size.

In my own experiments along these lines, I found that the best results
with various versions of gcc (2.8.1, 2.95.2, 3.2.1 on OS/2; 2.95.4, 3.2.3,
3.3.2 on FreeBSD) were obtained by compiling ceval.c with -Os (-O2 on
2.8.1, which doesn't have -Os) and the rest of Python with -O3.

The OS/2 builds were running on an AMD Athlon 1.4GHz (Thunderbird core),
and the FreeBSD builds were running on an AMD Athlon XP 1600.  With the
compilers that support -march=athlon[-xp], another 4-5% can be had too.
The OS/2 builds also have -fomit-frame-pointer, which makes them
marginally faster than the FreeBSD builds of the same gcc version and
optimisation level.

I haven't seen your result here (a full -Os build as fast as or faster
than a full -O3 build), although compiling with -Os on gcc 3.2.x is as
fast as -O3 with 2.95 and earlier.

Regards,
Andrew.

--
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andymac@bullseye.apana.org.au  (pref) | Snail: PO Box 370
        andymac@pcug.org.au             (alt) |        Belconnen  ACT  2616
Web:    http://www.andymac.org/               |        Australia

From guido at python.org  Wed Mar 10 00:01:11 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 00:01:28 2004
Subject: [Python-Dev] PyCON: Pre-registration ends in one week!
Message-ID: <200403100501.i2A51BM03182@guido.python.org>

In one week (Weds March 17), pre-registration for PyCON will close.
Right now, you can register for $250.  After that, you will need to
register at the door, and the cost will be $300.  Save yourself $50!


PyCON is a community-oriented conference targeting developers (both
those using Python and those working on the Python project).  It gives
you opportunities to learn about significant advances in the Python
development community, to participate in a programming sprint with
some of the leading minds in the Open Source community, and to meet
fellow developers from around the world.  The organizers work to make
the conference affordable and accessible to all.

PyCON DC 2004 will be held March 24-26, 2004 in Washington, D.C. at the GWU
Cafritz Conference Center.  The keynote speakers are Mitch Kapor, Guido
van Rossum, and Bruce Eckel.  There will be a four-day development sprint
before the conference (March 20-23).

We're looking for volunteers to help run PyCON.  If you're interested,
subscribe to http://mail.python.org/mailman/listinfo/pycon-organizers

Don't miss any PyCON announcements!  Subscribe to
http://mail.python.org/mailman/listinfo/pycon-announce

You can discuss PyCON with other interested people by subscribing to
http://mail.python.org/mailman/listinfo/pycon-interest

The central resource for PyCON DC 2004 is
http://www.pycon.org/dc2004/


See you all there!!!

--Guido van Rossum (home page: http://www.python.org/~guido/)

From raymond.hettinger at verizon.net  Wed Mar 10 02:26:18 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Wed Mar 10 02:28:21 2004
Subject: [Python-Dev] calculator module
Message-ID: <006801c40670$fb894e40$e841fea9@oemcomputer>

I was thinking that if a student's calculator can do something, then
Python ought to handle most of the same tasks right out of the box.

For a while, we've held off on adding math functions because of several
principles:
* the math module is just a wrapper for libm, so nothing else belongs there
* leaving high end stuff to comprehensive tools like NumPy or Numeric
* avoiding anything that cannot reliably (and provably) be implemented to
  the precision expected of numerical libraries

My thought is to create a pure python module with "pretty good"
implementations of things found on low to mid-range calculators like my
personal favorite, the hp32sII student scientific calculator.  It offers
a reasonably small but useful set of functions without any attempt to
comprehensively cover a given category.  For instance, there is a
single, simple solver -- and if it can't handle your function, then you
go to MatLab or somesuch.

Here are a few that may be worthy of a module:
* Combinations, permutations, and factorial (with a full gamma function)
* Two-variable linear regression
* Hyperbolic trig
* % and %chg
* A simple integrator (adaptive trapezoidal method, I think)
* A simple solver: given f(x)==c and hi/low bounds, solve for x if
  possible (works like Goal Seek in Excel; see the sketch below)
* Polar/rectangular coordinate conversions
* A handful of metric/English conversions (these could be skipped)
* Simple fraction implementation with a choice of a maximum denominator
  or some multiple of a fixed denominator.
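
(A minimal sketch of the kind of solver meant above -- plain bisection
on a bracketing interval; names and defaults are only illustrative, not
a proposed API:)

    def solve(f, c, lo, hi, tolerance=1e-9, maxiter=200):
        # Find x in [lo, hi] with f(x) == c, assuming f(lo)-c and f(hi)-c
        # have opposite signs, i.e. the bounds bracket a solution.
        flo, fhi = f(lo) - c, f(hi) - c
        if flo == 0.0:
            return lo
        if fhi == 0.0:
            return hi
        if flo * fhi > 0.0:
            raise ValueError("f(x)==c is not bracketed by the given bounds")
        for i in range(maxiter):
            mid = (lo + hi) / 2.0
            fmid = f(mid) - c
            if fmid == 0.0 or (hi - lo) / 2.0 < tolerance:
                return mid
            if flo * fmid < 0.0:
                hi = mid
            else:
                lo, flo = mid, fmid
        return (lo + hi) / 2.0

    # e.g. solve(lambda x: x*x, 2.0, 0.0, 2.0) approximates sqrt(2)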

Another variant of the calculator also offers:
* simple matrix functions up to 3x3 (addition, multiplication, inverse,
determinant)
* cross-product and dot product for 3 element vectors (x,y,z
coordinates).

To these, the HP12C financial calculator adds:
* solve for missing variable in (n i% pv pmt fv)
* internal rate of return and net present value for a stream of cash
flows

Though not present on the calculator, I could also see the inclusion of
probability distributions to capture what students normally have to look
up in tables and then can't find when they need it.  That is likely why
MS Excel includes these functions but excludes so many others.

By "pretty good" implementation, I mean that the implementations should
remain as straightforward as possible and neither promise, nor get in a
snit about being accurate to the last place (i.e. a 9 place
approximation of gamma would be considered fine).



Raymond Hettinger



From aahz at pythoncraft.com  Wed Mar 10 05:14:40 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 05:14:49 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer>
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <20040310101440.GB19859@panix.com>

On Wed, Mar 10, 2004, Raymond Hettinger wrote:
>
> I was thinking that if a student's calculator can do something, then
> Python ought to handle most of the same tasks right out of the box.

+1

Write a PEP.  ;-)  I'd run a pre-PEP discussion on c.l.py to see what
people want in the module, then write the PEP after implementation.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From gmccaughan at synaptics-uk.com  Wed Mar 10 05:16:33 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Wed Mar 10 05:16:39 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer>
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <200403101016.33065.gmccaughan@synaptics-uk.com>

On Wednesday 2004-03-10 at 07:26, Raymond Hettinger wrote:

> I was thinking that if a student's calculator can do something, then
> Python ought to handle most of the same tasks right out of the box.
..
> My thought is to create a pure python module with "pretty good"
> implementations of things found on low to mid-range calculators like my
> personal favorite, the hp32sII student scientific calculator.  It offers
> a reasonably small but useful set of functions without any attempt to
> comprehensively cover a given category.  For instance, there is a
> single, simple solver -- and if it can't handle your function, then you
> go to MatLab or somesuch.

Sounds good to me, but
  - would you propose a single "calculator" module, or distributing
    these things among different modules, or what?
  - I don't think I agree with your suggestion that, e.g., a
    9-place approximation is sufficient for special functions
    like gamma. It would be a shame to set a precedent for putting
    known sloppy code into the Python standard library, and users
    may not appreciate that the results are very inaccurate. (In
    comparison with the precision theoretically available with
    Python's floats, that is.)

> Here are a few that may be worthy of a module:
> * Combinations, permutations, and factorial (with a full gamma function)
> * Two variable linear regression
> * Hyperbolic trig

Yes to all these. (Factorials should be exact for integer
inputs, using Python longs for the result where appropriate.)

> * % and %chg

Is there any way of spelling these that's actually shorter
and easier to remember than writing the calculation out
explicitly?

> * A simple integrator (adaptive trapezoidal method I think)
> * A simple solver, given f(x)==c and hi/low bounds, solve for x if
>   possible.  (works like Goal Seek in Excel)

If we have that then there should be a minimizer and maximizer too.
In the interests of ease of use, that should be a "minimize"
function and a "maximize" function, not a single "optimize" that
takes a flag. Of course they could share their implementation
behind the scenes.

Perhaps we also want a function for calculating derivatives?

> * Polar/Rectangular coordinate conversions
> * A handful of metric/English conversions (these could be skipped)

And should be :-). It might be neat, though, to have a "units"
module that understands quantities with units attached and has
a big list of unit values. Like the "units" program on Unix,
only (of course) better.

> * Simple fraction implementation with a choice of a maximum denominator or
>   some multiple of a fixed denominator.

Why not one with unlimited denominator? We could add a
function that converts any number to a rational with a
set maximum denominator, which could also be used to
round a rational to have a smaller denominator. This
doesn't look to me as if it belongs in the same module
as the other things here.
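
(The rounding part might be done with continued-fraction convergents;
a rough sketch, assuming x >= 0 and settling for a good rather than
provably optimal approximation:)

    def as_fraction(x, max_den):
        # Walk the continued-fraction convergents of x, stopping before
        # the denominator would exceed max_den; returns (num, den).
        p0, q0, p1, q1 = 0, 1, 1, 0
        r = x
        while True:
            a = int(r)
            q2 = q0 + a * q1
            if q2 > max_den:
                break
            p0, q0, p1, q1 = p1, q1, p0 + a * p1, q2
            frac = r - a
            if frac == 0:
                break
            r = 1.0 / frac
        return p1, q1

    # as_fraction(3.14159265358979, 100) -> (22, 7)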

> Another variant of the calculator also offers:
> * simple matrix functions up to 3x3 (addition, multiplication, inverse,
>   determinant)
> * cross-product and dot product for 3 element vectors (x,y,z
>   coordinates).

These would be handy to have, though presumably we would
have them for arbitrary-length vectors. But then, my opinion
is that Numeric should be made a part of Python. :-)

> To these, the HP12C financial calculator adds:
> * solve for missing variable in (n i% pv pmt fv)
> * internal rate of return and net present value for a stream of cash
>   flows

I suspect things like this belong in a special finance module,
which should have a lot of other stuff in it too. (I don't
know what other stuff.)

> Though not present on the calculator, I could also see the inclusion of
> probability distributions to capture what students normally have to look
> up in tables and then can't find when they need it.  That is likely why
> MS Excel includes these functions but excludes so many others.

I'd be in favour of this.

> By "pretty good" implementation, I mean that the implementations should
> remain as straight-forward as possible and neither promise, nor get in a
> snit about being accurate to the last place (i.e. a 9 place
> approximation of gamma would be considered fine).

Straightforward, yes. Gratuitously inaccurate, no. If this is
replacing a pocket calculator, then I think the right tradeoff
is with speed rather than accuracy.

-- 
g



From python at rcn.com  Wed Mar 10 05:56:44 2004
From: python at rcn.com (Raymond Hettinger)
Date: Wed Mar 10 05:58:51 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <20040310101440.GB19859@panix.com>
Message-ID: <000b01c4068e$60c28520$e841fea9@oemcomputer>

[Raymond]
> > I was thinking that if a student's calculator can do something, then
> > Python ought to handle most of the same tasks right out of the box.

[Aahz]
> +1
> 
> Write a PEP.  ;-)  I'd run a pre-PEP discussion on c.l.py to see what
> people want in the module, then write the PEP after implementation.

Will do.

This is one subject that everyone will have an opinion about ;-)

Also, I'll clarify the part about numerical accuracy.  The other
respondent took "pretty good" as meaning sloppy.  What was meant was
using the best algorithms and approximations that can be implemented
cleanly and portably.  This is a high standard, but much lower than
would be expected of a system library (down to the last ULP or pretty
close to it).  A practical and important consequence of "pretty good" is
that we can ignore bug reports that say "last two digits of gamma(3.1)
don't agree with Matlab."


Raymond Hettinger


From mwh at python.net  Wed Mar 10 06:03:56 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 10 06:04:01 2004
Subject: [Python-Dev] Who cares about the performance of these opcodes?
In-Reply-To: <200403092226.i29MQid19201@oma.cosc.canterbury.ac.nz> (Greg
	Ewing's message of "Wed, 10 Mar 2004 11:26:44 +1300 (NZDT)")
References: <200403092226.i29MQid19201@oma.cosc.canterbury.ac.nz>
Message-ID: <2my8q9x8tf.fsf@starship.python.net>

Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

> "Phillip J. Eby" <pje@telecommunity.com>:
>
>> If the goal is to remove lines from the switch statement, just move the 
>> code of lesser-used opcodes into a C function.  There's no need to 
>> eliminate the opcodes themselves.
>
> Naively, one might think that, if the code is rarely used, even if
> it's inside the switch, it'll rarely be in the cache and thus wouldn't
> make much difference.

I'm not sure that's so much the issue.  It would be bad if (say) the
top of the switch and the LOAD_FAST opcode were in i-cache conflict,
and it's probably crap like this that accounts for the random-seeming
performance fluctuations as you tinker with eval_frame.

> But I suspect the cacheing issues are more subtle than just the total
> amount of code in the switch statement, and the only way to be sure
> is to measure.

on multiple architectures with multiple compilers, etc, etc.

Cheers,
mwh

-- 
  You can lead an idiot to knowledge but you cannot make him 
  think.  You can, however, rectally insert the information, 
  printed on stone tablets, using a sharpened poker.        -- Nicolai
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html

From mwh at python.net  Wed Mar 10 06:09:01 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 10 06:09:07 2004
Subject: [Python-Dev] Global __slots__ pondering
In-Reply-To: <20040309221906.GA7754@titan.progiciels-bpi.ca> (
	=?iso-8859-1?q?Fran=E7ois_Pinard's_message_of?= "Tue,
	9 Mar 2004 17:19:06 -0500")
References: <20040309221906.GA7754@titan.progiciels-bpi.ca>
Message-ID: <2mptblx8ky.fsf@starship.python.net>

François Pinard <pinard@iro.umontreal.ca> writes:

> Hi, people.
>
> For a few days now, I've been working on a Python rewrite of a difficult
> and complex C application, written by a "power" programmer years ago who
> left the company before I came in.  The rewrite is going extremely well;
> Python's legibility and facilities are such that I could improve the angle
> of attack on the problem, making it appear much, much easier.
>
> The C code mallocs a lot, and the Python rewrite is memory intensive.
> Since it handles hundreds of thousands of objects, I thought it was
> a good case for using `__slots__' and indeed, the memory savings are
> interesting.  Deriving from built-in types for a good amount of objects
> (for a limited amount of types), whenever appropriate, is also welcome.
>
> In one module, sub-classing is frequent, and if I understand correctly
> how it works, a `__dict__' is created in sub-types unless `__slots__ = ()'
> is explicitly included in the sub-type, so I decided to spare that
> `__dict__' whenever possible in that particular module.
>
> So, just after the global `__metaclass__ = type' at the beginning of the
> module, I added a global `__slots__ = ()', with the clear intent of
> overriding that `__slots__' assignment only for those particular types
> needing slots.  In this module of the application, I do not mind having
> to restrain all types so none uses a `__dict__'.
>
> But it does not seem to work that way.  Could it be considered that,
> just as `__metaclass__' may have a global value, `__slots__' might have
> one too?

You can ponder that, but it's a sick idea, so there :-)

> One objection I see is that, while __metaclass__ may be redefined to
> the classic class metatype for a specific type, a global `__slots__'
> could not be arbitrarily defeated in a specific type.  Nevertheless,
> in the precise case I have, it would have been a bit useful.
>
> I do not mind much having to repeat `__slots__' in each and every class
> and sub-class for this module, 

One thing you can do is have a custom metaclass that sticks
'__slots__':() into the namespace before calling the superclass'
__new__ method.
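
Something along these lines (a rough sketch; the names are made up):

    class AutoSlots(type):
        # Inject __slots__ = () into any class body that doesn't provide
        # its own, so subclasses never grow a __dict__ by accident.
        def __new__(meta, name, bases, namespace):
            namespace.setdefault('__slots__', ())
            return type.__new__(meta, name, bases, namespace)

    class Base(object):
        __metaclass__ = AutoSlots
        __slots__ = ('x', 'y')

    class Derived(Base):   # no __slots__ written here ...
        pass

    d = Derived()
    d.x = 1                # fine: slot inherited from Base
    # d.z = 1              # would raise AttributeError: no __dict__, no such slot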

Cheers,
mwh

-- 
  If I didn't have my part-time performance art income to help pay
  the  bills, I could never afford to support my programming 
  lifestyle.                                -- Jeff Bauer, 21 Apr 2000

From mwh at python.net  Wed Mar 10 06:10:50 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 10 06:10:55 2004
Subject: [Python-Dev] Re: Who cares about the performance of these opcodes?
In-Reply-To: <opr4lught5qmnqtt@news.gmane.org> (A. M. Kuchling's message of
	"Tue, 09 Mar 2004 12:20:31 -0500")
References: <20040309133801.GG12662@unpythonic.net>
	<5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
	<opr4lught5qmnqtt@news.gmane.org>
Message-ID: <2mllm9x8hx.fsf@starship.python.net>

"A.M. Kuchling" <amk@amk.ca> writes:

> On Tue, 09 Mar 2004 08:59:52 -0500, Phillip J. Eby
> <pje@telecommunity.com> wrote:
>> I personally don't think it'll help much, if the goal is to reduce
>> cache misses.  After all, the code is all still there.  But, it
>> should not do
>
> For a planned PyCon lightning talk, I'm benchmarking various
> combinations of optimizer options.
> One interesting result: CVS Python gets 25997 pystones on my machine
> when compiled with
> -O3 (the default), but 26707 when compiled with gcc's -Os flag.  -Os
>  optimizes for size,
> running the subset of the -O2 optimizations that don't increase code size.

What architecture?  I played around on my ibook with various
compilation options and running with -fprofile-arcs and so on and
basically came to the conclusion that nothing made very much
difference (once past -O2).  Can't remember if I tried -Os.

Cheers,
mwh

-- 
  If I had wanted your website to make noise I would have licked
  my finger and rubbed it across the monitor.
                           -- signature of "istartedi" on slashdot.org

From aahz at pythoncraft.com  Wed Mar 10 06:32:11 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 06:32:19 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403081634.37574.fdrake@acm.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<200403081514.26363.fdrake@acm.org>
	<20040308203213.GA14819@panix.com>
	<200403081634.37574.fdrake@acm.org>
Message-ID: <20040310113210.GA11417@panix.com>

On Mon, Mar 08, 2004, Fred L. Drake, Jr. wrote:
> On Monday 08 March 2004 03:32 pm, Aahz wrote:
>>
>> No, that's not right.  If
>>
>>     def foo() [w1, w2]: pass
>>
>> is valid, this must also always be valid:
>>
>>     def foo() [w2]: pass
> 
> Perhaps it should also be valid, but "must" is pretty strong.  This is
> still Python, and the general "consenting adults" philosophy shouldn't
> be abandoned.

I'd be one of the last people to claim that philosophy should be
abandoned, but that doesn't mean I think that Python should permit every
possible perversion.

>> I'm not sure to what extent we can/should enforce this, but I'm -1 on
>> any proposal for which this isn't the documented behavior.
> 
> I think we're on shaky ground if we require any sort of equivalence
> here, simply because it might make no sense at all for specific
> decorators to be stacked out of order or in unusual combinations.
> I'm quite happy for the PEP and the final documentation to make
> recommendations, but hard requirements of this sort are difficult to
> tolerate given the difficulty of even defining "validity".
>
> As an (admittedly trivial) example, I'd be quite happy for:
> 
>     class Color [valuemap]:
>         red = rgb(255, 0, 0)
>         blue  = rgb(0, 255, 0)
>         green = rgb(0, 0, 255)
> 
> to cause the name Color to be bound to a non-callable object.  Why
> must the decorators be required to return callables?  It will not make
> sense in all circumstances when a decorator is being used.

That's precisely one of the examples that I think should be shot on
sight.  ;-)  If it's not callable, how do you use it?  If you're
intending for it to be a singleton, you should raise an appropriate
TypeError instead of AttributeError.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From aahz at pythoncraft.com  Wed Mar 10 06:33:02 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 06:33:07 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403081637.42008.fdrake@acm.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
Message-ID: <20040310113302.GB11417@panix.com>

On Mon, Mar 08, 2004, Fred L. Drake, Jr. wrote:
> On Monday 08 March 2004 04:00 pm, Aahz wrote:
>>
>> Principle of least surprise, essentially.  There are already going to be
>> enough obscure uses for this; let's try to keep the completely whacky out
>> of it.  You'll have to come up with an awfully convincing use case to
>> change my mind.
> 
> I'd be very surprised if the interpreter cared that a decorator
> returned a callable; what should it care?

The interpreter doesn't care; *people* care.  That's precisely why it
should be a documented requirement.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From bob at redivi.com  Wed Mar 10 07:01:22 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 10 06:58:10 2004
Subject: [Python-Dev] Re: Who cares about the performance of these opcodes?
In-Reply-To: <2mllm9x8hx.fsf@starship.python.net>
References: <20040309133801.GG12662@unpythonic.net>
	<5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
	<opr4lught5qmnqtt@news.gmane.org>
	<2mllm9x8hx.fsf@starship.python.net>
Message-ID: <A5ECB3EA-728A-11D8-8D99-000A95686CD8@redivi.com>

On Mar 10, 2004, at 12:10 PM, Michael Hudson wrote:

> "A.M. Kuchling" <amk@amk.ca> writes:
>
>> On Tue, 09 Mar 2004 08:59:52 -0500, Phillip J. Eby
>> <pje@telecommunity.com> wrote:
>>> I personally don't think it'll help much, if the goal is to reduce
>>> cache misses.  After all, the code is all still there.  But, it
>>> should not do
>>
>> For a planned PyCon lightning talk, I'm benchmarking various
>> combinations of optimizer options.
>> One interesting result: CVS Python gets 25997 pystones on my machine
>> when compiled with
>> -O3 (the default), but 26707 when compiled with gcc's -Os flag.  -Os
>>  optimizes for size,
>> running the subset of the -O2 optimizations that don't increase code 
>> size.
>
> What architecture?  I played around on my ibook with various
> compilation options and running with -fprofile-arcs and so on and
> basically came to the conclusion that nothing made very much
> difference (once past -O2).  Can't remember if I tried -Os.

If you really want faster code you should tell the compiler about the 
particular architecture you need to run it on.  For example, Apple's 
gcc 3.3 has an optimization flag named "-fast" that will (supposedly) 
produce fast non-PIC code that will only run on 64bit G5 processors 
(but can be scaled back to G4 with -mcpu=7450, and probably back to G3 
in a similar way).

-bob


From skip at pobox.com  Wed Mar 10 07:36:47 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 10 07:36:54 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer>
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <16463.3039.597228.217607@montanaro.dyndns.org>


    Raymond> I was thinking that if a student's calculator can do something,
    Raymond> then Python ought to handle most of the same tasks right out of
    Raymond> the box.

Hand-held calculators aside, my model for what a computer-based calculator
should be able to do is the graphing calculator which came with Mac OS9.  It
was actually developed by Pacific Tech and bundled by Apple.  There are no
plans for an OS X version.  The screenshots at the URL below will give you
an idea of what it can do.

    http://www.pacifict.com/Gallery.html

Skip

From mcherm at mcherm.com  Wed Mar 10 07:48:57 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Wed Mar 10 07:49:14 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
Message-ID: <1078922937.404f0eb908621@mcherm.com>

Aahz writes:
> On Monday 08 March 2004 03:32 pm, Aahz wrote:
>
> If
>
>     def foo() [w1, w2]: pass
>
> is valid, this must also always be valid:
>
>     def foo() [w2]: pass
    [...]
> I'm not sure to what extent we can/should enforce this, but I'm -1 on
> any proposal for which this isn't the documented behavior.

When Python's "consenting adults" philosophy is raised, he replies:
> I'd be one of the last people to claim that philosophy should be
> abandoned, but that doesn't mean I think that Python should permit every
> possible perversion.

I'd like to defend the idea of allowing arbitrary expressions. If there
is a rule that only identifiers are allowed, or only things which simply
apply decoration and then return a similar function are allowed... no
matter what the rule is, people still have one more rule to memorize.
If anything is allowed, then people need memorize no special rules and
they will get errors when they misuse things.

 * Complex is better than complicated. *
 * Special cases aren't special enough to break the rules. *

Also, I would argue that sometimes people will find a use for things 
which the original designers would never have considered. I would venture
a guess (perhaps I'm wrong) that when Guido originally designed classes,
he intended each instance to have its own __dict__. I doubt that he
thought (at the time) that it would be a good idea to CHANGE that __dict__
or for it not to be unique to the instance. Yet Alex Martelli's Borg
pattern is a brilliant little hack which turns out to work quite nicely.
There are probably hundreds of ways that modifying __dict__ could be
abused which are NOT brilliant little hacks, but instead are confusing,
obfuscating messes. We don't use those. The idea that each "decorator"
is simply applied (in order) after the function (or class) is built is
a simple, clear idea, and if users choose to do things more peculiar than
mere decoration, it is at their own peril.
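
(For reference, the Borg hack in miniature -- instances share one
__dict__, so they share state without being a singleton:)

    class Borg(object):
        _shared_state = {}
        def __init__(self):
            # Every instance points its __dict__ at the same dict.
            self.__dict__ = self._shared_state

    a, b = Borg(), Borg()
    a.answer = 42
    assert b.answer == 42      # state is shared
    assert a is not b          # identities are not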

 * Simple is better than complex. *
 * If the implementation is easy to explain, it may be a good idea. *

-- Michael Chermside


From amk at amk.ca  Wed Mar 10 07:48:54 2004
From: amk at amk.ca (A.M. Kuchling)
Date: Wed Mar 10 07:55:47 2004
Subject: [Python-Dev] Re: Re: Who cares about the performance of these
	opcodes?
References: <20040309133801.GG12662@unpythonic.net>
	<5.1.0.14.0.20040309085026.03b7cb60@mail.telecommunity.com>
	<opr4lught5qmnqtt@news.gmane.org>
	<20040310084716.D58261@bullseye.apana.org.au>
Message-ID: <opr4ncjshaqmnqtt@news.gmane.org>

On Wed, 10 Mar 2004 09:07:04 +1100 (EST), Andrew MacIntyre 
<andymac@bullseye.apana.org.au> wrote:
> In my own experiments along these lines, I found that the best results
> with various versions of gcc (2.8.1, 2.95.2, 3.2.1 on OS/2; 2.95.4, 
> 3.2.3,
> 3.3.2 on FreeBSD) was to compile ceval.c with -Os (-O2 on 2.8.1, which
> doesn't have -Os) and the rest of Python with -O3.

That seems to roughly match what I'm seeing; I haven't tried compiling
portions of Python with different optimization settings.

Python doesn't use any architecture-specific settings such as -march=i686
or the -fast setting Bob Ippolito mentions.  Most Python installations are
probably used on the same machine they're compiled on, so maybe we could
add a --optimize-for-arch switch to configure that took no arguments and
figured out the right arch-specific compiler arguments for the current
machine.  Eventually we could enable it by default and provide a switch to
turn it off.

--amk


From python at rcn.com  Wed Mar 10 07:58:23 2004
From: python at rcn.com (Raymond Hettinger)
Date: Wed Mar 10 08:01:07 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <16463.3039.597228.217607@montanaro.dyndns.org>
Message-ID: <002e01c4069f$5f5e5180$e841fea9@oemcomputer>

>     Raymond> I was thinking that if a student's calculator can do something,
>     Raymond> then Python ought to handle most of the same tasks right out of
>     Raymond> the box.
> 
> Hand-held calculators aside, my model for what a computer-based calculator
> should be able to do is the graphing calculator which came with Mac OS9.  It
> was actually developed by Pacific Tech and bundled by Apple.  There are no
> plans for an OS X version.  The screenshots at the URL below will give you
> an idea of what it can do.
> 
>     http://www.pacifict.com/Gallery.html

Thanks, that was a useful link; I will include it in the PEP.

However, I do want to keep the scope *much* smaller than that.
Symbolic algebra and spherical coordinates are too much firepower.  I
would like something that normal folks can use without reading ten pages
of docs.  Ideally, no reading would be required at all, just import it
and go.

OTOH, a separate symbolic package might make Python much more attractive
for use in secondary education.


Raymond


From guido at python.org  Wed Mar 10 10:03:04 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 10:03:10 2004
Subject: [Python-Dev] calculator module
In-Reply-To: Your message of "Wed, 10 Mar 2004 05:56:44 EST."
	<000b01c4068e$60c28520$e841fea9@oemcomputer> 
References: <000b01c4068e$60c28520$e841fea9@oemcomputer> 
Message-ID: <200403101503.i2AF34n04307@guido.python.org>

Maybe this project should use decimal arithmetic throughout?  After all,
student calculators use decimal, and explaining binary is a source of
unneeded confusion during a student's early steps.  It would also be a
good way to get the Decimal numbers project integrated into core
Python.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From pinard at iro.umontreal.ca  Wed Mar 10 09:56:45 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Wed Mar 10 10:03:20 2004
Subject: [Python-Dev] Global __slots__ pondering
In-Reply-To: <2mptblx8ky.fsf@starship.python.net>
References: <20040309221906.GA7754@titan.progiciels-bpi.ca>
	<2mptblx8ky.fsf@starship.python.net>
Message-ID: <20040310145645.GA20151@titan.progiciels-bpi.ca>

> One thing you can do is have a custom metaclass that sticks
> '__slots__':() [as needed] into the namespace before calling the
> superclass' __new__ method.

So simple!  I should have thought about that.  Thanks!

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From guido at python.org  Wed Mar 10 10:04:38 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 10:04:42 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: Your message of "Wed, 10 Mar 2004 06:33:02 EST."
	<20040310113302.GB11417@panix.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org> 
	<20040310113302.GB11417@panix.com> 
Message-ID: <200403101504.i2AF4c404327@guido.python.org>

> > I'd be very surprised if the interpreter cared that a decorator
> > returned a callable; what should it care?
> 
> The interpreter doesn't care; *people* care.  That's precisely why it
> should be a documented requirement.

So decorators couldn't be used to create read-only properties?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From gmccaughan at synaptics-uk.com  Wed Mar 10 10:26:17 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Wed Mar 10 10:26:19 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <200403101503.i2AF34n04307@guido.python.org>
References: <000b01c4068e$60c28520$e841fea9@oemcomputer>
	<200403101503.i2AF34n04307@guido.python.org>
Message-ID: <200403101526.17307.gmccaughan@synaptics-uk.com>

On Wednesday 2004-03-10 at 15:03, Guido van Rossum wrote:

> Maybe this project should use decimal arithmetic throughout?  After all,
> student calculators use decimal, and explaining binary is a source of
> unneeded confusion during student's early steps.  It would also be a
> good way to get the Decimal numbers project integrated into core
> Python.

Decimal doesn't currently support the functions (cos, exp, log, ...)
found in the "math" module. Making Raymond's proposal work
with Decimal objects would require implementing all those things
for Decimals. That's appreciably more work than implementing just
the new functions for doubles.

Further, I'd argue that since Decimal can work with numbers
of arbitrary precision, an implementation of (say) exp that
works on Decimals ought to work with something like full
precision, unless it's *only* intended for casual use. Writing
implementations of all these things would be a considerable
amount of work.

Doing everything with Decimals would also reduce the utility[1]
of these functions for people who want to do something other
than interactive calculations with them, at least for as long
as such people continue to be mostly using ordinary Python
floats. I'd guess that that's approximately for ever.

    [1] Perhaps only the perceived utility, since there are
        conversions to and from the Decimal type.

To my mind, the loss isn't worth the gain.

-- 
g



From aahz at pythoncraft.com  Wed Mar 10 10:31:26 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 10:31:37 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403101504.i2AF4c404327@guido.python.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
Message-ID: <20040310153126.GA3616@panix.com>

On Wed, Mar 10, 2004, Guido van Rossum wrote:
>Aahz:
>>Someone:
>>>
>>> I'd be very surprised if the interpreter cared that a decorator
>>> returned a callable; what should it care?
>> 
>> The interpreter doesn't care; *people* care.  That's precisely why it
>> should be a documented requirement.
> 
> So decorators couldn't be used to create read-only properties?

Maybe I'm misunderstanding something.  I thought that a property contains
a get descriptor, which makes it a kind of callable.  Read-only
properties contain a set descriptor that either does nothing or raises an
exception.  That doesn't affect whether the property is classified as a
callable.

Also, I thought we had explicitly punted on allowing decorators to create
properties because the syntax wasn't sufficiently flexible without
contortions.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From oren-py-l at hishome.net  Wed Mar 10 10:39:49 2004
From: oren-py-l at hishome.net (Oren Tirosh)
Date: Wed Mar 10 10:48:58 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403101504.i2AF4c404327@guido.python.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
Message-ID: <20040310153949.GA32416@hishome.net>

On Wed, Mar 10, 2004 at 07:04:38AM -0800, Guido van Rossum wrote:
> > > I'd be very surprised if the interpreter cared that a decorator
> > > returned a callable; what should it care?
> > 
> > The interpreter doesn't care; *people* care.  That's precisely why it
> > should be a documented requirement.
> 
> So decorators couldn't be used to create read-only properties?

Perhaps the default behavior should be more similar to C# attributes,
which are simply associated with the object for introspection.  Only if
the decorator has a method with some __special__ name would that method
be called and the object replaced by its return value.

Builtins like staticmethod or classmethod don't HAVE to remain functions
as long as they remain backward compatible. After all, functions like int 
and str were changed to types but kept backward compatible.
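
In today's terms the machinery might look roughly like this (just a
sketch; __decorate__ and __decorators__ are invented names, not anything
Python currently defines):

    def apply_decorators(func, decorators):
        for dec in decorators:
            hook = getattr(dec, '__decorate__', None)
            if hook is not None:
                # Transforming decorator: replaces the function.
                func = hook(func)
            else:
                # Annotation-only decorator: merely recorded on the
                # function for later introspection.
                func.__dict__.setdefault('__decorators__', []).append(dec)
        return func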

    Oren



From just at letterror.com  Wed Mar 10 10:52:01 2004
From: just at letterror.com (Just van Rossum)
Date: Wed Mar 10 10:51:53 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040310113302.GB11417@panix.com>
Message-ID: <r01050400-1026-DFFAF05E72AA11D8824E003065D5E7E4@[10.0.0.23]>

Aahz wrote:

> On Mon, Mar 08, 2004, Fred L. Drake, Jr. wrote:
> > On Monday 08 March 2004 04:00 pm, Aahz wrote:
> >>
> >> Principle of least surprise, essentially.  There are already going
> >> to be enough obscure uses for this; let's try to keep the
> >> completely whacky out of it.  You'll have to come up with an
> >> awfully convincing use case to change my mind.
> > 
> > I'd be very surprised if the interpreter cared that a decorator
> > returned a callable; what should it care?
> 
> The interpreter doesn't care; *people* care.  That's precisely why it
> should be a documented requirement.

Presumably people also care about contortions like this:

  >>> def blackhole(*args):
  ...   return None
  ... 
  >>> class Foo(object):
  ...   __metaclass__ = blackhole
  ... 
  >>> print Foo
  None
  >>> 

Yet that doesn't mean Python has to disallow it (and indeed it doesn't).

Btw. +1 from me for

  def func(args) [decorators]:

and -1 for

  def func [decorators] (args):

I already frown when people put a space between the function name and
arglist; I wouldn't want to separate them even more.

Just

From pje at telecommunity.com  Wed Mar 10 11:10:21 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 10 11:05:03 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <20040310153126.GA3616@panix.com>
References: <200403101504.i2AF4c404327@guido.python.org>
	<B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
Message-ID: <5.1.0.14.0.20040310110911.020ecca0@mail.telecommunity.com>

At 10:31 AM 3/10/04 -0500, Aahz wrote:
>Maybe I'm misunderstanding something.  I thought that a property contains
>a get descriptor, which makes it a kind of callable.  Read-only
>properties contain a set descriptor that either does nothing or raises an
>exception.  That doesn't affect whether the property is classified as a
>callable.

Properties are not callable.

Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.8 -- press F1 for help
 >>> x=property()
 >>> x()
Traceback (most recent call last):
   File "<pyshell#1>", line 1, in ?
     x()
TypeError: 'property' object is not callable
 >>> callable(property())
0
 >>> callable(property(lambda x:None))
0
 >>>



From niemeyer at conectiva.com  Wed Mar 10 11:03:15 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 10 11:06:00 2004
Subject: [Python-Dev] os.rename() behavior
Message-ID: <20040310160315.GB2564@burma.localdomain>

Hello folks,

I've just been asked by Thomas Waldmann (moin maintainer) about the
rename() behavior in Linux (posix) vs.  Windows. Here is the function
documentation:

  rename(src, dst)

  Rename the file or directory src to dst. If dst is a directory,
  OSError will be raised. On Unix, if dst exists and is a file, it will
  be removed silently if the user has permission. The operation may fail
  on some Unix flavors if src and dst are on different filesystems. If
  successful, the renaming will be an atomic operation (this is a POSIX
  requirement). On Windows, if dst already exists, OSError will be
  raised even if it is a file; there may be no way to implement an
  atomic rename when dst names an existing file. Availability:
  Macintosh, Unix, Windows.


While I understand that this is operating system behavior, and the os
module is meant to expose platform-specific functionality, I do believe
that most of us developing on POSIX systems have never checked whether a
rename failed because the file already exists.  IOW, this will usually be
discovered when the program blows up for the first time on Windows
(that's how Thomas discovered it).

Is it too late to implement the forced rename internally, and leave the
programmer to test with os.path.exists() if he really wants to?

I know this will cause incompatibilities in programs which expect the
current behavior.  OTOH, programs that expect this behavior are already
broken on any POSIX system.

What do you think?

-- 
Gustavo Niemeyer
http://niemeyer.net

From guido at python.org  Wed Mar 10 11:09:25 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 11:09:31 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: Your message of "Wed, 10 Mar 2004 10:31:26 EST."
	<20040310153126.GA3616@panix.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org> 
	<20040310153126.GA3616@panix.com> 
Message-ID: <200403101609.i2AG9PG04567@guido.python.org>

> > So decorators couldn't be used to create read-only properties?
> 
> Maybe I'm misunderstanding something.  I thought that a property
> contains a get descriptor, which makes it a kind of callable.

That's irrelevant; the property itself isn't callable.

> Also, I thought we had explicitly punted on allowing decorators to
> create properties because the syntax wasn't sufficiently flexible
> without contortions.

It would work just fine for read-only descriptors; in fact I believe
that

   def foo [property] (self):
       return ...calculated value of foo...

would be all you need (it would be nice if property tried to get the
docstring out of the get function).
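
(In today's syntax that's just the usual idiom; a small illustration
with made-up names:)

    class Circle(object):
        def __init__(self, radius):
            self.radius = radius

        def area(self):
            """Area of the circle."""
            return 3.14159265 * self.radius ** 2
        area = property(area)   # what  def area [property] (self):  would spell

    c = Circle(2.0)
    print c.area                 # computed, read-only attribute
    # c.area = 1.0               # would raise AttributeError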

In any case, *I* would prefer not to let the semantics require
anything, and to make this just a (preferred) shorthand for applying
arbitrary transformations to something that starts out as a function.

Two additional thoughts:

1) I'm not sure I like using the same syntax for classes; the use
   cases are so different that using similar syntax only adds
   confusion, and I think the use cases for classes are a lot weaker
   than for methods.

2) The syntax should also apply to regular functions.

3) It needs to be crystal clear that the list of transformations is
   applied at function/method definition time, not at class definition
   time (which is later, after all the methods have been collected in
   a namespace).

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 10 11:14:01 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 11:14:06 2004
Subject: [Python-Dev] os.rename() behavior
In-Reply-To: Your message of "Wed, 10 Mar 2004 13:03:15 -0300."
	<20040310160315.GB2564@burma.localdomain> 
References: <20040310160315.GB2564@burma.localdomain> 
Message-ID: <200403101614.i2AGE1d04591@guido.python.org>

> 
>   rename(src, dst)
> 
>   Rename the file or directory src to dst. If dst is a directory,
>   OSError will be raised. On Unix, if dst exists and is a file, it will
>   be removed silently if the user has permission. The operation may fail
>   on some Unix flavors if src and dst are on different filesystems. If
>   successful, the renaming will be an atomic operation (this is a POSIX
>   requirement). On Windows, if dst already exists, OSError will be
>   raised even if it is a file; there may be no way to implement an
>   atomic rename when dst names an existing file. Availability:
>   Macintosh, Unix, Windows.
> 
> 
> While I understand that this is an operating system behavior, and the os
> module is meant to hold incompatible functionality, I do believe that
> most of us developing on posix systems have never checked whether a rename
> failed because the file already exists. IOW, this will usually be
> discovered when the program blows up for the first time on Windows
> (that's how Thomas discovered it).
> 
> Is it too late to implement the forced rename internally, and let the
> programmer test it with os.path.exists() if he really wants to?
> 
> I know this will cause incompatibilities in programs which expect the
> current behavior. OTOH, programs that expect this behavior are already
> broken in any posix system.

-1.

The Unix semantics can't be properly emulated on Windows, because the
Unix rename is guaranteed to be *atomic*.  Python can't know whether an
application that blindly assumed Unix semantics is relying on that
atomicity; if the application needs it, doing a remove() first under
the covers would be a grave error (all the more grave since the
potential data loss is unlikely to be found through testing).

--Guido van Rossum (home page: http://www.python.org/~guido/)

From aahz at pythoncraft.com  Wed Mar 10 11:30:59 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 11:31:20 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403101609.i2AG9PG04567@guido.python.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
Message-ID: <20040310163059.GA9944@panix.com>

On Wed, Mar 10, 2004, Guido van Rossum wrote:
>
> In any case, *I* would prefer not to let the semantics require
> anything, and to make this just a (preferred) shorthand for applying
> arbitrary transformations to something that starts out as a function.

All right, I've done my duty as the Loyal Opposition ;-), and I don't
care enough to keep arguing.

That still leaves the question for what *is* allowed within the
brackets.  AFAICT, the options are

* A single identifier (which must be a callable)

* Comma-separated list of identifiers (which must be callables)

* Arbitrary expression (which must produce a sequence of callables)

BTW, have we agreed on the order in which decorators will be applied?
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From FBatista at uniFON.com.ar  Wed Mar 10 11:46:09 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 10 11:47:50 2004
Subject: [Python-Dev] calculator module
Message-ID: <A128D751272CD411BC9200508BC2194D03383718@escpl.tcp.com.ar>

Gareth McCaughan wrote:

#- Decimal doesn't currently support the functions (cos, exp, log, ...)
#- currently in the "math" module. Making Raymond's proposal work
#- with Decimal objects would require implementing all those things
#- for Decimals. That's appreciably more work than implementing just
#- the new functions for doubles.

You're right. And Cowlishaw's specification doesn't tell us how to compute
the cos of a Decimal, for example.


#- Further, I'd argue that since Decimal can work with numbers
#- of arbitrary precision, an implementation of (say) exp that
#- works on Decimals ought to work with something like full
#- precision, unless it's *only* intended for casual use. Writing
#- implementations of all these things would be a considerable
#- amount of work.

You can always change the context precision before the operation. But it's
not very natural for a user who comes from using a "hand calculator".
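
(Roughly, with the context API described in the PEP, that would look like
this -- just a sketch:)

    from decimal import Decimal, getcontext

    getcontext().prec = 40           # ask for 40 significant digits
    print(Decimal(1) / Decimal(7))   # 0.1428571428571428571428571428571428571429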


#- To my mind, the loss isn't worth the gain.

I think that making the calculations with Decimal *now* isn't worth it.

But once Decimal and the calculator are implemented, we should
reconsider this.

.	Facundo






From Paul.Moore at atosorigin.com  Wed Mar 10 11:52:34 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Wed Mar 10 11:52:38 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
Message-ID: <16E1010E4581B049ABC51D4975CEDB8803060E23@UKDCX001.uk.int.atosorigin.com>

From: Aahz
> That still leaves the question for what *is* allowed within the
> brackets.  AFAICT, the options are
>
> * A single identifier (which must be a callable)
>
> * Comma-separated list of identifiers (which must be callables)
>
> * Arbitrary expression (which must produce a sequence of callables)

For completeness, and the one which I understood to be the case,

* Comma-separated list of arbitrary expressions

I still prefer not to state "*must* be callable" and instead note
that they *will* be called. The former leaves open the question of
what will happen if the "must" is violated. The latter is entirely
clear. It gets called - so it fails just as it would if called in any
other context.

> BTW, have we agreed on the order in which decorators will be applied?

Possibly not. I've been assuming the equivalence

    def f(a,b) [d1, d2, d3]:
        ...

<=>

    def f(a,b):
        ...
    f = d1(f)
    f = d2(f)
    f = d3(f)

but I haven't confirmed this against mwh's patch, and you're right that
it should be stated explicitly.

Paul.

PS I'm rapidly going off Guido's def f [...] (...) suggestion. As Just
   pointed out, I dislike even spaces between the function name and the
   argument list, so decorators there really do grate.

From oussoren at cistron.nl  Wed Mar 10 11:54:44 2004
From: oussoren at cistron.nl (Ronald Oussoren)
Date: Wed Mar 10 11:54:23 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040310163059.GA9944@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
	<20040310163059.GA9944@panix.com>
Message-ID: <A18563F1-72B3-11D8-8990-0003931CFE24@cistron.nl>


On 10-mrt-04, at 17:30, Aahz wrote:

> On Wed, Mar 10, 2004, Guido van Rossum wrote:
>>
>> In any case, *I* would prefer not to let the semantics require
>> anything, and to make this just a (preferred) shorthand for applying
>> arbitrary transformations to something that starts out as a function.
>
> All right, I've done my duty as the Loyal Opposition ;-), and I don't
> care enough to keep arguing.
>
> That still leaves the question for what *is* allowed within the
> brackets.  AFAICT, the options are
>
> * A single identifier (which must be a callable)

Why not allow dotted name (foomod.modifier) as well?

>
> * Comma-separated list of identifiers (which must be callables)

* A comma-separated list of expressions (which must produce callables)

>
> * Arbitrary expression (which must produce a sequence of callables)

This seems to allow ``def foo() [[x for x in sequence]]: pass``, which
is very odd.

>
> BTW, have we agreed on the order in which decorators will be applied?
> -- 
> Aahz (aahz@pythoncraft.com)           <*>         
> http://www.pythoncraft.com/
>
> "Do not taunt happy fun for loops. Do not change lists you are looping 
> over."
> --Remco Gerlich, comp.lang.python


From guido at python.org  Wed Mar 10 11:56:04 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 10 11:56:09 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: Your message of "Wed, 10 Mar 2004 11:30:59 EST."
	<20040310163059.GA9944@panix.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org> 
	<20040310163059.GA9944@panix.com> 
Message-ID: <200403101656.i2AGu4u04774@guido.python.org>

> That still leaves the question for what *is* allowed within the
> brackets.  AFAICT, the options are
> 
> * A single identifier (which must be a callable)
> 
> * Comma-separated list of identifiers (which must be callables)
> 
> * Arbitrary expression (which must produce a sequence of callables)

The latter.  I previously mentioned a use case for allowing a function
call (or class constructor) here.

> BTW, have we agreed on the order in which decorators will be applied?

I think I've said in the past left-to-right, but since we're talking
function application here, right-to-left might also work.  And it's
also possible that decorators are (by convention) commutative, making
it a wash.  It's a concern that there's no clear reason to prefer one
order!  We'll have to come up with some use cases.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From Scott.Daniels at Acm.Org  Wed Mar 10 12:27:09 2004
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Wed Mar 10 12:27:45 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 20
In-Reply-To: <E1B17B2-000297-93@mail.python.org>
References: <E1B17B2-000297-93@mail.python.org>
Message-ID: <404F4FED.6000705@Acm.Org>

Aahz <aahz@pythoncraft.com> wrote:
> BTW, have we agreed on the order in which decorators will be applied?
I think this depends on the position of the expression list.

     def foo(a,b,c) [d,e,f]: pass

should be foo = f(e(d(<function body>)))
-- I read it <make body> apply d, apply e, apply f

while
     def foo [d,e,f](a,b,c): pass

should be foo = d(e(f(<function body>)))

The other orders seem harder to explain.

-- 
-Scott David Daniels
Scott.Daniels@Acm.Org

From fumanchu at amor.org  Wed Mar 10 12:27:18 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Wed Mar 10 12:28:52 2004
Subject: [Python-Dev] calculator module
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561EBC@opus.amorhq.net>

> My thought is to create a pure python module with "pretty good"
> implementations of things found on low to mid-range calculators

Great idea, as always! :)

> Here are a few that may be worthy of a module:
> * Combinations, permutations, and factorial

Only if we can write 3! instead of calculator.factorial(3) ;)

> * A simple solver, given f(x)==c and hi/low bounds, solve for x if
> possible.  (works like Goal Seek in Excel)

+1 zillion. This should be a commodity in most business apps, not the
exclusive domain of one or two.
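
(For what it's worth, a minimal bisection sketch of such a solver --
purely illustrative, names invented; a real calculator module would want
something much smarter:)

    def solve(f, c, lo, hi, tol=1e-12, maxiter=200):
        # Bisection: find x in [lo, hi] with f(x) == c, assuming f(x) - c
        # changes sign between lo and hi.
        flo = f(lo) - c
        fhi = f(hi) - c
        if flo == 0.0:
            return lo
        if fhi == 0.0:
            return hi
        if flo * fhi > 0.0:
            raise ValueError("f(x) - c must change sign between lo and hi")
        for _ in range(maxiter):
            mid = (lo + hi) / 2.0
            fmid = f(mid) - c
            if fmid == 0.0 or (hi - lo) / 2.0 < tol:
                return mid
            if flo * fmid < 0.0:
                hi = mid
            else:
                lo, flo = mid, fmid
        return (lo + hi) / 2.0

    print(solve(lambda x: x * x, 2.0, 0.0, 2.0))   # ~1.41421356...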

> * A handfull of metric/English conversions (these could be skipped)

Yes, but make this a set of base classes, so that when Maeda in New
Guinea wants to convert square feet to bilums, she can subclass them.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From skip at pobox.com  Wed Mar 10 12:37:56 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 10 12:38:10 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040310163059.GA9944@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
	<20040310163059.GA9944@panix.com>
Message-ID: <16463.21108.422353.212491@montanaro.dyndns.org>


    aahz> That still leaves the question for what *is* allowed within the
    aahz> brackets.  AFAICT, the options are

    ...

    aahz> * Arbitrary expression (which must produce a sequence of callables)

My vote goes here.

    aahz> BTW, have we agreed on the order in which decorators will be
    aahz> applied?

No, but it seems to me that most people are assuming first-to-last in their
examples.

Skip

From pje at telecommunity.com  Wed Mar 10 13:15:16 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 10 13:09:59 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <200403101656.i2AGu4u04774@guido.python.org>
References: <Your message of "Wed,
	10 Mar 2004 11:30:59 EST." <20040310163059.GA9944@panix.com>
	<B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
	<20040310163059.GA9944@panix.com>
Message-ID: <5.1.0.14.0.20040310130739.0231dd00@mail.telecommunity.com>

At 08:56 AM 3/10/04 -0800, Guido van Rossum wrote:
> > That still leaves the question for what *is* allowed within the
> > brackets.  AFAICT, the options are
> >
> > * A single identifier (which must be a callable)
> >
> > * Comma-separated list of identifiers (which must be callables)
> >
> > * Arbitrary expression (which must produce a sequence of callables)
>
>The latter.  I previously mentioned a use case for allowing a function
>call (or class constructor) here.
>
> > BTW, have we agreed on the order in which decorators will be applied?
>
>I think I've said in the past left-to-right, but since we're talking
>function application here, right-to-left might also work.

I think the reason you said so in the past was that normally Python
expressions go left-to-right.  That is, Python always evaluates things in
the order you see them.  Even in 'x(y,z)', x is evaluated, then y, then z.


>   And it's
>also possible that decorators are (by convention) commutative, making
>it a wash.  It's a concern that there's no clear reason to prefer one
>order!  We'll have to come up with some use cases.

The way I see it, left-to-right helps reading what's happening to the 
function.  For example:

def foo(cls,...) [contract(blah), classmethod]:
     ...

clearly says that a contract wrapper is being applied to foo, and then it 
is being made a classmethod.

If the order were the other direction, it would force you to read 
right-to-left in order to know what's going to happen to the function.

So, I vote for left-to-right on the grounds of "readability counts".  :)

(Which is also why I favor decorators after the arguments; 'foo(...)' at 
least looks something like the way it'll be called, while foo[...](...) 
bears no resemblance.)


From niemeyer at conectiva.com  Wed Mar 10 13:10:48 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 10 13:10:46 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <16463.3039.597228.217607@montanaro.dyndns.org>
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
	<16463.3039.597228.217607@montanaro.dyndns.org>
Message-ID: <20040310181048.GA5515@burma.localdomain>

> Hand-held calculators aside, my model for what a computer-based calculator
> should be able to do is the graphing calculator which came with Mac OS9.  It
> was actually developed by Pacific Tech and bundled by Apple.  There are no
> plans for an OS X version.  The screenshots at the URL below will give you
> an idea of what it can do.
> 
>     http://www.pacifict.com/Gallery.html

Impressive!

This might get there, eventually:

http://matplotlib.sourceforge.net/screenshots.html

-- 
Gustavo Niemeyer
http://niemeyer.net

From theller at python.net  Wed Mar 10 13:24:25 2004
From: theller at python.net (Thomas Heller)
Date: Wed Mar 10 13:24:34 2004
Subject: [Python-Dev] Tkinter and python.exe.manifest
Message-ID: <d67kk1ba.fsf@python.net>

In the past, when building and testing the windows installer for Python
2.3.3, I sometimes had mysterious crashes with scripts using Tkinter.

As it turns out, the crashes have probably been caused by the
python.exe.manifest and pythonw.exe.manifest files that wxPython
installs.  These files give wxPython apps the Win XP look when run on
Win XP by marking python as compatible with the version 6 common
controls DLL from MS, but it seems Tkinter is incompatible with this.

Is there anything that can be done to Tkinter to make it compatible with
the new common controls dll?

Thomas


From kbk at shore.net  Wed Mar 10 13:29:25 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Wed Mar 10 13:29:29 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer> (Raymond
	Hettinger's message of "Wed, 10 Mar 2004 02:26:18 -0500")
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <87znao5zei.fsf@hydra.localdomain>

"Raymond Hettinger" <raymond.hettinger@verizon.net> writes:

> My thought is to create a pure python module with "pretty good"
> implementations of things found on low to mid-range calculators like my
> personal favorite, the hp32sII student scientific calculator.

This might be a useful addition to the IDLE File menu.  It would open a
calculator window.

-- 
KBK

From mwh at python.net  Wed Mar 10 13:33:20 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 10 13:33:24 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <200403101526.17307.gmccaughan@synaptics-uk.com> (Gareth
	McCaughan's message of "Wed, 10 Mar 2004 15:26:17 +0000")
References: <000b01c4068e$60c28520$e841fea9@oemcomputer>
	<200403101503.i2AF34n04307@guido.python.org>
	<200403101526.17307.gmccaughan@synaptics-uk.com>
Message-ID: <2mu10wwo0f.fsf@starship.python.net>

Gareth McCaughan <gmccaughan@synaptics-uk.com> writes:

> Further, I'd argue that since Decimal can work with numbers of
> arbitrary precision, an implementation of (say) exp that works on
> Decimals ought to work with something like full precision, unless
> it's *only* intended for casual use. Writing implementations of all
> these things would be a considerable amount of work.

Have you played with Jurgen Bos' real.py?  I'm not sure it's useful,
but I think you'll like it :-)

Cheers,
mwh

-- 
  It is time-consuming to produce high-quality software. However,
  that should not alone be a reason to give up the high standards
  of Python development.              -- Martin von Loewis, python-dev

From fdrake at acm.org  Wed Mar 10 13:40:07 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed Mar 10 13:40:25 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB8803060E23@UKDCX001.uk.int.atosorigin.com>
References: <16E1010E4581B049ABC51D4975CEDB8803060E23@UKDCX001.uk.int.atosorigin.com>
Message-ID: <200403101340.07019.fdrake@acm.org>

On Wednesday 10 March 2004 11:52 am, Moore, Paul wrote:
 > PS I'm rapidly going off Guido's def f [...] (...) suggestion. As Just
 >    pointed out, I dislike even spaces between the function name and the
 >    argument list, so decorators there really do grate.

I'm with you on this point.  The "def name [...] (args...):" syntax is 
probably bearable for one decorator, but seems intolerable for the case of 
multiple decorators.  While the single-decorator case will likely be the most 
common, I think multiple decorators won't be so unusual that they should be 
severely burdened with cumbersome syntax.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From tim.one at comcast.net  Wed Mar 10 13:46:10 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 10 13:46:16 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <000b01c4068e$60c28520$e841fea9@oemcomputer>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPGJGAB.tim.one@comcast.net>

[Raymond Hettinger]
> ...
> Also, I'll clarify the part about numerical accuracy.  The other
> respondent took "pretty good" as meaning sloppy.  What was meant was
> using the best algorithms and approximations that can be implemented
> cleanly and portably.  This is a high standard, but much lower than
> would be expected of a system library (down to the last ULP or pretty
> close to it).

It's also very much lower than the HP calculators you (& I) remember so
fondly.  A large part of why they were so pleasant to use is that HP went to
*extraordinary* lengths to make them numerically robust.  Indeed, they hired
Kahan to design and implement robust algorithms for the HP Solver, the HP
adaptive numerical Romberg integration, and the HP financial calculators'
IRR solvers (note that an IRR solver is essentially finding the roots of an
N-degree polynomial, so there are N roots, and it takes extreme care to
isolate the one that makes physical sense; & if cash flows are both positive
and negative, there can even be more than one root that makes physical
sense).  They were all solid advances in the numeric state of the art.
Kahan wrote some articles about them which are worth tracking down.  Alas, I
think they appeared in the HP Systems Journal, and offhand I wasn't able to
find them online just now.

> A practical and important consequence of "pretty good" is that we can
> ignore bug reports that say "last two digits of gamma(3.1) don't agree
> with Matlab."

Another professional thing HP did is document worst-case error bounds on all
their function implementations.  It's appalling that most C vendors don't
(and Python inherits this sad story).  It's fine by me if, for example, a
numeric function is guaranteed good to only one digit, provided that the
documentation says so.

BTW, it should be much easier to write high-quality implementations using
Decimal than using native binary fp.  The library writer building on Decimal
can easily request a temporary increase in working precision, then round the
final extended-precision result back.  With limitations, even the simplest
numerically naive algorithms can often be coaxed into delivering
high-quality results that way.  95% of the battle when using native binary
fp is finding clever ways to rephrase computation so that native-precision
operations don't go insane.

Simple example:  hypot(x, y) = sqrt(x**2 + y**2).  The naive implementation
in native fp is useless, because it suffers spurious overflow and underflow
across a huge section of the input domain.  Use an extended-range,
extended-precision arithmetic internally, though, and the naive
implementation is both accurate and robust across the entire domain.
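
To make that concrete, here's the naive hypot sketched on top of a Decimal
along the lines of the PEP -- purely illustrative, assuming only a context
whose precision can be raised temporarily:

    from decimal import Decimal, getcontext

    def hypot(x, y):
        # Naive sqrt(x*x + y*y), but with extra working digits for the
        # intermediates; the unary plus at the end rounds the result back
        # to the caller's precision.
        ctx = getcontext()
        ctx.prec += 10
        try:
            result = (x * x + y * y).sqrt()
        finally:
            ctx.prec -= 10
        return +result

    # Spurious overflow in native binary fp; no problem here:
    print(hypot(Decimal("3e200"), Decimal("4e200")))   # 5E+200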


From skip at pobox.com  Wed Mar 10 15:04:02 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 10 15:04:16 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <5.1.0.14.0.20040310130739.0231dd00@mail.telecommunity.com>
References: <Your message of "Wed,
	10 Mar 2004 11:30:59 EST." <20040310163059.GA9944@panix.com>
	<B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com>
	<200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
	<20040310163059.GA9944@panix.com>
	<5.1.0.14.0.20040310130739.0231dd00@mail.telecommunity.com>
Message-ID: <16463.29874.995735.300160@montanaro.dyndns.org>


    Phillip> The way I see it, left-to-right helps reading what's happening
    Phillip> to the function.  For example:

    Phillip> def foo(cls,...) [contract(blah), classmethod]:
    Phillip>      ...

    Phillip> clearly says that a contract wrapper is being applied to foo,
    Phillip> and then it is being made a classmethod.

On the off-chance that a default arg initializer has side effects,
left-to-right also guarantees that the side effects take place before the
decorator expression is evaluated:

    i = I(...)

    def foo(a=i.func(...), b, c) [i.w1(...), w2]:
        ...

Evaluating i.func(...) might have some effect on the execution of i.w1(...).
I don't claim that's necessarily a good property to rely on, but if reading
order matches evaluation order it makes for fewer surprises.

    Phillip> If the order were the other direction, it would force you to
    Phillip> read right-to-left in order to know what's going to happen to
    Phillip> the function.

    Phillip> So, I vote for left-to-right on the grounds of "readability
    Phillip> counts".  :)

+1.

Skip

From pycon at python.org  Wed Mar 10 16:37:29 2004
From: pycon at python.org (Steve Holden)
Date: Wed Mar 10 16:37:32 2004
Subject: [Python-Dev] View Talks - Buy 'Zen of Python' Tshirt - Free with
	PyCON by 17th
Message-ID: <E1B1BOL-0000Ba-6k@mail.python.org>


Dear Python User:

View the Talks online and Reserve your Zen of Python Shirt now or get one
free if you register for PyCON 2004 by the 17th.

Don't miss the most important Python event of the year.  PyCON 2004 is only
2 weeks away.  If you haven't registered for PyCON, please do so immediately
before the regular registration expires March 17th.
http://www.pycon.org/dc2004/register

We are extending the Free T-Shirt offer to all regular registrants (valued
at $20).  See the shirts online at
http://www.pycon.org/dc2004/shirts.pt

We are also taking orders for extra shirts, including shipping to those that
cannot attend.  Please email zope@toenjes.com to reserve your shirts.
All requests must be received by March 19.  Price will be $20 + $5 shipping
and handling.  We will ship them sometime around mid-April.
Payment instructions from the Python Software Foundation will follow.

The Talks schedule is now published and can be viewed at
http://www.python.org/pycon/dc2004/schedule.html

For more about PyCON...
http://www.pycon.org


regards
Steve Holden
Chairman, PyCON DC 2004

From bsder at allcaps.org  Wed Mar 10 17:15:53 2004
From: bsder at allcaps.org (Andrew P. Lentvorski, Jr.)
Date: Wed Mar 10 17:06:03 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer>
References: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <20040310135618.O30404@mail.allcaps.org>

On Wed, 10 Mar 2004, Raymond Hettinger wrote:

> To these, the HP12C financial calculator adds:
> * solve for missing variable in (n i% pv pmt fv)
> * internal rate of return and net present value for a stream of cash
> flows
...
> By "pretty good" implementation, I mean that the implementations should
> remain as straight-forward as possible and neither promise, nor get in a
> snit about being accurate to the last place (i.e. a 9 place
> approximation of gamma would be considered fine).

Pretty good is not "good enough".  One of the reasons the HP 12C is so
popular is that every 12C gets *exactly* the same answer for the same
inputs regardless of age, model, etc.  The Python version would have to be
a bit-exact match to a 12C for those calculations.

In addition, there are going to be requirements for directed rounding to
ensure the stability of some functions over their domains.  Directed
rounding generally causes a machine-wide pipeline drain and lots of
ancillary machine-wide FP effects.  Having Python cause random FP bugs
in other programs would not be good for its reputation.

Personally, I would rather see extremely good and accurate, but possibly
complex versions of these mathematical functions added into Python proper
with the calculator module simply being a wrapper around the base library
functions.  That way, the calculator automatically benefits when the
"state of the art" method in the main library advances or is corrected.

-a

From greg at cosc.canterbury.ac.nz  Wed Mar 10 18:17:25 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 10 18:17:36 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <006801c40670$fb894e40$e841fea9@oemcomputer>
Message-ID: <200403102317.i2ANHPK29887@oma.cosc.canterbury.ac.nz>

Raymond Hettinger <raymond.hettinger@verizon.net>:

> * cross-product and dot product for 3 element vectors (x,y,z
> coordinates).

Cross product would be good to have around somewhere in
any case. It's frustrating that Numeric doesn't provide
a straightforward way of getting it (not that I could
see last time I looked, anyway).
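
For reference, a pure-Python sketch of both operations (the names and the
tuple-in/tuple-out convention are invented for illustration):

    def cross(a, b):
        # Cross product of two 3-element (x, y, z) vectors.
        ax, ay, az = a
        bx, by, bz = b
        return (ay * bz - az * by,
                az * bx - ax * bz,
                ax * by - ay * bx)

    def dot(a, b):
        # Dot product of two 3-element vectors.
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1)
    print(dot((1, 2, 3), (4, 5, 6)))     # 32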

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 10 19:28:15 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 10 19:28:25 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403101656.i2AGu4u04774@guido.python.org>
Message-ID: <200403110028.i2B0SFj00171@oma.cosc.canterbury.ac.nz>

Guido:

> And it's also possible that decorators are (by convention)
> commutative, making it a wash.

That's not always going to be possible, since any decorator that
returns a descriptor will have to be applied last in order for the
descriptor to have its magical effect.  So order of application will
certainly have to be specified.

Left-to-right seems the most sensible to me, all things considered.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 10 19:30:22 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 10 19:30:43 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040310163059.GA9944@panix.com>
Message-ID: <200403110030.i2B0UM300175@oma.cosc.canterbury.ac.nz>

Aahz <aahz@pythoncraft.com>:

> * A single identifier (which must be a callable)
> 
> * Comma-separated list of identifiers (which must be callables)
> 
> * Arbitrary expression (which must produce a sequence of callables)

You left out

  * Comma-separated list of arbitrary expressions

which is the one I'm in favor of.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 10 19:38:44 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 10 19:38:53 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040310153126.GA3616@panix.com>
Message-ID: <200403110038.i2B0cik00194@oma.cosc.canterbury.ac.nz>

Aahz <aahz@pythoncraft.com>:

> Maybe I'm misunderstanding something.  I thought that a property contains
> a get descriptor, which makes it a kind of callable.

A property is a descriptor which *contains* up to 3 callables (for
get, set, del), but descriptors themselves are not callable.

This is one reason we can't require the result of a decorator to be
callable. That would immediately rule out classmethod and
staticmethod, which return descriptors, not callables!
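
A tiny illustration of the difference (sketch only):

    class C(object):
        def _get_x(self):
            return 42
        x = property(_get_x)      # a descriptor wrapping one callable

    print(callable(C.__dict__['x']))   # False: the property itself is not callable
    print(C().x)                       # 42: its __get__ hook calls _get_x for us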

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From aahz at pythoncraft.com  Wed Mar 10 20:36:22 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 20:36:25 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403110030.i2B0UM300175@oma.cosc.canterbury.ac.nz>
References: <20040310163059.GA9944@panix.com>
	<200403110030.i2B0UM300175@oma.cosc.canterbury.ac.nz>
Message-ID: <20040311013622.GC14091@panix.com>

On Thu, Mar 11, 2004, Greg Ewing wrote:
> Aahz <aahz@pythoncraft.com>:
>> 
>> * A single identifier (which must be a callable)
>> 
>> * Comma-separated list of identifiers (which must be callables)
>> 
>> * Arbitrary expression (which must produce a sequence of callables)
> 
> You left out
> 
>   * Comma-separated list of arbitrary expressions
> 
> which is the one I'm in favor of.

If we allow truly arbitrary expressions, that comes for free, by
definition.  ;-)
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From aahz at pythoncraft.com  Wed Mar 10 20:43:20 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 20:43:23 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403110038.i2B0cik00194@oma.cosc.canterbury.ac.nz>
References: <20040310153126.GA3616@panix.com>
	<200403110038.i2B0cik00194@oma.cosc.canterbury.ac.nz>
Message-ID: <20040311014319.GD14091@panix.com>

On Thu, Mar 11, 2004, Greg Ewing wrote:
> Aahz <aahz@pythoncraft.com>:
>> 
>> Maybe I'm misunderstanding something.  I thought that a property
>> contains a get descriptor, which makes it a kind of callable.
>
> A property is a descriptor which *contains* up to 3 callables (for
> get, set, del), but descriptors themselves are not callable.
>
> This is one reason we can't require the result of a decorator
> to be callable. That would immediately rule out classmethod and
> staticmethod, which return descriptors, not callables!

That's why I said "kind of callable"; from the user's POV, they act like
callables because you invoke them with call syntax, just like methods.
As I noted, I'm not sure it's even possible to enforce that kind of
restriction, but I think it's desirable to document.  Guido has
pronounced, so it's a dead issue, though.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From fdrake at acm.org  Wed Mar 10 21:10:30 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed Mar 10 21:11:07 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040311014319.GD14091@panix.com>
References: <20040310153126.GA3616@panix.com>
	<200403110038.i2B0cik00194@oma.cosc.canterbury.ac.nz>
	<20040311014319.GD14091@panix.com>
Message-ID: <200403102110.30028.fdrake@acm.org>

On Wednesday 10 March 2004 08:43 pm, Aahz wrote:
 > That's why I said "kind of callable"; from the user's POV, they act like
 > callables because you invoke them with call syntax, just like methods.

My first reaction to this was "Since when??!?"

If the only descriptors you're thinking about are things that return callables 
from the __get__() method, it makes sense that you've made this mistake.  
What needs to be considered though is that not all descriptors provide 
callable results.  Remember that "property" is a descriptor type; instances 
of property are not normally callable.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From aahz at pythoncraft.com  Wed Mar 10 21:34:32 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 10 21:34:43 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <200403102110.30028.fdrake@acm.org>
References: <20040310153126.GA3616@panix.com>
	<200403110038.i2B0cik00194@oma.cosc.canterbury.ac.nz>
	<20040311014319.GD14091@panix.com>
	<200403102110.30028.fdrake@acm.org>
Message-ID: <20040311023432.GA19845@panix.com>

On Wed, Mar 10, 2004, Fred L. Drake, Jr. wrote:
> On Wednesday 10 March 2004 08:43 pm, Aahz wrote:
>>
>> That's why I said "kind of callable"; from the user's POV, they act like
>> callables because you invoke them with call syntax, just like methods.
> 
> My first reaction to this was "Since when??!?"
> 
> If the only descriptors you're thinking about are things that return
> callables from the __get__() method, it makes sense that you've made
> this mistake.  What needs to be considered though is that not all
> descriptors provide callable results.  Remember that "property" is a
> descriptor type; instances of property are not normally callable.

"They" was referring to Greg's mention of classmethods and staticmethods.
Despite the example given of the colormap, it's still not clear to me
what the use case is for a decorator returning an object that doesn't
get used as a callable, particularly when weighed against the
disadvantages.

There's a lot of ambiguity floating around with the advent of properties.
Although properties don't use call syntax, IMO they certainly have call
semantics because the get or set descriptor gets called by the attribute
machinery under the covers.  (I just checked -- it's legal to define a
property that uses string objects in the property() call, but you'll get
a TypeError when you try to access the attribute.)  I think that
properties as a specific case are sufficiently useful to warrant the
blurring, but I'd rather not see that blurring generalized through
Python.
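
(The check I mean is simply this -- a sketch, run on a current interpreter:)

    class C(object):
        x = property("not a callable")   # accepted at definition time, no type check

    try:
        C().x
    except TypeError as exc:
        print(exc)    # e.g. "'str' object is not callable" -- raised by the
                      # attribute machinery when it tries to call the "getter"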
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From niemeyer at conectiva.com  Wed Mar 10 22:04:41 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 10 22:04:34 2004
Subject: [Python-Dev] dateutil
Message-ID: <20040311030440.GA2742@burma.localdomain>

Yes, it's time for the classical question. ;-)

What's your opinion about the inclusion of the dateutil[1]
extension in the standard library?

[1] https://moin.conectiva.com.br/DateUtil

-- 
Gustavo Niemeyer
http://niemeyer.net

From nas-python at python.ca  Wed Mar 10 22:01:34 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Wed Mar 10 22:31:03 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Objects
	bufferobject.c, 2.22, 2.23
In-Reply-To: <E1B1G9n-0000mW-NZ@sc8-pr-cvs1.sourceforge.net>
References: <E1B1G9n-0000mW-NZ@sc8-pr-cvs1.sourceforge.net>
Message-ID: <20040311030134.GA1751@mems-exchange.org>

On Wed, Mar 10, 2004 at 06:42:47PM -0800, nascheme@users.sourceforge.net wrote:
> Modified Files:
> 	bufferobject.c 
> Log Message:
> Make buffer objects based on immutable objects (like array) safe.

Perfect is the enemy of better.

I would appreciate it if people could review this change.  The
potential for range errors seems high.  I tried hard to make the
change more easily reviewable (at some cost to the resulting code
readability).

The main idea of the change is that instead of using b_ptr and
b_size directly, get_buf() is called as necessary to retrieve them.
If the buffer is based on memory then b_ptr and b_size are set
normally and get_buf() just returns them.  If the buffer is based on
an object then get_buf() calls the tp_as_buffer method to retrieve
them.

I believe this fixes the nastier problems of the buffer object.  The
only remaining problem that I know of is that buffer_hash may not
raise a TypeError when it really should.

  Neil

From latentthanos at mac.com  Wed Mar 10 21:21:11 2004
From: latentthanos at mac.com (thanos vassilakis)
Date: Wed Mar 10 22:31:07 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <200403102317.i2ANHPK29887@oma.cosc.canterbury.ac.nz>
References: <200403102317.i2ANHPK29887@oma.cosc.canterbury.ac.nz>
Message-ID: <C365DA63-7302-11D8-A917-000A95A593E4@mac.com>

This has been an interesting thread....

I have an hp12c.py module written with my own Python C extension
implementing the Mike Cowlishaw (IEEE 754R + IEEE 854 + ANSI
X3.274 + ECMA 334) specs.
This decimal.c module is in production on the floor of a very
large New York exchange. I have been kind of waiting for the dust to
settle on Facundo's work; then I would conform my module to whatever the
Python community finally decides. The module has been in production for
over eight months.  It has most of the common transcendentals and other
interesting stuff, but only implements an emulation of decPacked.  My
first version had the decimal ops as Context methods, and developers
complained that this was not intuitive, so I was forced to create the
concept of a named context scope, but I have never been happy with that.
It is very fast.

To demo and test the module I wrote a full emulation of the hp12c, TZYX
stack and all. Most of the algorithms used for IRR, bond yield, dates,
etc. appeared in an article I wrote in the Algorithms Journal in 1987,
and Cyril Drimer, Mark Tsang and I used them in the numerical package,
Num++, which we developed for Zortech's (later sold by Symantec) C++
compiler. These algorithms are also documented in the back of the HP12c
manual (beware of the errors in the equations!).

Now, how should I release it?  I will be having my first vacation in
over 19 months in a few weeks; maybe that will be a good time to clean
it up and extend it to meet the decimal number PEP... any suggestions?

thanos

The question is how to release this in the public domain.

On Mar 10, 2004, at 6:17 PM, Greg Ewing wrote:

> Raymond Hettinger <raymond.hettinger@verizon.net>:
>
>> * cross-product and dot product for 3 element vectors (x,y,z
>> coordinates).
>
> Cross product would be good to have around somewhere in
> any case. It's frustrating that Numeric doesn't provide
> a straightforward way of getting it (not that I could
> see last time I looked, anyway).
>
> Greg Ewing, Computer Science Dept, 
> +--------------------------------------+
> University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
> Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
> greg@cosc.canterbury.ac.nz	   +--------------------------------------+
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/thanos%400x01.com
From tim.one at comcast.net  Wed Mar 10 23:47:50 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 10 23:47:52 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEPGJGAB.tim.one@comcast.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBOJHAB.tim.one@comcast.net>

[Tim, on HP's Solve and Integrate abilities]
> ...
> Kahan wrote some articles about them which are worth tracking down.
> Alas, I think they appeared in the HP Systems Journal, and offhand I
> wasn't able to find them online just now.

Had more time to search, and found them here:

    http://www.cs.berkeley.edu/~wkahan/Math128/

    The SOLVE key on the HP-34C
    The INTEGRATE key on the HP-34C

These papers are simply superb, lucidly explaining (among other things)
exactly how and why users can be hoodwinked by these routines despite how
much effort went into them -- and also why this is inescapable.  A quote
from the end of the SOLVE paper:

    The reader will recognize, first, how little the pathologies
    illustrated above have to do with the specifics of the SOLVE
    key, and second, how nearly certain is the user of so powerful
    a key to stumble into pathologies sooner or later, however
    rarely.  While the SOLVE key enhances its user's powers it
    obliges its user to use it prudently or be misled.

    And here is Hewlett-Packard's dilemma.  The company cannot
    afford a massive effort to educate the public in numerical
    analysis.  But without some such effort most potential
    purchasers may blame their calculator for troubles that are
    intrinsic in the problems they are trying to SOLVE.  To
    nearly minimize that required effort and its attendant
    risks, SOLVE has been designed to be more robust, more
    reliable and much easier to use than other equation solvers
    previously accepted widely by the computing industry.
    Whether the effort is enough remains to be seen.  Meanwhile
    we enjoy the time SOLVE saves us when it works to our
    satisfaction, which is almost always.

IMO, SOLVE remains a great improvement over almost all of its "mass market"
successors.  Excel is a fine example of the latter; e.g.,

    On the Accuracy of Statistical Distributions in Microsoft Excel 97
    http://www.stat.uni-muenchen.de/~knuesel/elv/excelacc.pdf

A later paper concluded that Excels 2000 and 2002 were no better:

    http://portal.acm.org/citation.cfm?id=635312&dl=ACM&coll=portal

    The problems that rendered Excel 97 unfit for use as a statistical
    package have not been fixed in either Excel 2000 or Excel 2002 (also
    called "Excel XP").  Microsoft attempted to fix errors in the
    standard normal random number generator and the inverse normal
    function, and in the former case actually made the problem worse.

Unfortunately, naive users are the ones worst served by "good enough for me"
implementations -- they have no basis for understanding the troubles they
may be getting into, and often neither even for recognizing that they are in
deep numerical weeds.

Most numeric software should be declared illegal <0.837 wink>.


From greg at cosc.canterbury.ac.nz  Thu Mar 11 00:03:52 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 11 00:03:59 2004
Subject: [Python-Dev] PEP 318 - generality of list;
	restrictions on elements
In-Reply-To: <20040311013622.GC14091@panix.com>
Message-ID: <200403110503.i2B53qD04105@oma.cosc.canterbury.ac.nz>

Aahz <aahz@pythoncraft.com>:

> >   * Comma-separated list of arbitrary expressions
> 
> If we allow truly arbitrary expressions, that comes for free, by
> definition.  ;-)

Well, syntactically, yes, but not semantically...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From aahz at pythoncraft.com  Thu Mar 11 00:53:31 2004
From: aahz at pythoncraft.com (Aahz)
Date: Thu Mar 11 00:53:37 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <C365DA63-7302-11D8-A917-000A95A593E4@mac.com>
References: <200403102317.i2ANHPK29887@oma.cosc.canterbury.ac.nz>
	<C365DA63-7302-11D8-A917-000A95A593E4@mac.com>
Message-ID: <20040311055331.GA3518@panix.com>

On Wed, Mar 10, 2004, thanos vassilakis wrote:
>
> Now how should I release it ? I will be having my first vacation for 
> over 19 months in a few weeks, may be this will be a good time to clean 
> it up and extend to meet the decimal number PEP.... any suggestions. 

The simplest solution would be to e-mail the code to Facundo and have
him add it to the sandbox alongside the existing Decimal code.  That
would put it under CVS control and other people could if necessary make
changes, without losing your original work.  Next best would be to post
it on a website somewhere and ask Facundo to add the URL to the PEP.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From python at rcn.com  Thu Mar 11 01:16:32 2004
From: python at rcn.com (Raymond Hettinger)
Date: Thu Mar 11 01:18:37 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBOJHAB.tim.one@comcast.net>
Message-ID: <001d01c40730$66af36c0$4339c797@oemcomputer>

> [Tim, on HP's Solve and Integrate abilities]
> > ...
> > Kahan wrote some articles about them which are worth tracking down.
> > Alas, I think they appeared in the HP Systems Journal, and offhand I
> > wasn't able to find them online just now.
> 
> Had more time to search, and found them here:
> 
>     http://www.cs.berkeley.edu/~wkahan/Math128/
> 
>     The SOLVE key on the HP-34C
>     The INTEGRATE key on the HP-34C
> 
> These papers are simply superb

Thanks for the links.  Will use them in the draft implementation.


Raymond



From mwh at python.net  Thu Mar 11 05:42:38 2004
From: mwh at python.net (Michael Hudson)
Date: Thu Mar 11 05:42:42 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <200403101609.i2AG9PG04567@guido.python.org> (Guido van
	Rossum's message of "Wed, 10 Mar 2004 08:09:25 -0800")
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D380@USAHM010.amer.corp.eds.com>
	<16460.56237.524168.961669@montanaro.dyndns.org>
	<20040308210000.GA20554@panix.com> <200403081637.42008.fdrake@acm.org>
	<20040310113302.GB11417@panix.com>
	<200403101504.i2AF4c404327@guido.python.org>
	<20040310153126.GA3616@panix.com>
	<200403101609.i2AG9PG04567@guido.python.org>
Message-ID: <2mptbjwtpd.fsf@starship.python.net>

Guido van Rossum <guido@python.org> writes:

> In any case, *I* would prefer not to let the semantics require
> anything, and to make this just a (preferred) shorthand for applying
> arbitrary transformations to something that starts out as a function.

Sense at last <wink>!

> Two additional thoughts:
  ^^^

"No-one expects..."

> 1) I'm not sure I like using the same syntax for classes; the use
>    cases are so different that using similar syntax only adds
>    confusion, and I think the use cases for classes are a lot weaker
>    than for methods.

This is a marginal point, in my view.

> 2) The syntax should also apply to regular functions.

I wasn't aware that only applying it to methods had even been
considered for the tiniest fraction of an instant.  It would be
painful to implement and a transparently bad idea.

> 3) It needs to be crystal clear that the list of transformations is
>    applied at function/method definition time, not at class definition
>    time (which is later, after all the methods have been collected in
>    a namespace).

Given 2), that the syntax works for functions, I think this follows.
Besides, I can't think of a sane way of implementing the opposite...

Cheers,
mwh

-- 
  I would hereby duly point you at the website for the current pedal
  powered submarine world underwater speed record, except I've lost
  the URL.                                         -- Callas, cam.misc

From mwh at python.net  Thu Mar 11 05:45:48 2004
From: mwh at python.net (Michael Hudson)
Date: Thu Mar 11 05:45:51 2004
Subject: [Python-Dev] PEP 318 - generality of list; restrictions on
	elements
In-Reply-To: <20040311013622.GC14091@panix.com> (aahz@pythoncraft.com's
	message of "Wed, 10 Mar 2004 20:36:22 -0500")
References: <20040310163059.GA9944@panix.com>
	<200403110030.i2B0UM300175@oma.cosc.canterbury.ac.nz>
	<20040311013622.GC14091@panix.com>
Message-ID: <2mllm7wtk3.fsf@starship.python.net>

Aahz <aahz@pythoncraft.com> writes:

> On Thu, Mar 11, 2004, Greg Ewing wrote:
>> Aahz <aahz@pythoncraft.com>:
>>> 
>>> * A single identifier (which must be a callable)
>>> 
>>> * Comma-separated list of identifiers (which must be callables)
>>> 
>>> * Arbitrary expression (which must produce a sequence of callables)
>> 
>> You left out
>> 
>>   * Comma-separated list of arbitrary expressions
>> 
>> which is the one I'm in favor of.
>
> If we allow truly arbitrary expressions, that comes for free, by
> definition.  ;-)

Yes, but that's not the point!  What my patch currently allows is an
'exprlist', which is a "comma-separated list of arbitrary
expressions", and is The Only Sane Choice (tm).  Allowing a listmaker
(or whatever that production is called), and hence list comprehensions,
is barmy.

Besides, a literal reading of 

Arbitrary expression (which must produce a sequence of callables)

would suggest

def foo [staticmethod] ():
    pass

was in error!

Cheers,
mwh

-- 
  Now this is what I don't get.  Nobody said absolutely anything
  bad about anything.  Yet it is always possible to just pull
  random flames out of ones ass.
         -- http://www.advogato.org/person/vicious/diary.html?start=60

From gmccaughan at synaptics-uk.com  Thu Mar 11 06:02:17 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Thu Mar 11 06:02:24 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <2mu10wwo0f.fsf@starship.python.net>
References: <000b01c4068e$60c28520$e841fea9@oemcomputer>
	<200403101526.17307.gmccaughan@synaptics-uk.com>
	<2mu10wwo0f.fsf@starship.python.net>
Message-ID: <200403111102.17748.gmccaughan@synaptics-uk.com>

On Wednesday 2004-03-10 18:33, Michael Hudson wrote:

> Gareth McCaughan <gmccaughan@synaptics-uk.com> writes:
>
> > Further, I'd argue that since Decimal can work with numbers of
> > arbitrary precision, an implementation of (say) exp that works on
> > Decimals ought to work with something like full precision, unless
> > it's *only* intended for casual use. Writing implementations of all
> > these things would be a considerable amount of work.
>
> Have you played with Jurgen Bos' real.py?  I'm not sure it's useful,
> but I think you'll like it :-)

I think I have, some time ago. I also have something similar
of my own written in Common Lisp.

-- 
g



From gerrit at nl.linux.org  Thu Mar 11 08:05:32 2004
From: gerrit at nl.linux.org (Gerrit)
Date: Thu Mar 11 08:09:27 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311030440.GA2742@burma.localdomain>
References: <20040311030440.GA2742@burma.localdomain>
Message-ID: <20040311130532.GA4105@nl.linux.org>

Gustavo Niemeyer wrote:
> Yes, it's time for the classical question. ;-)
> 
> What's your opinion about the inclusion of the dateutil[1]
> extension in the standard library?
> 
> [1] https://moin.conectiva.com.br/DateUtil

In this context, PEP 321 and the discussion about it are relevant:

http://www.python.org/peps/pep-0321.html
http://groups.google.nl/groups?threadm=ad6u7j09.fsf%40yahoo.co.uk

I am in favour of including something like DateUtil in the standard
library. I need it often enough: e.g., finding out when two weeks after
24 Feb is, is easier with DateUtil than with datetime, and I think
datetime lacks a strptime. IMO it should be possible to do all date/time
arithmetic without the time module; I don't like the time module.
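
(With dateutil laid out the way it is distributed today -- just a sketch;
a plain timedelta would of course do for whole weeks, relativedelta pays
off for months and weekdays:)

    from datetime import date
    from dateutil.relativedelta import relativedelta

    print(date(2004, 2, 24) + relativedelta(weeks=+2))    # 2004-03-09
    print(date(2004, 2, 24) + relativedelta(months=+1))   # 2004-03-24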

My 20 milli-euro's ;-)

Gerrit.

P.S.
"cal 9 1752" would also be nice to have in Python ;-)

-- 
Weather in Twenthe, Netherlands 11/03 12:25 UTC:
	5.0°C Few clouds partly cloudy wind 3.6 m/s E (57 m above NAP)
-- 
Asperger's Syndrome - a personal approach:
	http://people.nl.linux.org/~gerrit/english/

From niemeyer at conectiva.com  Thu Mar 11 09:33:39 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 09:46:37 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311130532.GA4105@nl.linux.org>
References: <20040311030440.GA2742@burma.localdomain>
	<20040311130532.GA4105@nl.linux.org>
Message-ID: <20040311143339.GA3335@burma.localdomain>

> In this context, PEP 321 and the discussion about it are relevant:
> 
> http://www.python.org/peps/pep-0321.html
> http://groups.google.nl/groups?threadm=ad6u7j09.fsf%40yahoo.co.uk

Indeed. I've followed it at the time it happened. Do you see
anything which should be considered before including dateutil in
the standard library?

> I am in favour of including something like DatuUtil in the standard
> library. I need it often enough, e.g., to find out out when two weeks after
> 24 Feb is is easier with DateUtil than with datetime, and I think
> datetime lacks a strptime. IMO it should be possible to do all date/time
> arithmetic without the time module; I don't like the time module.

I don't think the dateutil functionality is provided by the time
module at all.

> My 20 milli-euro's ;-)

Thanks! :-)

> P.S.
> "cal 9 1752" would also be nice to have in Python ;-)

>>> print calendar.month(1972, 9)
   September 1972
Mo Tu We Th Fr Sa Su
             1  2  3
 4  5  6  7  8  9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29 30

-- 
Gustavo Niemeyer
http://niemeyer.net

From jacobske at mail.nih.gov  Thu Mar 11 08:06:28 2004
From: jacobske at mail.nih.gov (Kevin Jacobs)
Date: Thu Mar 11 10:12:57 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311030440.GA2742@burma.localdomain>
References: <20040311030440.GA2742@burma.localdomain>
Message-ID: <40506454.2090100@mail.nih.gov>

Gustavo Niemeyer wrote:

>Yes, it's time for the classical question. ;-)
>
>What's your opinion about the inclusion of the dateutil[1]
>extension in the standard library?
>
>[1] https://moin.conectiva.com.br/DateUtil
>  
>

Gustavo,

+1000!  I hadn't seen dateutil until today, but I think it is
brilliant!  Definitely fodder for the standard library.

The functionality is similar to a module that I wrote a few years
back, though I didn't model the advanced behavior after the iCalendar
RFC.  Unfortunately, I have lost the fight to open source most of my
work from that era, so I am more than happy to help review and assist
in the effort to incorporate your excellent module into the standard
library.

Some initial suggestions:

  1) relativedelta and maybe the tz module should be added to the
     datetime module.

  2) the tz module needs to be made Win32 aware -- at least minimally.
     It should also fail gracefully on systems that do not have
     /etc/localtime, /usr/share/zoneinfo, etc.

  3) Some of the constants like FREQ_* may be nicer without the FREQ_
     prefix.  I almost never use 'from x import *', so it seems
     unnecessary to protect the module namespace with prefixes (unless
     there is an existing collision that I do not see).

  4) Similarly, it would be useful to also support the long names for
     MO, TU, WE, etc.

I'm happy to supply patches as well.

Best regards,
-Kevin Jacobs

From skip at pobox.com  Thu Mar 11 10:32:25 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 11 10:32:51 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311030440.GA2742@burma.localdomain>
References: <20040311030440.GA2742@burma.localdomain>
Message-ID: <16464.34441.977965.650711@montanaro.dyndns.org>


    Gustavo> Yes, it's time for the classical question. ;-)
    Gustavo> What's your opinion about the inclusion of the dateutil[1]
    Gustavo> extension in the standard library?

    Gustavo> [1] https://moin.conectiva.com.br/DateUtil

There is another module loose in the wild which handles recurring dates:

    http://www.aminus.org/rbre/python/recur.py
    http://www.aminus.org/rbre/python/test_recur.py

Since this is an area which has had very little attention I'd like to see if
there are ideas in one or both modules which can be used to arrive at a "one
true solution" to this problem.  In fact, it's a significant enough problem
all on its own that I suspect it might be worth separating out from other
(simpler?) date manipulations.

Skip


From skip at pobox.com  Thu Mar 11 10:37:28 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 11 10:37:40 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311143339.GA3335@burma.localdomain>
References: <20040311030440.GA2742@burma.localdomain>
	<20040311130532.GA4105@nl.linux.org>
	<20040311143339.GA3335@burma.localdomain>
Message-ID: <16464.34744.524683.632512@montanaro.dyndns.org>


    >> P.S.
    >> "cal 9 1752" would also be nice to have in Python ;-)

    >>>> print calendar.month(1972, 9)
    Gustavo>    September 1972
    ...

I think you missed Gerrit's smiley:

    % cal 9 1752
       September 1752
     S  M Tu  W Th  F  S
           1  2 14 15 16
    17 18 19 20 21 22 23
    24 25 26 27 28 29 30

1752 probably ranks as the weirdest year in timekeeping history.

Skip

From pythondev-dang at lazytwinacres.net  Thu Mar 11 12:52:47 2004
From: pythondev-dang at lazytwinacres.net (pythondev-dang)
Date: Thu Mar 11 12:52:48 2004
Subject: [Python-Dev] calculator module
Message-ID: <20040311175247.11191.qmail@server265.com>

>  From: "Tim Peters" <tim.one@comcast.net>
[snip]
>  IMO, SOLVE remains a great improvement over almost all of its "mass market"
>  successors.
[snip]

Temporal anomalies notwithstanding...
or maybe that's the secret ingredient.
    --dang

From niemeyer at conectiva.com  Thu Mar 11 12:54:56 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 12:54:51 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16464.34441.977965.650711@montanaro.dyndns.org>
References: <20040311030440.GA2742@burma.localdomain>
	<16464.34441.977965.650711@montanaro.dyndns.org>
Message-ID: <20040311175456.GA4770@burma.localdomain>

Hello Skip!

> There is another module loose in the wild which handles recurring dates:
[...]

dateutil's rrule is a carefully developed superset (more features,
fewer restrictions) of the iCal RFC's RRULEs, so I believe you won't
find any equivalent work (in terms of speed and functionality).

>     http://www.aminus.org/rbre/python/recur.py
>     http://www.aminus.org/rbre/python/test_recur.py
> 
> Since this is an area which has had very little attention I'd like to
> see if there are ideas in one or both modules which can be used to
> arrive at a "one true solution" to this problem.  In fact, it's a
> significant enough problem all on its own that I suspect it might be
> worth separating out from other (simpler?) date manipulations.

Every example may be easily implemented with dateutil's rrule, as the
above module is a simple implementation of interval based recurrences.
As a counter example, rrule is able to do something like:

  rrule(FREQ_YEARLY,bymonth=8,bymonthday=13,byweekday=FR)

Meaning "dates with a friday 13th in august", for example. Notice that
this is also a simple one. There are more complex ones involving
cross-year weekly periods with random week starting dates and ISO week
numbers, for example.

-- 
Gustavo Niemeyer
http://niemeyer.net

From niemeyer at conectiva.com  Thu Mar 11 12:57:36 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 12:57:31 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16464.34744.524683.632512@montanaro.dyndns.org>
References: <20040311030440.GA2742@burma.localdomain>
	<20040311130532.GA4105@nl.linux.org>
	<20040311143339.GA3335@burma.localdomain>
	<16464.34744.524683.632512@montanaro.dyndns.org>
Message-ID: <20040311175736.GB4770@burma.localdomain>

> I think you missed Gerrit's smiley:
[...]
> 1752 probably ranks as the weirdest year in timekeeping history.

Erm.. innocence o'mine. Sorry! :-)

-- 
Gustavo Niemeyer
http://niemeyer.net

From fumanchu at amor.org  Thu Mar 11 13:12:52 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Thu Mar 11 13:14:28 2004
Subject: [Python-Dev] dateutil
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>

Gustavo Niemeyer wrote:
> Being a carefully developed superset (as it offers more features and
> less restrictions) of the iCal RFC regarding RRULEs, I belive 
> you won't
> find any equivalent work (in terms of speed and functionality).
> 
> >     http://www.aminus.org/rbre/python/recur.py
> >     http://www.aminus.org/rbre/python/test_recur.py
> > 
> 
> Every example may be easily implemented with dateutil's rrule, as the
> above module is a simple implementation of interval based recurrences.
> As a counter example, rrule is able to do something like:
> 
>   rrule(FREQ_YEARLY,bymonth=8,bymonthday=13,byweekday=FR)
> 
> Meaning "dates with a friday 13th in august", for example. Notice that
> this is also a simple one. There are more complex ones involving
> cross-year weekly periods with random week starting dates and ISO week
> numbers, for example.

As the author of the module Skip mentioned (and awfully proud to be
noticed :) I have to concur; I actually prefer the iCal RFC, but didn't
have time back then to pursue implementing such a large spec. I've been
hoping someone would write DateUtil for me. ;)

The big win for my apps would be the inverse of rrulestr(); that is,
having constructed an rrule, give me the string according to the RFC (I
didn't see this in a quick perusal of the wiki page). This would let me
easily offer recurring events to my Outlook users from a webpage, for
example.

I like the generator I wrote, but that should be easy enough to wrap
around rrules. Ditto for the natural-language parser.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From tim.one at comcast.net  Thu Mar 11 13:14:32 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 11 13:14:35 2004
Subject: [Python-Dev] calculator module
In-Reply-To: <A128D751272CD411BC9200508BC2194D03383718@escpl.tcp.com.ar>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFPJHAB.tim.one@comcast.net>

[Gareth McCaughan]
>> Further, I'd argue that since Decimal can work with numbers
>> of arbitrary precision, an implementation of (say) exp that
>> works on Decimals ought to work with something like full precision,
>> unless it's *only* intended for casual use. Writing implementations
>> of all these things would be a considerable amount of work.

[Batista, Facundo]
> You always can change the context precision before the operation. But
> it's not very normal to the user that comes from using a "hand
> calculator".

Except the user isn't responsible for calculating exp(), that problem
belongs to the system exp() implementation.  The bulk of context features
are really for the benefit of library writers, and temporarily boosting
precision within library functions is a conventional way to proceed.  Hand
calculators also do this under the covers.  For example, HP's current
top-end calculator displays 12 decimal digits, but always uses (at least) 15
decimal digits internally.  The HP *user* doesn't have to do anything to get
a good-to-the-12th-digit exp() result beyond pressing the EXP key.
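
Here's a minimal sketch of that "boost precision inside, round once at
the end" dance, assuming the context interface the decimal module is
growing (getcontext() and its prec attribute):

    from decimal import Decimal, getcontext

    def dec_exp(x):
        """e**x by Taylor series, computed with two guard digits."""
        ctx = getcontext()
        ctx.prec += 2          # extra working digits, invisible to the caller
        i, lasts, s, fact, num = 0, 0, 1, 1, 1
        while s != lasts:      # stop once another term no longer changes the sum
            lasts = s
            i += 1
            fact *= i
            num *= x
            s += num / fact
        ctx.prec -= 2
        return +s              # unary plus rounds back to the caller's precision

The user just calls dec_exp(Decimal(1)) at the default context and gets a
result good to the context precision, never touching prec themselves.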

A huge advantage of using a standard-conforming implementation of a standard
arithmetic (which IBM's decimal proposal aims to be) is that high quality
arbitrary-but-fixed precision implementations of "advanced" functions
written to that standard will eventually appear, giving the same results on
all platforms implementing that standard, and written by experts all over
the world (not just by the relative handful of Python library writers, or
the teensy subset of those experienced in coding all-purpose
library-strength numerics).


From bac at OCF.Berkeley.EDU  Thu Mar 11 14:44:59 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Thu Mar 11 14:46:17 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311130532.GA4105@nl.linux.org>
References: <20040311030440.GA2742@burma.localdomain>
	<20040311130532.GA4105@nl.linux.org>
Message-ID: <4050C1BB.6060702@ocf.berkeley.edu>

Gerrit wrote:

> Gustavo Niemeyer wrote:
> 
>>Yes, it's time for the classical question. ;-)
>>
>>What's your opinion about the inclusion of the dateutil[1]
>>extension in the standard library?
<SNIP>
> I am in favour of including something like DatuUtil in the standard
> library. I need it often enough, e.g., to find out out when two weeks after
> 24 Feb is is easier with DateUtil than with datetime, and I think
> datetime lacks a strptime.

You're right, Gerrit, strptime is not in datetime and it was a conscious 
decision by Tim and Guido.

And if you want a poor man's way of finding the date a specific number
of days past a date, you can add that number of days to the day-of-year
value, then pass the year and day of year into strptime, and it will
calculate the new date for you.  Roundabout, yes, but it works.  =)
Obviously DateUtil is a much better way to handle this.
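
A rough sketch of that roundabout route (leaning on strptime's %j
handling, and obviously falling apart across a year boundary):

    >>> import time, datetime
    >>> start = datetime.date(2004, 2, 24)
    >>> doy = start.timetuple().tm_yday + 14   # two weeks later, as a day of year
    >>> datetime.date(*time.strptime("%d %d" % (start.year, doy), "%Y %j")[:3])
    datetime.date(2004, 3, 9)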

> IMO it should be possible to do all date/time
> arithmetic without the time module; I don't like the time module.
> 

I personally would not mind seeing the time module end up with its usage 
being relegated to getting time information from the OS and moving all 
other functionality to datetime.

-Brett

From skip at pobox.com  Thu Mar 11 14:50:37 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 11 14:51:00 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>
Message-ID: <16464.49933.92567.644751@montanaro.dyndns.org>


    Robert> I like the generator I wrote, but that should be easy enough to
    Robert> wrap around rrules. Ditto for the natural-language parser.

I'd like to try out your natural language parser, but can't find it.  I
naively thought this might work, but it doesn't:

    >>> import recur
    >>> import datetime
    >>> for eachDate in recur.Recurrence(datetime.date(2004, 1, 7), "every 4 days", datetime.date(2004, 4, 15)):
    ...   print eachDate
    ... 
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "recur.py", line 599, in __init__
        raise ValueError, (u"The supplied description ('%s') "
    ValueError: The supplied description ('every 4 days') could not be parsed.

I realize this is a usage question and python-dev isn't a usage list.
Still, it suggests that perhaps the community as a whole needs a bit more
exposure to these concepts before they are incorporated into the standard
library.  Perhaps a PEP about recurrence relations is warranted.

Skip

From jim.jewett at eds.com  Thu Mar 11 15:11:51 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 11 15:13:05 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.corp.eds.com>

I am hoping to post a new draft of PEP318 soon.

Unfortunately, the solution that currently looks 
best to me has not been mentioned before.

I wanted to give a heads up, so that people will
be able to read the rest of the PEP too.  (And
so that I can change it if everyone else hates
the new syntax.)

The most common use case is

    class Foo:
        def bar():
            pass
        bar = transform(bar)

Because the name bar is immediately rebound, it
effectively never references the original 
function directly.  That function is hidden from
the namespace where it is defined.  

This is similar to nested functions.

def foo():
    def bar():
        pass

bar is never visible (at least under that name) at 
the global level.  To indicate this, it is nested 
inside the foo() block.

I would like to do the same with decorators.

class Foo:
    [transform] from:
        def bar():
            pass

The bar() function is now defined exactly as it was before,
except that it is an extra indent to the right.

[transform] can be treated as a completely ordinary list.

The "from:" indicates that each member of this (just evaluated) 
list should be applied to the contained block, and the final
result should be bound to the name (that would otherwise be)
assigned to that internal block.

-jJ

From niemeyer at conectiva.com  Thu Mar 11 15:21:10 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 15:21:05 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16464.49933.92567.644751@montanaro.dyndns.org>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>
	<16464.49933.92567.644751@montanaro.dyndns.org>
Message-ID: <20040311202110.GA27281@burma.localdomain>

> I'd like to try out your natural language parser, but can't find it.  I
> naively thought this might work, but it doesn't:
> 
>     >>> import recur
>     >>> import datetime
>     >>> for eachDate in recur.Recurrence(datetime.date(2004, 1, 7), "every 4 days", datetime.date(2004, 4, 15)):
>     ...   print eachDate

>>> list(rrule(FREQ_DAILY,interval=4,count=3))
[datetime.datetime(2004, 3, 11, 17, 17, 19),
 datetime.datetime(2004, 3, 15, 17, 17, 19),
 datetime.datetime(2004, 3, 19, 17, 17, 19)]

[...]
> I realize this is a usage question and python-dev isn't a usage list.
> Still, it suggests that perhaps the community as a whole needs a bit
> more exposure to these concepts before they are incorporated into the
> standard library.  Perhaps a PEP about recurrence relations is
> warranted.

Have you looked at the RRULE item on the iCalendar RFC? It should
tell you everything about date recurrences. If you're interested,
you might want to read it, and try out the rrule implementation.

-- 
Gustavo Niemeyer
http://niemeyer.net

From tim.one at comcast.net  Thu Mar 11 15:23:16 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 11 15:23:19 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <4050C1BB.6060702@ocf.berkeley.edu>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEGKJHAB.tim.one@comcast.net>

[Gerrit]
>> I am in favour of including something like DatuUtil in the standard
>> library. I need it often enough, e.g., to find out out when two
>> weeks after 24 Feb is is easier with DateUtil than with datetime,
>> ...

[Brett]
> And if you want a poor man's way of finding the date a specific number
> of days past a date you can add those number of days to the day of the
> year value and then pass in the year and day of year into strptime and
> it will calculate the new date for you.  Roundabout, yes, but it
> works. =)  Obviously DateUtil is a much better way to handle this.

I'm not grasping the perceived difficulty with this specific use case:

>>> import datetime
>>> datetime.date(2004, 2, 24) + datetime.timedelta(weeks=2)
datetime.date(2004, 3, 9)
>>>

I suppose it's possible that Gerrit finds "relativedelta" easier to type for
some reason <wink>.


From niemeyer at conectiva.com  Thu Mar 11 15:27:22 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 15:27:19 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <40506454.2090100@mail.nih.gov>
References: <20040311030440.GA2742@burma.localdomain>
	<40506454.2090100@mail.nih.gov>
Message-ID: <20040311202722.GB27281@burma.localdomain>

Hi Kevin,

> +1000!  I hadn't seen dateutil until today, but I think it is
> brilliant!  Definitely fodder for the standard library.

Thanks! :-)

[...]
> Some initial suggestion:
> 
>  1) relativedelta and maybe the tz module should be added to the 
> datetime module.

I'm open to namespace changes during integration.

>  2) the tz module needs to be made Win32 aware -- at least minimally.   
> It should also
>      fail gracefully on systems that do not have /etc/localtime, 
> /usr/share/zoneinfo, etc).

Indeed. Will appreciate suggestions from Windows users.

>  3) Some of the constants like FREQ_* may be nicer without the FREQ_ 
> prefix.  I

Probably!

>      almost never use 'from x import *', so it seems unnecessary to
> protect the module namespace with prefixes (unless there is an
> existing collision that I do not see).

The fact that you don't use it doesn't mean everyone won't use it. :-)

>  4) Similarly, it would be useful to also support the long names for 
> MO,TU,WE, etc.

These names come from the rrule RFC, and since you may provide a
tuple of days, like (MO,TU,WE), to a given rule, they're pretty
convenient.

Thanks for the suggestions!

-- 
Gustavo Niemeyer
http://niemeyer.net

From pje at telecommunity.com  Thu Mar 11 15:40:38 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu Mar 11 15:35:16 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.cor
	p.eds.com>
Message-ID: <5.1.0.14.0.20040311153521.03b99ec0@mail.telecommunity.com>

At 03:11 PM 3/11/04 -0500, Jewett, Jim J wrote:

>class Foo:
>     [transform] from:
>         def bar():
>             pass
>
>The bar() function is now defined exactly as it was before,
>except that it is an extra indent to the right.

-1 (times ten to a high power).  Better the PEP be rejected than use this 
syntax.

It is ambiguous precisely *because* it introduces a new suite.  Suites in 
Python indicate a difference in execution context, often along with 
introducing a new namespace.  This new syntax does neither.

Also, in Python a name to be bound is always the first or second token of a 
statement.  This syntax buries the name inside the suite, appearing *after* 
the definition of what's going to be bound to the name.


From skip at pobox.com  Thu Mar 11 16:04:18 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 11 16:04:28 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.corp.eds.com>
Message-ID: <16464.54354.240597.208204@montanaro.dyndns.org>


    Jim> I am hoping to post a new draft of PEP318 soon.  Unfortunately, the
    Jim> solution that currently looks best to me has not been mentioned
    Jim> before.

    Jim> I wanted to give a heads up, so that people will be able to read
    Jim> the rest of the PEP too.  (And so that I can change it if everyone
    Jim> else hates the new syntax.)

    ...

    Jim> class Foo:
    Jim>     [transform] from:
    Jim>         def bar():
    Jim>             pass

I honestly don't think this is going to fly.  Ignoring the readability
factor (it doesn't read right to me), suppose I have a function foo() which
is 90 lines long and I don't use Emacs, vim or some other editor with a
notion of Python's indentation-based block structure (say, Notepad).  Now I
decide foo() needs to be transform()ed.  It will be tedious and error-prone
to have to reindent the entire function just to wedge in the decorator.  On
the other hand, a decorator syntax which doesn't affect indentation of the
class or function is much easier to apply.

Skip

From bac at OCF.Berkeley.EDU  Thu Mar 11 16:04:26 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Thu Mar 11 16:05:06 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEGKJHAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCEEGKJHAB.tim.one@comcast.net>
Message-ID: <4050D45A.6090002@ocf.berkeley.edu>

Tim Peters wrote:
> [Gerrit]
> 
<SNIP - use case of a date two weeks from one and my bumbling attempt at 
doing it with the stdlib as it stands now>
> 
> I'm not grasping the perceived difficulty with this specific use case:
> 
> 
>>>>import datetime
>>>>datetime.date(2004, 2, 24) + datetime.timedelta(weeks=2)
> 
> datetime.date(2004, 3, 9)
> 
> 
> I suppose it's possible that Gerrit finds "relativedelta" easier to type for
> some reason <wink>.
> 

=)

Before I give my vote on DateUtil I know I would love to hear Gustavo 
give a quick comparison to datetime in terms of what DateUtil provides 
over datetime.  For instance, as Tim showed above (I should have known 
there was a better way with datetime than with my nutty way of doing 
it), datetime supports time deltas.  So why should we use DateUtil's or 
what should we try to take from it?  Same for parser (compared to 
strptime) and tz (compared to tzinfo).  I could obviously stare at the 
wiki and datetime docs, but I am sure Gustavo can give a better overview 
than I could glean on my own.

The rrule idea does sound cool and could be neat to add to datetime.  I 
can see having an iterator for these things being useful to someone 
(unless I am making myself partially look like a fool again by having 
this be in datetime already without me realizing it).

-Brett

From skip at pobox.com  Thu Mar 11 16:16:17 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 11 16:16:31 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311202110.GA27281@burma.localdomain>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>
	<16464.49933.92567.644751@montanaro.dyndns.org>
	<20040311202110.GA27281@burma.localdomain>
Message-ID: <16464.55073.519036.14303@montanaro.dyndns.org>


    Gustavo> Have you looked at the RRULE item on the iCalendar RFC? It
    Gustavo> should tell you everything about date recurrences. If you're
    Gustavo> interested, you might want to read it, and try out the rrule
    Gustavo> implementation.

That's not what I was asking.

I realize that generating the recurrences themselves is important, but it's
not the only important thing.  Getting from the English (or Portuguese ;-)
that a user would use as input is just as important to me as having the
recurrence classes available to generate a stream of dates.  For example,
this does work using recur:

    >>> import recur
    >>> import datetime
    >>> for eachDate in recur.Recurrence(datetime.date(2004, 1, 7), "Saturday",  datetime.date(2004, 4, 15)):
    ...   print eachDate
    ... 
    2004-01-10
    2004-01-17
    2004-01-24
    2004-01-31
    2004-02-07
    2004-02-14
    2004-02-21
    2004-02-28
    2004-03-06
    2004-03-13
    2004-03-20
    2004-03-27
    2004-04-03
    2004-04-10

The point is, the Recurrence class in the recur module seems to have some
hooks built in for this sort of stuff, but it hasn't been fleshed out very
well.  A PEP with some sample implementations might go a long way toward
making a more complete implementation available.  The documentation that
would help me add it seems to be missing.  I think there is technology there
which doesn't exist in dateutil.  Correct me if I'm wrong.

Perhaps recur.Recurrence just needs a little more work so it can handle some
common timekeeping phraseology:

    * every Tuesday

    * every hour on the half hour

    * once an hour on the quarter hour

    * every 4 days

    * the first Monday of each month

    * every four years starting in 2000

I'll restate my suggestion that maybe a PEP for this stuff would be a good
idea.  I think it would be a reasonable idea to check both recur and
dateutil into the nondist/sandbox so other people can take a whack at them.

Skip

From tim.one at comcast.net  Thu Mar 11 16:24:46 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 11 16:28:35 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <4050D45A.6090002@ocf.berkeley.edu>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>

[Brett C.]
> Before I give my vote on DateUtil I know I would love to hear Gustavo
> give a quick comparison to datetime in terms of what DateUtil provides
> over datetime.  For instance, as Tim showed above (I should have known
> there was a better way with datetime than with my nutty way of doing
> it), datetime supports time deltas.  So why should we use DateUtil's
> or what should we try to take from it?  Same for parser (compared to
> strptime) and tz (compared to tzinfo).  I could obviously stare at the
> wiki and datetime docs, but I am sure Gustavo can give a better
> overview than I could glean on my own.

WRT time deltas, datetime stuck to things people can't reasonably argue
about, while DateUtil (like mxDateTime before it) is in the business of
guessing what people really want.  That's why, e.g., datetime.timedelta has
no months= argument.  People can (and do) argue about what "a month" means
(what's one month after January 31?  A few days into March?  The last day of
February?  4 weeks later?  30.436875 days later (that's 365.2425/12 == the
average length of a year in days divided by 12)?).  They can't argue about
what "a week" means -- it's 7 days.  And a day is 24 hours, and an hour is
60 minutes, and a minute is 60 seconds, and a second is 1000000
microseconds, and "leap seconds" don't exist.

Ya, that was a gutless decision, but it also means datetime isn't about to
change the rules on you because some other notion of "month" becomes
fashionable.  datetime is about naive time, an idealized calendar and an
idealized clock that won't change even if social or legislative or physical
reality does.  It's a safe fantasy land where you can block out the world's
noise.  That's also why it ignores timezones <wink>.

> The rrule idea does sound cool and could be neat to add to datetime.
> I can see having an iterator for these things being useful to someone
> (unless I am making myself partially look like a fool again by having
> this be in datetime already without me realizing it).

Nope, they're not, and things like "the second Tuesday of the month" aren't
supported natively by datetime at all.  They're easy enough to compute using
datetime as a basis, but the datetime Wiki was full of incompatible
suggestions about what people really wanted there.  We didn't have the time
(or interest) to settle those arguments, so we covered our asses by leaving it
out.  Gustavo is covering his by appealing to an external standard, which is
an admirable strategy.
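
For instance, "the second Tuesday of the month" is a few lines on top of
what datetime already gives you (a back-of-the-envelope sketch, nothing
official):

    >>> import datetime
    >>> def second_tuesday(year, month):
    ...     first = datetime.date(year, month, 1)
    ...     days_to_tue = (1 - first.weekday()) % 7   # Tuesday is weekday() == 1
    ...     return first + datetime.timedelta(days=days_to_tue + 7)
    ...
    >>> second_tuesday(2004, 3)
    datetime.date(2004, 3, 9)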


From fumanchu at amor.org  Thu Mar 11 16:31:28 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Thu Mar 11 16:33:02 2004
Subject: [Python-Dev] dateutil
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECF@opus.amorhq.net>

Skip Montanaro wrote:
> I'd like to try out your natural language parser, but can't 
> find it.  I
> naively thought this might work, but it doesn't:
> 
>     >>> import recur
>     >>> import datetime
>     >>> for eachDate in recur.Recurrence(datetime.date(2004, 
> 1, 7), "every 4 days", datetime.date(2004, 4, 15)):
>     ...   print eachDate
>     ... 
>     Traceback (most recent call last):
>       File "<stdin>", line 1, in ?
>       File "recur.py", line 599, in __init__
>         raise ValueError, (u"The supplied description ('%s') "
>     ValueError: The supplied description ('every 4 days') 
> could not be parsed.

In this case, use:
>>> import recur
>>> import datetime
>>> for eachDate in recur.Recurrence(datetime.date(2004, 1, 7), "4 days", datetime.date(2004, 4, 15)):
... 	print eachDate
... 	
2004-01-07
2004-01-11
2004-01-15
2004-01-19
2004-01-23
2004-01-27
2004-01-31
2004-02-04
2004-02-08
2004-02-12
2004-02-16
2004-02-20
2004-02-24
2004-02-28
2004-03-03
2004-03-07
2004-03-11
2004-03-15
2004-03-19
2004-03-23
2004-03-27
2004-03-31
2004-04-04
2004-04-08
2004-04-12

It would be a simple thing to fix by changing a regex in
Locale.patterns[byunits] from r"([0-9]+) days?" to
r"(?:every )?([0-9]+) days?".  I'll probably add this to my default
Locale class, but one of the reasons I made a separate Locale class in
the first place was to facilitate such modification via subclassing.
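
A quick interactive check of the loosened pattern, using plain re rather
than recur itself:

    >>> import re
    >>> print re.match(r"([0-9]+) days?", "every 4 days")
    None
    >>> re.match(r"(?:every )?([0-9]+) days?", "every 4 days").group(1)
    '4'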

> I realize this is a usage question and python-dev isn't a usage list.
> Still, it suggests that perhaps the community as a whole 
> needs a bit more
> exposure to these concepts before they are incorporated into 
> the standard
> library.  Perhaps a PEP about recurrence relations is warranted.

As Gustavo has pointed out, the internal, representational details have
been pretty much nailed in the iCal RFC. But I agree with you that
wrapping them for locale-specific, end-user ease-of-use is important. I
wouldn't worry about a PEP unless DateUtil approaches Library inclusion
status.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From fumanchu at amor.org  Thu Mar 11 16:37:30 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Thu Mar 11 16:39:02 2004
Subject: [Python-Dev] dateutil
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561ED0@opus.amorhq.net>

Skip Montanaro wrote:
> The point is, the Recurrence class in the recur module seems 
> to have some hooks builtin for this sort of stuff, but it's
> not been fleshed out very well.  A PEP with some sample
> implementations might go a long way to making a more
> complete implementation available.  The documentation 
> seems to be missing that would help me add it.  I think
> there is technology there which doesn't exist in dateutil.
> Correct me if I'm wrong.
> 
> Perhaps recur.Recurrence just needs a little more work so it 
> can handle some common timekeeping phraseology:
> 
>     * every Tuesday
>     * every hour on the half hour
>     * once an hour on the quarter hour
>     * every 4 days
>     * the first Monday of each month
>     * every four years starting in 2000
> 
> I'll restate my suggestion that maybe a PEP for this stuff 
> would be a good
> idea.  I think it would be a reasonable idea to check both recur and
> dateutil into the nondist/sandbox so other people can take a 
> whack at them.

Fine with me. I agree with your assessment that, although flexible, it's
not fleshed out. I only built in the specific use cases I needed at the
time. More would be nice regardless of how it interfaces with DateUtil.
As far as a lack of documentation, I'd be happy to answer questions and
then turn around and use those to write some more docs.

For a start, it probably needs a better "little language" lexer/parser
than just regexes if you're going to fold in the examples above.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From niemeyer at conectiva.com  Thu Mar 11 16:51:03 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 16:51:00 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16464.55073.519036.14303@montanaro.dyndns.org>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561ECE@opus.amorhq.net>
	<16464.49933.92567.644751@montanaro.dyndns.org>
	<20040311202110.GA27281@burma.localdomain>
	<16464.55073.519036.14303@montanaro.dyndns.org>
Message-ID: <20040311215103.GB28090@burma.localdomain>

> That's not what I was asking.

Oops.. sorry then.

> I realize that generating the recurrences themselves is important, but
> it's not the only important thing.  Getting from the English (or
> Portuguese ;-) that a user would use as input is just as important to
> me as having the recurrence classes available to generate a stream of
> dates.  For example, this does work using recur:
[...]
> The point is, the Recurrence class in the recur module seems to have
> some hooks builtin for this sort of stuff, but it's not been fleshed
> out very well.  A PEP with some sample implementations might go a long
> way to making a more complete implementation available.  The
> documentation seems to be missing that would help me add it.  I think
> there is technology there which doesn't exist in dateutil.  Correct me
> if I'm wrong.

No, it doesn't implement any English-specific statement parsing, and I
confess it isn't on my current todo list. If you believe that
dateutil is not worth including in the standard library without English
recurrence parsing, I respect your opinion. I'll maintain it as an
extension on my own site then.

> Perhaps recur.Recurrence just needs a little more work so it can
> handle some common timekeeping phraseology:
[...]

Yeah.. this module might be better indeed.

> I'll restate my suggestion that maybe a PEP for this stuff would be a
> good idea.  I think it would be a reasonable idea to check both recur
> and dateutil into the nondist/sandbox so other people can take a whack
> at them.

Not needed. dateutil is already released, and being used in real
world applications. You may get it at the following URL if you
want it:

https://moin.conectiva.com.br/DateUtil

Thanks for discussing!

-- 
Gustavo Niemeyer
http://niemeyer.net

From python at rcn.com  Thu Mar 11 16:57:09 2004
From: python at rcn.com (Raymond Hettinger)
Date: Thu Mar 11 16:59:09 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.corp.eds.com>
Message-ID: <002901c407b3$cdd14180$2f0aa044@oemcomputer>

[Jewett, Jim J]
>     class Foo:
>         def bar():
>             pass
>         bar = transform(bar)
 ...
> I would like to do the same with decorators.
> 
> class Foo:
>     [transform] from:
>         def bar():
>             pass
> 
> The bar() function is now defined exactly as it was before,
> except that it is an extra indent to the right.

-1.  This gains nothing over what we have now, but it does incur costs:
reduced readability, yet another syntax, and a limit on the ability to
apply a decoration outside the class definition.

In its favor, this syntax does suggest something new which is the
possibility of having all similar decorations grouped under one banner:

class Foo:
    [transform] from"
        def bar():
            pass
        def bat():
            pass
        def baf():
            pass

That being said, grouping doesn't do much for me.  I think the key to
the previous proposals was putting wrappers on the same line as the
function definition. 



Raymond Hettinger



From niemeyer at conectiva.com  Thu Mar 11 16:59:16 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Thu Mar 11 16:59:16 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <4050D45A.6090002@ocf.berkeley.edu>
References: <LNBBLJKPBEHFEDALKOLCEEGKJHAB.tim.one@comcast.net>
	<4050D45A.6090002@ocf.berkeley.edu>
Message-ID: <20040311215916.GC28090@burma.localdomain>

> Before I give my vote on DateUtil I know I would love to hear Gustavo 
> give a quick comparison to datetime in terms of what DateUtil provides 
> over datetime.  For instance, as Tim showed above (I should have known 
> there was a better way with datetime than with my nutty way of doing 
> it), datetime supports time deltas.  So why should we use DateUtil's or 
> what should we try to take from it?  Same for parser (compared to 
> strptime) and tz (compared to tzinfo).  I could obviously stare at the 
> wiki and datetime docs, but I am sure Gustavo can give a better overview 
> than I could glean on my own.

Here is a quick list of features, from the website:

* Computing of relative deltas (next month, next year,
  next monday, last week of month, and a lot more);

* Computing of relative deltas between two given
  date and/or datetime objects;

* Computing of dates based on very flexible recurrence rules
  (every month, every week on Thursday and Friday, every
  Friday 13th, and a *LOT* more), using a superset of the
  iCalendar RFC specification. Parsing of RFC strings is
  supported as well.

* Generic parsing of dates in almost any string format;

* Timezone (tzinfo) implementations for tzfile(5) format
  files (/etc/localtime, /usr/share/zoneinfo, etc), TZ
  environment string (in all known formats), iCalendar
  format files, given ranges (with help from relative deltas),
  local machine timezone, fixed offset timezone, and UTC
  timezone.

* Computing of Easter Sunday dates for any given year,
  using Western, Orthodox or Julian algorithms;

* More than 400 test cases.

Please, check the website for complete documentation:

https://moin.conectiva.com.br/DateUtil

> The rrule idea does sound cool and could be neat to add to datetime.  I 
> can see having an iterator for these things being useful to someone 
> (unless I am making myself partially look like a fool again by having 
> this be in datetime already without me realizing it).

Btw, here is a fun task for rrule:

Every four years, the first Tuesday after a Monday in November, 3
occurrences (U.S. Presidential Election day): 

>>> list(rrule(FREQ_YEARLY, interval=4, count=3, bymonth=11,
               byweekday=TU, bymonthday=(2,3,4,5,6,7,8),
               dtstart=parse("19961105T090000")))
[datetime.datetime(1996, 11, 5, 9, 0),
 datetime.datetime(2000, 11, 7, 9, 0),
 datetime.datetime(2004, 11, 2, 9, 0)]

-- 
Gustavo Niemeyer
http://niemeyer.net

From pf_moore at yahoo.co.uk  Thu Mar 11 17:59:12 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Thu Mar 11 17:59:27 2004
Subject: [Python-Dev] Re: PEP 318 trial balloon (wrappers)
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A0@USAHM010.amer.corp.eds.com>
Message-ID: <8yi7dm7z.fsf@yahoo.co.uk>

"Jewett, Jim J" <jim.jewett@eds.com> writes:

> I am hoping to post a new draft of PEP318 soon.
>
> Unfortunately, the solution that currently looks 
> best to me has not been mentioned before.

[...]

> class Foo:
>     [transform] from:
>         def bar():
>             pass
>

Ack, no.

I'm very strongly -1 on this particular suggestion, but I'd also make
the point that we *really, really* don't need a new draft of the PEP
opening up issues again. Please, only collate what's already been
stated.

My general impression is:

1. Semantics are pretty clear, but not documented explicitly yet. The
   PEP should document them. There's an open issue over the order in
   which decorators are applied.
2. Syntax is coming down to a few contenders. The version implemented
   in mwh's patch, Guido's variation with the [...] in front of the
   args, and variations with "as" (with a few other suggested
   keywords).

Not much more than this. (I know, that's very over-simplified...)

Please don't take this the wrong way, but I got the impression that
you were one of the relatively few people still suggesting more
radical alternatives. While I respect your motives, I hope the revised
PEP will document the overall consensus, with a clear listing of the
basic alternatives. By all means add more radical suggestions, but
please keep them separate, and make it clear that they have not had
the same level of discussion as the more "mainstream" suggestions.

The revised PEP needs to consolidate and summarise the discussions,
not start them up again!

Paul.
-- 
This signature intentionally left blank


From jim.jewett at eds.com  Thu Mar 11 18:13:11 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 11 18:14:08 2004
Subject: [Python-Dev] RE: Python-Dev Digest, Vol 8, Issue 26
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A5@USAHM010.amer.corp.eds.com>

[Brett C.]
> I would love to hear Gustavo give a quick comparison ...
> of what DateUtil provides over datetime. 

This is important, because the standard library is already
starting to suffer from bloat.  (Too many module names, 
not too much utility.)

Namespaces help; the Mac-specific modules might be 
overwhelming if so many weren't nicely wrapped in Carbon.
I have no idea whether the Carbon.* modules are at all
related on a technical level, but I don't care -- grouping
them makes it easier to ignore the whole batch at once,
instead of running into them one at a time.  (Hmm, would
a preferences module be useful?  Oh, wait, it's Mac-specific.)

If I want to deal with time in some way, I already have to 
decide between (at least) calendar, datetime, and time, and 
perhaps locale (for printing).  I'm not saying that they all
do the same thing; I'm saying that the name alone doesn't
tell me which to use, so I have to either guess, or read the 
documentation for each.

If useful parts of DateUtil migrate into a standard module,
that's great.  Maybe even as what appears to be a submodule,
like datetime.DateUtils.  But adding yet another alternative
just makes things more confusing; people ready to sort them
out on their own are probably willing to install third party
packages.

[Tim Peters]
> datetime stuck to things people can't reasonably argue
> about, while DateUtil (like mxDateTime before it) is in
> the business of guessing what people really want. ...

This is a useful distinction.

Unfortunately, it isn't clear from the name.  I suppose the
module documentation (for both modules) could say this right
at the top.  It also works well with placing DateUtils inside
one of the current modules; it will seem like a specialization.

-jJ

From barry at python.org  Thu Mar 11 18:33:54 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 11 18:34:06 2004
Subject: [Python-Dev] RE: Python-Dev Digest, Vol 8, Issue 26
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A5@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A5@USAHM010.amer.corp.eds.com>
Message-ID: <1079048033.18085.13.camel@anthem.wooz.org>

On Thu, 2004-03-11 at 18:13, Jewett, Jim J wrote:

> This is important, because the standard library is already
> starting to suffer from bloat.  (Too many module names, 
> not too much utility.)

Time to dust off that old packagizing-the-stdlib idea, eh?

-Barry



From jim.jewett at eds.com  Thu Mar 11 18:50:34 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 11 18:51:06 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A6@USAHM010.amer.corp.eds.com>

----------
class Foo:
    [transform] from:
        def x():
            pass
 
[wrapper] from:
    class x:
        pass

[decorator] from:
    def x():
        pass

x() is now defined exactly as it was before,
except that it has an extra indent to the right.

---------

I understand most of the objections well enough to
summarize them; I'm singling these out because I'm 
not sure that I do.

Phillip J. Eby:

> It is ambiguous precisely *because* it introduces a new suite.  
> Suites in Python indicate a difference in execution context, 
> often along with introducing a new namespace.  This new syntax
> does neither.

Today, if we wait until the declaration is finished, we can write

    for decorator in seq:
        var = decorator(var)

How does this construct change the context or namespace any less
than a for loop?


Raymond Hettinger:

> This gains nothing over what we have now

It moves the transform near the "def", and it indicates that
there will be something unusual about this definition.  
(staticmethod or classmethod change the expected signature,
other wrappers may turn it into a non-callable, etc.)

> ... and limiting the ability to apply a
> decoration outside the class definition.

No syntax will improve on the current idiom if you want to
change class attributes from outside the class.

Are you assuming it can't apply to classes or functions 
because I happened to choose a method example this time?

If not, I'm afraid I don't understand this objection.

> In its favor, this syntax does suggest something
> new which is the possibility of having all similar
> decorations grouped under one banner:

> class Foo:
>    [classmethod] from:
>        def bar():
>            pass
>        def bat():
>            pass
>        def baf():
>            pass

Would you believe I hadn't even thought of that?

That also might be a bit trickier to parse, since
it would require a cartesian product instead of 
just an analogue of reduce.

> That being said, grouping doesn't go much for me.  
> I think the key to the previous proposals was putting 
> wrappers on the same line as the function definition. 

I think the key was associating them more closely.

Mentally, when I see the indentation go back (or the
parentheses close), I shove that object out of my
mind.  The current idiom is like saying "oh, wait,
I forgot.  I didn't finish the definition yet!"

Putting the decorator on the same line is good, but
putting the declaration inside the decorator's context
meets the same goal.  (Perhaps not as well, depending
on taste.)

-jJ

From edcjones at erols.com  Fri Mar 12 00:05:49 2004
From: edcjones at erols.com (Edward C. Jones)
Date: Fri Mar 12 00:10:39 2004
Subject: [Python-Dev] Reading various date formats
Message-ID: <4051452D.2080903@erols.com>

I have placed on my webpage a slightly modified version of getdatemodule 
by Jeffrey A. Macdonald. See 
"http://members.tripod.com/~edcjones/getdatemodule.tar.gz".

From bac at OCF.Berkeley.EDU  Fri Mar 12 03:29:12 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Fri Mar 12 03:29:24 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
Message-ID: <405174D8.1000105@ocf.berkeley.edu>

Responding to both Tim's and Gustavo's follow-ups to my email here.


Tim Peters wrote:

> [Brett C.]
> 
>>Before I give my vote on DateUtil I know I would love to hear Gustavo
>>give a quick comparison to datetime in terms of what DateUtil provides
>>over datetime.
<SNIP>
> 
> WRT time deltas, datetime stuck to things people can't reasonably argue
> about, while DateUtil (like mxDateTime before it) is in the business of
> guessing what people really want.  That's why, e.g., datetime.timedelta has
> no months= argument.  People can (and do) argue about what "a month" means
> (what's one month after January 31?  A few days into March?  The last day of
> February?  4 weeks later?  30.436875 days later (that's 365.2425/12 == the
> average length of a year in days divided by 12)?).  They can't argue about
> what "a week" means -- it's 7 days.  And a day is 24 hours, and an hour is
> 60 minutes, and a minute is 60 seconds, and a second is 1000000
> microseconds, and "leap seconds" don't exist.
> 
> Ya, that was a gutless decision, but it also means datetime isn't about to
> change the rules on you because some other notion of "month" becomes
> fashionable.  datetime is about naive time, an idealized calendar and an
> idealized clock that won't change even if social or legislative or physical
> reality does.  It's a safe fantasy land where you can block out the world's
> noise.  That's also why it ignores timezones <wink>.
> 

This all makes sense to me.  No need to try to pigeonhole others into 
our own definitions of date/time "stuff".

> 
>>The rrule idea does sound cool and could be neat to add to datetime.
>>I can see having an iterator for these things being useful to someone
>>(unless I am making myself partially look like a fool again by having
>>this be in datetime already without me realizing it).
> 
> 
> Nope, they're not, and things like "the second Tuesday of the month" aren't
> supported natively by datetime at all.  They're easy enough to compute using
> datetime as a basis, but the datetime Wiki was full of incompatible
> suggestions about what people really wanted there.  We didn't have time (or
> interest) in settling those arguments, so we covered our asses by leaving it
> out.  Gustavo is covering his by appealing to an external standard, which is
> an admirable strategy.
> 

OK, so this does sound cool.  I wouldn't mind seeing this added to the 
language for those values that are not questionable as with the 
timedeltas (if there even are any).



Gustavo Niemeyer wrote:
 > Here is a quick list of features, from the website:
 >
 > * Computing of relative deltas (next month, next year,
 >   next monday, last week of month, and a lot more);
 >

As Tim pointed out, this is a little sticky.  I personally appreciate 
datetime's choice of not trying to force me into a specific 
interpretation of what a "month" is.  I say stay naive.

 > * Computing of relative deltas between two given
 >   date and/or datetime objects;
 >

Without the relative delta values based on the "questionable" date/time 
"stuff" this seems to boil down to datetime.timedelta .

 > * Computing of dates based on very flexible recurrence rules
 >   (every month, every week on Thursday and Friday, every
 >   Friday 13th, and a *LOT* more), using a superset of the
 >   iCalendar RFC specification. Parsing of RFC strings is
 >   supported as well.
 >

This is very cool.  It's based on an accepted API which is a big plus 
and the functionality could be very useful.

 > * Generic parsing of dates in almost any string format;
 >

Seems like a convenience wrapper around strptime.  Personally I would
love for datetime objects to have a class method that takes an
ISO-formatted date/time string and returns the appropriate datetime
object.  Otherwise I would rather keep the interface clean of string
parsing, short of using strptime.

But then again maybe I just don't want strptime to become obsolete.  =)
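
A rough sketch of the kind of constructor I have in mind -- a hypothetical
helper, not anything datetime offers today, built on time.strptime:

    import time, datetime

    def datetime_from_iso(s):
        # e.g. "2004-03-11T16:04:26" -> datetime.datetime(2004, 3, 11, 16, 4, 26)
        return datetime.datetime(*time.strptime(s, "%Y-%m-%dT%H:%M:%S")[:6])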

 > * Timezone (tzinfo) implementations for tzfile(5) format
 >   files (/etc/localtime, /usr/share/zoneinfo, etc), TZ
 >   environment string (in all known formats), iCalendar
 >   format files, given ranges (with help from relative deltas),
 >   local machine timezone, fixed offset timezone, and UTC
 >   timezone.
 >

This could be good.  Beyond knowing that timezones are a pain in the 
rear to deal with in terms of DST I don't know *how* good it would be. 
I know datetime stays away from timezones, but if it can be gleaned from 
the system cleanly it might be nice to have that option.  Breaks with 
the naive view that I am supporting here, but this stuff can be hard to 
deal with so I think it wouldn't hurt to have.

 > * Computing of Easter Sunday dates for any given year,
 >   using Western, Orthodox or Julian algorithms;
 >

Don't really see the use in this, other than that the rule for figuring
it out is odd enough that I never remember it.  Who really cares when
Easter is, beyond certain religious groups?
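
For the curious, the usual Western (Gregorian) rule looks something like
this -- not dateutil's code, just the textbook "anonymous Gregorian"
algorithm, which is exactly the sort of thing I never remember:

    import datetime

    def western_easter(year):
        # Anonymous Gregorian algorithm (Meeus/Jones/Butcher)
        a = year % 19
        b, c = divmod(year, 100)
        d, e = divmod(b, 4)
        f = (b + 8) // 25
        g = (b - f + 1) // 3
        h = (19*a + b - d - g + 15) % 30
        i, k = divmod(c, 4)
        l = (32 + 2*e + 2*i - h - k) % 7
        m = (a + 11*h + 22*l) // 451
        month, day = divmod(h + l - 7*m + 114, 31)
        return datetime.date(year, month, day + 1)

    # western_easter(2004) -> datetime.date(2004, 4, 11)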



OK, so with the rrule stuff, I am +1 on adding those if we can come up 
with a clean way to add them to datetime (I don't think we need a 
separate module as Jim Jewett pointed out; we already have enough 
date-related modules).  -1 for the timedelta stuff since I am fine with 
not handling timedeltas for "a month from now", etc. unless we can get 
that kind of definition from the iCalendar API and thus have a 
consistent basis on an accepted API.  And then -0 on the rest since I 
fall in the "minimize stdlib bloat" camp.

-Brett

From gmccaughan at synaptics-uk.com  Fri Mar 12 04:22:10 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Fri Mar 12 04:22:30 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <405174D8.1000105@ocf.berkeley.edu>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
	<405174D8.1000105@ocf.berkeley.edu>
Message-ID: <200403120922.10930.gmccaughan@synaptics-uk.com>

On Friday 2004-03-12 08:29, Brett C. wrote:

[Gustavo:]
>  > * Computing of Easter Sunday dates for any given year,
>  >   using Western, Orthodox or Julian algorithms;

[Brett:]
> Don't really see the use in this other than the rule figuring this out
> is odd enough that I never remember.  Who really cares when Easter is
> beyond certain religious groups?

People living in countries where some public holidays are relative
to Easter. (For instance, in the UK there's a "bank holiday"
on Good Friday, defined as the Friday immediately preceding
Easter Sunday.) People who like some of the (not particularly
religious) traditions that have become attached to Easter and
to related dates. (For instance, I know some very outspoken
atheists who hold a party every year on Shrove Tuesday and,
in accordance with tradition, serve pancakes.)

Anyway, the "certain religious groups" you mention include
something like 1/5 of the world's population, and probably
at least 5% of the Python userbase. There are probably as
many Python users who care about the date of Easter as there
are who care about Enigma-like encryption or reading IFF
chunked data or using QuickTime on a Macintosh.

-- 
g




From raymond.hettinger at verizon.net  Fri Mar 12 04:57:09 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Fri Mar 12 04:59:10 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <000401c40818$62fcf9c0$8c1cc797@oemcomputer>

In case you haven't been watching, we've been improving the heck out of
looping, list operations, and list comprehensions.
 
Here is one of the scoresheets:
 
python timeit.py "[i for i in xrange(1000)]"
py2.4   0.56 msec
py2.3   1.16 msec
py2.2   1.45 msec
 
 
ymmv,
 
 
Raymond Hettinger
 
 
 
From skip at pobox.com  Fri Mar 12 07:07:55 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 12 07:08:00 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Doc/api
	abstract.tex, 1.30, 1.31
In-Reply-To: <E1B1heg-0004ot-Pk@sc8-pr-cvs1.sourceforge.net>
References: <E1B1heg-0004ot-Pk@sc8-pr-cvs1.sourceforge.net>
Message-ID: <16465.43035.34794.508447@montanaro.dyndns.org>


    raymond> Modified Files:
    raymond>    abstract.tex 
    raymond> Log Message:
    raymond> Use a new macro, PySequence_Fast_ITEMS to factor out code
    raymond> common to three recent optimizations.  Aside from reducing code
    raymond> volume, it increases readability.

Isn't it actually _PySequence_Fast_ITEMS (private API because of the leading
underscore)?  Here's one diff chunk from listobject.c:

    ***************
    *** 692,701 ****

            /* populate the end of self with b's items */
    !   if (PyList_Check(b)) 
    !           src = ((PyListObject *)b)->ob_item;
    !   else {
    !           assert (PyTuple_Check(b));
    !           src = ((PyTupleObject *)b)->ob_item;
    !   }
            dest = self->ob_item + selflen;
            for (i = 0; i < blen; i++) {
    --- 687,691 ----

            /* populate the end of self with b's items */
    !   src = _PySequence_Fast_ITEMS(b);
            dest = self->ob_item + selflen;
            for (i = 0; i < blen; i++) {

Skip

From skip at pobox.com  Fri Mar 12 07:13:44 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 12 07:13:52 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403120922.10930.gmccaughan@synaptics-uk.com>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
	<405174D8.1000105@ocf.berkeley.edu>
	<200403120922.10930.gmccaughan@synaptics-uk.com>
Message-ID: <16465.43384.705845.928289@montanaro.dyndns.org>


    Gareth> People living in countries where some public holidays are
    Gareth> relative to Easter. (For instance, in the UK there's a "bank
    Gareth> holiday" on Good Friday, defined as the Friday immediately
    Gareth> preceding Easter Sunday.) 

Similarly, in the US, since Easter is always on a Sunday, many schools take
Good Friday off.

Skip

From mwh at python.net  Fri Mar 12 08:28:04 2004
From: mwh at python.net (Michael Hudson)
Date: Fri Mar 12 08:28:08 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <000401c40818$62fcf9c0$8c1cc797@oemcomputer> (Raymond
	Hettinger's message of "Fri, 12 Mar 2004 04:57:09 -0500")
References: <000401c40818$62fcf9c0$8c1cc797@oemcomputer>
Message-ID: <2mr7vyry8r.fsf@starship.python.net>

Please stop posting HTML to this list!

"Raymond Hettinger" <raymond.hettinger@verizon.net> writes:

> In case you haven't been watching, we've been improving the heck out of
> looping, list operations, and list comprehensions.
>  
> Here is one of the scoresheets:
>  
> python timeit.py "[i for i in xrange(1000)]"
> py2.4   0.56 msec
> py2.3   1.16 msec
> py2.2   1.45 msec
>  
>  
> ymmv,

Not that much, actually (except that I have a faster machine than you
:-).  Nice work (particularly as the listresize reworking made no
detectable difference on this machine).

Just a couple of thoughts:

1) listextend_internal calls PyObject_Size twice, for no obvious
   reason.

2) It's a little odd that listextend_internal consumes a reference to
   b.  Perhaps a comment?

Cheers,
mwh

-- 
  Our Constitution never promised us a good or efficient government,
  just a representative one. And that's what we got.
      -- http://www.advogato.org/person/mrorganic/diary.html?start=109

From aahz at pythoncraft.com  Fri Mar 12 08:43:13 2004
From: aahz at pythoncraft.com (Aahz)
Date: Fri Mar 12 08:43:17 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16465.43384.705845.928289@montanaro.dyndns.org>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
	<405174D8.1000105@ocf.berkeley.edu>
	<200403120922.10930.gmccaughan@synaptics-uk.com>
	<16465.43384.705845.928289@montanaro.dyndns.org>
Message-ID: <20040312134313.GE21520@panix.com>

On Fri, Mar 12, 2004, Skip Montanaro wrote:
> 
>     Gareth> People living in countries where some public holidays are
>     Gareth> relative to Easter. (For instance, in the UK there's a "bank
>     Gareth> holiday" on Good Friday, defined as the Friday immediately
>     Gareth> preceding Easter Sunday.) 
> 
> Similarly, in the US since Easter is always on a Sunday many schools take
> Good Friday off.

More importantly, for many schools the week following Easter Sunday is
spring break.  ;-)
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Do not taunt happy fun for loops. Do not change lists you are looping over."
--Remco Gerlich, comp.lang.python

From niemeyer at conectiva.com  Fri Mar 12 09:58:09 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Fri Mar 12 09:58:04 2004
Subject: [Python-Dev] Reading various date formats
In-Reply-To: <4051452D.2080903@erols.com>
References: <4051452D.2080903@erols.com>
Message-ID: <20040312145809.GA3369@burma.localdomain>

Hi Edward!

> I have placed on my webpage a slightly modified version of getdatemodule 
> by Jeffrey A. Macdonald. See 
> "http://members.tripod.com/~edcjones/getdatemodule.tar.gz".

Just out of curiosity, have you tried dateutil's parser?

-- 
Gustavo Niemeyer
http://niemeyer.net

From pje at telecommunity.com  Fri Mar 12 10:03:36 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 12 09:58:17 2004
Subject: [Python-Dev] PEP 318 trial balloon (wrappers)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3A6@USAHM010.amer.cor
	p.eds.com>
Message-ID: <5.1.0.14.0.20040312095954.033ad670@mail.telecommunity.com>

At 06:50 PM 3/11/04 -0500, Jewett, Jim J wrote:
>----------
>class Foo:
>     [transform] from:
>         def x():
>             pass
>
>[wrapper] from:
>     class x:
>         pass
>
>[decorator] from:
>     def x():
>         pass
>
>x() is now defined exactly as it was before,
>except that it has an extra indent to the right.
>
>---------
>
>I understand most of the objections well enough to
>summarize them; I'm singling these out because I'm
>not sure that I do.
>
>Phillip J. Eby:
>
> > It is ambiguous precisely *because* it introduces a new suite.
> > Suites in Python indicate a difference in execution context,
> > often along with introducing a new namespace.  This new syntax
> > does neither.
>
>Today, if we wait until the declaration is finished, we can write
>
>     for decorator in seq:
>         var = decorator(var)
>
>How does this construct change the context or namespace any less
>than a for loop?

In your proposal, the function definition is nested, but its execution 
context and namespace are unchanged.  The presence of a suite implies that 
either the control flow or the namespace (or both) are affected, but your 
proposed syntax affects neither.


From python at rcn.com  Fri Mar 12 10:49:03 2004
From: python at rcn.com (Raymond Hettinger)
Date: Fri Mar 12 10:51:04 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <2mr7vyry8r.fsf@starship.python.net>
Message-ID: <002001c40849$8c390b40$3e2acb97@oemcomputer>

[Michael Hudson]
> Just a couple of thoughts:
> 
> 1) listextend_internal calls PyObject_Size twice, for no obvious
>    reason.

Good eye.  Will remove the duplicate call.


> 2) It's a little odd that listextend_internal consumes a reference to
>    b.  Perhaps a comment?

It probably was a good idea before all the refactoring because it
simplified the calling code and because the b argument was always a
temporary sequence.  Now that there is only one caller, the
responsibility for the DECREF could be shifted back to the caller --
it's a matter of taste.


Raymond


From fumanchu at amor.org  Fri Mar 12 11:08:04 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 12 11:09:44 2004
Subject: [Python-Dev] dateutil
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561ED5@opus.amorhq.net>

Brett C. wrote:
> Responding to both Tim's and Gustavo's follow-ups to my email here.
>8 
> Personally I would 
> love for datetime objects to have a class method to be able 
> to take in 
> the appropriate ISO-formatted date/time string and return the 
> appropriate datetime object.  Otherwise I would rather keep the 
> interface clean of string parsing short of using strptime .

I use:

import datetime

def datefromiso(isoString, todayOnError=True):
    # Parse a 'YYYY-MM-DD' string into a datetime.date; optionally fall
    # back to today's date on bad input.
    try:
        args = tuple(map(int, isoString.split(u'-')))
        return datetime.date(*args)
    except (TypeError, ValueError), x:
        if todayOnError:
            return datetime.date.today()
        else:
            raise x

def timefromiso(isoString, zeroOnError=True):
    # Parse an 'HH:MM[:SS]' string into a datetime.time; optionally fall
    # back to midnight on bad input.
    try:
        args = tuple(map(int, isoString.split(u':')))
        return datetime.time(*args)
    except (TypeError, ValueError), x:
        if zeroOnError:
            return datetime.time(0)
        else:
            raise x
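
For example (with the two helpers above in scope):

    print datefromiso(u"2004-03-12")    # -> 2004-03-12
    print timefromiso(u"09:30")         # -> 09:30:00
    print datefromiso(u"not a date")    # bad input falls back to today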


>  > * Computing of Easter Sunday dates for any given year,
>  >   using Western, Orthodox or Julian algorithms;
> 
> Don't really see the use in this other than the rule figuring 
> this out 
> is odd enough that I never remember.  Who really cares when Easter is 
> beyond certain religious groups?

Lots of schools still structure Spring Break around Easter...since my
org works with youth to a large extent, that means we do about 1/8 of
our business for the year during Palm Sunday week and Easter week. Just
an example.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From tanzer at swing.co.at  Fri Mar 12 11:10:09 2004
From: tanzer at swing.co.at (Christian Tanzer)
Date: Fri Mar 12 11:11:05 2004
Subject: [Python-Dev] dateutil
In-Reply-To: Your message of "Fri, 12 Mar 2004 00:29:12 PST."
	<405174D8.1000105@ocf.berkeley.edu>
Message-ID: <E1B1pEf-0008G4-00@swing.co.at>


"Brett C." <bac@OCF.Berkeley.EDU> wrote:

>  > * Computing of Easter Sunday dates for any given year,
>  >   using Western, Orthodox or Julian algorithms;
>  >
>
> Don't really see the use in this other than the rule figuring this out
> is odd enough that I never remember.  Who really cares when Easter is
> beyond certain religious groups?

Anybody living in a part of the world with some public holidays based
on Easter. For instance, in Austria there are five of these.

BTW, I can't really see why you have such a hard time remembering:

    def easter(year):
        # Wrapped in a function here so the snippet runs as-is; computes
        # the (Gregorian) Easter Sunday for the given year.
        if 1583 <= year <= 1699 : m, n = 22, 2
        elif 1700 <= year <= 1799 : m, n = 23, 3
        elif 1800 <= year <= 1899 : m, n = 23, 4
        elif 1900 <= year <= 2099 : m, n = 24, 5
        elif 2100 <= year <= 2199 : m, n = 24, 6
        elif 2200 <= year <= 2299 : m, n = 25, 0
        else :
            raise NotImplementedError, \
                  "Only implemented for years between 1583 and 2299"
        a = year % 19
        b = year %  4
        c = year %  7
        d = (19*a + m) % 30
        e = (2*b + 4*c + 6*d + n) %  7
        day = 22 + d + e
        if day <= 31 :
            month = 3
        else :
            day = d + e - 9
            month = 4
            if day in (25, 26) and d == 28 and e == 6 and a > 10 :
                day -= 7
        return (year, month, day)
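
A quick sanity check of the easter() wrapper above:

    print easter(2004)    # -> (2004, 4, 11)
    print easter(2005)    # -> (2005, 3, 27)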

<42.0 wink>

Cheers,

-- 
Christian Tanzer                                    http://www.c-tanzer.at/


From fumanchu at amor.org  Fri Mar 12 11:11:50 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 12 11:13:23 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561ED6@opus.amorhq.net>

Raymond Hettinger wrote:
> In case you haven't been watching, we've been improving the
> heck out of looping, list operations, and list comprehensions.
> 
> Here is one of the scoresheets:
> 
> python timeit.py "[i for i in xrange(1000)]"
> py2.4   0.56 msec
> py2.3   1.16 msec
> py2.2   1.45 msec

Watching and loving every minute of it. Thanks for your pursuit of
these!


FuManChu

From niemeyer at conectiva.com  Fri Mar 12 13:02:46 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Fri Mar 12 13:02:40 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <405174D8.1000105@ocf.berkeley.edu>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
	<405174D8.1000105@ocf.berkeley.edu>
Message-ID: <20040312180246.GB3369@burma.localdomain>

> > Here is a quick list of features, from the website:
> >
> > * Computing of relative deltas (next month, next year,
> >   next monday, last week of month, and a lot more);
> 
> As Tim pointed out, this is a little sticky.  I personally appreciate 
> datetime's choice of not trying to force me into a specific 
> interpretation of what a "month" is.  I say stay naive.

I appreciate it too. As Tim noticed, the idea is to leave to higher level
tools the task of deciding what a month is. That's what relativedelta is,
a higher level tool. OTOH, I wouldn't include "adding a month" as the
main usage for it. I've seen more than once on the Python list someone
asking "hey, how long ago did date D happen, in human terms?". That's
the kind of task which is handled by dateutil.

> > * Computing of relative deltas between two given
> >   date and/or datetime objects;
> 
> Without the relative delta values based on the "questionable"
> date/time "stuff" this seems to boil down to datetime.timedelta .

Sorry, but I think you're underestimating relativedelta's usage. timedelta
is only as powerful as adding seconds to a given epoch. That's really nice
to have in datetime, but it's not the task that relativedelta tries to
accomplish.

> > * Computing of dates based on very flexible recurrence rules
> >   (every month, every week on Thursday and Friday, every
> >   Friday 13th, and a *LOT* more), using a superset of the
> >   iCalendar RFC specification. Parsing of RFC strings is
> >   supported as well.
> 
> This is very cool.  It's based on an accepted API which is a big plus 
> and the functionality could be very useful.

Thanks!

> > * Generic parsing of dates in almost any string format;
> 
> Seems like a convenience wrapper around strptime .  Personally I would 

No, it's not a wrapper around strptime. It's a smart date parsing
mechanism, the smartest I'm aware of (I obviously did some research
before starting to write it). It will not only parse most commonly used
date strings (no, it doesn't parse English statements), but will also
interpret timezone information correctly using Python's tzinfo
notation.

Some people claim that this is "dangerous", since there are
ambiguous dates, like "03-03-03". In my opinion, this is complete
nonsense, since dateutil's parsing routine has a well defined,
documented, and simple behavior. If you're parsing US dates, the default
is ok. If you're parsing Brazilian dates, pass it "dayfirst=1" and
you're done. What other ways would you parse it!? Ahh.. of course.
Perhaps you'd prefer to say "%a, %d %b %Y %H:%M:%S %z" than
"dayfirst=1", since it's a lot more obvious what you're parsing, isn't
it?  ;-)
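
To make that concrete, a minimal sketch of the intended usage (this
assumes the dateutil package is installed and that its parser module
exposes a parse() function accepting a dayfirst flag, as described
above):

    from dateutil import parser

    # RFC-2822-style strings are unambiguous and need no hints.
    print parser.parse("Fri, 12 Mar 2004 09:58:09 -0300")

    # All-numeric dates follow the US (month first) convention by default,
    # and the day-first convention when asked.
    print parser.parse("10-09-2003")               # October 9th, 2003
    print parser.parse("10-09-2003", dayfirst=1)   # September 10th, 2003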

> love for datetime objects to have a class method to be able to take in 
> the appropriate ISO-formatted date/time string and return the 
> appropriate datetime object.  Otherwise I would rather keep the 
> interface clean of string parsing short of using strptime .
> 
> But then again maybe I just don't want strptime to become obsolete.  =)

Sure.. if you prefer using an explicit format, strptime() is for you.
I'd rather use something which accepts all common formats correctly
without having to tell it what format it is. It's all about choice.

> > * Timezone (tzinfo) implementations for tzfile(5) format
> >   files (/etc/localtime, /usr/share/zoneinfo, etc), TZ
> >   environment string (in all known formats), iCalendar
> >   format files, given ranges (with help from relative deltas),
> >   local machine timezone, fixed offset timezone, and UTC
> >   timezone.
> >
> 
> This could be good.  Beyond knowing that timezones are a pain in the
> rear to deal with in terms of DST I don't know *how* good it would be.
> I know datetime stays away from timezones, but if it can be gleaned
> from the system cleanly it might be nice to have that option.  Breaks
> with the naive view that I am supporting here, but this stuff can be
> hard to deal with so I think it wouldn't hurt to have.

Everything dateutil provides is hard to deal with if you don't have
the necessary tools.

> > * Computing of Easter Sunday dates for any given year,
> >   using Western, Orthodox or Julian algorithms;
> 
> Don't really see the use in this other than the rule figuring this out 
> is odd enough that I never remember.  Who really cares when Easter is 
> beyond certain religious groups?

Other people already answered that issue.

> OK, so with the rrule stuff, I am +1 on adding those if we can come up 
> with a clean way to add them to datetime (I don't think we need a 
> separate module as Jim Jewett pointed out; we already have enough 
> date-related modules).  -1 for the timedelta stuff since I am fine with 
> not handling timedeltas for "a month from now", etc. unless we can get 
> that kind of definition from the iCalendar API and thus have a 
> consistent basis on an accepted API.  And then -0 on the rest since I 
> fall in the "minimize stdlib bloat" camp.

If it were a matter of including only rrule, I'd rather not include
anything at all. If I'm going to have to maintain a good part of dateutil
as an external extension anyway, I won't mind maintaining rrule as well.

-- 
Gustavo Niemeyer
http://niemeyer.net

From gerrit at nl.linux.org  Sat Mar 13 11:01:07 2004
From: gerrit at nl.linux.org (Gerrit)
Date: Sat Mar 13 11:01:19 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040312180246.GB3369@burma.localdomain>
References: <LNBBLJKPBEHFEDALKOLCAEHBJHAB.tim.one@comcast.net>
	<405174D8.1000105@ocf.berkeley.edu>
	<20040312180246.GB3369@burma.localdomain>
Message-ID: <20040313160107.GA17751@nl.linux.org>

Gustavo Niemeyer wrote:
> Some people claim that this is "dangerous", since there are
> ambiguous dates, like "03-03-03".

I don't think 03-03-03 is very ambiguous; maybe 03-04-05 would be a
better example ;-)

> In my opinion, this is completely
> non-sense, since dateutil's parsing routine has a well defined,
> documented, and simple behavior. If you're parsing US dates, the default
> is ok. If you're parsing brazilian dates, pass it "dayfirst=1" and
> you're done. What other ways would you parse it!?

Year first, perhaps, but I think it would be yyyy-mm-dd instead of
yy-mm-dd then. But perhaps it would be nice to enable a (suppressible,
of course) warning when a date is multi-parsable...?
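
Something along these lines, perhaps (a rough sketch only; parse_checked
is an invented name wrapping the dateutil parser described earlier):

    import warnings
    from dateutil import parser

    def parse_checked(s, dayfirst=False):
        # Parse with the requested convention, but warn if flipping the
        # day/month preference would have yielded a different date.
        result = parser.parse(s, dayfirst=dayfirst)
        flipped = parser.parse(s, dayfirst=not dayfirst)
        if result != flipped:
            warnings.warn("date string %r is ambiguous" % s)
        return result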

> Ahh.. of course.
> Perhaps you'd prefer to say "%a, %d %b %Y %H:%M:%S %z" than
> "dayfirst=1", since it's a lot more obvious what you're parsing, isn't
> it?  ;-)

It may be faster.

Gerrit.

-- 
Weather in Amsterdam Airport Schiphol, Netherlands 13/03 14:55 UTC:
	10.0°C Few clouds mostly cloudy wind 9.8 m/s SW (-2 m above NAP)
-- 
Asperger's Syndrome - a personal approach:
	http://people.nl.linux.org/~gerrit/english/

From skip at pobox.com  Sat Mar 13 16:59:18 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sat Mar 13 18:32:43 2004
Subject: [Python-Dev] test_tcl failing...
Message-ID: <16467.33846.319550.239336@montanaro.dyndns.org>

I'm getting a malloc() error in test_tcl when I run "make test":

    test_tarfile
    test_tcl
    *** malloc[3674]: error for object 0x4cca408: Pointer being reallocated was not allocated
    make: *** [test] Bus error (core dumped)

Platform is Mac OSX 10.2.8, "gcc version 3.3 20030304 (Apple Computer,
Inc. build 1493)", OPT=-Os.

If I run it independently I get one failure but no core dump:

    testCall (__main__.TclTest) ... ok
    testCallException (__main__.TclTest) ... ok
    testCallException2 (__main__.TclTest) ... ok
    testEval (__main__.TclTest) ... ok
    testEvalException (__main__.TclTest) ... ok
    testEvalException2 (__main__.TclTest) ... ok
    testEvalFile (__main__.TclTest) ... ok
    testEvalFileException (__main__.TclTest) ... ok
    testGetVar (__main__.TclTest) ... ok
    testGetVarArray (__main__.TclTest) ... ok
    testGetVarArrayException (__main__.TclTest) ... ok
    testGetVarException (__main__.TclTest) ... ok
    testLoadTk (__main__.TclTest) ... ok
    testLoadTkFailure (__main__.TclTest) ... FAIL
    testPackageRequire (__main__.TclTest) ... ok
    testPackageRequireException (__main__.TclTest) ... ok
    testSetVar (__main__.TclTest) ... ok
    testSetVarArray (__main__.TclTest) ... ok
    testUnsetVar (__main__.TclTest) ... ok
    testUnsetVarArray (__main__.TclTest) ... ok
    testUnsetVarException (__main__.TclTest) ... ok

    ======================================================================
    FAIL: testLoadTkFailure (__main__.TclTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "../Lib/test/test_tcl.py", line 153, in testLoadTkFailure
        self.assertRaises(TclError, tcl.loadtk)
    AssertionError: TclError not raised

    ----------------------------------------------------------------------
    Ran 21 tests in 20.394s

    FAILED (failures=1)
    Traceback (most recent call last):
      File "../Lib/test/test_tcl.py", line 162, in ?
        test_main()
      File "../Lib/test/test_tcl.py", line 159, in test_main
        test_support.run_unittest(TclTest)
      File "/Users/skip/src/python/head/dist/src/Lib/test/test_support.py", line 290
    , in run_unittest
        run_suite(suite, testclass)
      File "/Users/skip/src/python/head/dist/src/Lib/test/test_support.py", line 275
    , in run_suite
        raise TestFailed(err)
    test.test_support.TestFailed: Traceback (most recent call last):
      File "../Lib/test/test_tcl.py", line 153, in testLoadTkFailure
        self.assertRaises(TclError, tcl.loadtk)
    AssertionError: TclError not raised

Skip

From bac at OCF.Berkeley.EDU  Sat Mar 13 20:14:48 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Sat Mar 13 20:16:33 2004
Subject: [Python-Dev] test_tcl failing...
In-Reply-To: <16467.33846.319550.239336@montanaro.dyndns.org>
References: <16467.33846.319550.239336@montanaro.dyndns.org>
Message-ID: <4053B208.600@ocf.berkeley.edu>

Skip Montanaro wrote:
> I'm getting a malloc() error in test_tcl when I run "make test":
> 
>     test_tarfile
>     test_tcl
>     *** malloc[3674]: error for object 0x4cca408: Pointer being reallocated was not allocated
>     make: *** [test] Bus error (core dumped)
> 
> Platform is Mac OSX 10.2.8, "gcc version 3.3 20030304 (Apple Computer,
> Inc. build 1493)", OPT=-Os.
> 
> If I run it independently I get one failure but no core dump:
> 
>     testCall (__main__.TclTest) ... ok
>     testCallException (__main__.TclTest) ... ok
>     testCallException2 (__main__.TclTest) ... ok
>     testEval (__main__.TclTest) ... ok
>     testEvalException (__main__.TclTest) ... ok
>     testEvalException2 (__main__.TclTest) ... ok
>     testEvalFile (__main__.TclTest) ... ok
>     testEvalFileException (__main__.TclTest) ... ok
>     testGetVar (__main__.TclTest) ... ok
>     testGetVarArray (__main__.TclTest) ... ok
>     testGetVarArrayException (__main__.TclTest) ... ok
>     testGetVarException (__main__.TclTest) ... ok
>     testLoadTk (__main__.TclTest) ... ok
>     testLoadTkFailure (__main__.TclTest) ... FAIL
>     testPackageRequire (__main__.TclTest) ... ok
>     testPackageRequireException (__main__.TclTest) ... ok
>     testSetVar (__main__.TclTest) ... ok
>     testSetVarArray (__main__.TclTest) ... ok
>     testUnsetVar (__main__.TclTest) ... ok
>     testUnsetVarArray (__main__.TclTest) ... ok
>     testUnsetVarException (__main__.TclTest) ... ok
> 
>     ======================================================================
>     FAIL: testLoadTkFailure (__main__.TclTest)
>     ----------------------------------------------------------------------
>     Traceback (most recent call last):
>       File "../Lib/test/test_tcl.py", line 153, in testLoadTkFailure
>         self.assertRaises(TclError, tcl.loadtk)
>     AssertionError: TclError not raised
> 
>     ----------------------------------------------------------------------
>     Ran 21 tests in 20.394s
> 
>     FAILED (failures=1)
>     Traceback (most recent call last):
>       File "../Lib/test/test_tcl.py", line 162, in ?
>         test_main()
>       File "../Lib/test/test_tcl.py", line 159, in test_main
>         test_support.run_unittest(TclTest)
>       File "/Users/skip/src/python/head/dist/src/Lib/test/test_support.py", line 290, in run_unittest
>         run_suite(suite, testclass)
>       File "/Users/skip/src/python/head/dist/src/Lib/test/test_support.py", line 275, in run_suite
>         raise TestFailed(err)
>     test.test_support.TestFailed: Traceback (most recent call last):
>       File "../Lib/test/test_tcl.py", line 153, in testLoadTkFailure
>         self.assertRaises(TclError, tcl.loadtk)
>     AssertionError: TclError not raised
> 

I have been getting this error on OS X 10.3.2 ("gcc (GCC) 3.3 20030304 
(Apple Computer, Inc. build 1495)"), but with no core dump, regardless of 
whether I run test_tcl directly or by execing ``regrtest.py -unetwork`` 
(how I always run the test suite).  I do have Tcl installed on my system 
and I can import Tkinter, but I get the usual "SetFrontProcess failed,-606" 
error that you get on OS X unless you use the pythonw executable with Tk 
code.

-Brett

From python at rcn.com  Sat Mar 13 20:54:04 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar 13 20:56:08 2004
Subject: [Python-Dev] test_tcl failing...
In-Reply-To: <4053B208.600@ocf.berkeley.edu>
Message-ID: <000101c40967$3b1c2c00$8a01a044@oemcomputer>

It also has been failing on Windows since it was checked in on 2-18-2004
as part of patch #869468.




Raymond


From guido at python.org  Sat Mar 13 23:20:33 2004
From: guido at python.org (Guido van Rossum)
Date: Sat Mar 13 23:20:43 2004
Subject: [Python-Dev] 
	Re: [Python-checkins] python/dist/src/Include Python.h, 2.61, 2.62
In-Reply-To: Your message of "Sat, 13 Mar 2004 15:11:47 PST."
	<E1B2IIF-0006b6-3Y@sc8-pr-cvs1.sourceforge.net> 
References: <E1B2IIF-0006b6-3Y@sc8-pr-cvs1.sourceforge.net> 
Message-ID: <200403140420.i2E4KXC15780@guido.python.org>

> compile.h and eval.h weren't being included which kept a fair bit of the
> public API from being exposed by simply including Python.h (as recommended).

This was actually intentional.  But I agree it's questionable.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tim.one at comcast.net  Sat Mar 13 23:24:58 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 13 23:25:04 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040311030440.GA2742@burma.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEFPJIAB.tim.one@comcast.net>

[Gustavo Niemeyer]
> Subject: [Python-Dev] dateutil
>
> Yes, it's time for the classical question. ;-)
>
> What's your opinion about the inclusion of the dateutil[1]
> extension in the standard library?
>
> [1] https://moin.conectiva.com.br/DateUtil

The docs aren't clear enough to predict what the code may or may not do.  I
can't make time for an exhaustive review, so I'll just stop after pointing
out ambiguities in the relativedelta docs:

    And the other way is to use any of the following keyword arguments:

    year, month, day, hour, minute, seconds, microseconds
    Absolute information.

Why are only the last two plural?  What does absolute *mean*?  What type(s)
are these things?

    years, months, weeks, days, hours, minutes, seconds, microseconds
    Relative information, may be negative.

How can "seconds" and "microseconds" be both absolute and relative?  What
type(s)?  What ranges?  For example, does days=+1000 make sense?

    weekday
    One of the weekday instances (MO, TU, etc).  These instances may
    receive a parameter n, specifying the nth weekday, which could be
    positive or negative (like MO(+2) or MO(-3)).

Does MO(+42) make sense?  MO(-42)?  At this point it's not clear what MO(+2)
or MO(-3) may mean either.  What about MO(0)?  Or is there "a hole" at 0 in
this scheme?  Is MO(+2) different than MO(2)?

    Not specifying it is the same as specifying +1.

Not exactly clear what "it" means.  Assuming it means the argument n.

    You can also use an integer, where 0=MO.

Meaning that MO(0) is the same as MO(MO)?  Or that you can say weekday=0?
If the latter, what's the full set of allowed integers?

    Notice that, for example, if the calculated date is already Monday,
    using MO or MO(+1) (which is the same thing in this context), won't
    change the day.

So we know that if the calculated date is a Monday, then if

    weekday=MO
or
    weekday=MO(+1)

were specified then they won't change the day.  It hasn't explained what any
other value means.

    leapdays
    Will add given days to the date found, but only if the computed
    year is a leap year and the computed date is post 28 of february.

Couldn't follow this one at all.  Is this a Boolean argument?  An integer?
The explanation below makes it sound like a bool, but the "add given days"
above makes it sound like an int.  If integer, what range?  What use is
this?

    yearday, nlyearday
    Set the yearday or the non-leap year day (jump leap days). These are
    converted to day/month/leapdays information.

"jump leap days" doesn't mean anything to me, and neither does "yearday".
Are "yearday" and "nlyearday" mutually exclusive, or can you specifiy both?
What types, and ranges, make sense for them?  What do they mean?

    If you're curious about exactly how the relative delta will act on
    operations, here is a description of its behavior.

What are the supported operations?  The docs mention

    relativedelta(datetime1, datetime2)

and

    datetime1 = datetime2 + relativedelta(datetime1, datetime2)

The explanation below doesn't seem to make any sense for the first of those,
so it must be explaining the second.  Are these the *only* supported
operations?  For example, datetime.timedelta also supports multiplying
timedelta by an integer.
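
For comparison, the sort of extra arithmetic the stdlib type already
supports (plain datetime.timedelta, nothing dateutil-specific):

    from datetime import timedelta

    print timedelta(days=1) * 3                        # 3 days, 0:00:00
    print timedelta(hours=2) + timedelta(minutes=30)   # 2:30:00
    print -timedelta(days=1)                           # -1 day, 0:00:00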

    1. Calculate the absolute year, using the year argument, or the
       original datetime year, if the argument is not present.

Are there range restrictions on these?  If so, what happens if they go out
of bounds?

    2. Add the relative years argument to the absolute year.

I *assume* that the relative years added here is 0 if no years= was
specified.  Same questions about range restrictions.

    3. Do steps 1 and 2 for month/months.

Same questions, but more acutely, as it's obviously possible for even simple
arguments like months=-2 to lead to a calculated month outside range(1, 13).
What happens then?

    4. Calculate the absolute day, using the day argument, or the
       original datetime day, if the argument is not present.  Then,
       subtract from the day until it fits in the year and month found
       after their operations.

    5. Add the relative days argument to the absolute day.

What if the resulting day doesn't "fit in the year and month found" so far?

      Notice that the weeks argument is multiplied by 7 and added
      to days.

   6. If leapdays is present, the computed year is a leap year, and
      the computed month is after february, remove one day from the
      found date.

So leapdays=True and leapdays=False and leapdays=45 and leapdays=None all
mean the same thing?

   7. Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
      microsecond/microseconds.

All the same questions about types, ranges, and what happens when results
are out of range.  If I understood what the rest of it intended, I suspect
I'd also wonder what happens if leapdays=True, the computed date after #6 is
February 1, and hours=24*60 was passed.  *Presumably* 60 days get added to
the date then, moving the result well beyond February, but then the presence
of leapdays *doesn't* "remove one day"?

   8. If the weekday argument is present, calculate the nth occurrence of
      the given weekday.

Where +1 means "don't move if you're already there, but move ahead to the
closest succeeding if you're not"?  -1 means "move to closest preceding, and
regardless of whether you're already there"?  0 is an error, or means the
same as +1, or ...?

That was the end of the explanation, and "yearday" and "nlyearday" weren't
mentioned again.

None of this is meant to imply that the functionality isn't useful -- it's
that the functionality isn't specified clearly enough to know what it is.

Is it true that adding relativedelta(months=+1) 12 times isn't necessarily
the same as adding relativedelta(years=+1) once?  (I picture starting with
31 January, landing on Feb 28 or 29 after one addition of months=+1, and
then sticking on 28 or 29 thereafter.)


From niemeyer at conectiva.com  Sun Mar 14 14:03:32 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Sun Mar 14 21:34:05 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEFPJIAB.tim.one@comcast.net>
References: <20040311030440.GA2742@burma.localdomain>
	<LNBBLJKPBEHFEDALKOLCCEFPJIAB.tim.one@comcast.net>
Message-ID: <20040314190331.GA2626@burma.localdomain>

Tim, first, thank you very much for your careful review of the
relativedelta documentation. It looks like most of these issues
are the result of the documentation being written by someone who
already knew how things work, and was deeply involved in the
development. I'll try to answer your questions here, and later I'll
review the documentation again and try to explain these issues in
a better, non-ambiguous way.

> The docs aren't clear enough to predict what the code may or
> may not do.  I can't make time for an exhaustive review, so
> I'll just stop after pointing out ambiguities in the
> relativedelta docs:
> 
> And the other way is to use any of the following keyword arguments:
> 
>     year, month, day, hour, minute, seconds, microseconds
>     Absolute information.
> 
> Why are only the last two plural?

They are not plural. This is a bug in the documentation.

> What does absolute *mean*?

Absolute means *set* the date/time field to the given parameter. Relative
means *add* the given parameter to the date/time.

>>> d
datetime.datetime(2004, 4, 4, 0, 0)
>>> d+relativedelta(month=1)
datetime.datetime(2004, 1, 4, 0, 0)
>>> d+relativedelta(months=1)
datetime.datetime(2004, 5, 4, 0, 0)

> What type(s) are these things?

They're integers.

> years, months, weeks, days, hours, minutes, seconds, microseconds
> Relative information, may be negative.
> 
> How can "seconds" and "microseconds" be both absolute and
> relative?

They're not. It's a documentation bug.

> What type(s)?

They're integers.

> What ranges?  For example, does days=+1000 make sense?

Yes.. -1000 as well. "relative" information will be "added" to
the given date, so any integer makes sense. For example:

>>> d+relativedelta(days=1000)
datetime.datetime(2006, 12, 30, 0, 0)

Absolute information may receive any integer as well, but when it is
applied to a date, the value will be limited to the acceptable range
(here, clamped to the last day of the month). For example:

>>> d+relativedelta(day=1000)
datetime.datetime(2004, 4, 30, 0, 0)

>     weekday
>     One of the weekday instances (MO, TU, etc).  These instances may
>     receive a parameter n, specifying the nth weekday, which could be
>     positive or negative (like MO(+2) or MO(-3).

(awful explanation indeed)

> Does MO(+42) make sense?  MO(-42)?

Yes, both make sense.

>>> date(2004,1,1)+relativedelta(weekday=MO(+200))
datetime.date(2007, 10, 29)
>>> date(2004,1,1)+relativedelta(weekday=MO(-200))
datetime.date(2000, 3, 6)

> At this point it's not clear what MO(+2) or MO(-3) may mean
> either.

They mean "the monday after the next/current monday" and "two mondays
before the last/current monday" respectively.

> What about MO(0)?  Or is there "a hole" at 0 in this scheme?
> Is MO(+2) different than MO(2)?

MO(0) shouldn't be used as it makes no sense, but is the same
as MO(+1) which is the same as MO(1). The '+' in the
documentation is just to give the idea of 'next'.

>     Not specifying it is the same as specifying +1.
> 
> Not exactly clear what "it" means.  Assuming it means the argument n.
> 
>     You can also use an integer, where 0=MO.
> 
> Meaning that MO(0) is the same as MO(MO)?  Or that you can say weekday=0?
> If the latter, what's the full set of allowed integers?

Meaning weekday=0. This is for compatibility with the notation
already used in time tuples, in the "calendar" module,
and in the weekday() method of date/datetime objects.

The range is:

>>> calendar.MONDAY, calendar.SUNDAY
(0, 6)

Notice that you're unable to specify MO(+2) using this notation.

>     Notice that, for example, if the calculated date is already Monday,
>     using MO or MO(+1) (which is the same thing in this context), won't
>     change the day.
> 
> So we know that if the calculated date is a Monday, then if
> 
>     weekday=MO
> or
>     weekday=MO(+1)
> 
> were specified then they won't change the day.  It hasn't
> explained what any other value means.

Basically, it should be explained in the documentation that
"today" counts.

>     leapdays
>     Will add given days to the date found, but only if the computed
>     year is a leap year and the computed date is post 28 of february.
> 
> Couldn't follow this one at all.  Is this a Boolean argument?  An integer?
> The explanation below makes it sound like a bool, but the "add given days"
> above makes it sound like an int.  If integer, what range?  What use is
> this?

The expected type is an integer. This is mainly used to implement
nlyearday.

>     yearday, nlyearday
>     Set the yearday or the non-leap year day (jump leap days). These are
>     converted to day/month/leapdays information.
> 
> "jump leap days" doesn't mean anything to me, and neither does "yearday".

yearday means the number of days since the year started. For
example:

>>> time.localtime().tm_yday
74
>>> date(2004,1,1)+relativedelta(yearday=74)
datetime.date(2004, 3, 14)
>>> date.today()
datetime.date(2004, 3, 14)

"non-leap yearday" means the number of days since the year
started, ignoring the leap-day. For example:

>>> date(2004,1,1)+relativedelta(nlyearday=74)
datetime.date(2004, 3, 15)

> Are "yearday" and "nlyearday" mutually exclusive, or can you
> specifiy both?

Yes, they're mutually exclusive, since nlyearday and yearday
are both converted to day/month/leapdays:

>>> relativedelta(nlyearday=74)
relativedelta(month=3, day=15)
>>> relativedelta(yearday=74)
relativedelta(leapdays=-1, month=3, day=15)

> What types, and ranges, make sense for them?  What do they
> mean?

1-366 make sense for them.

>     If you're curious about exactly how the relative delta will act on
>     operations, here is a description of its behavior.
> 
> What are the supported operations?  The docs mention
> 
>     relativedelta(datetime1, datetime2)
> 
> and
> 
>     datetime1 = datetime2 + relativedelta(datetime1, datetime2)
> 
> The explanation below doesn't seem to make any sense for the first of those,
> so it must be explaining the second.

I don't understand your sentence above. The first of those is
just an instantiation of a relativedelta, while the second one is
an instantiation and an operation being made at the same time.

> Are these the *only* supported operations?  For example,
> datetime.timedelta also supports multiplying timedelta by an
> integer.

Yes, there are other operations supported as well, and should
be documented.

>     1. Calculate the absolute year, using the year argument, or the
>        original datetime year, if the argument is not present.
> 
> Are there range restrictions on these?  If so, what happens if they go out
> of bounds?

datetime will tell what'll happen, since it has specific range limitations:

>>> datetime.now()+relativedelta(year=10000)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/home/niemeyer/src/dateutil/dateutil/relativedelta.py", line 261, in __radd__
    day = min(calendar.monthrange(year, month)[1],
  File "/usr/lib/python2.3/calendar.py", line 101, in monthrange
    day1 = weekday(year, month, 1)
  File "/usr/lib/python2.3/calendar.py", line 94, in weekday
    return datetime.date(year, month, day).weekday()
ValueError: year is out of range

>     2. Add the relative years argument to the absolute year.
> 
> I *assume* that the relative years added here is 0 if no years= was
> specified.  Same questions about range restrictions.

Correct about relative years. Same answers about ranges.

>     3. Do steps 1 and 2 for month/months.
> 
> Same questions, but more acutely, as it's obviously possible for even simple
> arguments like months=-2 to lead to a calculated month outside range(1, 13).
> What happens then?

About month, it only makes sense in the 1-12 range.

About months, it will do the "correct thing":

>>> datetime(2004,1,1)-relativedelta(months=1)
datetime.datetime(2003, 12, 1, 0, 0)
>>> datetime(2004,1,1)+relativedelta(months=-1)
datetime.datetime(2003, 12, 1, 0, 0)

>     4. Calculate the absolute day, using the day argument, or the
>        original datetime day, if the argument is not present.  Then,
>        subtract from the day until it fits in the year and month found
>        after their operations.
> 
>     5. Add the relative days argument to the absolute day.
> 
> What if the resulting day doesn't "fit in the year and month found" so far?

In this step it will just add the number of days, as timedelta would do.

>       Notice that the weeks argument is multiplied by 7 and added
>       to days.
> 
>    6. If leapdays is present, the computed year is a leap year, and
>       the computed month is after february, remove one day from the
>       found date.
> 
> So leapdays=True and leapdays=False and leapdays=45 and leapdays=None all
> mean the same thing?

Oops. Bug in the documentation. leapdays is a "relative" integer.

>    7. Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
>       microsecond/microseconds.
> 
> All the same questions about types, ranges, and what happens when results
> are out of range.  If I understood what the rest of it intended, I suspect
> I'd also wonder what happens if leapdays=True, the computed date after #6 is
> February 1, and hours=24*60 was passed.  *Presumably* 60 days get added to
> the date then, moving the result well beyond February, but then the presence
> of leapdays *doesn't* "remove one day"?

Correct (assuming leapdays=-1).

>    8. If the weekday argument is present, calculate the nth occurrence of
>       the given weekday.
> 
> Where +1 means "don't move if you're already there, but move ahead to the
> closest succeeding if you're not"?  -1 means "move to closest preceding, and
> regardless of whether you're already there"?  0 is an error, or means the
> same as +1, or ...?

-1 and +1 behave symmetrically: go to the previous/next occurrence of the
given weekday, with the current day counting. 0 is not supposed to be used
in this context, but means the same as +1.

> That was the end of the explanation, and "yearday" and "nlyearday" weren't
> mentioned again.

Notice in the explanation of yearday and nlyearday that they're
converted to month/day/leapdays.

> None of this is meant to imply that the functionality isn't useful -- it's
> that the functionality isn't specified clearly enough to know what it is.

Completely agreed. As I said, this is clearly my fault, as I wrote
the documentation while developing the module, and thus was aware
of many details that a person who has never seen it, nor studied
other documents, wouldn't be aware of.

> Is it true that adding relativedelta(months=+1) 12 times isn't necessarily
> the same as adding relativedelta(years=+1) once?  (I picture starting with
> 31 January, landing on Feb 28 or 29 after one addition of months=+1, and
> then sticking on 28 or 29 thereafter.)

They land on the same date. While the documentation looks somewhat
confusing, the implementation is not. For example:

>>> date(2000,2,29)+relativedelta(months=+12)
datetime.date(2001, 2, 28)
>>> date(2000,2,29)+relativedelta(years=+1)
datetime.date(2001, 2, 28)
>>> date(2000,2,29)+relativedelta(years=-1)
datetime.date(1999, 2, 28)
>>> date(2000,2,29)+relativedelta(months=-12)
datetime.date(1999, 2, 28)

>>> rd = relativedelta(months=+1)
>>> d = date(2000,2,29)
>>> for i in range(12):
...   d += rd
...
>>> d
datetime.date(2001, 2, 28)

>>> rd = relativedelta(months=+1)
>>> d = date(2000,2,29)
>>> for i in range(12):
...   d -= rd
...
>>> d
datetime.date(1999, 2, 28)

Again, thank you very much!

-- 
Gustavo Niemeyer
http://niemeyer.net

From tdelaney at avaya.com  Sun Mar 14 18:07:11 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar 14 21:34:55 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE014600BF@au3010avexu1.global.avaya.com>

I did a bit of benchmarking of various versions on my FreeBSD box ... I may have got a bit carried away ;)

A few interesting things to note:

1. [i for i in xrange(1000)] in Python 2.4 is approaching the speed of the same construct under Python 2.3 with psyco - 1.46 msec/loop compared to 1.03 msec/loop. This is very impressive.

2. Python 2.4 gains no benefit from psyco for this construct - presumably because psyco does not yet recognise the new LIST_APPEND opcode (see the dis sketch after this list).

3. However, even including the above, psyco improves 2.4 more than it improves 2.3 - so once the above has been fixed, the improvement should be even greater.

4. 2.4 gives the best pystone results.

5. 2.4 gives the best parrotbench results (I forgot to send the parrotbench results here to work ...).

6. There were certain tests in parrotbench that were slightly slower under 2.4 than 2.3 - this should probably be investigated. However, b3.py (IIRC) had a significant performance improvement - 2.3 was ~17 seconds, 2.4 was ~12 seconds.
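
Regarding point 2, a quick way to check which opcodes a particular build
emits for the list comprehension (a small sketch using the stdlib dis
module; the exact output differs between 2.3 and 2.4):

    import dis
    # Disassemble the list comprehension as a standalone expression; on a
    # 2.4 build the output should include the new LIST_APPEND opcode.
    dis.dis(compile("[i for i in xrange(1000)]", "<example>", "eval"))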


Hardware/OS:

    Pentium II 266MHz (128K cache I think - might be 256K).

    FreeBSD sasami.mshome.net 5.2.1-RELEASE FreeBSD 5.2.1-RELEASE
    #0: Mon Feb 23 20:45:55 GMT 2004
    root@wv1u.btc.adaptec.com:/usr/obj/usr/src/sys/GENERIC  i386

Python versions:

    2.0.1 (#2, Dec  5 2003, 03:07:29)
    [GCC 3.3.3 [FreeBSD] 20031106]

    2.1.3 (#1, Dec  5 2003, 03:03:53)
    [GCC 3.3.3 [FreeBSD] 20031106]

    2.2.3 (#1, Dec  5 2003, 03:06:39)
    [GCC 3.3.3 [FreeBSD] 20031106]

    2.3.3 (#2, Mar 14 2004, 09:28:06)
    [GCC 3.3.3 [FreeBSD] 20031106]

    2.4.a0.20040311 (#2, Mar 13 2004, 19:38:05)
    [GCC 3.3.3 [FreeBSD] 20031106]

2.3 and 2.4 were built from source, the others were installed as binary packages.


# Basic list comprehension test

/usr/home/Tim> python2.0 /usr/local/lib/python2.4/timeit.py -n 1000 "[i for i in xrange(1000)]"
1000 loops, best of 3: 5.74 msec per loop

/usr/home/Tim> python2.1 /usr/local/lib/python2.4/timeit.py -n 1000 "[i for i in xrange(1000)]"
1000 loops, best of 3: 7.16 msec per loop

/usr/home/Tim> python2.2 /usr/local/lib/python2.4/timeit.py -n 1000 "[i for i in xrange(1000)]"
1000 loops, best of 3: 3.75 msec per loop

/usr/home/Tim> python2.3 /usr/local/lib/python2.4/timeit.py -n 1000 "[i for i in xrange(1000)]"
1000 loops, best of 3: 2.41 msec per loop

/usr/home/Tim> python2.4 /usr/local/lib/python2.4/timeit.py -n 1000 "[i for i in xrange(1000)]"
1000 loops, best of 3: 1.44 msec per loop


# List comprehension test where listcomp is in a function.

/usr/home/Tim> python2.0 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" "main()"
1000 loops, best of 3: 5.57 msec per loop

/usr/home/Tim> python2.1 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" "main()"
1000 loops, best of 3: 6.84 msec per loop

/usr/home/Tim> python2.2 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" "main()"
1000 loops, best of 3: 3.88 msec per loop

/usr/home/Tim> python2.3 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" "main()"
1000 loops, best of 3: 2.35 msec per loop

/usr/home/Tim> python2.4 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" "main()"
1000 loops, best of 3: 1.46 msec per loop


# Listcomp in function + psyco.bind.

/usr/home/Tim> python2.2 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" -s "import psyco;psyco.bind(main)" "main()"
1000 loops, best of 3: 1.05 msec per loop

/usr/home/Tim> python2.3 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" -s "import psyco;psyco.bind(main)" "main()"
1000 loops, best of 3: 1.03 msec per loop

/usr/home/Tim> python2.4 /usr/local/lib/python2.4/timeit.py -n 1000 -s "def main():[i for i in xrange(1000)]" -s "import psyco;psyco.bind(main)" "main()"
1000 loops, best of 3: 1.49 msec per loop


# Proof that psyco was working with 2.4 ...

/usr/home/Tim> python2.4 /usr/local/lib/python2.4/timeit.py -n 10 -s "a='1'" "for i in xrange(1000):a+='1'"
10 loops, best of 3: 14.4 msec per loop

/usr/home/Tim> python2.4 /usr/local/lib/python2.4/timeit.py -n 10 -s "import psyco;psyco.full()" "a='1'" "for i in xrange(1000):a+='1'"
10 loops, best of 3: 413 usec per loop


# Pystone

/usr/home/Tim> python2.1 -OO /usr/local/lib/python2.4/test/pystone.py
Pystone(1.1) time for 50000 passes = 17.6797
This machine benchmarks at 2828.1 pystones/second

/usr/home/Tim> python2.2 -OO /usr/local/lib/python2.4/test/pystone.py
Pystone(1.1) time for 50000 passes = 18.7422
This machine benchmarks at 2667.78 pystones/second

/usr/home/Tim> python2.3 -OO /usr/local/lib/python2.4/test/pystone.py
Pystone(1.1) time for 50000 passes = 13.2656
This machine benchmarks at 3769.14 pystones/second

/usr/home/Tim> python2.4 -OO /usr/local/lib/python2.4/test/pystone.py
Pystone(1.1) time for 50000 passes = 12.6875
This machine benchmarks at 3940.89 pystones/second


# Pystone - regular vs. psyco.

/usr/home/Tim> python2.1 -OO /home/Tim/psyco-1.1.1/test/pystone.py
Pystone(1.1)                   time     loops per second
regular Python for 20000 passes  7.16406        2791.71
Psyco for 10000 passes           0.867188        11531.5
Psyco for 10000 more passes      0.9375        10666.7
Total for 20000 passes           1.80469        11082.3
Separated compilation/execution timings for 20000 passes
Compilation (i.e. start-up)   -0.0703125        -14.2222
Machine code execution        1.875        10666.7

Relative execution frequencies (iterations per second)
iterations        Psyco        Python    Psyco is ... times faster
        1       -14.2412         2791.71           -0.01
       10       -144.144         2791.71           -0.05
      100       -1641.03         2791.71           -0.59
     1000       42666.7         2791.71           15.28
    10000       11531.5         2791.71           4.13
   100000       10747.3         2791.71           3.85
  1000000       10674.7         2791.71           3.82
 10000000       10667.5         2791.71           3.82
Cut-off point: -265.9 iterations

/usr/home/Tim> python2.2 -OO /home/Tim/psyco-1.1.1/test/pystone.py
Pystone(1.1)                   time     loops per second
regular Python for 20000 passes  7.51562        2661.12
Psyco for 10000 passes           0.78125        12800
Psyco for 10000 more passes      0.8125        12307.7
Total for 20000 passes           1.59375        12549
Separated compilation/execution timings for 20000 passes
Compilation (i.e. start-up)   -0.03125        -32
Machine code execution        1.625        12307.7

Relative execution frequencies (iterations per second)
iterations        Psyco        Python    Psyco is ... times faster
        1       -32.0834         2661.12           -0.01
       10       -328.542         2661.12           -0.12
      100       -4324.32         2661.12           -1.62
     1000       20000         2661.12           7.52
    10000       12800         2661.12           4.81
   100000       12355.2         2661.12           4.64
  1000000       12312.4         2661.12           4.63
 10000000       12308.2         2661.12           4.63
Cut-off point: -106.1 iterations

/usr/home/Tim> python2.3 -OO /home/Tim/psyco-1.1.1/test/pystone.py
Pystone(1.1)                   time     loops per second
regular Python for 20000 passes  5.33594        3748.17
Psyco for 10000 passes           0.679688        14712.6
Psyco for 10000 more passes      0.695312        14382
Total for 20000 passes           1.375        14545.5
Separated compilation/execution timings for 20000 passes
Compilation (i.e. start-up)   -0.015625        -64
Machine code execution        1.39062        14382

Relative execution frequencies (iterations per second)
iterations        Psyco        Python    Psyco is ... times faster
        1       -64.2861         3748.17           -0.02
       10       -669.806         3748.17           -0.18
      100       -11531.5         3748.17           -3.08
     1000       18550.7         3748.17           4.95
    10000       14712.6         3748.17           3.93
   100000       14414.4         3748.17           3.85
  1000000       14385.3         3748.17           3.84
 10000000       14382.3         3748.17           3.84
Cut-off point: -79.2 iterations

/usr/home/Tim> python2.4 -OO /home/Tim/psyco-1.1.1/test/pystone.py
Pystone(1.1)                   time     loops per second
regular Python for 20000 passes  5.08594        3932.41
Psyco for 10000 passes           0.609375        16410.3
Psyco for 10000 more passes      0.625        16000
Total for 20000 passes           1.23438        16202.5
Separated compilation/execution timings for 20000 passes
Compilation (i.e. start-up)   -0.015625        -64
Machine code execution        1.25        16000

Relative execution frequencies (iterations per second)
iterations        Psyco        Python    Psyco is ... times faster
        1       -64.257         3932.41           -0.02
       10       -666.667         3932.41           -0.17
      100       -10666.7         3932.41           -2.71
     1000       21333.3         3932.41           5.42
    10000       16410.3         3932.41           4.17
   100000       16040.1         3932.41           4.08
  1000000       16004         3932.41           4.07
 10000000       16000.4         3932.41           4.07
Cut-off point: -81.5 iterations

Tim Delaney

From niemeyer at conectiva.com  Sun Mar 14 19:13:46 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Sun Mar 14 21:36:22 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEFPJIAB.tim.one@comcast.net>
References: <20040311030440.GA2742@burma.localdomain>
	<LNBBLJKPBEHFEDALKOLCCEFPJIAB.tim.one@comcast.net>
Message-ID: <20040315001346.GA15662@burma.localdomain>

Tim, first, thank you very much for your careful review of the
relativedelta documentation. It looks like most of these issues
are the result of the documentation being written by someone who
already knew how things work, and was deeply involved in the
development. I'll try to answer your questions here, and later I'll
review the documentation again and try to explain these issues in
a better, non-ambiguous way.

> The docs aren't clear enough to predict what the code may or
> may not do.  I can't make time for an exhaustive review, so
> I'll just stop after pointing out ambiguities in the
> relativedelta docs:
> 
> And the other way is to use any of the following keyword arguments:
> 
>     year, month, day, hour, minute, seconds, microseconds
>     Absolute information.
> 
> Why are only the last two plural?

They are not plural. This is a bug in the documentation.

> What does absolute *mean*?

Absolute means *set* the date/time field to the given parameter. Relative
means *add* the given parameter to the date/time.

>>> d
datetime.datetime(2004, 4, 4, 0, 0)
>>> d+relativedelta(month=1)
datetime.datetime(2004, 1, 4, 0, 0)
>>> d+relativedelta(months=1)
datetime.datetime(2004, 5, 4, 0, 0)

> What type(s) are these things?

They're integers.

> years, months, weeks, days, hours, minutes, seconds, microseconds
> Relative information, may be negative.
> 
> How can "seconds" and "microseconds" be both absolute and
> relative?

They're not. It's a documentation bug.

> What type(s)?

They're integers.

> What ranges?  For example, does days=+1000 make sense?

Yes.. -1000 as well. "relative" information will be "added" to
the given date, so any integer makes sense. For example:

>>> d+relativedelta(days=1000)
datetime.datetime(2006, 12, 30, 0, 0)

Absolute information may receive any integer as well, but when it is
applied to a date, the value will be limited to the acceptable range
(here, clamped to the last day of the month). For example:

>>> d+relativedelta(day=1000)
datetime.datetime(2004, 4, 30, 0, 0)

>     weekday
>     One of the weekday instances (MO, TU, etc).  These instances may
>     receive a parameter n, specifying the nth weekday, which could be
>     positive or negative (like MO(+2) or MO(-3).

(awful explanation indeed)

> Does MO(+42) make sense?  MO(-42)?

Yes, both make sense.

>>> date(2004,1,1)+relativedelta(weekday=MO(+200))
datetime.date(2007, 10, 29)
>>> date(2004,1,1)+relativedelta(weekday=MO(-200))
datetime.date(2000, 3, 6)

> At this point it's not clear what MO(+2) or MO(-3) may mean
> either.

They mean "the monday after the next/current monday" and "two mondays
before the last/current monday" respectively.

> What about MO(0)?  Or is there "a hole" at 0 in this scheme?
> Is MO(+2) different than MO(2)?

MO(0) shouldn't be used as it makes no sense, but is the same
as MO(+1) which is the same as MO(1). The '+' in the
documentation is just to give the idea of 'next'.

>     Not specifying it is the same as specifying +1.
> 
> Not exactly clear what "it" means.  Assuming it means the argument n.
> 
>     You can also use an integer, where 0=MO.
> 
> Meaning that MO(0) is the same as MO(MO)?  Or that you can say weekday=0?
> If the latter, what's the full set of allowed integers?

Meaning weekday=0. This is for compatibility with the numbering
already used in time tuples, in the "calendar" module, and in the
weekday() method of date/datetime objects.

The range is:

>>> calendar.MONDAY, calendar.SUNDAY
(0, 6)

Notice that you're unable to specify MO(+2) using this notation.
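
For illustration, a quick interactive check of the integer form (this
assumes dateutil is installed and imported as shown; 2004-03-17 is a
Wednesday):

>>> from datetime import date
>>> from dateutil.relativedelta import relativedelta, MO
>>> date(2004, 3, 17) + relativedelta(weekday=0)
datetime.date(2004, 3, 22)
>>> date(2004, 3, 17) + relativedelta(weekday=MO)
datetime.date(2004, 3, 22)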

>     Notice that, for example, if the calculated date is already Monday,
>     using MO or MO(+1) (which is the same thing in this context), won't
>     change the day.
> 
> So we know that if the calculated date is a Monday, then if
> 
>     weekday=MO
> or
>     weekday=MO(+1)
> 
> were specified then they won't change the day.  It hasn't
> explained what any other value means.

Basically, it should be explained in the documentation that
"today" counts.

>     leapdays
>     Will add given days to the date found, but only if the computed
>     year is a leap year and the computed date is post 28 of february.
> 
> Couldn't follow this one at all.  Is this a Boolean argument?  An integer?
> The explanation below makes it sound like a bool, but the "add given days"
> above makes it sound like an int.  If integer, what range?  What use is
> this?

The expected type is an integer. This is mainly used to implement
nlyearday.

>     yearday, nlyearday
>     Set the yearday or the non-leap year day (jump leap days). These are
>     converted to day/month/leapdays information.
> 
> "jump leap days" doesn't mean anything to me, and neither does "yearday".

yearday is the day-of-year number (the same value a time tuple reports
as tm_yday). For example:

>>> time.localtime().tm_yday
74
>>> date(2004,1,1)+relativedelta(yearday=74)
datetime.date(2004, 3, 14)
>>> date.today()
datetime.date(2004, 3, 14)

"non-leap yearday" means the number of days since the year
started, ignoring the leap-day. For example:

>>> date(2004,1,1)+relativedelta(nlyearday=74)
datetime.date(2004, 3, 15)

> Are "yearday" and "nlyearday" mutually exclusive, or can you
> specifiy both?

Yes, they're mutually exclusive, since nlyearday and yearday
are both converted to day/month/leapdays:

>>> relativedelta(nlyearday=74)
relativedelta(month=3, day=15)
>>> relativedelta(yearday=74)
relativedelta(leapdays=-1, month=3, day=15)

> What types, and ranges, make sense for them?  What do they
> mean?

Integers from 1 to 366 make sense for them.

>     If you're curious about exactly how the relative delta will act on
>     operations, here is a description of its behavior.
> 
> What are the supported operations?  The docs mention
> 
>     relativedelta(datetime1, datetime2)
> 
> and
> 
>     datetime1 = datetime2 + relativedelta(datetime1, datetime2)
> 
> The explanation below doesn't seem to make any sense for the first of those,
> so it must be explaining the second.

I don't understand your sentence above. The first of those is
just the instantiation of a relativedelta, while the second one is
an instantiation and an addition performed at the same time.

> Are these the *only* supported operations?  For example,
> datetime.timedelta also supports multiplying timedelta by an
> integer.

Yes, there are other operations supported as well, and they should
be documented.

>     1. Calculate the absolute year, using the year argument, or the
>        original datetime year, if the argument is not present.
> 
> Are there range restrictions on these?  If so, what happens if they go out
> of bounds?

datetime will tell you what happens, since it has specific range limitations:

>>> datetime.now()+relativedelta(year=10000)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/home/niemeyer/src/dateutil/dateutil/relativedelta.py", line 261, in __radd__
    day = min(calendar.monthrange(year, month)[1],
  File "/usr/lib/python2.3/calendar.py", line 101, in monthrange
    day1 = weekday(year, month, 1)
  File "/usr/lib/python2.3/calendar.py", line 94, in weekday
    return datetime.date(year, month, day).weekday()
ValueError: year is out of range

>     2. Add the relative years argument to the absolute year.
> 
> I *assume* that the relative years added here is 0 if no years= was
> specified.  Same questions about range restrictions.

Correct about relative years. Same answers about ranges.

>     3. Do steps 1 and 2 for month/months.
> 
> Same questions, but more acutely, as it's obviously possible for even simple
> arguments like months=-2 to lead to a calculated month outside range(1, 13).
> What happens then?

About month, it only makes sense in the 1-12 range.

About months, it will do the "correct thing":

>>> datetime(2004,1,1)-relativedelta(months=1)
datetime.datetime(2003, 12, 1, 0, 0)
>>> datetime(2004,1,1)+relativedelta(months=-1)
datetime.datetime(2003, 12, 1, 0, 0)

>     4. Calculate the absolute day, using the day argument, or the
>        original datetime day, if the argument is not present.  Then,
>        subtract from the day until it fits in the year and month found
>        after their operations.
> 
>     5. Add the relative days argument to the absolute day.
> 
> What if the resulting day doesn't "fit in the year and month found" so far?

In this step it will just add the number of days, as timedelta would do.
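
In other words, once the absolute day has been set, relative days spill
over month boundaries exactly like a timedelta would (a quick check,
assuming dateutil is installed and imported as shown):

>>> from datetime import date
>>> from dateutil.relativedelta import relativedelta
>>> date(2004, 1, 31) + relativedelta(days=+1)
datetime.date(2004, 2, 1)
>>> date(2004, 1, 31) + relativedelta(days=+31)
datetime.date(2004, 3, 2)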

>       Notice that the weeks argument is multiplied by 7 and added
>       to days.
> 
>    6. If leapdays is present, the computed year is a leap year, and
>       the computed month is after february, remove one day from the
>       found date.
> 
> So leapdays=True and leapdays=False and leapdays=45 and leapdays=None all
> mean the same thing?

Oops. Bug in the documentation. leapdays is a "relative" integer.

>    7. Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
>       microsecond/microseconds.
> 
> All the same questions about types, ranges, and what happens when results
> are out of range.  If I understood what the rest of it intended, I suspect
> I'd also wonder what happens if leapdays=True, the computed date after #6 is
> February 1, and hours=24*60 was passed.  *Presumably* 60 days get added to
> the date then, moving the result well beyond February, but then the presence
> of leapdays *doesn't* "remove one day"?

Correct (assuming leapdays=-1).

>    8. If the weekday argument is present, calculate the nth occurrence of
>       the given weekday.
> 
> Where +1 means "don't move if you're already there, but move ahead to the
> closest succeeding if you're not"?  -1 means "move to closest preceding, and
> regardless of whether you're already there"?  0 is an error, or means the
> same as +1, or ...?

-1 and +1 behave symmetrically: go to the previous/next given weekday,
with the current day counting. 0 is not supposed to be used in this
context, but means the same as +1.
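
For example (assuming dateutil is installed and imported as shown;
2004-03-19 is a Friday and 2004-03-20 a Saturday):

>>> from datetime import date
>>> from dateutil.relativedelta import relativedelta, FR
>>> date(2004, 3, 19) + relativedelta(weekday=FR(-1))
datetime.date(2004, 3, 19)
>>> date(2004, 3, 20) + relativedelta(weekday=FR(-1))
datetime.date(2004, 3, 19)
>>> date(2004, 3, 20) + relativedelta(weekday=FR(+1))
datetime.date(2004, 3, 26)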

> That was the end of the explanation, and "yearday" and "nlyearday" weren't
> mentioned again.

Notice in the explanation of yearday and nlyearday that they're
converted to month/day/leapdays.

> None of this is meant to imply that the functionality isn't useful -- it's
> that the functionality isn't specified clearly enough to know what it is.

Completely agreed. As I said, this is clearly my fault, as I've
written the documentation while developing the module, and thus
was aware of many details that a person who has never seen
it, nor studied other documents, wouldn't be aware of.

> Is it true that adding relativedelta(months=+1) 12 times isn't necessarily
> the same as adding relativedelta(years=+1) once?  (I picture starting with
> 31 January, landing on Feb 28 or 29 after one addition of months=+1, and
> then sticking on 28 or 29 thereafter.)

They land on the same date. While the documentation looks somewhat
confusing, the implementation is not. For example:

>>> date(2000,2,29)+relativedelta(months=+12)
datetime.date(2001, 2, 28)
>>> date(2000,2,29)+relativedelta(years=+1)
datetime.date(2001, 2, 28)
>>> date(2000,2,29)+relativedelta(years=-1)
datetime.date(1999, 2, 28)
>>> date(2000,2,29)+relativedelta(months=-12)
datetime.date(1999, 2, 28)

>>> rd = relativedelta(months=+1)
>>> d = date(2000,2,29)
>>> for i in range(12):
...   d += rd
...
>>> d
datetime.date(2001, 2, 28)

>>> rd = relativedelta(months=+1)
>>> d = date(2000,2,29)
>>> for i in range(12):
...   d -= rd
...
>>> d
datetime.date(1999, 2, 28)

Again, thank you very much!

-- 
Gustavo Niemeyer
http://niemeyer.net

From jon at indelible.org  Sun Mar 14 21:47:22 2004
From: jon at indelible.org (Jon Parise)
Date: Sun Mar 14 21:47:29 2004
Subject: [Python-Dev] Python patches for DragonFly BSD
Message-ID: <20040315024722.GA60862@indelible.org>

I'm writing to spur some interest in my patchset [1] for building
Python natively under DragonFly BSD [2].  Aside from Martin asking if
I'd be willing to act as a maintainer for the DragonFly platform
(which I am), the patch has received no other attention.

Is there no interest in this platform at the moment?  I'm content with
maintaining the patch independently [3], but as it's small and
unobtrusive, I'd like to see it added to the main tree.

[1] http://tinyurl.com/2ndbp
[2] http://www.dragonflybsd.org/
[3] http://www.indelible.org/dragonfly/python/

-- 
Jon Parise (jon@indelible.org)  ::  "Scientia est Potentia"

From greg at cosc.canterbury.ac.nz  Sun Mar 14 22:17:50 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 14 22:18:07 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040314190331.GA2626@burma.localdomain>
Message-ID: <200403150317.i2F3Hohl030236@cosc353.cosc.canterbury.ac.nz>

Some thoughts about this relativedelta stuff:

Isn't the name "relativedelta" somewhat superfluouslyredundant?
The word "delta" already implies something relative.
(It's also ratherhardtoread.)

> >>> d
> datetime.datetime(2004, 4, 4, 0, 0)
> >>> d+relativedelta(month=1)
> datetime.datetime(2004, 1, 4, 0, 0)

So a relativedelta can affect things in a way that's not
relative at all? That sounds *very* confusing.

Wouldn't it be better if relativedelta confined itself to
relative things only, and provide some other way of
absolutely setting fields of a date?

> MO(0) shouldn't be used as it makes no sense, but is the same
> as MO(+1) which is the same as MO(1).

So there is a hole at 0. Something about that smells wrong.

> The expected type [of leapdays] is an integer. This is mainly used
> to implement nlyearday.

Would a value other than 0 or 1 ever make sense for this? I'm
having a hard time imagining how it could -- but maybe my imagination
isn't twisted enough...

> > Is it true that adding relativedelta(months=+1) 12 times isn't necessarily
> > the same as adding relativedelta(years=+1) once?
> 
> They land on the same date. While the documentation looks somewhat
> confusing, the implementation is not. For example:
> 
> >>> date(2000,2,29)+relativedelta(months=+12)
> datetime.date(2001, 2, 28)

I think the OP's question was what happens if you do

  for i in range(12):
    d += relativedelta(months=+1)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From python at rcn.com  Sun Mar 14 22:41:51 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sun Mar 14 22:43:58 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE014600BF@au3010avexu1.global.avaya.com>
Message-ID: <002701c40a3f$74358b20$09ac958d@oemcomputer>

> I did a bit of benchmarking of various versions on my FreeBSD box ...
I
> may have got a bit carried away ;)

Am still reading the details of your thorough posting.  Excellent
documentation, thank you.

If you're up for a couple of more runs, I would appreciate independent 
timing and testing of my dictionary freelist patch,
www.python.org/sf/916251 , which may or may not be an improvement.

Also, here are additional fun scorecards that will probably also all
show wins (too bad you can't measure memory consumption or code volume
which also improved):

    python timeit.py -r9 "[1].pop()"
    python timeit.py -r9 -s "a=[];b=a.append;c=a.pop" "b(1);c()"
    python timeit.py -r9 "[None]*500"
    python timeit.py -r9 -s "dc={}.__contains__" "dc(1)"
    python timeit.py -r9 -s "a=b=[None]*500" "a+b"
    python timeit.py -r9 -s "a=[None]*500" "a[:]"
    python timeit.py -r9 -s "a=(None,)*50" "list(a)"
    python timeit.py -r9 -s "import itertools"
                            "list(itertools.repeat(None,500))"
    python timeit.py -r9 -s "import itertools"
                            "list(itertools.chain(xrange(500)))"
    python timeit.py -r9 "xrange(500)"
    python timeit.py -r9 -s "a=[1]; ag=a.__getitem__", "ag[0]"


Raymond Hettinger


From tim.one at comcast.net  Sun Mar 14 23:12:36 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sun Mar 14 23:12:35 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040314190331.GA2626@burma.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJJJIAB.tim.one@comcast.net>

[Gustavo Niemeyer]
> Tim, first, thank you very much for your careful review of the
> relativedelta documentation. It looks like most of these issues
> are result of the documentation being written by someone which
> already knew how things work out, and was deeply involved in the
> development.

Probably, yes.

> I'll try to answer your doubts here, and later I'll review the
> documentation again trying to explain these issues in a better,
> non-ambiguous way.

There's no need to reply to me at all <wink>.  It does suggest that if you
want to fold it into the core, a PEP is really in order.  The usual way
things go is that you get no feedback at all until *someone* asks questions
in public.  That gets other people thinking about it too, and then the
floodgates open.  For example, I see Greg Ewing already took the bait, and
has his own set of design questions.  While I'm sure the bulk of the
questions I asked have clear and non-controversial answers, some of the
decisions are debatable.

[Greg Ewing]
> I think the OP's question was what happens if you do
>
>   for i in range(12):
>     d += relativedelta(months=+1)

Gustavo partially explained that via a different example:

> >>> rd = relativedelta(months=+1)
> >>> d = date(2000,2,29)
> >>> for i in range(12):
> ...   d += rd
> >>> d
> datetime.date(2001, 2, 28)

I assume that if d had been date(2000, 1, 31) instead, we would have wound
up with date(2001, 1, 29), so that adding relativedelta(months=+12) is the
same as adding relativedelta(years=+1), but neither is necessarily the same
as adding relativedelta(months=+1) 12 times.  That's defensible, it's just
not obvious either way.
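
For what it's worth, a quick interactive check of that assumption
(dateutil installed, imports as shown):

>>> from datetime import date
>>> from dateutil.relativedelta import relativedelta
>>> d = date(2000, 1, 31)
>>> for i in range(12):
...     d += relativedelta(months=+1)
...
>>> d
datetime.date(2001, 1, 29)
>>> date(2000, 1, 31) + relativedelta(years=+1)
datetime.date(2001, 1, 31)
>>> date(2000, 1, 31) + relativedelta(months=+12)
datetime.date(2001, 1, 31)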

> So a relativedelta can affect things in a way that's not
> relative at all? That sounds *very* confusing.
>
> Wouldn't it be better if relativedelta confined itself to
> relative things only, and provide some other way of
> absolutely setting fields of a date?

I'm sure this part was inherited from mxDateTime.  I find it confusing to
slam so much mixed functionality into a single type.  The Python datetime
types support .replace() methods already for replacing fields with absolute
values, although they complain if an out-of-range replacement is attempted;
e.g.,

>>> from datetime import *
>>> now = date.today()
>>> now
datetime.date(2004, 3, 14)
>>> now.replace(month=2, day=29)
datetime.date(2004, 2, 29)
>>> now.replace(month=2, day=30)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: day is out of range for month
>>>

This makes operations like "move to the end of the current month"
non-trivial (not *hard*, just non-trivial).  I'm not sure they're trivial in
Gustavo's scheme either, though.  Java supports distinct notions of "set",
"add" and "roll" to cater to different use cases, and a discussion of that
would be great to see in a PEP:

    http://java.sun.com/j2se/1.3/docs/api/java/util/Calendar.html

I'll confess in advance that I can't make sense of them either <wink>.


From tdelaney at avaya.com  Sun Mar 14 23:29:40 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar 14 23:29:47 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE01460258@au3010avexu1.global.avaya.com>

> From: Raymond Hettinger [mailto:python@rcn.com]
> 
> If you're up for a couple of more runs, I would appreciate 
> independent 
> timing and testing of my dictionary freelist patch,
> www.python.org/sf/916251 , which may or may not be an improvement.

I'll see what I can do ... I'm still fairly shaky on FreeBSD - this is the first time I've *really* set it up. Spent 2.5 days building KDE 3.2 from ports because I couldn't work out how to make portupgrade install a new port from packages (yes - I now know it's portupgrade -N or portinstall ;)

> Also, here are additional fun scorecards that will probably also all
> show wins (too bad you can't measure memory consumption or code volume
> which also improved):
> 
>     python timeit.py -r9 "[1].pop()"
>     python timeit.py -r9 -s "a=[];b=a.append;c=a.pop" "b(1);c()"
>     python timeit.py -r9 "[None]*500"
>     python timeit.py -r9 -s "dc={}.__contains__" "dc(1)"
>     python timeit.py -r9 -s "a=b=[None]*500" "a+b"
>     python timeit.py -r9 -s "a=[None]*500" "a[:]"
>     python timeit.py -r9 -s "a=(None,)*50" "list(a)"
>     python timeit.py -r9 -s "import itertools"
>                             "list(itertools.repeat(None,500))"
>     python timeit.py -r9 -s "import itertools"
>                             "list(itertools.chain(xrange(500)))"
>     python timeit.py -r9 "xrange(500)"
>     python timeit.py -r9 -s "a=[1]; ag=a.__getitem__" "ag(0)"

Fun! Of course, I'll only be comparing between 2.3 and 2.4 (+ psyco) for any additional stuff. 2.3 was such a huge win over earlier versions.

Tim Delaney

From greg at cosc.canterbury.ac.nz  Mon Mar 15 00:01:40 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 00:01:46 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEJJJIAB.tim.one@comcast.net>
Message-ID: <200403150501.i2F51eRC030381@cosc353.cosc.canterbury.ac.nz>

> >>> now.replace(month=2, day=30)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: day is out of range for month
> >>>
> 
> This makes operations like "move to the end of the current month"
> non-trivial (not *hard*, just non-trivial).

If I were designing something like this, I think I would approach it
quite differently, and provide a bunch of separate functions or
methods that transform dates in specific ways. e.g.

  end_of_month(d)
  forward_months(n, d)

then to "go to the end of the month 3 months from now" would
be

  end_of_month(forward_months(3, d))

or, if you prefer a method-chaining style,

  d.forward_months(3).end_of_month()
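
For concreteness, a minimal stdlib-only sketch of those two helpers
(hypothetical names taken from the message above; the day is clamped to
the length of the target month, the same choice relativedelta makes):

import calendar

def end_of_month(d):
    # Replace the day with the last day of d's month.
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

def forward_months(n, d):
    # Move n months forward (backward for negative n), clamping the day
    # to the length of the target month.
    month0 = d.month - 1 + n
    year = d.year + month0 // 12
    month = month0 % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return d.replace(year=year, month=month, day=day)

so that "the end of the month 3 months from now" is

>>> from datetime import date
>>> end_of_month(forward_months(3, date(2004, 3, 14)))
datetime.date(2004, 6, 30)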

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From perky at i18n.org  Mon Mar 15 00:39:20 2004
From: perky at i18n.org (Hye-Shik Chang)
Date: Mon Mar 15 00:39:26 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE014600BF@au3010avexu1.global.avaya.com>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE014600BF@au3010avexu1.global.avaya.com>
Message-ID: <20040315053920.GA63461@i18n.org>

On Mon, Mar 15, 2004 at 10:07:11AM +1100, Delaney, Timothy C (Timothy) wrote:
> I did a bit of benchmarking of various versions on my FreeBSD box ... I may have got a bit carried away ;)
> 
> A few interesting things to note:
[snip]
> 
> 2.3 and 2.4 were built from source, the others were installed as binary packages.
>

If you are interested in more performance, try to build the port
with these options:

 CFLAGS=-O3 CPUTYPE=p2 make install clean

FreeBSD ports use only -O1 by default. -O3 may boost the benchmark
result by 10%~15%.


Hye-Shik

From tdelaney at avaya.com  Mon Mar 15 00:45:12 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Mon Mar 15 00:45:19 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE014602B6@au3010avexu1.global.avaya.com>

> From: Hye-Shik Chang [mailto:perky@i18n.org]
> 
> If you are interested in more performance, try to build the port
> with this options:
> 
>  CFLAGS=-O3 CPUTYPE=p2 make install clean
> 
> FreeBSD ports use only -O1 by default. -O3 may boost the benchmark
> result by 10%~15%.

True, and it will be interesting to see if this makes any relative difference between 2.3 and 2.4. The important thing though is that the same compilation options were used for 2.3 and 2.4, so I'm comparing like with like.

BTW, other comments have indicated that -Os might give the best results (can fit more in cache).

Tim Delaney

From jacobs at theopalgroup.com  Mon Mar 15 07:54:22 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Mon Mar 15 08:04:41 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403150501.i2F51eRC030381@cosc353.cosc.canterbury.ac.nz>
References: <200403150501.i2F51eRC030381@cosc353.cosc.canterbury.ac.nz>
Message-ID: <4055A77E.7040106@theopalgroup.com>

Greg Ewing wrote:

>>>>>now.replace(month=2, day=30)
>If I were designing something like this, I think I would approach it
>quite differently, and provide a bunch of separate functions or
>methods that transform dates in specific ways. e.g.
>
>  end_of_month(d)
>  forward_months(n, d)
>
>then to "go to the end of the month 3 months from now" would
>be
>
>  end_of_month(forward_months(3, d))
>
>or, if you prefer a method-chaining style,
>
>  d.forward_months(3).end_of_month()

After using such a system in a large system with non-trivial date
manipulations, I think you'd change your mind about that approach -- it
can be excessively verbose.  Being able to change a full array of date
and time components in one call is _very_ useful.  However, in my
datetime classes, I also have a very complete set of calls like
startOfMonth, endOfMonth, startOfYear, endOfYear, lastWeekday,
nextWeekday, etc.  That way, code that needs the power benefits from the
compactness of the relative and absolute adjustment functions (with many
parameters), but you also have very readable, simple adjustments for
common cases and for novice programmers to get started.

Best regards,
-Kevin


From Paul.Moore at atosorigin.com  Mon Mar 15 08:49:13 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Mon Mar 15 08:49:16 2004
Subject: [Python-Dev] dateutil
Message-ID: <16E1010E4581B049ABC51D4975CEDB8803060E2A@UKDCX001.uk.int.atosorigin.com>

From: Greg Ewing
>> This makes operations like "move to the end of the current month"
>> non-trivial (not *hard*, just non-trivial).
>
> If I were designing something like this, I think I would approach it
> quite differently, and provide a bunch of separate functions or
> methods that transform dates in specific ways. e.g.
>
>  end_of_month(d)
>  forward_months(n, d)
>
> then to "go to the end of the month 3 months from now" would
> be
>
>  end_of_month(forward_months(3, d))
>
> or, if you prefer a method-chaining style,
>
>  d.forward_months(3).end_of_month()

In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
Arguably, this is fairly trivial (highly non-obvious, but trivial...)

It's a trade-off, power at the cost of comprehensibility. I've always
been a fan of flexible, powerful tools (and I'm sure the readability of
my code suffers for it :-)). So I prefer relativedelta.

Some use cases that I find hard with "bare" datetime (all are real
requirements for me):

1. For a date d, get the start and end of the month containing it.
   (many "report this month" jobs)
2. For a date d, get the start and end of the month before it.
   (equivalent "report last month" jobs). This is just (1) plus the
   need to subtract "roughly" a month from a date.
3. 12 noon on the first Thursday of next month. (Don't blame me, that's
   when we have a maintenance slot on one of our systems).
4. The last Friday of the month. (End-of-month reporting time).

The key functionality here is "add N months" and "go to weekday N
(forward or backward)". Both things that the core datetime avoids for
well-explained reasons.

I'm happy with dateutil as it stands, outside of the core. If it's to
go into the core, I agree that a PEP would be the right thing to do.
And probably not just one - separate PEPS for the different areas would
focus discussion better:

1. Expanded datetime "delta" functionality
2. Recurrence rules
3. Flexible date string parser
4. Special date calculations (Easter, any others?)

Paul.

From niemeyer at conectiva.com  Mon Mar 15 09:14:17 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 09:14:31 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB8803060E2A@UKDCX001.uk.int.atosorigin.com>
References: <16E1010E4581B049ABC51D4975CEDB8803060E2A@UKDCX001.uk.int.atosorigin.com>
Message-ID: <20040315141417.GA2705@burma.localdomain>

> >  d.forward_months(3).end_of_month()
> 
> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
> Arguably, this is fairly trivial (highly non-obvious, but trivial...)

Yes, it'd work, but "d + relativedelta(day=31)" would be enough.

> It's a trade-off, power at the cost of comprehensibility. I've always
> been a fan of flexible, powerful tools (and I'm sure the readability of
> my code suffers for it :-)). So I prefer relativedelta.

Thanks! I believe it'd also be easy to implement the interface mentioned
by Greg on top of relativedelta.

> Some use cases that I find hard with "bare" datetime (all are real
> requirements for me):
> 
> 1. For a date d, get the start and end of the month containing it.
>    (many "report this month" jobs)

(just as examples)

d + relativedelta(day=1)
d + relativedelta(day=31)

> 2. For a date d, get the start and end of the month before it.
>    (equivalent "report last month" jobs). This is just (1) plus the
>    need to subtract "roughly" a month from a date.

d + relativedelta(months=-1, day=1)
d + relativedelta(months=-1, day=31)

> 3. 12 noon on the first Thursday of next month. (Don't blame me, that's
>    when we have a maintenance slot on one of our systems).

d + relativedelta(months=+1, day=1, weekday=TH, hour=12)

> 4. The last Friday of the month. (End-of-month reporting time).

d + relativedelta(day=31, weekday=FR(-1))
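
For concreteness, here is how the first, second and last of those work
out for a fixed date (assuming dateutil is installed and imported as
shown):

>>> from datetime import date
>>> from dateutil.relativedelta import relativedelta, FR
>>> d = date(2004, 3, 15)
>>> d + relativedelta(day=1), d + relativedelta(day=31)
(datetime.date(2004, 3, 1), datetime.date(2004, 3, 31))
>>> d + relativedelta(months=-1, day=31)
datetime.date(2004, 2, 29)
>>> d + relativedelta(day=31, weekday=FR(-1))
datetime.date(2004, 3, 26)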

> The key functionality here is "add N months" and "go to weekday N
> (forward or backward)". Both things that the core datetime avoids for
> well-explained reasons.

Agreed.

> I'm happy with dateutil as it stands, outside of the core. If it's to
> go into the core, I agree that a PEP would be the right thing to do.
> And probably not just one - separate PEPS for the different areas would
> focus discussion better:
[...]

Understood. Thanks for your comments.

-- 
Gustavo Niemeyer
http://niemeyer.net

From niemeyer at conectiva.com  Mon Mar 15 09:27:59 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 09:27:51 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403150317.i2F3Hohl030236@cosc353.cosc.canterbury.ac.nz>
References: <20040314190331.GA2626@burma.localdomain>
	<200403150317.i2F3Hohl030236@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040315142759.GB2705@burma.localdomain>

Hello Greg!

> Isn't the name "relativedelta" somewhat superfluouslyredundant?
> The word "delta" already implies something relative.

IMO, no. The "relative" here means that the operation made is dependent
on what you apply it, and not a "fixed" delta as timedelta would do.
Applying the same delta on two different dates might return the same
result, for example.

> (It's also ratherhardtoread.)

Ok, I'll rename it to "relativetimedelta". ;-))

> > >>> d
> > datetime.datetime(2004, 4, 4, 0, 0)
> > >>> d+relativedelta(month=1)
> > datetime.datetime(2004, 1, 4, 0, 0)
> 
> So a relativedelta can affect things in a way that's not
> relative at all? That sounds *very* confusing.

It makes sense in this context. Please, have a look at the examples
in the documentation. Tim is right about this being based on
mxDateTime.

> Wouldn't it be better if relativedelta confined itself to
> relative things only, and provide some other way of
> absolutely setting fields of a date?

I don't think so, but I'm of course open to suggestions.

> > MO(0) shouldn't be used as it makes no sense, but is the same
> > as MO(+1) which is the same as MO(1).
> 
> So there is a hole at 0. Something about that smells wrong.

If you discover what, please tell me. :-)

> > The expected type [of leapdays] is an integer. This is mainly used
> > to implement nlyearday.
> 
> Would a value other than 0 or 1 ever make sense for this? I'm
> having a hard time imagining how it could -- but maybe my imagination
> isn't twisted enough...

The only used values now are 0 and -1, for nlyearday. This parameter is
indeed confusing, as it doesn't seem useful by itself. I'll probably
remove it from the documentation.

> > >>> date(2000,2,29)+relativedelta(months=+12)
> > datetime.date(2001, 2, 28)
> 
> I think the OP's question was what happens if you do
> 
>   for i in range(12):
>     d += relativedelta(months=+1)

I answered that just below the above example. It lands on the same
date.

Thanks for exposing your ideas!

-- 
Gustavo Niemeyer
http://niemeyer.net

From Paul.Moore at atosorigin.com  Mon Mar 15 09:30:21 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Mon Mar 15 09:30:21 2004
Subject: [Python-Dev] dateutil
Message-ID: <16E1010E4581B049ABC51D4975CEDB8802C09D89@UKDCX001.uk.int.atosorigin.com>

From: Gustavo Niemeyer [mailto:niemeyer@conectiva.com]
>> >  d.forward_months(3).end_of_month()
>> 
>> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
>> Arguably, this is fairly trivial (highly non-obvious, but trivial...)
>
> Yes, it'd work, but "d + relativedelta(day=31)" would be enough.

Hmm, I feel vaguely uncomfortable actually relying on the "round down out
of range values" behaviour. Presumably day=1000 would also work, and would
look even stranger to me :-)

Paul.

From niemeyer at conectiva.com  Mon Mar 15 09:56:32 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 09:56:27 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEJJJIAB.tim.one@comcast.net>
References: <20040314190331.GA2626@burma.localdomain>
	<LNBBLJKPBEHFEDALKOLCCEJJJIAB.tim.one@comcast.net>
Message-ID: <20040315145632.GC2705@burma.localdomain>

> > I'll try to answer your doubts here, and later I'll review the
> > documentation again trying to explain these issues in a better,
> > non-ambiguous way.
> 
> There's no need to reply to me at all <wink>.  It does suggest that if
> you want to fold it into the core, a PEP is really in order.  The
> usual way

Understood, and agreed.

> things go is that you get no feedback at all until *someone* asks
> questions in public.  That gets other people thinking about it too,
> and then the floodgates open.  For example, I see Greg Ewing already
> took the bait, and has his own set of design questions.  While I'm
> sure the bulk of the questions I asked have clear and
> non-controversial answers, some of the decisions are debatable.

Ok..

[...]
> I assume that if d had been date(2000, 1, 31) instead, we would have wound
> up with date(2001, 1, 29), so that adding relativedelta(months=+12) is the
> same as adding relativedelta(years=+1), but neither is necessarily the
> same as adding relativedelta(months=+1) 12 times.  That's defensible,
> it's just not obvious either way.

That's why it's "relative" after all, but I understand your concern.

> > So a relativedelta can affect things in a way that's not
> > relative at all? That sounds *very* confusing.

Yes, it's still relative:

>>> date(2000, 1, 30)+relativedelta(day=31)
datetime.date(2000, 1, 31)
>>> date(2000, 1, 20)+relativedelta(day=31)
datetime.date(2000, 1, 31)
>>> date(2000, 2, 20)+relativedelta(day=31)
datetime.date(2000, 2, 29)

> > Wouldn't it be better if relativedelta confined itself to
> > relative things only, and provide some other way of
> > absolutely setting fields of a date?
> 
> I'm sure this part was inherited from mxDateTime.  I find it confusing to

Yes, the idea is based on mxDateTime, but notice that the behavior is
not the same. They're two different implementations using different
concepts. For example:

>>> DateTime.DateTime(2000,2,1)+DateTime.RelativeDateTime(day=31)
<DateTime object for '2000-03-02 00:00:00.00' at 40588368>

>>> date(2000,2,1)+relativedelta(day=31)
datetime.date(2000, 2, 29)

> slam so much mixed functionality into a single type.  The Python

I don't think it's mixed, but it's my biased opinion of course.

> datetime types support .replace() methods already for replacing fields
> with absolute values, although they complain if an out-of-range
> replacement is attempted; e.g.,

Exactly.

[...]
> This makes operations like "move to the end of the current month"
> non-trivial (not *hard*, just non-trivial).  I'm not sure they're
> trivial in Gustavo's scheme either, though.  Java supports distinct
> notions of "set",

Yes, it's easy:

d + relativedelta(day=31)

We might even get some constants like:

end_of_month = relativedelta(day=31)

So that one might do "d+end_of_month".

> "add" and "roll" to cater to different use cases, and a discussion of that
> would be great to see in a PEP:
> 
>     http://java.sun.com/j2se/1.3/docs/api/java/util/Calendar.html
> 
> I'll confess in advance that I can't make sense of them either <wink>.

"""
Example: Consider a GregorianCalendar originally set to August 31, 1999.
Calling roll(Calendar.MONTH, 8) sets the calendar to April 30, 1999. Add
rule 1 sets the MONTH field to April. Using a GregorianCalendar, the
DAY_OF_MONTH cannot be 31 in the month April. Add rule 2 sets it to the
closest possible value, 30. Finally, the roll rule maintains the YEAR
field value of 1999.
"""

I'm still trying to understand why this is useful. By the comment below
this one, it looks like it's something for GUI interaction.

-- 
Gustavo Niemeyer
http://niemeyer.net

From niemeyer at conectiva.com  Mon Mar 15 10:34:54 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 10:34:45 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB8802C09D89@UKDCX001.uk.int.atosorigin.com>
References: <16E1010E4581B049ABC51D4975CEDB8802C09D89@UKDCX001.uk.int.atosorigin.com>
Message-ID: <20040315153454.GD2705@burma.localdomain>

> >> >  d.forward_months(3).end_of_month()
> >> 
> >> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
> >> Arguably, this is fairly trivial (highly non-obvious, but trivial...)
> >
> > Yes, it'd work, but "d + relativedelta(day=31)" would be enough.
> 
> Hmm, I feel vaguely uncomfortable actually relying on the "round down out
> of range values" behaviour. Presumably day=1000 would also work, and would
> look even stranger to me :-)

This was a design decision, and I confess I'm pretty satisfied
by the results of this decision. This simple decision created
consistent and expectable results for operations which are not
obvious. Also, without this feature, I see almost no reason for
using this class instead of timedelta.

-- 
Gustavo Niemeyer
http://niemeyer.net

From tanzer at swing.co.at  Mon Mar 15 10:38:15 2004
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon Mar 15 10:39:28 2004
Subject: [Python-Dev] dateutil
In-Reply-To: Your message of "Mon, 15 Mar 2004 12:34:54 -0300."
	<20040315153454.GD2705@burma.localdomain>
Message-ID: <E1B2uAR-0008IR-00@swing.co.at>


> > >> >  d.forward_months(3).end_of_month()
> > >>
> > >> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
> > >> Arguably, this is fairly trivial (highly non-obvious, but trivial...)
> > >
> > > Yes, it'd work, but "d + relativedelta(day=31)" would be enough.
> >
> > Hmm, I feel vaguely uncomfortable actually relying on the "round down out
> > of range values" behaviour. Presumably day=1000 would also work, and would
> > look even stranger to me :-)
>
> This was a design decision, and I confess I'm pretty satisfied
> by the results of this decision. This simple decision created
> consistent and expectable results for operations which are not
> obvious.

Why don't you use "d + relativedelta(day=-1)" instead?

Just as easy to use, better error detection, and a big precedent.

-- 
Christian Tanzer                                    http://www.c-tanzer.at/


From jacobs at theopalgroup.com  Mon Mar 15 10:42:26 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Mon Mar 15 10:46:04 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315153454.GD2705@burma.localdomain>
References: <16E1010E4581B049ABC51D4975CEDB8802C09D89@UKDCX001.uk.int.atosorigin.com>
	<20040315153454.GD2705@burma.localdomain>
Message-ID: <4055CEE2.9010305@theopalgroup.com>

Gustavo Niemeyer wrote:

>>>>> d.forward_months(3).end_of_month()
>>>>In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
>>>>Arguably, this is fairly trivial (highly non-obvious, but trivial...)
>>>Yes, it'd work, but "d + relativedelta(day=31)" would be enough.
>>Hmm, I feel vaguely uncomfortable actually relying on the "round down out
>>of range values" behaviour. Presumably day=1000 would also work, and would
>>look even stranger to me :-)
>
>This was a design decision, and I confess I'm pretty satisfied
>by the results of this decision. This simple decision created
>consistent and expectable results for operations which are not
>obvious. Also, without this feature, I see almost no reason for
>using this class instead of timedelta.

The alternative way of getting the end of the month (which I used in my
datetime class) was to perform two adjustments:

 >> end_of_month = date.adjust_absolute(day=1).adjust_relative(month=+1, day=-1)

This requires that the relative adjustments are applied in order from
most significant adjustment to least.  Now that I've seen Gustavo's
semantics, I think his may be more sensible (even though the particular
gymnastics were hidden behind a helper method, so that most developers
never had to know these subtleties).

-Kevin


From jeremy at alum.mit.edu  Mon Mar 15 10:47:34 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon Mar 15 10:48:24 2004
Subject: [Python-Dev] PyCon BOF schedule
Message-ID: <1079365654.3337.83.camel@localhost.localdomain>

http://www.python.org/cgi-bin/moinmoin/PyConBofs

Sign up for PyCon Birds of a Feather (BOF) sessions, which will be held
on March 24 and 25, Wednesday and Thursday evenings, from 8 to 11 pm.
A BOF is an informal session, organized by attendees, to discuss a topic
of interest.  They are held in the evening so that they don't conflict
with talks.

Jeremy Hylton



From Paul.Moore at atosorigin.com  Mon Mar 15 10:52:17 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Mon Mar 15 10:52:17 2004
Subject: [Python-Dev] dateutil
Message-ID: <16E1010E4581B049ABC51D4975CEDB8802C09D8B@UKDCX001.uk.int.atosorigin.com>

From: Gustavo Niemeyer [mailto:niemeyer@conectiva.com]
>> >> >  d.forward_months(3).end_of_month()
>> >> 
>> >> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
>> >> Arguably, this is fairly trivial (highly non-obvious, but trivial...)
>> >
>> > Yes, it'd work, but "d + relativedelta(day=31)" would be enough.
>> 
>> Hmm, I feel vaguely uncomfortable actually relying on the "round down out
>> of range values" behaviour. Presumably day=1000 would also work, and would
>> look even stranger to me :-)
>
> This was a design decision, and I confess I'm pretty satisfied
> by the results of this decision. This simple decision created
> consistent and expectable results for operations which are not
> obvious. Also, without this feature, I see almost no reason for
> using this class instead of timedelta.

How about a module-level constant dateutil.LAST_DAY = 31? Then we have

    d + relativedelta(day=LAST_DAY)

which looks pretty nice. Actually as someone else suggested, using -1
for the last day (and maybe -2, -3... for the next to last, etc) would
also make sense.

Paul

From ilya at bluefir.net  Mon Mar 15 11:08:35 2004
From: ilya at bluefir.net (Ilya Sandler)
Date: Mon Mar 15 11:09:09 2004
Subject: [Python-Dev] pdb enhancements (patch 896011, bugs 751124,875404)
Message-ID: <Pine.LNX.4.33.0403122137001.1614-100000@bagira.localdomain>

Hello,

Would it be possible for someone  to review pdb enhancement patch #896011?
(it has been sitting in the system for more than a month now)

In addition to providing functionality suggested earlier, it also 
addresses 2/3 of bug #751124...

I could also provide a patch for another pdb bug #875404 
(but would like first to know the outcome for  896011)

Many thanks,

Ilya

-----------------------------------------------------------
Guido van Rossum guido at python.org
Fri Feb 6 00:04:28 EST 2004

    * Previous message: [Python-Dev] pdb enhancements

> I was wondering whether I should submit a formal patch for the
> following pdb enhancements:
> 
> 1) right now if you do "pdb script"
> pdb will stop with a confusing
> > <string>(1)?()
> 
> and you need to execute "s" before you get to the actual source being
> debugged, this seems somewhat unexpected and inconvenient
> 
> A better way would be to go straight to the 1st line of user's code

If you can fix that, it would be great.  Just make sure that the fix
doesn't break anything else.

> 2) right now when the script exits, the debugger exits too, taking with
> it all the debugging settings (like breakpoints) this seems very
> inconvenient
> 
> Basically the cost of a too early exit is high (as you lose your
> breakpoints) while the cost of not exiting debugger at all is minimal
> (you just need to press Ctrl-D to exit when you need it)
> 
> So I would think that pdb should not exit on script's exit.
> (and, say, gdb does not exit when program exits...)
> 
> Do these suggestions make sense? Is implementing them likely to break
> anything else?

Ditto.  I don't know what it would break, but you will have to test
this thoroughly.

> Thanks,
> 
> Ilya
> 
> PS. I am new on this list and can only hope that the questions are
> appropriate for python-dev

PS.  Use the SF patch manager for submitting patches.  Read
http://python.org/dev/.

--Guido van Rossum (home page: http://www.python.org/~guido/)







From niemeyer at conectiva.com  Mon Mar 15 11:28:27 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 11:28:19 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <E1B2uAR-0008IR-00@swing.co.at>
References: <20040315153454.GD2705@burma.localdomain>
	<E1B2uAR-0008IR-00@swing.co.at>
Message-ID: <20040315162827.GE2705@burma.localdomain>

> Why don't you use "d + relativedelta(day=-1)" instead?
> 
> Just as easy to use, better error detection, and a big precedent.

Supporting this might be cool indeed. OTOH, it doesn't replace the
current behavior. How do you get (day=30)?

-- 
Gustavo Niemeyer
http://niemeyer.net

From tanzer at swing.co.at  Mon Mar 15 12:28:00 2004
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon Mar 15 12:29:07 2004
Subject: [Python-Dev] dateutil
In-Reply-To: Your message of "Mon, 15 Mar 2004 13:28:27 -0300."
	<20040315162827.GE2705@burma.localdomain>
Message-ID: <E1B2vse-00008B-00@swing.co.at>


> > Why don't you use "d + relativedelta(day=-1)" instead?
> >
> > Just as easy to use, better error detection, and a big precedent.
>
> Supporting this might be cool indeed. OTOH, it doesn't replace the
> current behavior. How do you get (day=30)?

I don't follow. The semantics would be similar to list indexing.

`day=30` asks for the 30th day of the month and raises a ValueError
if there isn't one. `day=-1` asks for the last day of the month (and
should never raise an error <wink>).

The allowed argument range for `day` would be between 1 and `n` with
`n` the number of days in the month for positive values and between -1
and `-n` for negative values.
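
A small stdlib sketch of that rule, just to pin the semantics down
(replace_day is a hypothetical helper, not part of dateutil or
datetime):

import calendar

def replace_day(d, day):
    # List-index-like semantics: 1..n counts from the start of the
    # month, -1..-n from the end; anything else raises ValueError.
    n = calendar.monthrange(d.year, d.month)[1]
    if 1 <= day <= n:
        return d.replace(day=day)
    if -n <= day <= -1:
        return d.replace(day=n + 1 + day)
    raise ValueError("day out of range for month")

>>> from datetime import date
>>> replace_day(date(2004, 2, 10), -1)
datetime.date(2004, 2, 29)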

-- 
Christian Tanzer                                    http://www.c-tanzer.at/


From niemeyer at conectiva.com  Mon Mar 15 12:56:13 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Mon Mar 15 12:56:04 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <E1B2vse-00008B-00@swing.co.at>
References: <20040315162827.GE2705@burma.localdomain>
	<E1B2vse-00008B-00@swing.co.at>
Message-ID: <20040315175613.GA22581@burma.localdomain>

> > Supporting this might be cool indeed. OTOH, it doesn't replace the
> > current behavior. How do you get (day=30)?
> 
> I don't follow. The semantics would be similar to list indexing.
> 
> `day=30` asks for the 30-est day of the month and raises an ValueError
> if there isn't any. `day=-1` asks for the last day of the month (and
> should never raise an error <wink>).

What advantage do you get from the raised error? The replace() method
of datetime already does something similar, and I'm not trying to
mimic it. I'm trying to provide an additional, useful mechanism.

> The allowed argument range for `day` would be between 1 and `n` with
> `n` the number of days in the month for positive values and between -1
> and `-n` for negative values.

The whole point of being relative is that it acts differently on
different dates. What happens when you do

date(2004, 1, 31) + relativedelta(months=1)

Will it raise an error as well? The current behavior is consistently
designed to get useful results, in addition to what datetime provides.

-- 
Gustavo Niemeyer
http://niemeyer.net

From tim.one at comcast.net  Mon Mar 15 13:43:37 2004
From: tim.one at comcast.net (Tim Peters)
Date: Mon Mar 15 13:43:42 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB8803060E2A@UKDCX001.uk.int.atosorigin.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENDJIAB.tim.one@comcast.net>

[Moore, Paul]
> ...
> The key functionality here is "add N months" and "go to weekday N
> (forward or backward)". Both things that the core datetime avoids for
> well-explained reasons.

"go to weekday N" isn't controversial, it's just that datetime doesn't have
it.  The first time I *used* datetime in anger was to write a program to
list PSF Board meeting dates (1pm US Eastern on the second Tuesday of each
month).  That's PSF.py in Python CVS's sandbox/datetime module.  It uses a
utility function defined in dateutil.py (also in the sandbox):

def weekday_of_month(weekday, dt, index):
    """Return the index'th day of kind weekday in date's month.

    All the days of kind weekday (MONDAY .. SUNDAY) are viewed as if a
    Python list, where index 0 is the first day of that kind in dt's
    month, and index -1 is the last day of that kind in dt's month.
    Everything follows from that.  The time and tzinfo members (if any)
    aren't changed.

    Example:  Sundays in November.  The day part of the date is
    irrelevant.  Note that a "too large" index simply spills over to
    the next month.

    >>> base = datetime.datetime(2002, 11, 25, 13, 22, 44)
    >>> for index in range(5):
    ...     print index, weekday_of_month(SUNDAY, base, index).ctime()
    0 Sun Nov  3 13:22:44 2002
    1 Sun Nov 10 13:22:44 2002
    2 Sun Nov 17 13:22:44 2002
    3 Sun Nov 24 13:22:44 2002
    4 Sun Dec  1 13:22:44 2002

    Start from the end of the month instead:
    >>> for index in range(-1, -6, -1):
    ...     print index, weekday_of_month(SUNDAY, base, index).ctime()
    -1 Sun Nov 24 13:22:44 2002
    -2 Sun Nov 17 13:22:44 2002
    -3 Sun Nov 10 13:22:44 2002
    -4 Sun Nov  3 13:22:44 2002
    -5 Sun Oct 27 13:22:44 2002
    """

Viewing the dates as a Python list proved easy to work with.  Some other
utility functions there:

def first_weekday_on_or_after(weekday, dt):
    """First day of kind MONDAY .. SUNDAY on or after date.

    The time and tzinfo members (if any) aren't changed.

    >>> base = datetime.date(2002, 12, 28)  # a Saturday
    >>> base.ctime()
    'Sat Dec 28 00:00:00 2002'
    >>> first_weekday_on_or_after(SATURDAY, base).ctime()
    'Sat Dec 28 00:00:00 2002'
    >>> first_weekday_on_or_after(SUNDAY, base).ctime()
    'Sun Dec 29 00:00:00 2002'
    >>> first_weekday_on_or_after(TUESDAY, base).ctime()
    'Tue Dec 31 00:00:00 2002'
    >>> first_weekday_on_or_after(FRIDAY, base).ctime()
    'Fri Jan  3 00:00:00 2003'
    """

def first_weekday_on_or_before(weekday, dt):
    """First day of kind MONDAY .. SUNDAY on or before date.

    The time and tzinfo members (if any) aren't changed.

    >>> base = datetime.date(2003, 1, 3)  # a Friday
    >>> base.ctime()
    'Fri Jan  3 00:00:00 2003'
    >>> first_weekday_on_or_before(FRIDAY, base).ctime()
    'Fri Jan  3 00:00:00 2003'
    >>> first_weekday_on_or_before(TUESDAY, base).ctime()
    'Tue Dec 31 00:00:00 2002'
    >>> first_weekday_on_or_before(SUNDAY, base).ctime()
    'Sun Dec 29 00:00:00 2002'
    >>> first_weekday_on_or_before(SATURDAY, base).ctime()
    'Sat Dec 28 00:00:00 2002'
    """

I think these are all easily spelled as particular patterns of arguments to
Gustavo's relativedelta, but not necessarily *clearly* so spelled.  Unsure.
Regardless, it's hard to mistake the intended meaning of a
first_weekday_on_or_after() call <wink>.
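
For the record, plausible relativedelta spellings of
weekday_of_month(SUNDAY, base, 0), weekday_of_month(SUNDAY, base, -1)
and first_weekday_on_or_after(SUNDAY, ...) (assuming dateutil is
installed; SU is its Sunday constant; the expected values are the ones
from the docstrings above):

>>> import datetime
>>> from dateutil.relativedelta import relativedelta, SU
>>> base = datetime.datetime(2002, 11, 25, 13, 22, 44)
>>> (base + relativedelta(day=1, weekday=SU(+1))).ctime()
'Sun Nov  3 13:22:44 2002'
>>> (base + relativedelta(day=31, weekday=SU(-1))).ctime()
'Sun Nov 24 13:22:44 2002'
>>> (datetime.date(2002, 12, 28) + relativedelta(weekday=SU(+1))).ctime()
'Sun Dec 29 00:00:00 2002'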


From jim.jewett at eds.com  Mon Mar 15 14:23:04 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Mon Mar 15 14:24:08 2004
Subject: [Python-Dev] funcdef grammar production
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3BC@USAHM010.amer.corp.eds.com>


PEP 318 is likely to change the BNF productions for funcdef,
so I was taking a look at what is there right now.

  funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite
  parameter_list ::= (defparameter ",")*
                        ("*" identifier [, "**" identifier] |
				 "**" identifier | defparameter [","])
  defparameter ::= parameter ["=" expression]
  sublist ::= parameter ("," parameter)* [","]
  parameter ::= identifier | "(" sublist ")"
  funcname ::= identifier

But the text points out that no purely positional (without a 
default value) arguments can occur after any keyword arguments.
Is there a reason not to include this in the BNF?

(Assuming the other productions are unchanged,) this seems to work:

  parameter_list ::=  starparameter |
      defparameter ("," defparameter)* ["," [starparameter]] |
      parameter ("," parameter)* ("," defparameter)* ["," [starparameter]]

  starparameter ::= "*" identifier [, "**" identifier] | 
                    "**" identifier

  defparameter ::= parameter "=" expression

starparameter was separated out because a trailing comma is permitted
after a positional or default parameter, but not after *args or **kwargs.

Is there some technical reason not to do this?  For instance, is the
doco autogenerated from the actual code?  Is it somehow cheaper to
check for an optional first parameter than for three possibilities?

-jJ

From mwh at python.net  Mon Mar 15 14:29:38 2004
From: mwh at python.net (Michael Hudson)
Date: Mon Mar 15 14:29:47 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3BC@USAHM010.amer.corp.eds.com>
	(Jim J. Jewett's message of "Mon, 15 Mar 2004 14:23:04 -0500")
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3BC@USAHM010.amer.corp.eds.com>
Message-ID: <2m3c89rjrx.fsf@starship.python.net>

"Jewett, Jim J" <jim.jewett@eds.com> writes:

> PEP 318 is likely to change the BNF productions for funcdef,
> so I was taking a look at what is there right now.

Um, you do realise that this BNF is purely documentational?  The real
one is Grammar/Grammar in the source distribution, and lots of things
that you might expect to happen there actually happen inside the
compiler.

Cheers,
mwh


-- 
  For their next act, they'll no doubt be buying a firewall 
  running under NT, which makes about as much sense as 
  building a prison out of meringue.                     -- -:Tanuki:-
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html

From guido at python.org  Mon Mar 15 15:10:32 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 15 15:49:07 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: Your message of "Mon, 15 Mar 2004 14:23:04 EST."
	<B8CDFB11BB44D411B8E600508BDF076C1E96D3BC@USAHM010.amer.corp.eds.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3BC@USAHM010.amer.corp.eds.com>
Message-ID: <200403152010.i2FKAWX02148@guido.python.org>

> PEP 318 is likely to change the BNF productions for funcdef,
> so I was taking a look at what is there right now.
> 
>   funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite
>   parameter_list ::= (defparameter ",")*
>                         ("*" identifier [, "**" identifier] |
> 				 "**" identifier | defparameter [","])
>   defparameter ::= parameter ["=" expression]
>   sublist ::= parameter ("," parameter)* [","]
>   parameter ::= identifier | "(" sublist ")"
>   funcname ::= identifier
> 
> But the text points out that no purely positional (without a 
> default value) arguments can occur after any keyword arguments.
> Is there a reason not to include this in the BNF?
> 
> (Assuming the other productions are unchanged,) this seems to work:
> 
>   parameter_list ::=  starparameter  |
>      defparameter ("," defparameter)* ["," [starparameter]] |
>      parameter ("," parameter)* ("," defparameter)* ["," [starparameter]]
> 
>   starparameter ::= "*" identifier [, "**" identifier] | 
>                     "**" identifier
> 
>   defparameter ::= parameter "=" expression

This can't work (at least not in the parser -- I don't care what's put
in the docs) because parameter and defparameter have the same set of
initial tokens.

> starparameter was separated out because a trailing comma is permitted
> after a positional or default parameter, but not after *args or **kwargs.
> 
> Is there some technical reason not to do this?

Not to do what?  Allow a comma after *args?

> For instance, is the doco autogenerated from the actual code?

You should know by now that there are various tools for that, although
again maybe you're asking the question in a different context.

> Is it somehow cheaper to check for an optional first parameter than
> for three possibilities?

Who cares?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Mon Mar 15 17:44:19 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 17:48:30 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315141417.GA2705@burma.localdomain>
Message-ID: <200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>

Gustavo Niemeyer <niemeyer@conectiva.com>:

> (just as examples)
> 
> d + relativedelta(day=1)
> d + relativedelta(day=31)

Sorry, but the more I see of this usage the worse it looks. Here
you're effectively using '+' as an assignment operator. To me, that's
a gross abuse of the Python language.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 15 17:59:09 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 18:02:09 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB8803060E2A@UKDCX001.uk.int.atosorigin.com>
Message-ID: <200403152259.i2FMx9EU031617@cosc353.cosc.canterbury.ac.nz>

> In dateutil, I think it's d + relativedelta(months=+1, day=1, days=-1).
> Arguably, this is fairly trivial (highly non-obvious, but trivial...)

Another thing that occurs to me. Presumably

  d + relativedelta(months=+1) + relativedelta(days=-1)

is not necessarily the same as

  d + relativedelta(days=-1) + relativedelta(months=+1)

so when you add two relativedeltas together, do you get
a relativedelta that does things in the right order?
In other words, can you do

  rd = relativedelta(days=-1) + relativedelta(months=+1)
  d += rd

and get the same result as

  d += relativedelta(days=-1)
  d += relativedelta(months=+1)

Also, there is presumably some order of precedence defined
when you specify multiple adjustments in one constructor.
From your examples I'm guessing

  relativedelta(months=+1, days=-1) == \
    relativedelta(months=+1) + relativedelta(days=-1)

There could be a trap lurking here for the unwary, since
obviously then

  relativedelta(days=-1, months=+1) != \
    relativedelta(days=-1) + relativedelta(months=+1)
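
A quick empirical check (just a sketch against the dateutil package,
assuming the clamp-to-last-day-of-month behaviour for month arithmetic):

  >>> from datetime import date
  >>> from dateutil.relativedelta import relativedelta
  >>> d = date(2000, 1, 31)
  >>> d + relativedelta(months=+1) + relativedelta(days=-1)
  datetime.date(2000, 2, 28)
  >>> d + relativedelta(days=-1) + relativedelta(months=+1)
  datetime.date(2000, 2, 29)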

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From pf_moore at yahoo.co.uk  Mon Mar 15 18:04:06 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Mon Mar 15 18:04:10 2004
Subject: [Python-Dev] Re: dateutil
References: <20040315141417.GA2705@burma.localdomain>
	<200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>
Message-ID: <7jxlemqh.fsf@yahoo.co.uk>

Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

> Gustavo Niemeyer <niemeyer@conectiva.com>:
>
>> (just as examples)
>> 
>> d + relativedelta(day=1)
>> d + relativedelta(day=31)
>
> Sorry, but the more I see of this usage the worse it looks. Here
> you're effectively using '+' as an assignment operator. To me, that's
> a gross abuse of the Python language.

You're right, that's a particularly nasty example. More "normal"
usages feel far better, because they do feel like additions to me. You
could improve these cases by renaming relativedelta as something like
duration_until(), but really that's only hiding the issue.

Nevertheless, I *do* like the functionality. And the conciseness.

In some ways, I'd rather spell these examples as

    d.replace(day=1)
    d.replace(day=-1)

The former works, but the latter doesn't. One problem is that negative
arguments only really make any sense for the day argument.

Hmm, thinking about it, how will relativedelta objects get used? Most
of the time, I'd be adding relativedelta "constants". So maybe using a
simple function would be better:

    dateutil.adjust(d, day=1)
    dateutil.adjust(d, day=-1)
    dateutil.adjust(d, day=1, months=+1, days=-1)

Does this look any better?
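
Not tested, but presumably such an adjust() could be little more than a
thin wrapper over the existing machinery (the name is only illustrative):

    from dateutil.relativedelta import relativedelta

    def adjust(d, **kwargs):
        # singular keywords (day=1) set absolute values, plural ones
        # (months=+1, days=-1) stay relative, same rules as relativedelta
        return d + relativedelta(**kwargs)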

Paul
-- 
This signature intentionally left blank


From greg at cosc.canterbury.ac.nz  Mon Mar 15 18:17:47 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 18:20:26 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <4055A77E.7040106@theopalgroup.com>
Message-ID: <200403152317.i2FNHlpj031645@cosc353.cosc.canterbury.ac.nz>

Kevin Jacobs <jacobs@theopalgroup.com>:

> After using such a system in a large system with non-trivial date
> manipulations, I think you'd change your mind about that approach --
> it can be excessively verbose.

Maybe, but I'm skeptical of that. The less trivial the operations
are, the more I'm likely to want to see spelled out clearly what
is being done and what order it's being done in. 

I'm usually quite willing to accept some verbosity if it contributes
clarity.  Whenever I try to come up with some clever coding scheme to
reduce verboseness, I usually end up regretting it, because clarity
suffers and when I come back to the code later I have a hard time
figuring out what's going on. Usually I end up rewriting it verbosely
all over again so that I understand it.

Someday I may learn my lesson and just leave it verbose in the
first place...

> However, in my datetime classes, I also have a very complete set of
> calls like: startOfMonth, endOfMonth, startOfYear, endOfYear,
> lastWeekday, nextWeekday, etc...

That's good. Having two completely different ways of expressing the
same thing seems like Too Many Ways To Do It, though, especially if
this stuff is to be included in the standard library.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 15 18:54:52 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 19:00:10 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: <2m3c89rjrx.fsf@starship.python.net>
Message-ID: <200403152354.i2FNsqmF031684@cosc353.cosc.canterbury.ac.nz>

Michael Hudson <mwh@python.net>:

> Um, you do realise that this BNF is purely documentational?  The real
> one is Grammar/Grammar in the source distribution, and lots of things
> that you might expect to happen there actually happen inside the
> compiler.

And even if something *could* be done in the parser, it
might be desirable to leave it to the compiler if it can
produce a more helpful error message.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From aahz at pythoncraft.com  Mon Mar 15 19:28:05 2004
From: aahz at pythoncraft.com (Aahz)
Date: Mon Mar 15 19:29:28 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>
References: <20040315141417.GA2705@burma.localdomain>
	<200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040316002804.GA12034@panix.com>

On Tue, Mar 16, 2004, Greg Ewing wrote:
> Gustavo Niemeyer <niemeyer@conectiva.com>:
>> 
>> (just as examples)
>> 
>> d + relativedelta(day=1)
>> d + relativedelta(day=31)
> 
> Sorry, but the more I see of this usage the worse it looks. Here
> you're effectively using '+' as an assignment operator. To me, that's
> a gross abuse of the Python language.

Part of the problem is that relativedelta does need to be a single
complex object.  Consider

today() + relativedelta(month=1, dayofweek=FRIDAY)

contrasted with

today() + relativedelta(weeks=4, dayofweek=FRIDAY)

Humans use complex relative date constructs, and many of them depend on
whatever the base time is; in order to do the adjustment correctly, you
need to provide a single change construct.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From greg at cosc.canterbury.ac.nz  Mon Mar 15 19:32:06 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 19:37:07 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315142759.GB2705@burma.localdomain>
Message-ID: <200403160032.i2G0W6bH031737@cosc353.cosc.canterbury.ac.nz>

Gustavo Niemeyer <niemeyer@conectiva.com>:

> IMO, no. The "relative" here means that the operation made is dependent
> on what you apply it to, and not a "fixed" delta as timedelta would do.

Hmmm. I see what you're getting at, but that interpretation goes
beyond what the word "relative" suggests to me. Maybe it makes sense
to you, but I think it's going to look confusing to anyone who doesn't
share your brain state.

Moreover, the terms "relative" and "delta" and the use of the "+"
operator all suggest that these things form some kind of algebra,
which they clearly don't.

> > So a relativedelta can affect things in a way that's not
> > relative at all? That sounds *very* confusing.
> 
> It makes sense in this context. Please, have a look at the examples
> in the documentation.

This seems to be a matter of opinion. I've looked at the examples, and
haven't seen anything to make me change my mind. I still think it's
nonsensical to have something called a "delta" that doesn't behave
algebraically when you add it to something.

> > So there is a hole at 0. Something about that smells wrong.
> 
> If you discover what, please tell me. :-)

I think what it means is that you haven't got a single operation with
an integer parameter. Rather, you've got two different operations,
each of which has a natural number parameter, and you're using the
sign of the parameter to encode which operation you want.

Also, I don't understand why the "weeks" parameter isn't used to
adjust the number of weeks here, instead of supplying it in a rather
funky way as a kind of parameter to a parameter. In other words,
instead of

  relativedelta(day = MO(+3))

why not

  relativedelta(day = MO, weeks = +2)

which would make more sense to me.

> > I think the OP's question was what happens if you do
> > 
> >   for i in range(12):
> >     d += relativedelta(months=+1)
> 
> I answered that just below the above example. It lands on the same
> date.

In all cases? You mean that

  d = datetime(2000, 1, 31)
  for i in range(12):
    d += relativedelta(months=+1)

will give the same result as

  d = datetime(2000, 1, 31)
  d += relativedelta(months=+12)

and/or

  d = datetime(2000, 1, 31)
  d += relativedelta(years=+1)

?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 15 19:39:29 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 19:55:49 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315145632.GC2705@burma.localdomain>
Message-ID: <200403160039.i2G0dTZl031747@cosc353.cosc.canterbury.ac.nz>

Gustavo Niemeyer <niemeyer@conectiva.com>:

> We might even get some constants like:
> 
> end_of_month = relativedelta(day=31)
> 
> So that one might do "d+end_of_month".

That reads like nonsense. When I find the end of a month,
I am not "adding the end of the month" to the date.

I'm becoming more and more convinced that this whole
thing is an abuse of the '+' operator.

By the way, can you subtract relativedeltas as well as
add them? If '+' makes sense, '-' should make sense, too.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 15 19:50:07 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 19:55:56 2004
Subject: [Python-Dev] Re: dateutil
In-Reply-To: <7jxlemqh.fsf@yahoo.co.uk>
Message-ID: <200403160050.i2G0o7su031771@cosc353.cosc.canterbury.ac.nz>

Paul Moore <pf_moore@yahoo.co.uk>:

>     dateutil.adjust(d, day=1)
>     dateutil.adjust(d, day=-1)
>     dateutil.adjust(d, day=1, months=+1, days=-1)
> 
> Does this look any better?

Yes, I think it does. At least it avoids giving any spurious
impression that there might be an algebra lurking around
somewhere.

It might be better to have separate functions for absolute
and relative adjustments, e.g.

  dateutil.set(d, day=1)
  dateutil.add(d, months=+1, days=-1)

since the difference between the singular and plural keywords
is rather subtle and easy to miss.
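
Rough sketches only (assuming they would just delegate to replace() and
relativedelta; the names mirror the proposal above):

    from dateutil.relativedelta import relativedelta

    def set(d, **absolute):             # e.g. set(d, day=1)
        return d.replace(**absolute)

    def add(d, **relative):             # e.g. add(d, months=+1, days=-1)
        return d + relativedelta(**relative)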

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 15 20:05:28 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 15 20:06:32 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040316002804.GA12034@panix.com>
Message-ID: <200403160105.i2G15Svm031800@cosc353.cosc.canterbury.ac.nz>

Aahz <aahz@pythoncraft.com>:

> Part of the problem is that relativedelta does need to be a single
> complex object.  Consider
> 
> today() + relativedelta(month=1, dayofweek=FRIDAY)
> 
> contrasted with
> 
> today() + relativedelta(weeks=4, dayofweek=FRIDAY)

I'm not sure what point you're trying to make here. Even
if it's true that a single composite operation is required,
I don't see any reason it has to be written using the '+'
operator.

Moreover, your first example tends to suggest that a single
composite operation is *not* always what is needed. Written
that way, it's ambiguous whether it means to go to a Friday
and then forward one month, or to go forward one month and
then to a Friday, which could give a different result.

It seems to me there are really two consecutive operations
there, and their order matters. Given that, it would seem
more sensible to write it as something like

  d2 = friday_of(next_month(d1))

or

  d2 = next_month(friday_of(d1))

depending on which one you want.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From FBatista at uniFON.com.ar  Mon Mar 15 21:01:12 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Mon Mar 15 21:02:57 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <A128D751272CD411BC9200508BC2194D03383741@escpl.tcp.com.ar>

People:

I'll post a reviewed version of the PEP.

The only difference from the previous one will be the treatment of float in
both explicit and implicit construction:

------------

In Explicit construction:

You can not call Decimal with a float. Instead you must use a method:
Decimal.fromFloat(). The syntax:

    Decimal.fromFloat(floatNumber, positions)
    
where floatNumber is the float to convert and positions is the number of
positions after the decimal point at which a round-half-up rounding is
applied. In this way you can do, for example:

    Decimal.fromFloat(1.1, 2): the same as Decimal('1.1').
    Decimal.fromFloat(1.1, 16): the same as Decimal('1.1000000000000001').
    Decimal.fromFloat(1.1): the same as
        Decimal('110000000000000008881784197001252...e-51').


In Implicit construction:

Raise TypeError. You can not mix Decimal and float.

------------
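
Just to make the intended behaviour concrete, a rough sketch in terms of a
Decimal class that already supports exact float conversion and quantize()
(method and helper names here are illustrative only, not the proposed API):

    from decimal import Decimal, ROUND_HALF_UP

    def from_float(x, positions=None):
        d = Decimal.from_float(x)                 # exact binary value of the float
        if positions is None:
            return d
        quantum = Decimal(1).scaleb(-positions)   # i.e. 10 ** -positions
        return d.quantize(quantum, rounding=ROUND_HALF_UP)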

If you're ok, I'll post this.

Thank you!

.	Facundo

From ncoghlan at iinet.net.au  Mon Mar 15 23:30:04 2004
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Mon Mar 15 23:30:14 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315142759.GB2705@burma.localdomain>
References: <20040314190331.GA2626@burma.localdomain>	<200403150317.i2F3Hohl030236@cosc353.cosc.canterbury.ac.nz>
	<20040315142759.GB2705@burma.localdomain>
Message-ID: <405682CC.8090806@iinet.net.au>

Gustavo Niemeyer wrote:

>>>MO(0) shouldn't be used as it makes no sense, but is the same
>>>as MO(+1) which is the same as MO(1).
>>
>>So there is a hole at 0. Something about that smells wrong.
> 
> 
> If you discover what, please tell me. :-)
> 

The current version feels odd to me as well.

I would naturally interpret it along the lines of: MO = MO(0) = this 
Monday = today, or the nearest Monday in the future.

Then MO(+1) would be the next Monday after MO(0). In other words, the 
parameter becomes a standard offset from the current week, rather than 
using a positive 1-based count, and a negative 0-based offset.

Regards,
Nick.

-- 
Nick Coghlan               |     Brisbane, Australia
Email: ncoghlan@email.com  | Mobile: +61 409 573 268

From ncoghlan at iinet.net.au  Mon Mar 15 23:48:24 2004
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Mon Mar 15 23:51:16 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040315162827.GE2705@burma.localdomain>
References: <20040315153454.GD2705@burma.localdomain>	<E1B2uAR-0008IR-00@swing.co.at>
	<20040315162827.GE2705@burma.localdomain>
Message-ID: <40568718.4070306@iinet.net.au>

Gustavo Niemeyer wrote:

>>Why don't you use "d + relativedelta(day=-1)" instead?
>>
>>Just as easy to use, better error detection, and a big precedent.
> 
> 
> Supporting this might be cool indeed. OTOH, it doesn't replace the
> current behavior. How do you get (day=30)?

day=30 is simply '30th day of the month'.

day=-2 is '2nd last day of the month'.

The negative simply means 'count from the end', as it does with Python 
indexing. This would indeed be useful, and far more obvious than 
'day=31' and relying on the 'use maximum day in month' feature.
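
Just to sketch the 'count from the end' rule with the standard library
(independent of dateutil itself; resolve_day is only an illustrative name):

    import calendar, datetime

    def resolve_day(year, month, day):
        last = calendar.monthrange(year, month)[1]   # days in this month
        if day < 0:
            day = last + 1 + day     # day=-1 -> last day, day=-2 -> 2nd last
        return datetime.date(year, month, day)

    resolve_day(2004, 2, -2)   # -> datetime.date(2004, 2, 28)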


The other question I had was why is there no absolute 'week' parameter? 
to get 'week 43', it is necessary to say 'month=1, day=1, weeks=43'.

Regards,
Nick.

-- 
Nick Coghlan               |     Brisbane, Australia
Email: ncoghlan@email.com  | Mobile: +61 409 573 268

From aahz at pythoncraft.com  Tue Mar 16 01:01:21 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar 16 01:01:27 2004
Subject: [Python-Dev] Lynx vs SF
In-Reply-To: <2mfzflhygb.fsf@starship.python.net>
References: <LNBBLJKPBEHFEDALKOLCCEEOHMAB.tim.one@comcast.net>
	<AE94D680-2ED3-11D8-BFA1-0030656C6B9E@webcrunchers.com>
	<16349.52054.372100.462135@montanaro.dyndns.org>
	<76BB1B8C-2F42-11D8-BFA1-0030656C6B9E@webcrunchers.com>
	<16350.9342.865204.274114@montanaro.dyndns.org>
	<20031215231849.GB17244@panix.com>
	<2mfzflhygb.fsf@starship.python.net>
Message-ID: <20040316060120.GA23605@panix.com>

On Tue, Dec 16, 2003, Michael Hudson wrote:
> Aahz <aahz@pythoncraft.com> writes:
>> On Mon, Dec 15, 2003, Skip Montanaro wrote:
>>>
>>> Sounds like you have cookies disabled.  To use SourceForge you need to run
>>> with cookie support turned on.
>>
>> ...and you can't use Lynx, either.  :-(
> 
> It seems that you can, so long as you "attach" /dev/null to the bug
> report each time you do anything...

Looks like SF has finally fixed things to work with Lynx.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From aahz at pythoncraft.com  Tue Mar 16 01:17:02 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar 16 01:17:08 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <A128D751272CD411BC9200508BC2194D03383741@escpl.tcp.com.ar>
References: <A128D751272CD411BC9200508BC2194D03383741@escpl.tcp.com.ar>
Message-ID: <20040316061702.GA26389@panix.com>

On Mon, Mar 15, 2004, Batista, Facundo wrote:
>
> Raise TypeError. You can not mix Decimal and float.

+1
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From kbk at shore.net  Tue Mar 16 01:36:27 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Tue Mar 16 01:36:30 2004
Subject: [Python-Dev] Weekly Python Bug/Patch Summary
Message-ID: <200403160636.i2G6aRDm017107@hydra.localdomain>


Patch / Bug Summary
___________________

Patches :  251 open ( +1) /  2332 closed (+10) /  2583 total (+11)
Bugs    :  739 open ( +6) /  3921 closed ( +7) /  4660 total (+13)
RFE     :  128 open ( -3) /   123 closed ( +4) /   251 total ( +1)

New / Reopened Patches
______________________

Highlight builtins  (2003-09-13)
CLOSED http://python.org/sf/805830  reopened by  kbk

HTMLParser should support entities in attributes  (2004-03-08)
       http://python.org/sf/912410  opened by  Aaron Swartz 

add phi as a constant in math module  (2004-03-08)
CLOSED http://python.org/sf/912452  opened by  Lars R. Damerow 

latex compile error in current CVS  (2004-03-09)
CLOSED http://python.org/sf/912462  opened by  George Yoshida 

Little (improvement and standarization) to asyncore.  (2004-03-11)
       http://python.org/sf/914096  opened by  alejandro david weil 

Fix readline for utf-8 locales  (2004-03-11)
       http://python.org/sf/914291  opened by  Michal Čihař 

gzip.GzipFile to accept stream as fileobj.  (2004-03-11)
       http://python.org/sf/914340  opened by  Igor Belyi 

gzip.GzipFile to accept stream as fileobj.  (2004-03-11)
CLOSED http://python.org/sf/914358  opened by  Igor Belyi 

Make history recall a-cyclic  (2004-03-12)
       http://python.org/sf/914546  opened by  Noam Raphael 

difflib side by side diff support, diff.py s/b/s HTML option  (2004-03-11)
       http://python.org/sf/914575  opened by  Dan Gass 

list.__setitem__(slice) behavior  (2004-01-08)
CLOSED http://python.org/sf/873305  reopened by  jbrandmeyer

Create a freelist for dictionaries  (2004-03-14)
       http://python.org/sf/916251  opened by  Raymond Hettinger 

fix for bug #857297 (tarfile and hardlinks)  (2004-03-15)
       http://python.org/sf/916874  opened by  Lars Gustäbel 

dict type concat function  (2004-03-16)
       http://python.org/sf/917095  opened by  troy melhase 

Patches Closed
______________

Improvements to cStringIO.writelines()  (2004-03-01)
       http://python.org/sf/907403  closed by  rhettinger

Highlight builtins  (2003-09-13)
       http://python.org/sf/805830  closed by  kbk

add phi as a constant in math module  (2004-03-08)
       http://python.org/sf/912452  closed by  rhettinger

latex compile error in current CVS  (2004-03-09)
       http://python.org/sf/912462  closed by  perky

gzip.GzipFile to accept stream as fileobj.  (2004-03-11)
       http://python.org/sf/914358  closed by  belyi

Port tests to unittest (Part 2)  (2003-05-13)
       http://python.org/sf/736962  closed by  doerwalter

robot.txt must be robots.txt  (2004-03-07)
       http://python.org/sf/911431  closed by  rhettinger

Fix typos in pystate.h comments  (2004-02-28)
       http://python.org/sf/906501  closed by  rhettinger

trace.py: simple bug in write_results_file  (2004-01-08)
       http://python.org/sf/873211  closed by  rhettinger

list.__setitem__(slice) behavior  (2004-01-08)
       http://python.org/sf/873305  closed by  rhettinger

Add start and end optional args to array.index  (2004-01-02)
       http://python.org/sf/869688  closed by  rhettinger

Replace backticks with repr()  (2003-12-01)
       http://python.org/sf/852334  closed by  doerwalter

New / Reopened Bugs
___________________

Unable to overwrite file with findertools.move  (2004-03-09)
       http://python.org/sf/912747  opened by  Benjamin Schollnick 

AskFolder (EasyDialogs) does not work?  (2004-03-09)
       http://python.org/sf/912758  opened by  Benjamin Schollnick 

urllib2 checks for http return code 200 only.  (2004-03-09)
       http://python.org/sf/912845  opened by  Ahmed F. 

7.5.6 Thread Objects is too vague  (2004-03-09)
       http://python.org/sf/912943  opened by  Roy Smith 

PythonLauncher-run scripts have funny $CWD  (2004-03-10)
       http://python.org/sf/913581  opened by  Jack Jansen 

httplib:  HTTPS does not close() connection properly  (2004-03-10)
       http://python.org/sf/913619  opened by  rick 

smtplib module : fatal error with unresolved hostname  (2004-03-10)
CLOSED http://python.org/sf/913698  opened by  Zarro 

xml.sax segfault on error  (2004-03-11)
       http://python.org/sf/914148  opened by  Adam Sampson 

modulefinder is not documented  (2004-03-11)
       http://python.org/sf/914375  opened by  Fred L. Drake, Jr. 

CFStringGetUnicode() returns null-terminated unicode string  (2004-03-14)
       http://python.org/sf/915942  opened by  has 

Improving MacPython's IAC support  (2004-03-14)
       http://python.org/sf/916013  opened by  has 

add a stronger PRNG  (2004-03-16)
       http://python.org/sf/917055  opened by  paul rubin 

warnings.py does not define _test()  (2004-03-15)
       http://python.org/sf/917108  opened by  Aahz 

Bugs Closed
___________

Unknown color name on HP-UX  (2004-02-16)
       http://python.org/sf/897872  closed by  kbk

Exceptions when a thread exits  (2003-06-14)
       http://python.org/sf/754449  closed by  bcannon

test_coercion fails on AIX  (2003-01-31)
       http://python.org/sf/678265  closed by  nascheme

copy.copy fails for array.array  (2004-03-06)
       http://python.org/sf/910986  closed by  rhettinger

strftime ignores date format on winxp  (2004-02-16)
       http://python.org/sf/898253  closed by  rhettinger

PackMan recursive/force fails on pseudo packages  (2003-05-07)
       http://python.org/sf/733819  closed by  jackjansen

smtplib : module bug with unresolved hostname  (2004-03-10)
       http://python.org/sf/913698  closed by  chaica

New / Reopened RFE
__________________

Generator-support in map() and filter()  (2004-03-09)
CLOSED http://python.org/sf/912738  opened by  Ragnar Kjørstad 

RFE Closed
__________

Generator-support in map() and filter()  (2004-03-09)
       http://python.org/sf/912738  closed by  tim_one

Addition to break and continue  (2003-11-21)
       http://python.org/sf/846553  closed by  gvanrossum

Add copyrange method to array.  (2003-04-14)
       http://python.org/sf/721061  closed by  rhettinger

Minor array module enhancements  (2003-02-13)
       http://python.org/sf/686323  closed by  rhettinger


From tanzer at swing.co.at  Tue Mar 16 01:44:37 2004
From: tanzer at swing.co.at (Christian Tanzer)
Date: Tue Mar 16 01:45:41 2004
Subject: [Python-Dev] dateutil
In-Reply-To: Your message of "Mon, 15 Mar 2004 14:56:13 -0300."
	<20040315175613.GA22581@burma.localdomain>
Message-ID: <E1B38JZ-0001sz-00@swing.co.at>


> > > Supporting this might be cool indeed. OTOH, it doesn't replace the
> > > current behavior. How do you get (day=30)?
> >
> > I don't follow. The semantics would be similar to list indexing.
> >
> > `day=30` asks for the 30th day of the month and raises a ValueError
> > if there isn't any. `day=-1` asks for the last day of the month (and
> > should never raise an error <wink>).
>
> What advantage do you get with the raised error? The replace() method
> of dateutil already does something similar, and I'm not trying to
> mimic it.

IMHO, it is more useful if the caller specifies the exact intent and
gets what he asks for. Being able to pass any positive large number
and get the last day of the month doesn't look like a good interface
to me.

> I'm trying to provide some additionally useful mechanism.

I'm glad you do. Thank you.

> > The allowed argument range for `day` would be between 1 and `n` with
> > `n` the number of days in the month for positive values and between -1
> > and `-n` for negative values.
>
> The whole point of being relative is that it acts differently on
> different dates. What happens when you do
>
> date(2004, 1, 31) + relativedelta(months=1)
>
> Will it raise an error as well? The current behavior is consistently
> designed to get useful results, in addition to what datetime provides.

I don't know. The problem here is that it is not obvious what is going
to happen. Consider:

d1 = date(2004, 1, 31)
d2 = date(2004, 1, 28)

d1_1 = d1 + relativedelta(months=1)
d2_1 = d2 + relativedelta(months=1)

If you subscribe to d1_1 == d2_1, what do you expect to be the result
of:

d1_1 + relativedelta(months=1)
d2_1 + relativedelta(months=1)

Whatever you choose it will be seriously inconsistent from one point
of view or another.

-- 
Christian Tanzer                                    http://www.c-tanzer.at/


From oren-py-d at hishome.net  Tue Mar 16 06:05:55 2004
From: oren-py-d at hishome.net (Oren Tirosh)
Date: Tue Mar 16 06:05:59 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <A128D751272CD411BC9200508BC2194D03383741@escpl.tcp.com.ar>
References: <A128D751272CD411BC9200508BC2194D03383741@escpl.tcp.com.ar>
Message-ID: <20040316110555.GA51345@hishome.net>

On Mon, Mar 15, 2004 at 11:01:12PM -0300, Batista, Facundo wrote:
> People:
> 
> I'll post a reviewed version of the PEP.
> 
> The only difference from the previous one will be the treatment of float in
> both explicit and implicit construction:
> 
> ------------
> 
> In Explicit construction:
> 
> You can not call Decimal with a float. Instead you must use a method:
> Decimal.fromFloat(). The syntax:
> 
>     Decimal.fromFloat(floatNumber, positions)

+1 on the behavior. One nitpick about the method name: this caseConvention 
is not consistent with the Python Style Guide (PEP 8)

  Oren

From mcherm at mcherm.com  Tue Mar 16 08:23:16 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Tue Mar 16 08:23:32 2004
Subject: [Python-Dev] Re: dateutil
Message-ID: <1079443396.4056ffc453708@mcherm.com>

Greg Ewing writes:
> Paul Moore <pf_moore@yahoo.co.uk>:
> >     dateutil.adjust(d, day=1)
> >     dateutil.adjust(d, day=-1)
> >     dateutil.adjust(d, day=1, months=+1, days=-1)
> > 
> > Does this look any better?
> 
> Yes, I think it does. At least it avoids giving any spurious
> impression that there might be an algebra lurking around
> somewhere.

+0.6  I do not feel as strongly as Greg does that use of +
      implies an algebra (because it's glaringly obvious to
      me that the whole POINT of this library is to deal
      with the fact that date manipulations do NOT follow
      any sane algebra). However, there's some point to
      what he says and I like the function approach better.

Greg continues:
> It might be better to have separate functions for absolute
> and relative adjustments, e.g.
> 
>   dateutil.set(d, day=1)
>   dateutil.add(d, months=+1, days=-1)
> 
> since the difference between the singular and plural keywords
> is rather subtle and easy to miss.

+1.0  ABSOLUTELY! The singular vs plural keyword distinction is downright
      misleading, while the use of "set()" vs "add()" adds
      greatly to clarity of intent.

I still feel that there is a danger of great confusion when we
use keyword parameters to specify various transforms that are
to be performed, but the order in which they are applied matters.

  EXAMPLE (NOT tested code, I hope I've got this right):
    >>> mar1 = datetime.datetime(2004,3,1)
    >>> mar1 + relativedelta(months=1) + relativedelta(days=-1)
    datetime.datetime(2004,3,31)
    >>> mar1 + relativedelta(day=-1) + relativedelta(month=1)
    datetime.datetime(2004,3,29)
    >>> mar1 + relativedelta(month=1, day=-1)  # I'm not sure of this one
    datetime.datetime(2004,3,31)

I probably would have tried a somewhat different design. Perhaps
individual transform objects which do just one thing at a time but
can be combined into a single object which applies them in series:

  HYPOTHETICAL SYNTAX:
    >>> mar1 = datetime.datetime(2004,3,1)
    >>> nextMonth = relativeDateDelta(months, 1)
    >>> prevDay = relativeDateDelta(days, -1)
    >>> mar1.addDelta(nextMonth).addDelta(prevDay)
    datetime.datetime(2004,3,31)
    >>> mar1.addDelta(prevDay).addDelta(nextMonth)
    datetime.datetime(2004,3,29)
    >>> firstCombo = relativeDateDelta([nextMonth, prevDay])
    >>> secondCombo = relativeDateDelta([prevDay, nextMonth])
    >>> # or, alternative syntax:
    >>> firstCombo = nextMonth.andThen(prevDay)

Anyway, although I do see some problems with the current syntax, and
I'm having fun playing around with ways that I think I might make
it better, I realize that my ideas haven't been tried out in real
code like Gustavo's. And what we're arguing here is syntax, not
functionality... I think it is clear that the functionality would be
a nice thing to have in the standard library if the api is acceptable
and the documentation is sufficient. I would urge Gustavo to listen
carefully to the ideas here (this discussion thread, not my post) and
see if there aren't some ideas which would improve his package.

-- Michael Chermside


From niemeyer at conectiva.com  Tue Mar 16 09:26:42 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Tue Mar 16 09:26:34 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <E1B38JZ-0001sz-00@swing.co.at>
References: <20040315175613.GA22581@burma.localdomain>
	<E1B38JZ-0001sz-00@swing.co.at>
Message-ID: <20040316142641.GA3162@burma.localdomain>

> I don't know. The problem here is that it is not obvious what is going
> to happen. Consider:
> 
> d1 = date(2004, 1, 31)
> d2 = date(2004, 1, 28)
> 
> d1_1 = d1 + relativedelta(months=1)
> d2_1 = d2 + relativedelta(months=1)

It's obvious to me, and should be obvious to anyone who knows the
behavior of this class. They'll land on Feb/29 and Feb/28,
respectively.

> If you subscribe to d1_1 == d2_1, what do you expect to be the result
> of:

Why would I 'subscribe' to that?

> d1_1 + relativedelta(months=1)
> d2_1 + relativedelta(months=1)
> 
> Whatever you choose it will be seriously inconsistent from one point
> of view or another.

They'll land on Mar/29 and Mar/28? Where's the inconsistency!?

-- 
Gustavo Niemeyer
http://niemeyer.net

From tanzer at swing.co.at  Tue Mar 16 12:14:43 2004
From: tanzer at swing.co.at (Christian Tanzer)
Date: Tue Mar 16 12:15:49 2004
Subject: [Python-Dev] dateutil
In-Reply-To: Your message of "Tue, 16 Mar 2004 11:26:42 -0300."
	<20040316142641.GA3162@burma.localdomain>
Message-ID: <E1B3I9L-0003M5-00@swing.co.at>


> > I don't know. The problem here is that it is not obvious what is going
> > to happen. Consider:
> >
> > d1 = date(2004, 1, 31)
> > d2 = date(2004, 1, 28)
> >
> > d1_1 = d1 + relativedelta(months=1)
> > d2_1 = d2 + relativedelta(months=1)
>
> It's obvious to me, and should be obvious to anyone who knows the
> behavior of this class. They'll land on Feb/29 and Feb/28,
> respectively.
>
> > If you subscribe to d1_1 == d2_1, what do you expect to be the result
> > of:
>
> Why would I 'subscribe' to that?
>
> > d1_1 + relativedelta(months=1)
> > d2_1 + relativedelta(months=1)
> >
> > Whatever you choose it will be seriously inconsistent from one point
> > of view or another.
>
> They'll land on Mar/29 and Mar/28? Where's the inconsistency!?

Slapping my forehead: I used the wrong year. How about 2003?

-- 
Christian Tanzer                                    http://www.c-tanzer.at/


From niemeyer at conectiva.com  Tue Mar 16 12:52:19 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Tue Mar 16 12:52:16 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <E1B3I9L-0003M5-00@swing.co.at>
References: <20040316142641.GA3162@burma.localdomain>
	<E1B3I9L-0003M5-00@swing.co.at>
Message-ID: <20040316175219.GA26873@burma.localdomain>

> Slapping my forehead: I used the wrong year. How about 2003?

They'll both end up on Mar/28. That's why it's relative. If you
want a fixed delta stick with timedelta.

-- 
Gustavo Niemeyer
http://niemeyer.net

From jim at zope.com  Tue Mar 16 14:58:42 2004
From: jim at zope.com (Jim Fulton)
Date: Tue Mar 16 14:59:15 2004
Subject: [Python-Dev] Re: Draft: PEP for imports
In-Reply-To: <20040119214528.GA9767@panix.com>
References: <20040119214528.GA9767@panix.com>
Message-ID: <40575C72.7040204@zope.com>


Summary of response: Yay!

Oh, I mean, +1.

I am one of those who likes Python's package system quite a bit,
except for the issues addressed by this PEP.

I like the simple dot-prefix proposal.  We use this now in Zope 3's
configuration system to refer to modules within a package.  A single
leading dot refers to the current package, so:

   ".interfaces.IFoo"

refers to the IFoo object in the interfaces module within the current
package.

Additional dots refer to containing packages, so:

   "..interfaces.IFoo"

refers to the IFoo object in the interfaces module within the containing
package of the current package.
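
Presumably, under the import syntax the PEP proposes, the same two
references would be spelled something like:

   from .interfaces import IFoo    # interfaces module in the current package
   from ..interfaces import IFoo   # interfaces module in the containing package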

We've never used more than two dots.

I don't really think that I've ever had problems counting up to two.
I imagine that I could make it to three if pressed. ;)

I've never had problems seeing the leading dot (or dots).

I find this syntax to be simple and unobtrusive.

Jim

-- 
Jim Fulton           mailto:jim@zope.com       Python Powered!
CTO                  (540) 361-1714            http://www.python.org
Zope Corporation     http://www.zope.com       http://www.zope.org



From greg at cosc.canterbury.ac.nz  Tue Mar 16 19:12:44 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar 16 19:14:35 2004
Subject: [Python-Dev] Re: dateutil
In-Reply-To: <1079443396.4056ffc453708@mcherm.com>
Message-ID: <200403170012.i2H0CiUS000941@cosc353.cosc.canterbury.ac.nz>

Michael Chermside <mcherm@mcherm.com>:

> +0.6  I do not feel as strongly as Greg does that use of +
>       implies an algebra (because it's glaringly obvious to
>       me that the whole POINT of this library is to deal
>       with the fact that date manipulations do NOT follow
>       any sane algebra).

I wouldn't insist that all uses of '+' be strictly algebraic,
but what we're talking about here is so far from it that I
have trouble thinking of it as any kind of addition operation.

> Perhaps individual transform objects which do just one thing at a time
> but can be combined into a single object which applies them in series:
> 
>     >>> firstCombo = relativeDateDelta([nextMonth, prevDay])
>     >>> secondCombo = relativeDateDelta([prevDay, nextMonth])
>     >>> # or, alternative syntax:
>     >>> firstCombo = nextMonth.andThen(prevDay)

I can't really see the advantage of having a special kind of 
object for this. We already have objects that represent operations 
in Python -- they're called functions! And it's easy to create one 
that combines others:

  def thursday_next_month(d):
    return thursday_of(next_month(d))
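
Rough standard-library sketches of such helpers, under one possible
reading of "next month" and "the Thursday of the week" (the names and
semantics are only illustrative):

  import calendar, datetime

  def next_month(d):
    if d.month == 12:
      year, month = d.year + 1, 1
    else:
      year, month = d.year, d.month + 1
    # clamp to the last day of the target month
    day = min(d.day, calendar.monthrange(year, month)[1])
    return d.replace(year=year, month=month, day=day)

  def thursday_of(d):
    # the Thursday of d's Monday-based week; weekday(): Mon=0 .. Thu=3
    return d + datetime.timedelta(days=3 - d.weekday())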

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From davida at activestate.com  Wed Mar 17 05:50:59 2004
From: davida at activestate.com (David Ascher)
Date: Wed Mar 17 05:56:31 2004
Subject: [Python-Dev] test_tcl failing...
In-Reply-To: <000101c40967$3b1c2c00$8a01a044@oemcomputer>
Message-ID: <Pine.LNX.4.30.0403170249170.9071-100000@latte.ActiveState.com>

On Sat, 13 Mar 2004, Raymond Hettinger wrote:

> It also has been failing on Windows since it was checked in on 2-18-2004
> as part of patch #869468.

That's my code.  My bad.  I'm on vacation w/ no access to a decent machine
right now on which to fix it.  (That may change later today - trying to
set up wifi at my parent's beach place =).

I'll investigate as soon as I can, but that may not be for a few days.  If
someone wants to comment out the test for now, that's ok w/ me.

--david


From FBatista at uniFON.com.ar  Wed Mar 17 08:17:00 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 17 08:18:45 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <A128D751272CD411BC9200508BC2194D03383754@escpl.tcp.com.ar>

Oren Tirosh wrote:

#- >     Decimal.fromFloat(floatNumber, positions)
#- 
#- +1 on the behavior. One nitpick about the method name: this 
#- caseConvention 
#- is not consistent with the Python Style Guide (PEP 8)

Which part?

What name do you suggest?

Thanks.

.	Facundo

From oren-py-d at hishome.net  Wed Mar 17 09:48:56 2004
From: oren-py-d at hishome.net (Oren Tirosh)
Date: Wed Mar 17 09:48:59 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <A128D751272CD411BC9200508BC2194D03383754@escpl.tcp.com.ar>
References: <A128D751272CD411BC9200508BC2194D03383754@escpl.tcp.com.ar>
Message-ID: <20040317144856.GA87292@hishome.net>

On Wed, Mar 17, 2004 at 10:17:00AM -0300, Batista, Facundo wrote:
> Oren Tirosh wrote:
> 
> #- >     Decimal.fromFloat(floatNumber, positions)
> #- 
> #- +1 on the behavior. One nitpick about the method name: this 
> #- caseConvention 
> #- is not consistent with the Python Style Guide (PEP 8)
> 
> Which part?
> 
> What name do you suggest?

fromfloat or from_float

There are examples of both lowercase and lower_case_with_underscores in 
methods of Python builtins. The former seems more common.

   Oren

From jim.jewett at eds.com  Wed Mar 17 10:00:45 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Wed Mar 17 10:01:08 2004
Subject: [Python-Dev] funcdef grammar production
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C1@USAHM010.amer.corp.eds.com>

Michael Hudson:

>> PEP 318 is likely to change the BNF productions for funcdef,
>> so I was taking a look at what is there right now.

> Um, you do realise that this BNF is purely documentational? 

I realize that documentation is purely documentational.

I didn't understand why this particular rule [all keyword
arguments after all purely positional arguments] was left 
out of the pseudo-code and mentioned only in text.  I also 
wondered why the parameter_list rule was so long and convoluted; 
one possible path is

	(defparameter ",")* (defparameter [","])

In general, python documentation is either missing or very good.
If *this* was the best explanation, then there was a problem with
my mental model, and I wanted to know what I was missing.  I 
suggested possible reasons in an attempt to clarify my question.

Guido explained by correcting my assumptions about what had to 
be unambiguous when.  Under the current production, 

	"(a"

is ambiguous only over which defparameter (in the same production)
the "a" represents.  Under my formulation, it would be ambiguous
whether "a" was a defparameter or a (regular) parameter, which are
different productions.  Looking ahead for the "=" violates a
no-lookahead rule.

-jJ

From jim.jewett at EDS.COM  Wed Mar 17 11:28:13 2004
From: jim.jewett at EDS.COM (Jewett, Jim J)
Date: Wed Mar 17 11:29:09 2004
Subject: [Python-Dev] dateutil
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C2@USAHM010.amer.corp.eds.com>

Tim Peters:

> "go to weekday N" isn't controversial,

...

>    Example:  Sundays in November.  The day part of the date is
>    irrelevant.  Note that a "too large" index simply spills over to
>    the next month.

In my experience, most meeting planners either skip the meeting
that month, or move it up a week (so that they really wanted -1,
even if they don't say it that way).  These both differ from your
suggestion, which means it is controversial.

Greg Ewing:

>> However, in my datetime classes, I also have a very complete 
>> set of calls like: startOfMonth, endOfMonth, startOfYear, 
>> endOfYear, lastWeekday, nextWeekday, etc...

> That's good. Having two completely different ways of expressing the
> same thing seems like Too Many Ways To Do It, though, especially if
> this stuff is to be included in the standard library.

The logging library exposes several names for the same objects.
CRITICAL = FATAL; WARN = WARNING.

A single spelling is desirable, but not at the cost of more surprise
somewhere else. 

> Also, I don't understand why the "weeks" parameter isn't used to
> adjust the number of weeks here, instead of supplying it in a rather
> funky way as a kind of parameter to a parameter. 

>   relativedelta(day = MO(+3))

> why not

>   relativedelta(day = MO, weeks = +2)

Do you add weeks to the current day, or to the "start"
of the current week?

M	T[1]	W	Th[2]	F 
M	T	W	Th	F 
M	T[3]	W	Th[4]	F 
M	T[5]	W	Th	F 

From T[1], do you mean Th[4] (counting the current partial week)?
If so, does starting at Th[2] and asking for Tuesday take you to
T[3] or T[5]?

I'm willing to believe that there is a perfectly sensible answer;
I'm not ready to believe that everyone will have agree on what it
is before talking it out.

Gustavo Niemeyer (answering Christian Tanzer):

>[
>	Adding a month to Jan 31 = Feb 29 (last day of Feb)
>	Adding a month to Feb 29 = Mar 29.
>]

But adding two months to Jan 31 = Mar 31

I've debugged code (not in python) that got to production
falsely assuming that x + 2 == x+1+1
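
Concretely (a sketch, assuming the month-clamping behaviour quoted above):

  >>> from datetime import date
  >>> from dateutil.relativedelta import relativedelta
  >>> date(2004, 1, 31) + relativedelta(months=2)
  datetime.date(2004, 3, 31)
  >>> date(2004, 1, 31) + relativedelta(months=1) + relativedelta(months=1)
  datetime.date(2004, 3, 29)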

Using negatives to count from the end is less surprising.

-jJ

From FBatista at uniFON.com.ar  Wed Mar 17 11:29:43 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 17 11:31:33 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <A128D751272CD411BC9200508BC2194D03383761@escpl.tcp.com.ar>

#- Oren Tirosh wrote:

#- > What name do you suggest?
#- 
#- fromfloat or from_float
#- 
#- There are examples of both lowercase and 
#- lower_case_with_underscores in 
#- methods of Python builtins. The former seems more common.

I'll go for fromfloat if nobody opposes.

.	Facundo





From mcherm at mcherm.com  Wed Mar 17 12:29:02 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Wed Mar 17 12:29:07 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <1079544542.40588ade13947@mcherm.com>

Oren writes:
> >     Decimal.fromFloat(floatNumber, positions)
> 
> +1 on the behavior. One nitpick about the method name: this caseConvention 
> is not consistent with the Python Style Guide (PEP 8)

It _IS_ consistent as I read it:

>     Function Names
> 
>       Plain functions exported by a module can either use the CapWords
>       style or lowercase (or lower_case_with_underscores).  There is
>       no strong preference, but it seems that the CapWords style is
>       used for functions that provide major functionality
>       (e.g. nstools.WorldOpen()), while lowercase is used more for
>       "utility" functions (e.g. pathhack.kos_root()).
   [...]
>     Method Names
> 
>       The story is largely the same as for functions.  Use lowercase
>       for methods accessed by other classes or functions that are part
>       of the implementation of an object type.  Use one leading
>       underscore for "internal" methods and instance variables when
>       there is no chance of a conflict with subclass or superclass
>       attributes or when a subclass might actually need access to
>       them.  Use two leading underscores (class-private names,
>       enforced by Python 1.4) in those cases where it is important
>       that only the current class accesses an attribute.  (But realize
>       that Python contains enough loopholes so that an insistent user
>       could gain access nevertheless, e.g. via the __dict__ attribute.)

I interpret that to mean that CapWords (fromFloat) is a perfectly valid 
style for method names. If I'm mis-interpreting it, please let me know.

And if I AM mis-interpreting it, then it's too bad. This is one minor
area where Java has a better design than Python. Both languages allow
any identifier, without considering case as significant. However,
Java has had a clear, well-defined, and well known convention since its
inception (CapWords for classes, lowercase for packages, lowerCapWords
for methods), while Python's convention is vaguely specified by PEP 8
and inconsistently applied throughout the standard library and most
developers' code. The result is that in Java, I always know how to
spell a name, but in Python sometimes I have to look it up.

-- Michael Chermside



From niemeyer at conectiva.com  Wed Mar 17 14:22:59 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 17 14:22:52 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <40568718.4070306@iinet.net.au>
References: <20040315153454.GD2705@burma.localdomain>
	<E1B2uAR-0008IR-00@swing.co.at>
	<20040315162827.GE2705@burma.localdomain>
	<40568718.4070306@iinet.net.au>
Message-ID: <20040317192259.GA9249@burma.localdomain>

> >Supporting this might be cool indeed. OTOH, it doesn't replace the
> >current behavior. How do you get (day=30)?
> 
> day=30 is simply '30th day of the month'.
> 
> day=-2 is '2nd last day of the month'.
> 
> The negative simply means 'count from the end', as it does with Python 
> indexing. This would indeed be useful, and far more obvious than 
> 'day=31' and relying on the 'use maximum day in month' feature.

Thanks for explaining that. OTOH, that's exactly what I meant
in the sentence above. Negative indexing is nice, but doesn't
replace the current behavior. Please, have a look at the other
messages to understand why.

> The other question I had was why is there no absolute 'week'
> parameter?  to get 'week 43', it is necessary to say 'month=1, day=1,
> weeks=43'.

Because there's no simple definition of what "week 43" is. The ISO
notation requires other parameters, like the first day of the week, which
is not covered in relativedelta. If you want to deal with week numbering,
please have a look at rrule.
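
Note that for plain ISO week numbers the standard datetime module already
gives you something, independently of relativedelta:

  >>> import datetime
  >>> datetime.date(2004, 10, 20).isocalendar()   # ISO year, week, weekday
  (2004, 43, 3)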

-- 
Gustavo Niemeyer
http://niemeyer.net

From niemeyer at conectiva.com  Wed Mar 17 14:26:02 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 17 14:26:05 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403160039.i2G0dTZl031747@cosc353.cosc.canterbury.ac.nz>
References: <20040315145632.GC2705@burma.localdomain>
	<200403160039.i2G0dTZl031747@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040317192602.GB9249@burma.localdomain>

> By the way, can you subtract relativedeltas as well as
> add them? If '+' makes sense, '-' should make sense, too.

Yes, you can.

-- 
Gustavo Niemeyer
http://niemeyer.net

From niemeyer at conectiva.com  Wed Mar 17 14:38:59 2004
From: niemeyer at conectiva.com (Gustavo Niemeyer)
Date: Wed Mar 17 14:40:26 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>
References: <20040315141417.GA2705@burma.localdomain>
	<200403152244.i2FMiJ98031602@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040317193858.GC9249@burma.localdomain>

> > (just as examples)
> > 
> > d + relativedelta(day=1)
> > d + relativedelta(day=31)
> 
> Sorry, but the more I see of this usage the worse it looks. Here
> you're effectively using '+' as an assignment operator. To me, that's

Assignment operator!?! Where's the assignment?! Where's the assigned
variable?

> a gross abuse of the Python language.

Ok, Greg. You got it. I'm not interested in discussing this issue
anymore.

-- 
Gustavo Niemeyer
http://niemeyer.net

From fumanchu at amor.org  Wed Mar 17 16:23:44 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Wed Mar 17 16:25:28 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561EFA@opus.amorhq.net>

Michael Chermside wrote:
> Oren writes:
> > >     Decimal.fromFloat(floatNumber, positions)
> > 
> > +1 on the behavior. One nitpick about the method name: this 
> caseConvention 
> > is not consistent with the Python Style Guide (PEP 8)
> 
> It _IS_ consistent as I read it:
> 
> >     Function Names
> > 
> >       Plain functions exported by a module can either use 
> the CapWords
> >       style or lowercase (or lower_case_with_underscores).  There is
> >       no strong preference, but it seems that the CapWords style is
> >       used for functions that provide major functionality
> >       (e.g. nstools.WorldOpen()), while lowercase is used more for
> >       "utility" functions (e.g. pathhack.kos_root()).
> 
> I interpret that to mean that CapWords (fromFloat) is a 
> perfectly valid 
> style for method names. If I'm mis-interpreting it, please 
> let me know.

Only that CapWords in PEP 8 seems to be distinct from mixedCase, where
the first character is lowercase (I think that's what Oren meant). So
"fromFloat" (mixedCase) should be "FromFloat" (CapWords) or "from_float"
(lower_case_with_underscores).

However, you're bang on about rampant deviation. Probably too late in
the game to do anything about it with existing core code. :(


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From skip at pobox.com  Wed Mar 17 17:29:41 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 17 17:29:53 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Objects
	dictobject.c, 2.153, 2.154
In-Reply-To: <E1B3j0E-0005bD-PT@sc8-pr-cvs1.sourceforge.net>
References: <E1B3j0E-0005bD-PT@sc8-pr-cvs1.sourceforge.net>
Message-ID: <16472.53589.983213.587279@montanaro.dyndns.org>


    raymond> * Added a freelist scheme styled after that for tuples.  Saves
    raymond>   around 80% of the calls to malloc and free.  About 10% of the
    raymond>   time, the previous dictionary was completely empty; in those
    raymond>   cases, the dictionary initialization with memset() can be
    raymond>   skipped.

% timeit.py 'd = {} ; d = {"a": 1}'
1000000 loops, best of 3: 1.66 usec per loop
... cvs up ...
... make ...
% ./python.exe ~/local/bin/timeit.py 'd = {} ; d = {"a": 1}'
1000000 loops, best of 3: 1.23 usec per loop

Cool.

Skip

From ark at acm.org  Wed Mar 17 18:17:04 2004
From: ark at acm.org (Andrew Koenig)
Date: Wed Mar 17 18:17:05 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
Message-ID: <005401c40c75$f5ff9050$6402a8c0@arkdesktop>

It's not my proposal, and I don't know if I'm for it or against it, but I
thought I'd mention it because I think it's interesting.

The observation is that if an object is immutable, there's no legitimate
reason to know whether it's distinct from another object with the same type
and value.  Therefore, the proposal is to change the definition of "is" as
follows:

	1) Unless x and y are both immutable, the meaning of "x is y"
	   remains as it has always been.

	2) If x and y have different types, "x is y" is false (as ever).

	3) If x and y are scalars, the definition of "x is y" is
	   changed to mean "x == y"

	4) Otherwise x and y are immutable collections of the same type,
	   in which case "x is y" is true if and only if every component
	   of x is the corresponding component of y (i.e. yields True when
	   compared using "is").

This change would clear up some seeming anomalies, such as:

	>>> 1000 is 1000
	True
	>>> x = 1000
	>>> y = 1000
	>>> x is y
	False
	>>> x = 42
	>>> y = 42
	>>> x is y
	True

The only disadvantage I can think of is that some comparisons might become
much slower (for example, using "is" to compare two long strings that share
a long common prefix).  However, I have a hard time understanding why anyone
would actually want to use "is" on immutable objects, and if one wants to
force a true object-identity comparison, one can always use id(x)==id(y).

So my question to the more experienced members of this group:  Is this idea
worth considering?  If not, why not?  (I'm partly expecting a response of
"It's not Pythonic," but I don't actually know why not).


From pje at telecommunity.com  Wed Mar 17 18:21:52 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 17 18:22:32 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	to redefine "is"
In-Reply-To: <005401c40c75$f5ff9050$6402a8c0@arkdesktop>
Message-ID: <5.1.1.6.0.20040317181950.02ef6310@telecommunity.com>

At 06:17 PM 3/17/04 -0500, Andrew Koenig wrote:
>(I'm partly expecting a response of
>"It's not Pythonic," but I don't actually know why not).

Three reasons why not:

Simple is better than complex.
In the face of ambiguity, refuse the temptation to guess.
If the implementation is hard to explain, it's a bad idea.


From ark at acm.org  Wed Mar 17 18:31:15 2004
From: ark at acm.org (Andrew Koenig)
Date: Wed Mar 17 18:31:16 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
In-Reply-To: <5.1.1.6.0.20040317181950.02ef6310@telecommunity.com>
Message-ID: <006901c40c77$f11ce220$6402a8c0@arkdesktop>

> At 06:17 PM 3/17/04 -0500, Andrew Koenig wrote:
> >(I'm partly expecting a response of
> >"It's not Pythonic," but I don't actually know why not).
> 
> Three reasons why not:
> 
> Simple is better than complex.
> In the face of ambiguity, refuse the temptation to guess.
> If the implementation is hard to explain, it's a bad idea.

And a reason on the other side:

Don't expose the implementation to users needlessly.

This kind of implementation-defined behavior makes me nervous, because it
makes it easy to write programs with bugs that no amount of testing can ever
reveal.



From skip at pobox.com  Wed Mar 17 18:44:38 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 17 18:44:46 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
In-Reply-To: <006901c40c77$f11ce220$6402a8c0@arkdesktop>
References: <5.1.1.6.0.20040317181950.02ef6310@telecommunity.com>
	<006901c40c77$f11ce220$6402a8c0@arkdesktop>
Message-ID: <16472.58086.828464.990361@montanaro.dyndns.org>


    Andrew> Don't expose the implementation to users needlessly.

    Andrew> This kind of implementation-defined behavior makes me nervous,
    Andrew> because it makes it easy to write programs with bugs that no
    Andrew> amount of testing can ever reveal.

I don't know what the original impetus for this long discussion of "is" vs
"==" was, but my guess is that it was misguided (premature) optimization.
The somewhat anomalous behavior of "is" in CPython is an implementation
detail.  If Jython were used as the basis for comparison, I suspect the
implementation might match the definition a bit more precisely.

I'm not sure what the use case for the recursive "is" is, but it would be no
cheaper than "==" in that case.  I'm disinclined to change things.

Skip


From guido at python.org  Wed Mar 17 18:45:32 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 18:46:34 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
In-Reply-To: Your message of "Wed, 17 Mar 2004 18:31:15 EST."
	<006901c40c77$f11ce220$6402a8c0@arkdesktop> 
References: <006901c40c77$f11ce220$6402a8c0@arkdesktop> 
Message-ID: <200403172345.i2HNjW807716@guido.python.org>

[Andrew Koenig describes a proposal to redefine 'is' for immutables]

[Phillip Eby]
> > Three reasons why not:
> > 
> > Simple is better than complex.
> > In the face of ambiguity, refuse the temptation to guess.
> > If the implementation is hard to explain, it's a bad idea.

I'm not sure any of those apply to Andrew's proposal though.

[Andrew]
> And a reason on the other side:
> 
> Don't expose the implementation to users needlessly.
> 
> This kind of implementation-defined behavior makes me nervous,
> because it makes it easy to write programs with bugs that no amount
> of testing can ever reveal.

I'm curious what the reason is to want to redefine 'is' for
immutables.  If I understand Andrew, it's to fix broken programs after
the fact (almost as if by time machine :-).

It seems to me that 'is' should never be used for immutables except
for singletons like None.  Perhaps PyChecker should warn about
inappropriate use of 'is' to compare two immutable objects, unless one
of them is None?  This would be a cheaper solution than changing the
implementation.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Wed Mar 17 18:48:45 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 18:49:18 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C1@USAHM010.amer.corp.eds.com>
Message-ID: <200403172348.i2HNmj1M002625@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> one possible path is
> 
> 	(defparameter ",")* (defparameter [","])

Even that seems unnecessarily obtuse. I would write it as

  defparameter ("," defparameter)* [","]

> Under my formulation, it would be ambiguous whether "a" was a
> defparameter or a (regular) parameter, which are different
> productions.  Looking ahead for the "=" violates a no-lookahead rule.

If the grammar is only for documentation, none of that matters -- it
doesn't have to be LL(1) or anything in particular, as long as it
unambiguously specifies the language.

By the way, it occurs to me that BNF on its own doesn't seem
to be up to the task of specifying this sort of thing clearly
to a human reader. If it weren't for the commas, we could
say something like

  defparams ::= name* (name "=" expr)* ["*" name] ["**" name]

All we need to say on top of that is "separate this list of
things with commas", which could perhaps be expressed as

  defparams ::= 
    (name* (name "=" expr)* ["*" name] ["**" name]) separated_by ","

with some sort of meta-rule like

  ITEM* separated_by SEP ::= [ITEM (SEP ITEM)* [SEP]]

Hmmm, this is starting to look dangerously like a two-level
grammar...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 17 19:07:51 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 19:07:57 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C2@USAHM010.amer.corp.eds.com>
Message-ID: <200403180007.i2I07pX9002653@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@EDS.COM>:

> Do you add weeks to the current day, or to the "start"
> of the current week?
> 
> M	T[1]	W	Th[2]	F 
> M	T	W	Th	F 
> M	T[3]	W	Th[4]	F 
> M	T[5]	W	Th	F 

It doesn't matter. If we start from T[1] and do

  relativedelta(day = TH, weeks = +2)

then if we go to the next Thursday (Th[2]) and then forward
two weeks, we get Th[4]. If we go forward two weeks to
T[3] and then to the next Thursday, we still get Th[4].

> If so, does starting at Th[2] and asking for Tuesday take you to
> T[3] or T[5]?

A mechanism is still needed to distinguish between going forwards
and backwards when aligning the day. Maybe day=+TU or day=-TU.

All I'm saying is that munging a number of weeks in with 'day'
seems to lead to a confusing overlap of functionality with the
'weeks' parameter. The reader is likely to wonder why there
are two different ways of adding/subtracting weeks and whether
there is some subtle difference between them.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+
				   

From ark-mlist at att.net  Wed Mar 17 19:38:33 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 17 19:38:34 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <16472.58086.828464.990361@montanaro.dyndns.org>
Message-ID: <006f01c40c81$584fb630$6402a8c0@arkdesktop>

> I'm not sure what the use case for the recursive "is" is, but it would be
> no cheaper than "==" in that case.  I'm disinclined to change things.

I can understand why you might be disinclined, but I do see why recursive
"is" might be cheaper.  The simplest case is probably comparing two
2-element tuples, where the elements are arbitrary objects.  == might not
even be defined on those objects, but "is" would be.


From ark-mlist at att.net  Wed Mar 17 19:46:09 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 17 19:46:14 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <200403172345.i2HNjW807716@guido.python.org>
Message-ID: <007001c40c82$67bc6c20$6402a8c0@arkdesktop>

> I'm curious what the reason is to want to redefine 'is' for
> immutables.  If I understand Andrew, it's fix broken programs after
> the fact (almost as if by time machine :-).
> 
> It seems to me that 'is' should never be used for immutables except
> for singletons like None.  Perhaps PyChecker should warn about
> inappropriate use of 'is' to compare two immutable objects, unless one
> of them is None?  This would be a cheaper solution than changing the
> implementation.

It wasn't my proposal, so I don't know the reason.  However, I suspect that
one motivation is just what you observed: "is" isn't a real good idea for
immutables.  One of the people in the comp.lang.python discussion suggested
that with the change, "is" would be equivalent to "is substitutable for".
Right now, "x is y" implies that x is substitutable for y, but the reverse
is not always true.

My own concern is that if someone writes expressions such as "x is 0" by
mistake, the resulting program might always work on one implementation but
fail on another.



From greg at cosc.canterbury.ac.nz  Wed Mar 17 19:57:11 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 19:57:22 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <1079544542.40588ade13947@mcherm.com>
Message-ID: <200403180057.i2I0vBrH003021@cosc353.cosc.canterbury.ac.nz>

It _IS_ consistent as I read it:

>     Function Names
> 
>       Plain functions exported by a module can either use the CapWords
>       style or lowercase (or lower_case_with_underscores).  There is
>       no strong preference, but it seems that the CapWords style is
>       used for functions that provide major functionality...

In other words, this so-called style guide doesn't
guide us much at all in this area. :-(

Although it's true that examples of all sorts of styles
can be found in the standard library, from skimming through
the index of the Library Reference, it seems that the
vast majority of function and method names are either
alllowercase or lowercase_with_underscores.

This agrees with my general impression that the oldest
and most fundamental parts of the language and library
seem to use this style. If I were to identify anything
as the "Python style" for naming, this would be it.

Rather than wimping out on the issue, I think this PEP
could perhaps show some leadership and help guide people
towards some kind of consistent naming style. Just
because a hodgepodge has made its way into the library
in the past doesn't mean we should keep doing things
that way.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From guido at python.org  Wed Mar 17 19:59:52 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 20:00:09 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: Your message of "Wed, 17 Mar 2004 19:38:33 EST."
	<006f01c40c81$584fb630$6402a8c0@arkdesktop> 
References: <006f01c40c81$584fb630$6402a8c0@arkdesktop> 
Message-ID: <200403180059.i2I0xqI07896@guido.python.org>

> I can understand why you might be disinclined, but I do see why recursive
> "is" might be cheaper.  The simplest case is probably comparing two
> 2-element tuples, where the elements are arbitrary objects.  == might not
> even be defined on those objects, but "is" would be.

Ah, but that would be a definite incompatible change in semantics.  I
would be very surprised if this printed "IS":

a = []
b = []
if (a, b) is (a, b): print "IS"
else: print "ISN'T"

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 17 20:00:33 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 20:00:51 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: Your message of "Wed, 17 Mar 2004 19:46:09 EST."
	<007001c40c82$67bc6c20$6402a8c0@arkdesktop> 
References: <007001c40c82$67bc6c20$6402a8c0@arkdesktop> 
Message-ID: <200403180100.i2I10X807918@guido.python.org>

> My own concern is that if someone writes expressions such as "x is 0" by
> mistake, the resulting program might always work on one implementation but
> fail on another.

I'd rather make this PyChecker's job than redefine 'is'.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From ark-mlist at att.net  Wed Mar 17 20:08:14 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 17 20:08:17 2004
Subject: [Python-Dev] A proposal has surfaced on
	comp.lang.pythontoredefine "is"
In-Reply-To: <200403180059.i2I0xqI07896@guido.python.org>
Message-ID: <007401c40c85$7e9b30e0$6402a8c0@arkdesktop>

> Ah, but that would be a definite incompatible change in semantics.  I
> would be very surprised if this printed "IS":
> 
> a = []
> b = []
> if (a, b) is (a, b): print "IS"
> else: print "ISN'T"

Indeed, it would be a change.  And I can go along with an argument that an
incompatible change of that magnitude should be rejected for that reason
alone.  But why would the change cause a problem?  Consider:

	a = []
	b = []
	x = (a, b)
	y = (a, b)

Can you think of a program that can make productive use of the value of
"x is y"?  It seems to me that x and y are mutually substitutable.


From guido at python.org  Wed Mar 17 20:11:54 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 20:12:11 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: Your message of "Wed, 17 Mar 2004 16:59:52 PST."
	<200403180059.i2I0xqI07896@guido.python.org> 
References: <006f01c40c81$584fb630$6402a8c0@arkdesktop>  
	<200403180059.i2I0xqI07896@guido.python.org> 
Message-ID: <200403180111.i2I1BsJ07977@guido.python.org>

Anyone else amused by the fact that 4 years after Clinton we're
discussing what the meaning of "is" should be? :)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Wed Mar 17 20:13:09 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 20:13:28 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
In-Reply-To: <005401c40c75$f5ff9050$6402a8c0@arkdesktop>
Message-ID: <200403180113.i2I1D9pQ003045@cosc353.cosc.canterbury.ac.nz>

> Therefore, the proposal is to change the definition of "is" as
> follows:

I completely fail to see what the problem is that this is purporting
to solve. You don't use 'is' to compare integers, unless for some very
special reason you care whether they're actually the same object. Most
of the time you don't care, so you use '=='. The "anomalies" you
mention are only a problem in the minds of newbies who haven't
actually used Python very much, as far as I can see.

> Is this idea worth considering?  If not, why not?

IMO, it's not worth considering, because its utility is zero
considering the way Python is used in practice.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tjreedy at udel.edu  Wed Mar 17 20:15:43 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed Mar 17 20:16:02 2004
Subject: [Python-Dev] Re: A proposal has surfaced on comp.lang.python
	toredefine "is"
References: <006901c40c77$f11ce220$6402a8c0@arkdesktop>
	<200403172345.i2HNjW807716@guido.python.org>
Message-ID: <c3at7r$fre$1@sea.gmane.org>


"Guido van Rossum" <guido@python.org> wrote in message
news:200403172345.i2HNjW807716@guido.python.org...
> [Andrew Koenig describes a proposal to redefine 'is' for immutables]
>
> [Phillip Eby]
> > > Three reasons why not:
> > >
> > > Simple is better than complex.
> > > In the face of ambiguity, refuse the temptation to guess.
> > > If the implementation is hard to explain, it's a bad idea.
>
> I'm not sure any of those apply to Andrew's proposal though.

A simple, easy-to-understand, one-sentence rule versus a seemingly
arbitrary four-sentence rule strikes me as simple versus complex.  I would
not relish trying to explain the proposal to anyone.

> I'm curious what the reason is to want to redefine 'is' for immutables.

From what I have read, the objection is aesthetic, rather than something
driven by practical production code needs.  It struck me as a typical "this
bothers me, here's how someone should fix it" thread that keeps clp from
getting too quiet.

Terry J. Reedy




From guido at python.org  Wed Mar 17 20:22:02 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 20:22:14 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: Your message of "Thu, 18 Mar 2004 13:57:11 +1300."
	<200403180057.i2I0vBrH003021@cosc353.cosc.canterbury.ac.nz> 
References: <200403180057.i2I0vBrH003021@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403180122.i2I1M2P08024@guido.python.org>

> Rather than wimping out on the issue, I think this PEP
> could perhaps show some leadership and help guide people
> towards some kind of consistent naming style. Just
> because a hodgepodge has made its way into the library
> in the past doesn't mean we should keep doing things
> that way.

It's hard not to wimp out, given that we can't change the existing
library.  Most C extension and nearly all built-in types use
alllowercase or lowercase_with_underscore, but many fundamental Python
modules use camelCase, e.g. threading.py.

BTW I feel less wimpy for class and module/package names: classes
should use CamelCase (except *built-in* non-exception types) and
module/package names should be all lowercase.  For example, modules
like StringIO, SocketServer, cPickle and TERMIOS are historical
violations, as are the asyncore classes.

But for method naming there's much less of a near-consensus in the
standard library.  (Builtins are mostly lowercase.)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From pje at telecommunity.com  Wed Mar 17 20:39:34 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 17 20:34:08 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <007001c40c82$67bc6c20$6402a8c0@arkdesktop>
References: <200403172345.i2HNjW807716@guido.python.org>
Message-ID: <5.1.0.14.0.20040317203724.038ea170@mail.telecommunity.com>

At 07:46 PM 3/17/04 -0500, Andrew Koenig wrote:
>My own concern is that if someone writes expressions such as "x is 0" by
>mistake, the resulting program might always work on one implementation but
>fail on another.

In theory, the compiler could consider it an error for an 'is' expression 
to have a constant on either side.  That would be preferable to redefining 
'is'.


From skip at pobox.com  Wed Mar 17 20:41:32 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 17 20:41:40 2004
Subject: [Python-Dev] A proposal has surfaced on
	comp.lang.pythontoredefine "is"
In-Reply-To: <007401c40c85$7e9b30e0$6402a8c0@arkdesktop>
References: <200403180059.i2I0xqI07896@guido.python.org>
	<007401c40c85$7e9b30e0$6402a8c0@arkdesktop>
Message-ID: <16472.65100.608412.688015@montanaro.dyndns.org>

    Andrew> Consider:

    Andrew>     a = []
    Andrew>     b = []
    Andrew>     x = (a, b)
    Andrew>     y = (a, b)

    Andrew> Can you think of a program that can make productive use of the
    Andrew> value of "x is y"?  It seems to me that x and y are mutually
    Andrew> substitutable.

Is "mutually substitutable" a fancy way of saying "equal"?  In other words,
why would "x is y" be preferred over "x == y"?

Skip


From greg at cosc.canterbury.ac.nz  Wed Mar 17 20:58:04 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 20:58:15 2004
Subject: [Python-Dev] dateutil
In-Reply-To: <20040317193858.GC9249@burma.localdomain>
Message-ID: <200403180158.i2I1w4dr003101@cosc353.cosc.canterbury.ac.nz>

> > a gross abuse of the Python language.
> 
> Ok, Greg. You got it. I'm not interested in discussing this issue
> anymore.

Hey, I didn't mean that as a personal insult or anything.
Sorry if it came across that way.

> Assignment operator!?! Where's the assignment?! Where's the assigned
> variable?

All right, perhaps it's not strictly an assignment, since it's
constructing a new object rather than mutating an existing one. But
it's substituting a value for one of the object's fields that doesn't
depend in any way on the old value of that field. This is completely
unlike what 'addition' usually means.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tim.one at comcast.net  Wed Mar 17 21:05:19 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 17 21:05:27 2004
Subject: [Python-Dev] A proposal has surfaced oncomp.lang.pythontoredefine
	"is"
In-Reply-To: <007401c40c85$7e9b30e0$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELPJJAB.tim.one@comcast.net>

[Andrew Koenig]
> ...
> Indeed, it would be a change.  And I can go along with an argument
> that an incompatible change of that magnitude should be rejected for
> that reason alone.  But why would the change cause a problem?
> Consider:
>
> 	a = []
> 	b = []
> 	x = (a, b)
> 	y = (a, b)
>
> Can you think of a program that can make productive use of the value
> of "x is y"?  It seems to me that x and y are mutually substitutable.

It's the purpose of "is" to give Python code a handle on the actual object
graph an implementation creates.  This is important in "system code" that
needs to manipulate, analyze, or clone parts of the system object graph *for
its own sake*.  For example, if your x and y above are both also bound to
attributes of a class instance, the actual object graph is plain different
depending on whether instance.x is instance.y.  Application-level code is
free to rely on that distinction or not, as it likes; if it does rely on it,
it's also free to arrange to make any conclusion it likes a consequence of
"is" or "is not".

System code like cPickle relies on it to faithfully recreate isomorphic
object graphs, and to optimize pickle size by planting cheap indirections to
already-pickled subobjects.  There aren't many memory analyzers for Python,
but such as exist rely on being able to determine when distinct bindings are
to the same object (if they are, it's vital not to "double count" the memory
consumed by the subobjects).  For stuff like that, "substitutable" generally
doesn't make any sense for low-level analysis of what the implementation is
actually doing, and that kind of analysis is also an important application
for Python.

I think all those things *can* be done via manipulating id()s instead.
Indeed, I think they usually *are* done that way in most system code today,
but "is" was part of Python long before id() was added.
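
As a toy illustration of that kind of spelunking -- not taken from any
real analyzer -- here is a sketch that walks a container graph and
counts each object exactly once, using id() to avoid double counting
shared subobjects:

def count_distinct(obj, seen=None):
    if seen is None:
        seen = {}
    if id(obj) in seen:
        return 0                 # already counted this very object
    seen[id(obj)] = obj          # keep a reference so the id stays valid
    n = 1
    if isinstance(obj, (tuple, list)):
        for item in obj:
            n += count_distinct(item, seen)
    elif isinstance(obj, dict):
        for key, value in obj.items():
            n += count_distinct(key, seen)
            n += count_distinct(value, seen)
    return n

a = []
print count_distinct((a, a))     # 2, not 3: the shared list counts once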

I don't know what alternative meaning has been suggested, but I don't have
to in order to know I won't like it -- the meaning of "is" now is dead
simple, and useful in system-level spelunking code.

Someday Guido has to figure out a way to convince Python programmers that
they really are allowed to write their own functions when they find a
describable behavior that isn't built in <wink>.


From d.holton at vanderbilt.edu  Wed Mar 17 21:09:28 2004
From: d.holton at vanderbilt.edu (Doug Holton)
Date: Wed Mar 17 21:09:54 2004
Subject: [Python-Dev] bundle pychecker with python [was "Re: A proposal has
	surfaced..."]
In-Reply-To: <E1B3mQy-0006ty-Hc@mail.python.org>
References: <E1B3mQy-0006ty-Hc@mail.python.org>
Message-ID: <405904D8.2000203@vanderbilt.edu>

> I'd rather make this PyChecker's job than redefine 'is'.
> 
> --Guido van Rossum (home page: http://www.python.org/~guido/)

This reminded me, I was wondering if pychecker could be included with 
the next python distribution.
I think it would be very helpful to beginners in particular, but also 
make it easier for everyone.

Pychecker seems to be very stable now.  I don't know if the people 
maintaining it would agree to having it bundled with python though.

In addition to merely adding the pychecker module, we might eventually:

A. Add a command line switch for the python executable that enables 
pychecker.  The next time someone complains about some error that the 
python compiler didn't catch, just tell them to run "python -strict" or 
something to that effect.

B. Add a preference option to IDLE perhaps that will tacitly enter 
"import pychecker.checker" before a new interactive prompt appears.


From greg at cosc.canterbury.ac.nz  Wed Mar 17 21:35:01 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 17 21:35:09 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <200403180122.i2I1M2P08024@guido.python.org>
Message-ID: <200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz>

Guido:

> It's hard not to wimp out, given that we can't change the existing
> library.

I'm not suggesting anything in the standard library should
be changed, but I can't see why we can't at least make some
recommendations about new additions to the library. The
fact that it's a bit of a mess doesn't seem like a reason
to keep on making it more messy.

> But for methodnaming there's much less a near-consensus in the
> standard library.

I just did an experiment which involved running the Python
script appended below over the Library Reference index.
The results were:

Total names: 1908
lower_names: 1684
Percent lower: 88.2599580713

An 88% majority seems like a fairly clear vote to me. :-)

---------------------------------------------------------------------
# Count function name styles from the Python Library Reference Index.
import re, sys
pat = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\(\)")
all = {}
lc = {}
for line in sys.stdin:
  for word in pat.findall(line):
    all[word] = 1
    if word == word.lower():
      lc[word] = 1
print "Total names:", len(all)
print "lower_names:", len(lc)
print "Percent lower:", 100.0 * len(lc) / len(all)
---------------------------------------------------------------------

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tim.one at comcast.net  Wed Mar 17 21:41:57 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 17 21:41:53 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMEJJAB.tim.one@comcast.net>

[Greg Ewing]
> ...
> I just did an experiment which involved running the Python
> script appended below over the Library Reference index.
> The results were:
>
> Total names: 1908
> lower_names: 1684
> Percent lower: 88.2599580713
>
> An 88% majority seems like a fairly clear vote to me. :-)

More importantly, lower_names are correct.  This gets confused in PythonLand
because some major contributors (like Zope Corp) have institutionalized
aWrongPolicy for naming methods.  If you don't believe me, ask Barry.  I
believe him on this issue because he just can't be wrong about *everything*
<wink>.


From raymond.hettinger at verizon.net  Wed Mar 17 22:04:05 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Wed Mar 17 22:06:11 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE014602B6@au3010avexu1.global.avaya.com>
Message-ID: <000001c40c95$ace70c20$4321a044@oemcomputer>

For those with an interest, here are some timing scorecards which track
the performance of dictionary iteration:


C:\python24\python timedictiter.py
0.638557468278 keys()
0.648748721903 values()
2.97803432843 items()
1.04057057611 list(d)
1.19934712281 tuple(d)
2.23167921018 for k in d.iterkeys(): pass
2.2001936003 for v in d.itervalues(): pass
4.07347675958 for k, v in d.iteritems(): pass

C:\pydev>\python23\python timedictiter.py
0.886520893746 keys()
0.861855713304 values()
3.44381233343 items()
6.86479910827 list(d)
2.48302854557 tuple(d)
2.85350994821 for k in d.iterkeys(): pass
2.8332120887 for v in d.itervalues(): pass
6.70084312509 for k, v in d.iteritems(): pass

C:\pydev>\python22\python timedictiter.py
0.81167636065 keys()
0.893352218441 values()
2.99887443638 items()
6.83444576677 list(d)
2.48634656967 tuple(d)
4.54763489163 for k in d.iterkeys(): pass
4.53761544779 for v in d.itervalues(): pass
7.77635645921 for k, v in d.iteritems(): pass


Raymond Hettinger

#------------------------------------------------------
P.S.  Here is the timing suite:

from timeit import Timer

setup = """
import random
n = 1000
d = {}
for i in xrange(n):
    d[random.random()] = i
keys = d.keys
values = d.values
items = d.items
"""

stmts = [
    'keys()',
    'values()',
    'items()',
    'list(d)',
    'tuple(d)',
    'for k in d.iterkeys(): pass',
    'for v in d.itervalues(): pass',
    'for k, v in d.iteritems(): pass',
]

for stmt in stmts:
    print min(Timer(stmt, setup).repeat(5, 10000)), stmt


From guido at python.org  Wed Mar 17 22:15:28 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 22:15:39 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: Your message of "Thu, 18 Mar 2004 15:35:01 +1300."
	<200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz> 
References: <200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403180315.i2I3FSS08314@guido.python.org>

> I just did an experiment which involved running the Python
> script appended below over the Library Reference index.
> The results were:
> 
> Total names: 1908
> lower_names: 1684
> Percent lower: 88.2599580713
> 
> An 88% majority seems like a fairly clear vote to me. :-)
> 
> ---------------------------------------------------------------------
> # Count function name styles from the Python Library Reference Index.
> import re, sys
> pat = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\(\)")
> all = {}
> lc = {}
> for line in sys.stdin:
>   for word in pat.findall(line):
>     all[word] = 1
>     if word == word.lower():
>       lc[word] = 1
> print "Total names:", len(all)
> print "lower_names:", len(lc)
> print "Percent lower:", 100.0 * len(lc) / len(all)
> ---------------------------------------------------------------------

Now try that again but only look for names following 'def'.

You've counted all language keywords, builtins, modules, etc.

We should be looking for method and function definitions only.
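
Something along these lines, run over the .py sources rather than the
index, would be closer (a rough, untested sketch; the "countdefs.py"
filename is just for illustration, e.g. "cat Lib/*.py | python
countdefs.py"):

# Count naming styles of the names that appear in 'def' statements.
import re, sys
pat = re.compile(r"\bdef\s+([A-Za-z_][A-Za-z0-9_]*)\s*\(")
all = {}
lc = {}
for line in sys.stdin:
  for name in pat.findall(line):
    all[name] = 1
    if name == name.lower():
      lc[name] = 1
print "Total def names:", len(all)
print "lower_names:", len(lc)
if all:
  print "Percent lower:", 100.0 * len(lc) / len(all)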

OTOH, since Tim has Spoken, maybe we should just adopt alllowercase()
as the preferred convention. :)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 17 22:17:30 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 17 22:17:52 2004
Subject: [Python-Dev] bundle pychecker with python [was "Re: A proposal
	has surfaced..."]
In-Reply-To: Your message of "Wed, 17 Mar 2004 20:09:28 CST."
	<405904D8.2000203@vanderbilt.edu> 
References: <E1B3mQy-0006ty-Hc@mail.python.org>  
	<405904D8.2000203@vanderbilt.edu> 
Message-ID: <200403180317.i2I3HUX08328@guido.python.org>

> This reminded me, I was wondering if pychecker could be included with 
> the next python distribution.
> I think it would be very helpful to beginners in particular, but also 
> make it easier for everyone.
> 
> Pychecker seems to be very stable now.  I don't know if the people 
> maintaining it would agree to having it bundled with python though.
> 
> In addition to merely adding the pychecker module, we might eventually:
> 
> A. Add a command line switch for the python executable that enables 
> pychecker.  The next time someone complains about some error that the 
> python compiler didn't catch, just tell them to run "python -strict" or 
> something to that effect.

Works for me.  Patch please?

> B. Add a preference option to IDLE perhaps that will tacitly enter 
> "import pychecker.checker" before a new interactive prompt appears.

I may be mistaken, but I thought that IDLE can already "check" modules
using PyChecker if it is installed via a menu item.  Otherwise it
would be a great feature.  I don't think it makes sense to do
automatically on all code typed; it's too noisy for that.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From bac at OCF.Berkeley.EDU  Wed Mar 17 22:40:17 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Wed Mar 17 22:40:43 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <000001c40c95$ace70c20$4321a044@oemcomputer>
References: <000001c40c95$ace70c20$4321a044@oemcomputer>
Message-ID: <40591A21.9040602@ocf.berkeley.edu>

Raymond Hettinger wrote:

> For those with an interest, here are some timing scorecards which track
> the performance of dictionary iteration:
> 
> 
> C:\python24\python timedictiter.py
> 0.638557468278 keys()
> 0.648748721903 values()
> 2.97803432843 items()
> 1.04057057611 list(d)
> 1.19934712281 tuple(d)
> 2.23167921018 for k in d.iterkeys(): pass
> 2.2001936003 for v in d.itervalues(): pass
> 4.07347675958 for k, v in d.iteritems(): pass
> 
> C:\pydev>\python23\python timedictiter.py
> 0.886520893746 keys()
> 0.861855713304 values()
> 3.44381233343 items()
> 6.86479910827 list(d)
> 2.48302854557 tuple(d)
> 2.85350994821 for k in d.iterkeys(): pass
> 2.8332120887 for v in d.itervalues(): pass
> 6.70084312509 for k, v in d.iteritems(): pass
> 
> C:\pydev>\python22\python timedictiter.py
> 0.81167636065 keys()
> 0.893352218441 values()
> 2.99887443638 items()
> 6.83444576677 list(d)
> 2.48634656967 tuple(d)
> 4.54763489163 for k in d.iterkeys(): pass
> 4.53761544779 for v in d.itervalues(): pass
> 7.77635645921 for k, v in d.iteritems(): pass
> 


Interesting how items() slowed down between 2.2 and 2.3 but is now a 
sliver faster than 2.2 was.

All very cool, Raymond.  Cleaner code and faster to boot!  Thanks for 
doing this.

-Brett

From neal at metaslash.com  Wed Mar 17 23:23:37 2004
From: neal at metaslash.com (Neal Norwitz)
Date: Wed Mar 17 23:23:43 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine "is"
In-Reply-To: <200403172345.i2HNjW807716@guido.python.org>
References: <006901c40c77$f11ce220$6402a8c0@arkdesktop>
	<200403172345.i2HNjW807716@guido.python.org>
Message-ID: <20040318042337.GP469@epoch.metaslash.com>

On Wed, Mar 17, 2004 at 03:45:32PM -0800, Guido van Rossum wrote:
> 
> It seems to me that 'is' should never be used for immutables except
> for singletons like None.  Perhaps PyChecker should warn about
> inappropriate use of 'is' to compare two immutable objects, unless one
> of them is None?

There currently is a check for using 'is' or 'is not' with any literal:

    def f(x):
      if x is 505: print "don't do that"

    def g(x):
      if x is 'xbc': print "don't do that"

    def h(x):
      if x is ('xbc',): print "don't do that"

    def i(x):
      if x is ['xbc']: print "don't do that"

    def none_test(x):
      if x is None: print "ok, fine, do that"

    def ok(x):
      a = ['xbc']
      if x is a: print "ok, fine, do that"

$ checker.py tt.py
tt.py:3: Using is 505, may not always work
tt.py:6: Using is xbc, may not always work
tt.py:9: Using is (Stack Item: (xbc, <type 'str'>, 1),), may not always work
tt.py:12: Using is [Stack Item: (xbc, <type 'str'>, 1)], may not always work

It's supposed to work with True/False, but there's a bug.  That, 
the print of StackItems, and many more. :-)

Neal

From jeremy at alum.mit.edu  Wed Mar 17 23:41:35 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed Mar 17 23:42:21 2004
Subject: [Python-Dev] bundle pychecker with python [was "Re: A proposal
	has surfaced..."]
In-Reply-To: <405904D8.2000203@vanderbilt.edu>
References: <E1B3mQy-0006ty-Hc@mail.python.org>
	<405904D8.2000203@vanderbilt.edu>
Message-ID: <1079584894.20481.0.camel@localhost.localdomain>

On Wed, 2004-03-17 at 21:09, Doug Holton wrote:
> > I'd rather make this PyChecker's job than redefine 'is'.
> > 
> > --Guido van Rossum (home page: http://www.python.org/~guido/)
> 
> This reminded me, I was wondering if pychecker could be included with 
> the next python distribution.
> I think it would be very helpful to beginners in particular, but also 
> make it easier for everyone.

I think we'd need to settle on a small subset of pychecker warnings that
everyone could agree on.  I find pychecker useful for finding bugs in
Zope, but a great many of its warnings aren't helpful.

Jeremy



From greg at cosc.canterbury.ac.nz  Thu Mar 18 00:20:03 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 18 00:20:43 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <200403180315.i2I3FSS08314@guido.python.org>
Message-ID: <200403180520.i2I5K3JW003431@cosc353.cosc.canterbury.ac.nz>

Guido:

> Now try that again but only look for names following 'def'.
> 
> You've counted all language keywords, builtins, modules, etc.

Nope, I only counted the names which appear in the index immediately
followed by (). That ought to pick up only functions and
methods. It'll include built-in functions and methods, but that's
okay, because I think they should be included.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From perky at i18n.org  Thu Mar 18 01:21:45 2004
From: perky at i18n.org (Hye-Shik Chang)
Date: Thu Mar 18 01:21:53 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <000001c40c95$ace70c20$4321a044@oemcomputer>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE014602B6@au3010avexu1.global.avaya.com>
	<000001c40c95$ace70c20$4321a044@oemcomputer>
Message-ID: <20040318062145.GA78290@i18n.org>

On Wed, Mar 17, 2004 at 10:04:05PM -0500, Raymond Hettinger wrote:
> For those with an interest, here are some timing scorecards which track
> the performance of dictionary iteration:
> 
> 
[snip]

Yay!  I'm a big fan of your series of optimizations.
Great thanks! :)

By the way, I think there are many junior Python hacker wannabes like
me who would like to take on objects/compiler/VM optimizations.  Can
you share your roadmap, work queue/plans, or list of known bottlenecks
for them?

Cheers,
Hye-Shik

From pnorvig at google.com  Thu Mar 18 04:26:09 2004
From: pnorvig at google.com (Peter Norvig)
Date: Thu Mar 18 04:26:18 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to redefine
	"is"
Message-ID: <BB8A764C.795333E3@mail.google.com>

Let me point out that in Common Lisp, there are five equality predicates:

  eq     like Python's 'is', only true for identical objects
  eql    also true for numbers with same value
  equal  like Python's '=='
  equalp also true for strings with different case
  =      only works on numbers, true if they are eql after
         conversion to the same type

I would say that Python is served well by the two equality predicates
it has, that it is impossible to please everyone, and that users
should get used to writing the predicate they want if it is not one of
the builtins.

See also Kent Pitman's "EQUAL Rights -- and wrongs -- in Lisp",
http://www.nhplace.com/kent/PS/EQUAL.html

-Peter Norvig

From mwh at python.net  Thu Mar 18 05:46:57 2004
From: mwh at python.net (Michael Hudson)
Date: Thu Mar 18 05:47:12 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <007001c40c82$67bc6c20$6402a8c0@arkdesktop> (Andrew Koenig's
	message of "Wed, 17 Mar 2004 19:46:09 -0500")
References: <007001c40c82$67bc6c20$6402a8c0@arkdesktop>
Message-ID: <2my8pyo2ji.fsf@starship.python.net>

"Andrew Koenig" <ark-mlist@att.net> writes:

>> I'm curious what the reason is to want to redefine 'is' for
>> immutables.  If I understand Andrew, it's to fix broken programs after
>> the fact (almost as if by time machine :-).
>> 
>> It seems to me that 'is' should never be used for immutables except
>> for singletons like None.  Perhaps PyChecker should warn about
>> inappropriate use of 'is' to compare two immutable objects, unless one
>> of them is None?  This would be a cheaper solution than changing the
>> implementation.
>
> It wasn't my proposal, so I don't know the reason.

AFAIK this came up in a thread about reload() where the complaint was
that reload() didn't do what the OP suggested.

The consequent 'is' proposal isn't so much the tail wagging the dog as
the handbrake wagging the canoe.

I'd much rather people just knew how Python worked (and also agree
with Peter Norvig: Common Lisp has *five* equality predicates, and
still you end up writing your own every now and again).

Cheers,
mwh

-- 
  Screaming 14-year-old boys attempting to prove to each other that
  they are more 3133t than j00.
         -- Reason #8 for quitting slashdot today, from
            http://www.cs.washington.edu/homes/klee/misc/slashdot.html

From GMcCaughan at synaptics-uk.com  Thu Mar 18 08:01:06 2004
From: GMcCaughan at synaptics-uk.com (Gareth McCaughan)
Date: Thu Mar 18 08:01:22 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to r
	edefine "is"
Message-ID: <81D9DC7CCBF5D311AEC000508BC20765389FC3@uk_exchange.synaptics-uk.com>

Peter Norvig wrote:

> Let me point out that in Common Lisp, there are five equality 
> predicates:
> 
>   eq     like Python's 'is', only true for identical objects
>   eql    also true for numbers with same value

Numbers and characters; characters as well as numbers
may behave counterintuitively w.r.t. EQ.

>   equal  like Python's '=='
>   equalp also true for strings with different case
>   =      only works on numbers, true if they are eql after
>          conversion to the same type

Ha. Don't forget

    char=        only works on characters, equivalent to EQL then
    char-equal   only works on characters, ignores case
    string=      only works on strings, equivalent to EQUAL then
    string-equal only works on strings, ignores case

> I would say that Python is served well by the two equality predicates
> it has, that it is impossible to please everyone, and that users
> should get used to writing the predicate they want if it is not one of
> the builtins.

Agreed. Although the fact that EQL is much more useful in practice
than EQ suggests that there's something to be said for making Python's
"is" more EQL-like. (Not enough, in my opinion, to make it worth doing.)

-- 
g

From mcherm at mcherm.com  Thu Mar 18 08:28:04 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Thu Mar 18 08:28:11 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <1079616484.4059a3e40eaaf@mcherm.com>

Greg writes:
> I just did an experiment which involved running the Python
> script appended below over the Library Reference index.
> The results were:
> 
> Total names: 1908
> lower_names: 1684
> Percent lower: 88.2599580713

Unfortunately, there is a flaw in your methodology, and I
can't think of a reasonable way to correct for it. Consider
this code:

  class AllMixedCase:
      def thisIsMixedCase(): pass
      def thisIsAlsoMixed(): pass
      def mixed(): pass

  class AllLowerCase:
      def thisisalllowercase(): pass
      def thisisalsolowercase(): pass
      def lower(): pass

I'm guessing that your code counts 4 lowercase and 2 mixed
case. The problem is that a one-word mixedCase looks just
like a one-word lowercase. And when it's possible, using
a simple one-word name is a GOOD thing... so hopefully our
standard library is chock full of one-word functions.
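
Restricting the count to the unambiguous multi-word names at least
sidesteps the problem -- a rough variation on Greg's script (one-word
names are simply skipped, and CapWords names are ignored):

import re, sys
pat = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\(\)")
under = {}
mixed = {}
for line in sys.stdin:
  for word in pat.findall(line):
    name = word[:-2]               # strip the trailing "()"
    if "_" in name.strip("_"):     # lowercase_with_underscores
      under[name] = 1
    elif name != name.lower() and name[:1] == name[:1].lower():
      mixed[name] = 1              # mixedCase (first letter lower)
print "underscore names:", len(under)
print "mixedCase names:", len(mixed)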

I could also point out that in those cases where it ISN'T
possible to use one word, the alllowercaseformat is a real
pain to read, especially for non-native english speakers.
But that would start a coding-style war, and a major purpose
of having standards is to AVOID coding-style wars, so I'd
better not bring it up. ;-)

-- Michael Chermside


From fredrik at pythonware.com  Thu Mar 18 08:33:26 2004
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu Mar 18 08:33:34 2004
Subject: [Python-Dev] Re: A proposal has surfaced on comp.lang.python
	toredefine "is"
References: <005401c40c75$f5ff9050$6402a8c0@arkdesktop>
Message-ID: <c3c8f9$ab3$1@sea.gmane.org>

Andrew Koenig wrote:

> So my question to the more experienced members of this group:  Is this idea
> worth considering?  If not, why not?

how do you plan to implement isimmutable(x) ?

</F>




From mcherm at mcherm.com  Thu Mar 18 08:43:01 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Thu Mar 18 08:43:05 2004
Subject: [Python-Dev] bundle pychecker with python [was "Re: A proposal
	hassurfaced..."]
Message-ID: <1079617381.4059a765eb721@mcherm.com>

> I was wondering if pychecker could be included with 
> the next python distribution.

+1

-- Michael Chermside

(There was nothing more to say!)


From gstein at lyra.org  Thu Mar 18 09:14:13 2004
From: gstein at lyra.org (gstein@lyra.org)
Date: Thu Mar 18 09:13:29 2004
Subject: [Python-Dev] received your email
Message-ID: <200403181414.i2IEEDZ4023391@nebula.lyra.org>

Hi,

  [ Re: warning ]

I have received your email, but it may take a while to respond. I'm really
sorry to have to hook up this auto-responder, as it is so impersonal.
However, I get a lot of email every day and find it very difficult to keep
up with it. Please be patient while I try to get to your message.

Please feel free to resend your message if you think I've missed it.

I'll always respond to personal email first. If your email is regarding some
of the software that I work on (if you have questions, comments,
suggestions, etc), then please resend it to the appropriate mailing list:

    mod_dav      <mailto:dav-dev@lyra.org>
    WebDAV       <mailto:w3c-dist-auth@w3.org>
    ViewCVS      <mailto:viewcvs@lyra.org>
    Subversion   <mailto:dev@subversion.tigris.org>
    edna         <mailto:edna@lyra.org>

Thank you!

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

From tim.one at comcast.net  Thu Mar 18 10:11:27 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 10:11:32 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <1079616484.4059a3e40eaaf@mcherm.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEONJJAB.tim.one@comcast.net>

[Michael Chermside]
> ...
> I could also point out that in those cases where it ISN'T
> possible to use one word, the alllowercaseformat is a real
> pain to read, especially for non-native english speakers.

But all_lower_case_format isn't, and some claim it's easier for non-native
readers to read than stuffRammedTogetherLikeThis.  I believe it, in part
because it's easier for this native reader to read:  WordRamming doesn't
exist in real-life English.  In SmallDoses such affectations can be helpful
to draw attention, and that's fine for class names, or Wiki page titles.
ButOveruseOf anyAffectation isWearying.



From shane at zope.com  Thu Mar 18 10:18:19 2004
From: shane at zope.com (Shane Hathaway)
Date: Thu Mar 18 10:18:33 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEMEJJAB.tim.one@comcast.net>
References: <200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCOEMEJJAB.tim.one@comcast.net>
Message-ID: <4059BDBB.3070206@zope.com>

Tim Peters wrote:
> More importantly, lower_names are correct.  This gets confused in PythonLand
> because some major contributors (like Zope Corp) have institutionalized
> aWrongPolicy for naming methods.  If you don't believe me, ask Barry.  I
> believe him on this issue because he just can't be wrong about *everything*
> <wink>.

Since most of my Python work has been on Zope, I didn't notice Python 
has this naming convention until this thread.  I've read PEP 8 several 
times, but the "method names" section only says to use lowercase.  It 
doesn't say what to do if the method name requires multiple words.  This 
is left to interpretation by the reader, and I've always interpreted it 
as meaning mixedCase.  People who come to Python through Zope tend to 
guess that Python uses the same naming conventions as Java.

I suggest the "method names" section needs to be more specific to 
correct this misunderstanding.  Perhaps: "Use lowercase_with_underscores 
for all method names.  Single-word method names are preferred."  It 
might also say that not all of the Python library follows this 
convention, but all new modules will.

I think I'll convert Ape (http://hathaway.freezope.org/Software/Ape) to 
this convention.

Shane

From jim.jewett at eds.com  Thu Mar 18 10:23:07 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 18 10:24:07 2004
Subject: [Python-Dev] redefining is
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>

Andrew Koenig:

> The observation is that if an object is immutable, there's 
> no legitimate reason to know whether it's distinct from
> another object with the same type and value.

There is an idiom (I've seen it more in Lisp than in python) 
of creating a fresh object to act as a sentinel.

"done with this data" might well appear in the input, but
the specific newly-created-string (which happens to look
just like that) can't appear.

The sentinel is usually a mutable object, but it is sometimes
a string indicating the object's meaning.  ("fail")  It is
surprising that some objects (like small integers) cannot be
used, but I don't think the answer is to make the entire
idiom unusable.

You could argue that they ought to be using (id(x) == id(y))
to emphasize that == isn't enough, but ... (x is y) seems
just as clear, and the reference manual (5.9) says that is 
tests for object identity. 

-jJ

From barry at python.org  Thu Mar 18 10:48:00 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 10:48:09 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <4059BDBB.3070206@zope.com>
References: <200403180235.i2I2Z1Id003184@cosc353.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCOEMEJJAB.tim.one@comcast.net>
	<4059BDBB.3070206@zope.com>
Message-ID: <1079624880.26767.64.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 10:18, Shane Hathaway wrote:
> Tim Peters wrote:
> > More importantly, lower_names are correct.  This gets confused in PythonLand
> > because some major contributors (like Zope Corp) have institutionalized
> > aWrongPolicy for naming methods.  If you don't believe me, ask Barry.  I
> > believe him on this issue because he just can't be wrong about *everything*
> > <wink>.
> 
> Since most of my Python work has been on Zope, I didn't notice Python 
> has this naming convention until this thread.  I've read PEP 8 several 
> times, but the "method names" section only says to use lowercase.  It 
> doesn't say what to do if the method name requires multiple words.  This 
> is left to interpretation by the reader, and I've always interpreted it 
> as meaning mixedCase.  People who come to Python through Zope tend to 
> guess that Python uses the same naming conventions as Java.
> 
> I suggest the "method names" section needs to be more specific to 
> correct this misunderstanding.  Perhaps: "Use lowercase_with_underscores 
> for all method names.  Single-word method names are preferred."  It 
> might also say that not all of the Python library follows this 
> convention, but all new modules will.

I've gone back and forth on this issue several times, but the rationale
that Tim refers to has to do with conversations we've had on various
lists, and the decision we made with the email package.  At the time,
lowercase_with_underscores was preferred by non-native English speakers
over mixedCaseWords.  It seems that the latter can give non-native
English speakers more difficulty.

Sometimes though, I still like mixedCase. ;)

+1 on updating PEP 8 to encourage <wink> lowercase_with_underscores.

-Barry



From barry at python.org  Thu Mar 18 10:50:11 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 10:50:17 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
Message-ID: <1079625010.26767.67.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 10:23, Jewett, Jim J wrote:

> There is an idiom (I've seen it more in Lisp than in python) 
> of creating a fresh object to act as a sentinel.

A very common use case in Python is where None is a valid value in a
dictionary:

missing = object()

if d.get('somekey', missing) is missing:
   # it ain't there

It even reads well!

-Barry



From jim.jewett at eds.com  Thu Mar 18 10:53:07 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 18 10:54:06 2004
Subject: [Python-Dev] funcdef grammar production
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C9@USAHM010.amer.corp.eds.com>

Greg Ewing:
>Jewett, Jim J:

>> one possible path is

>> 	(defparameter ",")* (defparameter [","])

> Even that seems unnecessarily obtuse.

I agree, but that is the current definition.  (Except that 
I explicitly chose the last of several alternatives, 
instead of leaving in the "choice".)

> I would write it as

>    defparameter ("," defparameter)* [","]

That would be incorrect.

Even the first defparameter is optional if there is an 
*args or a **kwargs.  

There does need to be something in the list.  (An empty
parameter list is handled by making it an optional part of 
funcdef.)  I assume this restriction is also due to the 
unambiguous-at-first-token rule.

That optional "," is required if you have both a defparameter
and an *args or **kwargs.  It is forbidden if there is
no defparameter.

>> Under my formulation, it would be ambiguous whether 
>> "a" was a defparameter or a (regular) parameter, which
>> are different productions.  Looking ahead for the "="
>> violates a no-lookahead rule.

> If the grammar is only for documentation, none of that matters

There is value in keeping the grammar documentation as close 
as possible to the actual coded grammar.  If a rule expressible 
in the grammar is actually implemented outside it, then either
the documented grammar and the real grammar diverge, or some 
rules must be listed only in the text.  Pick your poison.

> By the way, it occurs to me that BNF on its own doesn't seem
> to be up to the task of specifying this sort of thing clearly
> to a human reader.

Agreed, but text often fails.  The advantage of BNF is that it
can be checked to ensure that the writer and the reader agree.

> If it weren't for the commas, 

Yes.  Particularly evil is the rule that allows a trailing 
comma sometimes but not always.  That said, I would rather
have a confusing grammar than have to add trailing commas
for standardization.

The sublist possibilities don't make things any easier,
particularly since they aren't full parameter lists.  
(You can't use defaults, *, or ** within a sublist, though 
you can use a default for the entire list.)  I can only 
assume that this was an attempt to meet lispers halfway;
there are people who consider destructuring-bind to be a 
valuable feature.  

-jJ

From jacobs at theopalgroup.com  Thu Mar 18 11:06:32 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Thu Mar 18 11:06:36 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
Message-ID: <4059C908.7090709@theopalgroup.com>

Hi all,

I received a (false) report of a potential memory leak in one of the
modules that I've released.  In the process of tracking down what was
going on, I stumbled upon some _very_ odd behavior in both Python 2.2.3
and 2.3.3.  What I am seeing _may_ be a bug, but it is strange enough
that I thought I'd share it here.

This module has a heuristic to detect memory leaks relating to new-style
classes and metaclasses that existed in previous Python versions (which
Tim and I subsequently tracked down and fixed).  It works by counting
how many new objects are left behind after running N iterations of a
test suite:

  def leak_test(N, test_func):
    gc.collect()
    orig_objects = len(gc.get_objects())
    for i in xrange(N):
      test_func()
    gc.collect()
    new_objects = len(gc.get_objects()) - orig_objects
    if new_objects > 0:
      print 'Leak detected (N=%d, %d new objects)' % (N,new_objects)

This is a crude heuristic, at best, but was sufficient to detect when my
module was running against older Python versions that did not include
the necessary fixes.  I left the check in, just in case anything new
cropped up.

Here is what is confusing me.  Running this code:

  def new_type():
    type('foo', (object,), {})
    #gc.collect()

  leak_test(50, new_type)

produces: Leak detected (N=50, 50 new objects)

These new objects are all dead weak references, so I am left wondering
why they seem to have such odd interactions with garbage collection and
manual type construction.

After uncommenting the gc.collect call in new_type, I get: Leak detected
(N=50, 1 new objects), so there looks to be something funny going on
with the garbage collector.

In contrast,

  class Foo(object): pass
  leak_test(50, Foo)

produces no output.  I.e., the strange dead weak references only seem to 
occur when
manually constructing types via the type constructor.

Varying the value of N also results in strange results:

  for i in range(6):
    leak_test(10**i, new_type)

Produces:
  Leak detected (N=1, 1 new objects)
  Leak detected (N=10, 9 new objects)
  Leak detected (N=100, 90 new objects)
  Leak detected (N=1000, 78 new objects)
  Leak detected (N=10000, 9 new objects)
  Leak detected (N=100000, 8 new objects)

This is good news, in that it does not represent an unbounded memory
leak; however, the behavior is still very puzzling and may be indicative
of a bug, or at least an implementation detail that bears some
discussion.

Thanks,
-Kevin


From shane at zope.com  Thu Mar 18 11:06:59 2004
From: shane at zope.com (Shane Hathaway)
Date: Thu Mar 18 11:07:37 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <1079624880.26767.64.camel@anthem.wooz.org>
References: <4059BDBB.3070206@zope.com>
	<1079624880.26767.64.camel@anthem.wooz.org>
Message-ID: <4059C923.60904@zope.com>

Barry Warsaw wrote:
> I've gone back and forth on this issue several times, but the rationale
> that Tim refers to has to do with conversations we've had on various
> lists, and the decision we made with the email package.  At the time,
> lowercase_with_underscores was preferred by non-native English speakers
> over mixedCaseWords.  It seems that the latter can give non-native
> English speakers more difficulty.

I can understand that.  alllowercaseismuchworsethough. ;-)

> Sometimes though, I still like mixedCase. ;)

I do too, since IMHO it's easier to type, but I think it's more 
important to agree on a standard.  (Now, it just occurred to me that if 
I map shift+space to the underscore character, typing 
lowercase_with_underscores is easier.  You can do this with khotkeys or 
xmodmap.  Now I wonder if I can train myself to use it.)

Shane

From fumanchu at amor.org  Thu Mar 18 11:27:40 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Thu Mar 18 11:29:24 2004
Subject: [Python-Dev] redefining is
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F07@opus.amorhq.net>

Barry Warsaw wrote:
> On Thu, 2004-03-18 at 10:23, Jewett, Jim J wrote:
> 
> > There is an idiom (I've seen it more in Lisp than in python) 
> > of creating a fresh object to act as a sentinel.
> 
> A very common use case in Python is where None is a valid value in a
> dictionary:
> 
> missing = object()
> 
> if d.get('somekey', missing) is missing:
>    # it ain't there
> 
> It even reads well!

/Fu bonks himself in the forehead

That's *so* much nicer than the contortions I have gone through from
time to time, not just for dicts, but arg lists. I was writing ugly crap
like:

# Something nobody would ever create
LessThanNothing = (0, -38, (None, '7 1/2'))
def func(arg1, arg2, arg3=LessThanNothing):
    if arg3 == LessThanNothing:
        arg3 = None # or whatever the *real* default should be...
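
Whereas with a fresh object() sentinel it collapses to something like
this (just a sketch; real_default() is a hypothetical stand-in for
whatever the actual default computation is):

missing = object()   # unique sentinel; no other object can be identical to it
def func(arg1, arg2, arg3=missing):
    if arg3 is missing:
        arg3 = real_default()   # hypothetical helper for the *real* default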

Bah. Thanks, Barry. :)


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From tjreedy at udel.edu  Thu Mar 18 11:33:02 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu Mar 18 11:33:20 2004
Subject: [Python-Dev] Re: Changes to PEP 327: Decimal data type
References: <4059BDBB.3070206@zope.com><1079624880.26767.64.camel@anthem.wooz.org>
	<4059C923.60904@zope.com>
Message-ID: <c3civq$b2p$1@sea.gmane.org>


"Shane Hathaway" <shane@zope.com> wrote in message
news:4059C923.60904@zope.com...
> I can understand that.  alllowercaseismuchworsethough. ;-)

This reminds me of written Sanskrit, which has no caps and in which words
in a phrase or sentence are run together as much as possible (with final
and initial symbols also sometimes combined as one).  This is partly due to
the syllabic nature of the alphabet and partly because the writing was
invented to phonetically transcribe Vedic chants and song stories.

> I do too, since IMHO it's easier to type, but I think it's more
> important to agree on a standard.  (Now, it just occurred to me that if
> I map shift+space to the underscore character, typing
> lowercase_with_underscores is easier.  You can do this with khotkeys or
> xmodmap.  Now I wonder if I can train myself to use it.)

This might be a nice addition to Idle and PyWin editors, if possible.

Terry J. Reedy




From lomax at pumpichank.com  Thu Mar 18 11:39:36 2004
From: lomax at pumpichank.com (Frank "Synesthetic Polydactalist" Lomax)
Date: Thu Mar 18 11:39:45 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
References: <4059BDBB.3070206@zope.com>
	<1079624880.26767.64.camel@anthem.wooz.org>
	<4059C923.60904@zope.com>
Message-ID: <16473.53448.570559.873077@anthem.wooz.org>



    > Barry Warsaw wrote:
    >> I've gone back and forth on this issue several times, but the rationale
    >> that Tim refers to has to do with conversations we've had on various
    >> lists, and the decision we made with the email package.  At the time,
    >> lowercase_with_underscores was preferred by non-native English speakers
    >> over mixedCaseWords.  It seems that the latter can give non-native
    >> English speakers more difficulty.

    > I can understand that.  alllowercaseismuchworsethough. ;-)

WECOULDPROBABLYMAKEITEVENWORSEIFWETRIED

    >> Sometimes though, I still like mixedCase. ;)

    > I do too, since IMHO it's easier to type, but I think it's more
    > important to agree on a standard.

There is something about the font and color scheme I have assigned to the
underscore character.  While it is marginally more difficult to type than an
uppercase letter (see below), the resulting pixel pattern massages certain
pleasure nodules in my visual cortex, often resulting in the most engaging
smells.

You are right, though, standards are important.  I will be submitting an IETF
draft on the subject in about three weeks, after I have sufficiently recovered
from the Python conference.

    > (Now, it just occurred to me that
    > if I map shift+space to the underscore character, typing
    > lowercase_with_underscores is easier.  You can do this with khotkeys
    > or xmodmap.  Now I wonder if I can train myself to use it.)

I personally map underscore to alt-meta-shift-ctrl-F11, but then I have to use
all three pinkies and my nose.  My desktop is widely regarded as the most
undriveable desktop in the world, a fact that I am quite proud of, although my
brother, Sabo "Four Thumbs" Lomax would disagree.

i-don't-even-want-to-think-about-what-appendages-tim-uses-ly y'rs, -frank

From ark at acm.org  Thu Mar 18 11:40:18 2004
From: ark at acm.org (Andrew Koenig)
Date: Thu Mar 18 11:40:17 2004
Subject: [Python-Dev] A proposal has surfaced on
	comp.lang.pythontoredefine "is"
In-Reply-To: <16472.65100.608412.688015@montanaro.dyndns.org>
Message-ID: <002101c40d07$b33cb990$6402a8c0@arkdesktop>

> Is "mutually substitutable" a fancy way of saying "equal"?  In other
> words,
> why would "x is y" be preferred over "x == y"?

"Mutually substitutable" is a fancy way of saying "equal" for immutable
objects only.  For mutable objects, or for immutable objects with mutable
components, the situation is more complicated.  For example:

	a = []
	b = []
	x = (a, a)
	y = (b, b)

Here, x and y are equal but not mutually substitutable, because if I execute
x[0].append(42), it changes x but not y.  On the other hand, if a and b were
() rather than [], then x and y would be mutually substitutable because
there would be no way to distinguish x from y except by their id.
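
Or, interactively (same objects as above):

	>>> a = []
	>>> b = []
	>>> x = (a, a)
	>>> y = (b, b)
	>>> x == y
	True
	>>> x[0].append(42)
	>>> x == y      # x changed but y didn't, so they weren't substitutable
	False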



From casey at zope.com  Thu Mar 18 11:23:56 2004
From: casey at zope.com (Casey Duncan)
Date: Thu Mar 18 11:40:24 2004
Subject: [Python-Dev] indiscernible objects (was: A proposal has surfaced
 oncomp.lang.pythontoredefine   "is")
References: <007401c40c85$7e9b30e0$6402a8c0@arkdesktop>
	<LNBBLJKPBEHFEDALKOLCEELPJJAB.tim.one@comcast.net>
Message-ID: <20040318112356.1a0e1c8b.casey@zope.com>

On Wed, 17 Mar 2004 21:05:19 -0500
"Tim Peters" <tim.one@comcast.net> wrote:

> [Andrew Koenig]
> > ...
> > Indeed, it would be a change.  And I can go along with an argument
> > that an incompatible change of that magnitude should be rejected for
> > that reason alone.  But why would the change cause a problem?
> > Consider:
> >
> > 	a = []
> > 	b = []
> > 	x = (a, b)
> > 	y = (a, b)
> >
> > Can you think of a program that can make productive use of the value
> > of "x is y"?  It seems to me that x and y are mutually
> > substitutable.
> 
> It's the purpose of "is" to give Python code a handle on the actual
> object graph an implementation creates.  This is important in "system
> code" that needs to manipulate, analyze, or clone parts of the system
> object graph *for its own sake*.  For example, if your x and y above
> are both also bound to attributes of a class instance, the actual
> object graph is plain different depending on whether instance.x is
> instance.y.  Application-level code is free to rely on that
> distinction or not, as it likes; if it does rely on it, it's also free
> to arrange to make any conclusion it likes a consequence of"is" or "is
> not".

One interesting case I ran into where I had some indecision about
whether to use 'is' or '==' was implementing a generator function which
could accept an arbitrary sentinel object which when encountered would
yield an intermediate result (it was an implementation to find all paths
in a graph that lead to a specific end node).

Since the "target" could be an arbitrary object, it seemed logical to
use 'is', to identify the terminal node, except that in many cases the
node objects would be integers (or strings). So, I decided to use '=='
instead. This gave me a slightly uneasy feeling though because the
semantic meaning of '==' is only advisory. In fact any aplication object
can claim to be equal to any other object. IOW the system code needs to
trust the application code.

What I really wanted was a system-level assertion that "objectA is
indiscernible from objectB" (the same idea as mutually substitutable, I
think). In any case I wanted something more discerning than equals but
not as particular as 'is'.

Specifically, objects are indiscernible if they are the same type and
have the same state. Unlike 'is', they *do not* need to be the same
physical object. In the case of sequences they would need to be the same
type and have indiscernible members.

Another obvious invariant is that all objects are indiscernible from
themselves. Which means that if 'a is b' is true then a is also
indiscernible from b.

Yes, I could write my own function to implement this. But I doubt I
would get it right. A naive and general function to implement this might
serialize the objects to a string and compare the strings. If objects
serialize to the same representation they are indiscernible. Obviously
there are shortcuts one can take to optimize this for simple types
(simply compare type and binary value in memory directly for instance).
The first thing this function would probably do is an identity check
anyway since that is cheap and identical objects are always
indiscernible.
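
For what it's worth, the naive version I have in mind would look roughly
like this (a sketch only; it leans on pickle, so it inherits pickle's
limitations and is at best an approximation of the idea):

    import pickle

    def indiscernible(a, b):
        # Identical objects are always indiscernible from themselves.
        if a is b:
            return True
        # Objects of different types are never indiscernible.
        if type(a) is not type(b):
            return False
        # Crude state check: the same serialized form implies the same state.
        try:
            return pickle.dumps(a) == pickle.dumps(b)
        except (pickle.PicklingError, TypeError):
            return False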

I could certainly envision an operator for this being useful, but even
just a function in the std lib would suffice (or a new builtin
function). I know I would find it very useful if it existed.

-Casey


From aahz at pythoncraft.com  Thu Mar 18 11:45:38 2004
From: aahz at pythoncraft.com (Aahz)
Date: Thu Mar 18 11:46:00 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079625010.26767.67.camel@anthem.wooz.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
	<1079625010.26767.67.camel@anthem.wooz.org>
Message-ID: <20040318164538.GA17197@panix.com>

On Thu, Mar 18, 2004, Barry Warsaw wrote:
> On Thu, 2004-03-18 at 10:23, Jewett, Jim J wrote:
>> 
>> There is an idiom (I've seen it more in Lisp than in python) 
>> of creating a fresh object to act as a sentinel.
> 
> A very common use case in Python is where None is a valid value in a
> dictionary:
> 
> missing = object()
> if d.get('somekey', missing) is missing:
>    # it ain't there
> 
> It even reads well!

Ugh.  While I agree that the idiom has its place, this ain't one of
them; you should be using ``in`` (or ``has_key()``).  The standard idiom
is even more readable, and there should be only one way to do it.  Maybe
you meant something more like

    if d['somekey'] is missing:
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From barry at python.org  Thu Mar 18 11:48:29 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 11:48:36 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040318164538.GA17197@panix.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
	<1079625010.26767.67.camel@anthem.wooz.org>
	<20040318164538.GA17197@panix.com>
Message-ID: <1079628508.3361.1.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 11:45, Aahz wrote:
> On Thu, Mar 18, 2004, Barry Warsaw wrote:
> > On Thu, 2004-03-18 at 10:23, Jewett, Jim J wrote:
> >> 
> >> There is an idiom (I've seen it more in Lisp than in python) 
> >> of creating a fresh object to act as a sentinel.
> > 
> > A very common use case in Python is where None is a valid value in a
> > dictionary:
> > 
> > missing = object()
> > if d.get('somekey', missing) is missing:
> >    # it ain't there
> > 
> > It even reads well!
> 
> Ugh.  While I agree that the idiom has its place, this ain't one of
> them; you should be using ``in`` (or ``has_key()``).  The standard idiom
> is even more readable, and there should be only one way to do it.  Maybe
> you meant something more like
> 
>     if d['somekey'] is missing:

Let me rephrase that:

missing = object()
value = d.get('somekey', missing)
if value is missing:
   # it ain't there
else:
   return value

-Barry



From guido at python.org  Thu Mar 18 12:08:12 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 12:08:21 2004
Subject: [Python-Dev] redefining is
In-Reply-To: Your message of "Thu, 18 Mar 2004 10:23:07 EST."
	<B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
Message-ID: <200403181708.i2IH8Cj10283@guido.python.org>

> There is an idiom (I've seen it more in Lisp than in python) 
> of creating a fresh object to act as a sentinel.
> 
> "done with this data" might well appear in the input, but
> the specific newly-created-string (which happens to look
> just like that) can't appear.
> 
> The sentinel is usually a mutable object, but it is sometimes
> a string indicating the object's meaning.  ("fail")  It is
> surprising that some objects (like small integers) cannot be
> used, but I don't think the answer is to make the entire
> idiom unusable.

Sorry, if you're using *any* immutable value there and expecting it to
be a unique object, you're cruisin' for a bruisin', so to speak.  The
language spec explicitly *allows* but does not *require* the
implementation to cache and reuse immutable values.
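
For example (CPython behavior; exactly which values get cached is an
implementation detail and can change between releases):

>>> a = 5
>>> b = 5
>>> a is b          # True only because CPython happens to cache small ints
True
>>> a = 10 ** 10
>>> b = 10 ** 10
>>> a is b          # two freshly created longs -- distinct objects
False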

> You could argue that they ought to be using (id(x) == id(y))
> to emphasize that == isn't enough, but ... (x is y) seems
> just as clear, and the reference manual (5.9) says that is 
> tests for object identity. 

Please, just use None, [] or object() as a sentinel if you're going to
compare using 'is'.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From edloper at gradient.cis.upenn.edu  Thu Mar 18 12:11:48 2004
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Thu Mar 18 12:10:28 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <E1B3p52-0006Vy-Rl@mail.python.org>
References: <E1B3p52-0006Vy-Rl@mail.python.org>
Message-ID: <4059D854.6050908@gradient.cis.upenn.edu>

> I just did an experiment which involved running the Python
> script appended below over the Library Reference index.
> The results were:  [...]

I ran a similar experiment, but counted the number of *modules* that 
define each type of function (since presumably most modules are 
internally consistent).  The numbers are:

Total modules defining functions: 985
under_score+lowercase : 44%
mixedCase+lowercase   : 22%
lowercase only        : 16%
InitialCaps           : 10%
mixedCase only        : 2%
under_score only      : 2%

So underscore is most common; but mixedCase has a definite presence.

-Edward

p.s., I'm definitely +1 on making a stronger statement in the style 
guide.  Consistency is good.

=========================================================================
# Count function name styles from the Python Library Reference Index.
import re, sys

InitialCaps  = re.compile(r"def\s*_*[A-Z0-9][A-Za-z0-9]*_*\s*\(")
mixedCase    = re.compile(r"def\s*_*[a-z0-9]+[A-Z][A-Za-z0-9]+_*\s*\(")
lowercase    = re.compile(r"def\s*_*[a-z0-9]+_*\s*\(")
under_score  = re.compile(r"def\s*_*[A-Za-z0-9]+_[A-Za-z0-9_]+_*\s*\(")
anydef       = re.compile(r"def\s+.*")

count = {'InitialCaps':0, 'mixedCase only':0, 'lowercase only':0,
          'under_score only':0, 'mixedCase+lowercase':0, 'anydef':0,
          'under_score+lowercase':0}

for file in sys.argv[1:]:
     s = open(file).read()
     if InitialCaps.search(s):
         count['InitialCaps'] += 1
     elif mixedCase.search(s) and lowercase.search(s):
         count['mixedCase+lowercase'] += 1
     elif under_score.search(s) and lowercase.search(s):
         count['under_score+lowercase'] += 1
     elif mixedCase.search(s):
         count['mixedCase only'] += 1
     elif under_score.search(s):
         count['under_score only'] += 1
     elif lowercase.search(s):
         count['lowercase only'] += 1
     else:
         if anydef.search(s):
             print 'huh', anydef.search(s).group()
     if anydef.search(s): count['anydef'] += 1
total = count['anydef']
print 'Total files defining functions:', total
items = count.items()
items.sort(lambda a,b:cmp(b[1],a[1]))
for (key, val) in items:
     print '%-21s : %d%%' % (key, 100*val/total)
=========================================================================


From aahz at pythoncraft.com  Thu Mar 18 12:18:32 2004
From: aahz at pythoncraft.com (Aahz)
Date: Thu Mar 18 12:18:42 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
In-Reply-To: <4059C908.7090709@theopalgroup.com>
References: <4059C908.7090709@theopalgroup.com>
Message-ID: <20040318171832.GA23041@panix.com>

On Thu, Mar 18, 2004, Kevin Jacobs wrote:
>
> Varying the value of N also results in strange results:
> 
>  for i in range(6):
>    leak_test(10**i, new_type)
> 
> Produces:
>  Leak detected (N=1, 1 new objects)
>  Leak detected (N=10, 9 new objects)
>  Leak detected (N=100, 90 new objects)
>  Leak detected (N=1000, 78 new objects)
>  Leak detected (N=10000, 9 new objects)
>  Leak detected (N=100000, 8 new objects)

This looks to me like standard boundary conditions for GC; GC normally
only gets provoked once the number of objects created exceeds the number
deleted by more than the first gc.get_threshold() value.  What makes you
think something unusual is happening?
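
(For reference, the default thresholds, assuming nothing has changed
them:

    >>> import gc
    >>> gc.get_threshold()
    (700, 10, 10)

so on the order of 700 net container allocations can pile up before the
collector runs at all.)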
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From jacobs at theopalgroup.com  Thu Mar 18 12:34:13 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Thu Mar 18 12:34:22 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
In-Reply-To: <20040318171832.GA23041@panix.com>
References: <4059C908.7090709@theopalgroup.com>
	<20040318171832.GA23041@panix.com>
Message-ID: <4059DD95.1000205@theopalgroup.com>

Aahz wrote:

>On Thu, Mar 18, 2004, Kevin Jacobs wrote:
>  
>
>>Varying the value of N also results in strange results:
>>
>> for i in range(6):
>>   leak_test(10**i, new_type)
>>
>>Produces:
>> Leak detected (N=1, 1 new objects)
>> Leak detected (N=10, 9 new objects)
>> Leak detected (N=100, 90 new objects)
>> Leak detected (N=1000, 78 new objects)
>> Leak detected (N=10000, 9 new objects)
>> Leak detected (N=100000, 8 new objects)
>>    
>>
>
>This looks to me like standard boundary conditions for GC; GC normally
>only gets provoked once the number of objects created exceeds the number
>deleted by more than the first gc.get_threshold() value.  What makes you
>think something unusual is happening?
>  
>

I realize that this does look like standard GC threshold behavior.
However, it /shouldn't/, because I manually run gc.collect() both before
and after the test suite.  The other thing that is strange is that all
of these dead weakrefs disappear when I run gc.collect() as part of each
iteration of the test suite, but not afterwards.

Please read over my original mail again, since the magnitude of
weirdness may not have been obvious the first time through.

Thanks,
-Kevin



From python-kbutler at sabaydi.com  Thu Mar 18 12:35:52 2004
From: python-kbutler at sabaydi.com (Kevin J. Butler)
Date: Thu Mar 18 12:36:14 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <E1B3mRC-0006z5-74@mail.python.org>
References: <E1B3mRC-0006z5-74@mail.python.org>
Message-ID: <4059DDF8.5070807@sabaydi.com>

From: Guido van Rossum <guido@python.org>

>Anyone else amused by the fact that 4 year after Clinton we're
>discussing what the meaning of "is" should be? :)
>  
>
I'm still not sure this isn't just an elaborate and very successful troll.

But then, (classify(proposal) is troll) depends on the definition of is...

kb


From guido at python.org  Thu Mar 18 12:45:07 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 12:45:47 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: Your message of "Thu, 18 Mar 2004 12:11:48 EST."
	<4059D854.6050908@gradient.cis.upenn.edu> 
References: <E1B3p52-0006Vy-Rl@mail.python.org>  
	<4059D854.6050908@gradient.cis.upenn.edu> 
Message-ID: <200403181745.i2IHj8k10449@guido.python.org>

> p.s., I'm definitely +1 on making a stronger statement in the style 
> guide.  Consistency is good.

OK.  Then here is my pronouncement (somebody else can edit the PEP):
for the standard library (C and Python code), we'll use
lowercase_with_underscores for all new modules, allowing camelCase
only in contexts where that's already the prevailing style
(e.g. threading.py), to retain backwards compatibility.

Third parties are encouraged to adhere to the same style but it
doesn't make sense to tell large code bases to change.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From mcherm at mcherm.com  Thu Mar 18 12:47:05 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Thu Mar 18 12:47:07 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <1079632025.4059e099a395a@mcherm.com>

No sooner do I suggest that something is impossible:

Michael Chermside writes:
> Unfortunately, there is a flaw in your methodology, and I
> can't think of a reasonable way to correct for it.

Then someone here goes and DOES it:

Edward Loper writes:
> I ran a similar experiment, but counted the number of *modules* that 
> define each type of function (since presumably most modules are 
> internally consistent).

Good idea... I'd been considering and rejecting various dictionary
based approaches for identifying multi-word identifiers, but your
solution is better.

> Total modules defining functions: 985
> under_score+lowercase : 44%
> mixedCase+lowercase   : 22%
> lowercase only        : 16%
> InitialCaps           : 10%
> mixedCase only        : 2%
> under_score only      : 2%
> 
> So underscore is most common; but mixedCase has a definite presence.
   [...]
> p.s., I'm definitely +1 on making a stronger statement in the style 
> guide.  Consistency is good.

Yep, +1 from me too. And if we strengthen the statement in PEP 8, then 
all NEW library code will be consistent (beware the wrath of the PEP 8
Police!). More importantly, people engaged in green-field Python
projects will be much more likely to follow the practice. To make a
good start, I hereby state my intention to stop using mixedCase and
switch to under_scores instead.

-- Michael Chermside


From ilya at bluefir.net  Thu Mar 18 12:53:25 2004
From: ilya at bluefir.net (Ilya Sandler)
Date: Thu Mar 18 12:54:05 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
Message-ID: <Pine.LNX.4.33.0403180929160.2717-100000@bagira.localdomain>


> > The observation is that if an object is immutable, there's 
> > no legitimate reason to know whether it's distinct from
> > another object with the same type and value.
> 

Here is a real life example where "is" on immutable objects is very 
helpful for performance reasons

I have DatabaseRecord objects which have a field "schema": 
(which comes from Database objects)

schema is just a string with a textual description of the database
schema, and this string might be quite long (like 500-700 bytes)

given 2 DatabaseRecord objects I want to be able to quickly say whether
they have the same schema (with the time cost being independent of the
number of fields, record size, etc)

It's easy to achieve with is, I just intern schemas:
database.schema=intern(schema)
dbRec1.schema=database.schema
dbRec2.schema=database.schema

(Essentially, intern() is done once per database open() operation)

then comparing 2 schemas is much quicker with "is"

(dbRec1.schema is dbRec2.schema)

than with either id() or "=="

Or am I missing something here?

Ilya

PS and just for reference some timings:

src>./python Lib/timeit.py -n 10000 -s 'l=500; x="1" * l; y="1"* l' 'x is y'
10000 loops, best of 3: 0.143 usec per loop

src>./python Lib/timeit.py -n 10000 -s 'l=500; x="1" * l; y="1"* l' 'x == y'
10000 loops, best of 3: 0.932 usec per loop

src>./python Lib/timeit.py -n 10000 -s 'l=500; x="1" * l; y="1"* l' 'id(x) == id(y)'
10000 loops, best of 3: 0.482 usec per loop








From aahz at pythoncraft.com  Thu Mar 18 13:02:19 2004
From: aahz at pythoncraft.com (Aahz)
Date: Thu Mar 18 13:02:23 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
In-Reply-To: <4059DD95.1000205@theopalgroup.com>
References: <4059C908.7090709@theopalgroup.com>
	<20040318171832.GA23041@panix.com>
	<4059DD95.1000205@theopalgroup.com>
Message-ID: <20040318180219.GA26663@panix.com>

On Thu, Mar 18, 2004, Kevin Jacobs wrote:
> Aahz wrote:
>>
>>This looks to me like standard boundary conditions for GC; GC normally
>>only gets provoked once the number of objects created exceeds the number
>>deleted by more than the first gc.get_threshold() value.  What makes you
>>think something unusual is happening?
> 
> I realize that this does look like standard GC threshold behavior.
> However, it /shouldn't/, because I manually run gc.collect() both
> before and after the test suite.  The other thing that is strange is
> that all of these dead weakrefs disappear when I run gc.collect() as
> part of each iteration of the test suite, but not afterwards.
>
> Please read over my original mail again, since the magnitude of
> weirdness may not have been obvious the first time through.

Hmmmm...  First of all, not all.  When I tried your test with N=1, I did
still get one leaked ref.  I tried running with an extra gc.collect() at
the end of each run, and that didn't help.  I've verified that
gc.collect() immediately after running test_func() does reap a bunch
of stuff.  And I've verified that the sum of len(gc.get_objects()) for
each run equals that of running it at the start and end of the whole
program.

I'll let someone else who can look at the C code take over....
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From guido at python.org  Thu Mar 18 13:16:48 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 13:16:54 2004
Subject: [Python-Dev] redefining is
In-Reply-To: Your message of "Thu, 18 Mar 2004 09:53:25 PST."
	<Pine.LNX.4.33.0403180929160.2717-100000@bagira.localdomain> 
References: <Pine.LNX.4.33.0403180929160.2717-100000@bagira.localdomain> 
Message-ID: <200403181816.i2IIGmU10541@guido.python.org>

> Here is a real life example where "is" on immutable objects is very 
> helpful for performance reasons

I appreciate that 'is' is faster than '==' for comparing two strings of
length 500, but have you tried to measure how much time you are saving
in your application?

I bet it's lost in the noise.  And unless the intern()ed strings are
compared over and over, the cost of intern() is more than the savings
in comparison.

(The best scheme is probably to use intern() but still use '==' for
comparisons; '==' is smart enough to skip comparing an object to
itself.)
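
I.e., keep Ilya's interning but drop the 'is' (a sketch using the names
from his example):

database.schema = intern(schema)
dbRec1.schema = database.schema
dbRec2.schema = database.schema

# For strings, '==' short-circuits when both operands are the very same
# object, so shared interned schemas still compare in constant time.
same_schema = (dbRec1.schema == dbRec2.schema)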

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jacobs at theopalgroup.com  Thu Mar 18 13:30:34 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Thu Mar 18 13:30:38 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
In-Reply-To: <20040318180219.GA26663@panix.com>
References: <4059C908.7090709@theopalgroup.com>	<20040318171832.GA23041@panix.com>	<4059DD95.1000205@theopalgroup.com>
	<20040318180219.GA26663@panix.com>
Message-ID: <4059EACA.9060009@theopalgroup.com>

Aahz wrote:

>On Thu, Mar 18, 2004, Kevin Jacobs wrote:
>
>>Please read over my original mail again, since the magnitude of
>>weirdness may not have been obvious the first time through.
>>    
>>
>
>Hmmmm...  First of all, not all.  When I tried your test with N=1, I did
>still get one leaked ref.  I tried running with an extra gc.collect() at
>the end of each run, and that didn't help.  I've verified that
>gc.collect() immediately after running test_func() does reap a bunch
>of stuff.  And I've verified that the sum of len(gc.get_objects()) for
>each run equals that of running it at the start and end of the whole
>program.
>
>I'll let someone else who can look at the C code take over....
>  
>

Thanks for double-checking that I'm not just hallucinating all of this.
I too will start looking into the C code, though I tend to avoid the GC
and weakref subsystems at all costs.  Others who know the GC process
better (hint: Tim) are warmly invited to accompany me.

Thanks,
-Kevin


From ark-mlist at att.net  Thu Mar 18 13:39:50 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Thu Mar 18 13:39:49 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079625010.26767.67.camel@anthem.wooz.org>
Message-ID: <002401c40d18$65b34480$6402a8c0@arkdesktop>

> A very common use case in Python is where None is a valid value in a
> dictionary:
> 
> missing = object()
> 
> if d.get('somekey', missing) is missing:
>    # it ain't there
> 
> It even reads well!

Indeed.  Of course, object() is mutable, so there is no proposal to change
the meaning of this program.  What I'm concerned about is someone trying to
do the same thing this way:

	missing = 'missing'

	if d.get('somekey', missing) is 'missing':
		# it ain't there

This code contains a bug, but on an implementation that interns strings that
happen to look like identifiers, no test will detect the bug.
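
To make the failure mode concrete (CPython interns string literals that
look like identifiers, so the buggy comparison happens to pass; an equal
string built at run time does not):

	>>> x = 'missing'
	>>> x is 'missing'              # both literals intern to the same object
	True
	>>> y = ''.join(['mis', 'sing'])
	>>> y == x
	True
	>>> y is 'missing'              # a freshly built equal string is a distinct object
	False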


From barry at python.org  Thu Mar 18 13:47:01 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 13:47:26 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <002401c40d18$65b34480$6402a8c0@arkdesktop>
References: <002401c40d18$65b34480$6402a8c0@arkdesktop>
Message-ID: <1079635620.4350.2.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 13:39, Andrew Koenig wrote:
> > A very common use case in Python is where None is a valid value in a
> > dictionary:
> > 
> > missing = object()
> > 
> > if d.get('somekey', missing) is missing:
> >    # it ain't there
> > 
> > It even reads well!
> 
> Indeed.  Of course, object() is mutable, so there is no proposal to change
> the meaning of this program.  What I'm concerned about is someone trying to
> do the same thing this way:
> 
> 	missing = 'missing'
> 
> 	if d.get('somekey', missing) is 'missing':
> 		# it ain't there
> 
> This code contains a bug, but on an implementation that interns strings that
> happen to look like identifiers, no test will detect the bug.

Sure, but it's long been recommended against doing stuff like:

try:
  foo()
except 'quit':
  pass

def foo():
  raise 'quit'

This doesn't seem like it's something broken that needs fixing.  Once
you understand what's going on, it makes sense.

-Barry



From guido at python.org  Thu Mar 18 13:51:02 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 13:51:10 2004
Subject: [Python-Dev] redefining is
In-Reply-To: Your message of "Thu, 18 Mar 2004 13:39:50 EST."
	<002401c40d18$65b34480$6402a8c0@arkdesktop> 
References: <002401c40d18$65b34480$6402a8c0@arkdesktop> 
Message-ID: <200403181851.i2IIp2d10678@guido.python.org>

> Indeed.  Of course, object() is mutable, so there is no proposal to change
> the meaning of this program.  What I'm concerned about is someone trying to
> do the same thing this way:
> 
> 	missing = 'missing'
> 
> 	if d.get('somekey', missing) is 'missing':
> 		# it ain't there
> 
> This code contains a bug, but on an implementation that interns strings that
> happen to look like identifiers, no test will detect the bug.

I'm ready to pronounce.  The code is buggy.  There are good reasons to
keep 'is' the way it always was.  The definition of 'is' ain't gonna
change.  So be it.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From casey at zope.com  Thu Mar 18 14:38:03 2004
From: casey at zope.com (Casey Duncan)
Date: Thu Mar 18 14:41:08 2004
Subject: [Python-Dev] Re: redefining is
References: <002401c40d18$65b34480$6402a8c0@arkdesktop>
	<200403181851.i2IIp2d10678@guido.python.org>
Message-ID: <20040318143803.7e36a5b0.casey@zope.com>

On Thu, 18 Mar 2004 10:51:02 -0800
Guido van Rossum <guido@python.org> wrote:

> > Indeed.  Of course, object() is mutable, so there is no proposal to
> > change the meaning of this program.  What I'm concerned about is
> > someone trying to do the same thing this way:
> > 
> > 	missing = 'missing'
> > 
> > 	if d.get('somekey', missing) is 'missing':
> > 		# it ain't there
> > 
> > This code contains a bug, but on an implementation that interns
> > strings that happen to look like identifiers, no test will detect
> > the bug.
> 
> I'm ready to pronounce.  The code is buggy.  There are good reasons to
> keep 'is' the way it always was.  The definition of 'is' ain't gonna
> change.  So be it.

So then: is is as is was ;^)

-Casey


From tim.one at comcast.net  Thu Mar 18 14:45:24 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 14:46:51 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <200403181816.i2IIGmU10541@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAIJKAB.tim.one@comcast.net>

[Guido]
> ...
> (The best scheme is probably to use intern() but still use '==' for
> comparisons; '==' is smart enough to skip comparing an object to
> itself.)

Well, string_richcompare() takes that shortcut, so the advice is good, but
PyObject_RichCompare() doesn't in general (PyObject_Compare() still does,
but that's not triggered by '==').


From tim.one at comcast.net  Thu Mar 18 15:06:09 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 15:08:18 2004
Subject: [Python-Dev] Python GC/type()/weakref mystery
In-Reply-To: <4059EACA.9060009@theopalgroup.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEAKJKAB.tim.one@comcast.net>

[Kevin Jacobs]
> ...
> Others who know the GC process better (hint: Tim) are warmly
> invited to accompany me.

I'm sorry, but I can't make time for this now.  If nobody else jumps in soon
(hint: Neil <wink>), open a bug report on SF so it doesn't get lost forever.


From tim.one at comcast.net  Thu Mar 18 16:02:33 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 16:02:48 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <1079624880.26767.64.camel@anthem.wooz.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>

Shane, doesn't our employer have a style guide that requires
doingAWrongThing here?  Maybe that was just a Zope3 thing, though.  I can
never find anything in Wikis <0.5 wink> ... ok,

    http://tinyurl.com/2885n

[Barry]
> I've gone back and forth on this issue several times, but the
> rationale that Tim refers to has to do with conversations we've had
> on various lists, and the decision we made with the email package.
> At the time, lowercase_with_underscores was preferred by non-native
> English speakers over mixedCaseWords.  It seems that the latter can
> give non-native English speakers more difficulty.
>
> Sometimes though, I still like mixedCase. ;)

A happy thing is that sometimes alllower is right too (although not for
alllower, while allLower lies about itself).  Python's has_key is the poster
child for a method that should have been spelled haskey.  If you don't
believe me, ask Guido.  There was much rejoicing in Guido's mirthful head
when "key in mapping" was added so he didn't have to type has_key again.

> +1 on updating PEP 8 to encourage <wink> lowercase_with_underscores.

Encouragement is all we can do.  Plus breaking a few legs now & again just
to drive the point home.


From barry at python.org  Thu Mar 18 16:08:43 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 16:08:51 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>
Message-ID: <1079644121.4350.50.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 16:02, Tim Peters wrote:

> A happy thing is that sometimes alllower is right too (although not for
> alllower, while allLower lies about itself).  Python's has_key is the poster
> child for a method that should have been spelled haskey.  If you don't
> believe me, ask Guido.  There was much rejoicing in Guido's mirthful head
> when "key in mapping" was added so he didn't have to type has_key again.

You're singing to the choir with this one.

la-la-la-ly y'rs,
-Barry



From barry at barrys-emacs.org  Thu Mar 18 16:28:50 2004
From: barry at barrys-emacs.org (Barry Scott)
Date: Thu Mar 18 16:28:59 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python
	toredefine "is"
In-Reply-To: <200403180100.i2I10X807918@guido.python.org>
References: <007001c40c82$67bc6c20$6402a8c0@arkdesktop>
	<200403180100.i2I10X807918@guido.python.org>
Message-ID: <6.0.3.0.2.20040318211629.034d78e8@torment.chelsea.private>

Since this is not a reasonable semantic change, we are left with warning
users about misuse.

Are there a number of cases of misuse that python could raise an exception
under? What would pychecker look for? Are they the same question?

Barry

At 18-03-2004 01:00, Guido van Rossum wrote:
> > My own concern is that if someone writes expressions such as "x is 0" by
> > mistake, the resulting program might always work on one implementation but
> > fail on another.
>
>I'd rather make this PyChecker's job than redefine 'is'.
>
>--Guido van Rossum (home page: http://www.python.org/~guido/)
>



From shane at zope.com  Thu Mar 18 17:03:27 2004
From: shane at zope.com (Shane Hathaway)
Date: Thu Mar 18 17:03:30 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>
References: <1079624880.26767.64.camel@anthem.wooz.org>
	<LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>
Message-ID: <405A1CAF.7080501@zope.com>

Tim Peters wrote:
> Shane, doesn't our employer have a style guide that requires
> doingAWrongThing here?  Maybe that was just a Zope3 thing, though.  I can
> never find anything in Wikis <0.5 wink> ... ok,
> 
>     http://tinyurl.com/2885n

Yes, but that policy fights with Python, leading to bad combinations. 
The first time I had to say something like 
"ZPythonScriptHTML_changePrefs" to a customer over the phone, I realized 
that the mixed conventions are a real problem.

BTW, I assume this discussion is only about refining the conventions for 
method and function names.  CamelCase for class names is still 
recommended, right?

Shane

From barry at python.org  Thu Mar 18 17:06:09 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 17:06:18 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <405A1CAF.7080501@zope.com>
References: <1079624880.26767.64.camel@anthem.wooz.org>
	<LNBBLJKPBEHFEDALKOLCEEBBJKAB.tim.one@comcast.net>
	<405A1CAF.7080501@zope.com>
Message-ID: <1079647568.4350.74.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 17:03, Shane Hathaway wrote:

> BTW, I assume this discussion is only about refining the conventions for 
> method and function names.  CamelCase for class names is still 
> recommended, right?

I hope so. ;)

hobgoblin-ly y'rs,
-Barry



From tim.one at comcast.net  Thu Mar 18 17:16:58 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 17:16:53 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <405A1CAF.7080501@zope.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>

[Shane Hathaway, on a piece of the Zope3 style guide
    http://tinyurl.com/2885n
]

> Yes, but that policy fights with Python, leading to bad combinations.

I said "Zope" before, right <wink>?

> The first time I had to say something like
> "ZPythonScriptHTML_changePrefs" to a customer over the phone, I
> realized that the mixed conventions are a real problem.

No argument here.

> BTW, I assume this discussion is only about refining the conventions
> for method and function names.  CamelCase for class names is still
> recommended, right?

It is by me, and I channel that it also is by Guido.  Dissension will be
crushed <wink>.  Module and package names seem muddier, although I think a
lot of projects (including the Python core) have been moving toward short
one-word all-lower names for those (especially for packages at top level).


From kbk at shore.net  Thu Mar 18 17:41:13 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Thu Mar 18 17:41:16 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net> (Tim
	Peters's message of "Thu, 18 Mar 2004 17:16:58 -0500")
References: <LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
Message-ID: <87oeqtwzg6.fsf@hydra.localdomain>

"Tim Peters" <tim.one@comcast.net> writes:

>> BTW, I assume this discussion is only about refining the conventions
>> for method and function names.  CamelCase for class names is still
>> recommended, right?
>
> It is by me, and I channel that it also is by Guido.  Dissension will be
> crushed <wink>.  Module and package names seem muddier, although I think a
> lot of projects (including the Python core) have been moving toward short
> one-word all-lower names for those (especially for packages at top level).

+1

-- 
KBK

From guido at python.org  Thu Mar 18 17:53:58 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 17:54:46 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: Your message of "Thu, 18 Mar 2004 17:16:58 EST."
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net> 
References: <LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net> 
Message-ID: <200403182253.i2IMrw911169@guido.python.org>

> > BTW, I assume this discussion is only about refining the conventions
> > for method and function names.  CamelCase for class names is still
> > recommended, right?
> 
> It is by me, and I channel that it also is by Guido.

Indeed.

> Dissension will be
> crushed <wink>.  Module and package names seem muddier, although I think a
> lot of projects (including the Python core) have been moving toward short
> one-word all-lower names for those (especially for packages at top level).

Another pronouncement then: modules/packages need to be short
all-lower names (no underscores).

StringIO, cPickle and SimpleXMLRPCServer etc. were mistakes.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Thu Mar 18 17:57:25 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu Mar 18 17:58:22 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <200403182253.i2IMrw911169@guido.python.org>
References: <Your message of "Thu, 18 Mar 2004 17:16:58 EST."
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
Message-ID: <5.1.1.6.0.20040318175558.024fdec0@telecommunity.com>

At 02:53 PM 3/18/04 -0800, Guido van Rossum wrote:
> > > BTW, I assume this discussion is only about refining the conventions
> > > for method and function names.  CamelCase for class names is still
> > > recommended, right?
> >
> > It is by me, and I channel that it also is by Guido.
>
>Indeed.
>
> > Dissension will be
> > crushed <wink>.  Module and package names seem muddier, although I think a
> > lot of projects (including the Python core) have been moving toward short
> > one-word all-lower names for those (especially for packages at top level).
>
>Another pronouncement then: modules/packages need to be short
>all-lower names (no underscores).
>
>StringIO, cPickle and SimpleXMLRPCServer etc. were mistakes.

Does that mean that SimpleHTTPServer would now be servers.http.simple?  Or 
http.servers.simple?  Or...?


From barry at python.org  Thu Mar 18 18:04:52 2004
From: barry at python.org (Barry Warsaw)
Date: Thu Mar 18 18:05:02 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <5.1.1.6.0.20040318175558.024fdec0@telecommunity.com>
References: <Your message of "Thu, 18 Mar 2004 17:16:58 EST."
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
	<5.1.1.6.0.20040318175558.024fdec0@telecommunity.com>
Message-ID: <1079651091.4350.110.camel@anthem.wooz.org>

On Thu, 2004-03-18 at 17:57, Phillip J. Eby wrote:

> Does that mean that SimpleHTTPServer would now be servers.http.simple?  Or 
> http.servers.simple?  Or...?

Probably something the web-sig should figure out.

glad-we-didn't-call-it-the-MIMESavvyElectronicMail-package-ly y'rs,
-Barry



From guido at python.org  Thu Mar 18 18:06:38 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 18:08:35 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: Your message of "Thu, 18 Mar 2004 17:57:25 EST."
	<5.1.1.6.0.20040318175558.024fdec0@telecommunity.com> 
References: <Your message of "Thu, 18 Mar 2004 17:16:58 EST."
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
	<LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net> 
	<5.1.1.6.0.20040318175558.024fdec0@telecommunity.com> 
Message-ID: <200403182306.i2IN6cW11219@guido.python.org>

> >StringIO, cPickle and SimpleXMLRPCServer etc. were mistakes.
> 
> Does that mean that SimpleHTTPServer would now be servers.http.simple?  Or 
> http.servers.simple?  Or...?

Creating a hierarchy for future stuff is a different discussion.
We're not going to rename existing modules that violate PEP 8.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tim.one at comcast.net  Thu Mar 18 18:20:28 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 18:20:23 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <5.1.1.6.0.20040318175558.024fdec0@telecommunity.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIECEJKAB.tim.one@comcast.net>

[Phillip J. Eby]
> Does that mean that SimpleHTTPServer would now be
> servers.http.simple?  Or http.servers.simple?  Or...?

Old-timers will have no trouble recognizing it if it's renamed to s14r.
Another level of helpful compression could reduce that to s2r then.
Something saner than that probably has to wait for someone to design a sane
package hierarchy.

b1t-c5d-to-an-e5e-it's-o5e-ly y'rs  - t1m


From greg at cosc.canterbury.ac.nz  Thu Mar 18 18:37:03 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 18 18:37:09 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C8@USAHM010.amer.corp.eds.com>
Message-ID: <200403182337.i2INb3D5004712@cosc353.cosc.canterbury.ac.nz>

> There is an idiom (I've seen it more in Lisp than in python) 
> of creating a fresh object to act as a sentinel.

I find it extremely rare that None isn't sufficient for this kind of
thing in Python. It may be more common in Lisp due to the fact that
Lisp's equivalent of None is indistinguishable from an empty list...

In those rare cases where None is also a legitimate value, it's not
hard to manufacture some unique object and test for it with 'is'. You
just have to remember that strings and integers are not necessarily
unique, which I don't think is too hard to keep in mind.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From jim.jewett at eds.com  Thu Mar 18 18:41:54 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Thu Mar 18 18:43:07 2004
Subject: [Python-Dev] (class) module names clarification
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>

Guido van Rossum:

>> CamelCase for class names is still recommended
> Indeed.

...

> Another pronouncement then: modules/packages need to be short
> all-lower names (no underscores).

> StringIO, cPickle and SimpleXMLRPCServer etc. were mistakes.

The whole point of StringIO is its class.  So if it were rewritten
in modern style, a user would write:

	from userlist import UserList
	from stringio import StringIO

Is this correct?

-jJ

From guido at python.org  Thu Mar 18 18:58:11 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 18 18:58:20 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: Your message of "Thu, 18 Mar 2004 18:41:54 EST."
	<B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>
Message-ID: <200403182358.i2INwBY11337@guido.python.org>

> The whole point of StringIO is its class.  So if it were rewritten
> in modern style, a user would write:
> 
> 	from userlist import UserList
> 	from stringio import StringIO
> 
> Is this correct?

For example, although perhaps better module names could be found.

This would eliminate a whole class of errors where one writes code
assuming the import had the form "import StringIO" but it was actually
"from StringIO import StringIO"; or vice versa.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From eric at enthought.com  Thu Mar 18 19:07:05 2004
From: eric at enthought.com (eric jones)
Date: Thu Mar 18 19:07:21 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: <200403182358.i2INwBY11337@guido.python.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>
	<200403182358.i2INwBY11337@guido.python.org>
Message-ID: <405A39A9.7080503@enthought.com>

Guido van Rossum wrote:

>>The whole point of StringIO is its class.  So if it were rewritten
>>in modern style, a user would write:
>>
>>	from userlist import UserList
>>	from stringio import StringIO
>>
>>Is this correct?
>>    
>>
>
>For example, although perhaps better module names could be found.
>
>This would eliminate a whole class of errors where one writes code
>assuming the import had the form "import StringIO" but it was actually
>"from StringIO import StringIO"; or vice versa.
>
>  
>
We've adopted this same coding standard for all our code for this
exact reason.  We also use CamelCase for classes (although old SciPy
code uses lower case for everything), and lower_case for methods,
variables, and functions exactly as you have advised.  So, I'm all in
favor of these becoming standard. :-)

eric

>--Guido van Rossum (home page: http://www.python.org/~guido/)
>


From jack at performancedrivers.com  Thu Mar 18 19:23:22 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Thu Mar 18 19:23:26 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: <405A39A9.7080503@enthought.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>
	<200403182358.i2INwBY11337@guido.python.org>
	<405A39A9.7080503@enthought.com>
Message-ID: <20040319002322.GA16407@performancedrivers.com>

On Thu, Mar 18, 2004 at 06:07:05PM -0600, eric jones wrote:
> >This would eliminate a whole class of errors where one writes code
> >assuming the import had the form "import StringIO" but it was actually
> >"from StringIO import StringIO"; or vice versa.
> >
> We've adopted this same coding standard for all our code for this
> exact reason.  We also use CamelCase for classes (although old SciPy
> code uses lower case for everything), and lower_case for methods,
> variables, and functions exactly as you have advised.  So, I'm all in
> favor of these becoming standard. :-)
> 
Our early modules had the Graph.Graph() naming problem.  It took
quite a few [brush] burns before the wisdom of plotting.Graph() sunk in.
Now that SVN (which has proper renaming support) has hit 1.0 I'm looking
forward to switching from CVS and doing:
# svn commit -m "The Great Day of Renaming"

-jackdied


From greg at cosc.canterbury.ac.nz  Thu Mar 18 19:58:51 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 18 19:59:18 2004
Subject: [Python-Dev] indiscernible objects (was: A proposal has surfaced
	oncomp.lang.pythontoredefine   "is")
In-Reply-To: <20040318112356.1a0e1c8b.casey@zope.com>
Message-ID: <200403190058.i2J0wpUa004809@cosc353.cosc.canterbury.ac.nz>

Casey Duncan <casey@zope.com>:

> Yes, I could write my own function to implement this. But I doubt I
> would get it right.

Since you're the only person with a chance of knowing exactly what
"right" means in your application, if you can't get it right, nobody
can.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Thu Mar 18 20:33:16 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 18 20:33:52 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>
Message-ID: <200403190133.i2J1XG22004875@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> The whole point of StringIO is its class.  So if it were rewritten
> in modern style, a user would write:
> 
> 	from userlist import UserList
> 	from stringio import StringIO

That's actually an excellent argument for this convention.
It neatly solves the problem of having a module name the
same as a class defined within it.

It would have been good to have socket.Socket, too.

Maybe alternative names could be provided for these, with
the intention of phasing out use of the old names?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From janssen at parc.com  Thu Mar 18 20:44:48 2004
From: janssen at parc.com (Bill Janssen)
Date: Thu Mar 18 20:45:12 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type 
In-Reply-To: Your message of "Thu, 18 Mar 2004 15:04:52 PST."
	<1079651091.4350.110.camel@anthem.wooz.org> 
Message-ID: <04Mar18.174449pst."58611"@synergy1.parc.xerox.com>

My 2 cents:

http.server.simple.

Bill

> On Thu, 2004-03-18 at 17:57, Phillip J. Eby wrote:
> 
> > Does that mean that SimpleHTTPServer would now be servers.http.simple?  Or 
> > http.servers.simple?  Or...?
> 
> Probably something the web-sig should figure out.
> 
> glad-we-didn't-call-it-the-MIMESavvyElectronicMail-package-ly y'rs,
> -Barry

From janssen at parc.com  Thu Mar 18 20:46:21 2004
From: janssen at parc.com (Bill Janssen)
Date: Thu Mar 18 20:46:46 2004
Subject: [Python-Dev] (class) module names clarification 
In-Reply-To: Your message of "Thu, 18 Mar 2004 17:33:16 PST."
	<200403190133.i2J1XG22004875@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <04Mar18.174624pst."58611"@synergy1.parc.xerox.com>

> > 	from stringio import StringIO

  from string import StringIO

Bill

From tim.one at comcast.net  Thu Mar 18 21:13:14 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 18 21:13:22 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: <405A39A9.7080503@enthought.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGECLJKAB.tim.one@comcast.net>

[eric jones]
> We've adopted this same coding standard for all our code for this
> exact reason.  We also use CamelCase for classes (although old SciPy
> code uses lower case for everything), and lower_case for methods,
> variables, and functions exactly as you have advised.  So, I'm all in
> favor of these becoming standard. :-)

I hope the young are absorbing this valuable lesson.  Young ones, if you
simply do the objectively correct thing, instead of arguing about it or
(maybe even worse) wondering what it is, you'll conform to Guido's
pronouncements well before they're made, saving everyone a lot of typing and
grief.


From greg at cosc.canterbury.ac.nz  Thu Mar 18 21:57:07 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 18 21:57:14 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3C9@USAHM010.amer.corp.eds.com>
Message-ID: <200403190257.i2J2v7jG005008@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> There is value in keeping the grammar documentation as close 
> as possible to the actual coded grammar.

Yes, but the question is whether that value outweighs the disadvantage
of making the documented grammar hard to follow, and thus less useful
for the purpose of documentation. To my mind, being clear to a human
reader is the first and most important goal to satisfy when
documenting something.

> > BNF on its own doesn't seem
> > to be up to the task of specifying this sort of thing clearly
> > to a human reader.
> 
> Agreed, but text often fails.  The advantage of BNF is that it
> can be checked to ensure that the writer and the reader agree.

I was exploring a third option, which is to use a higher level
formalism. The ideas involved are not inherently difficult to
formalise, it's just that BNF seems to be too low-level to easily
express the particular combination we need here. It's natural to
wonder whether there's some other tool that would do a better job.

Railroad diagrams might be just what we need here. Take a look
at this:

http://www.cosc.canterbury.ac.nz/~greg/python/paramlist.png

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From kbk at shore.net  Thu Mar 18 23:03:11 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Thu Mar 18 23:03:17 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <200403182253.i2IMrw911169@guido.python.org> (Guido van
	Rossum's message of "Thu, 18 Mar 2004 14:53:58 -0800")
References: <LNBBLJKPBEHFEDALKOLCCEBMJKAB.tim.one@comcast.net>
	<200403182253.i2IMrw911169@guido.python.org>
Message-ID: <87brmtwkjk.fsf@hydra.localdomain>

Guido van Rossum <guido@python.org> writes:

>> Dissension will be
>> crushed <wink>.  Module and package names seem muddier, although I think a
>> lot of projects (including the Python core) have been moving toward short
>> one-word all-lower names for those (especially for packages at top level).
>
> Another pronouncement then: modules/packages need to be short
> all-lower names (no underscores).
>
> StringIO, cPickle and SimpleXMLRPCServer etc. were mistakes.

Patch 919256 submitted for review.

http://sourceforge.net/tracker/index.php?func=detail&aid=919256&group_id=5470&atid=305470

-- 
KBK

From raymond.hettinger at verizon.net  Fri Mar 19 00:21:33 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Fri Mar 19 00:24:16 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>

 [Hye-Shik]
> Yay!  I'm a big fan of your series of optimizations.
> Great thanks! :)
> 
> By the way, I think there're many junior python hacker wannabes
> like me to challenge objects/compiler/vm optimizations.  Can you
> share your roadmap or work queues/plans, list of known bottle-necks
> for them?

Here are the optimizations on my todo list:

1) Objects can define equality tests through rich comparisons or
__cmp__.  A few timings suggest that the latter run 30% to 50% faster.
This may mean that PyObject_RichCompareBool(v, w, Py_EQ) has a potential
for a significant speedup.  Python pervasively uses equality tests
(internally to dictionaries and just about everywhere else).  So,
efforts to speed up rich comparisons (without changing semantics) would
result in an across-the-board improvement.  As a proof of concept, I
replaced all instances of rich equality testing with a new function,
_PyObject_RichCompareBoolEQ(v, w) which tries PyObject_Compare() and, if
there is an error, falls back to the regular rich comparison.  This
gave the expected speedup and passed most, but not all of the
test_suite.  The reasons for the test failures are that the simplistic
redefinition slightly changed semantics when both __cmp__ and rich
comparisons were defined.  The correct approach is to investigate rich
comparisons and find the bottleneck.  At the root of it all, most
objects implement only one equality test and rich comparisons need to be
able to find it and run it as fast as __cmp__.
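
As a rough illustration at the Python level (a made-up micro-benchmark;
the class names are invented and the numbers vary by build):

    import timeit

    class WithCmp(object):
        def __init__(self, x): self.x = x
        def __cmp__(self, other): return cmp(self.x, other.x)

    class WithEq(object):
        def __init__(self, x): self.x = x
        def __eq__(self, other): return self.x == other.x

    for name in ("WithCmp", "WithEq"):
        setup = "from __main__ import %s; a = %s(1); b = %s(1)" % (
            name, name, name)
        print name, min(timeit.Timer("a == b", setup).repeat(3, 100000))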

2) Michael started an investigation into the costs of setting up new
stackframes.  Standing on his shoulders, Armin proposed the concept of a
Zombie frame which goes beyond freelisting and takes advantage of the
contents of older, discarded frames.  His implementation eliminated
freelists in favor of a single captive frame per code object.  The timings
were impressive, but there were some negative consequences -- recursive
calls and multi-threaded apps needed more than one frame per code object
and they suffered miserably in the absence of a freelist -- also, there
was a ton of code that runs only once and not only doesn't benefit from
the zombie frame, it also costs memory because the zombie never gets freed.
I and one other respondent independently suggested keeping the freelist
(which would need to become doubly linked) and using a weak reference
from the code object to see if the matching pre-made frame is still on
the freelist.  This would capture all the benefits and incur none of the
detriments.  AFAICT, the only potential downfall is that so little setup
time is saved by zombie frames that any significant tracking logic would
completely offset the gain.  IOW, if you do this, the mechanism needs to
be dirt simple and fast.

3) I had re-coded several modules in C with the goal of making pure
python more viable for certain high performance apps (for instance,
implementing heapq and bisect in C made them more viable as components
of superfast code written in pure python).  If the threading module were
re-coded in C, then it would benefit a broad class of multi-threaded
applications making it more likely that Python becomes the language of
choice for those apps.

4) The peephole optimizer in compile.c is currently limited in scope
because it lacks a basic block checker and the ability to re-insert
opcodes of a different length than the original (which entails moving
all the jump targets and recomputing the line numbering).  I've already
written the basic block checker, have a jump retargeter, and have
several peephole optimizations thoroughly tested; however, they can't go
in until the line renumberer is done.  For example, the code a,b=b,a
currently runs slower than t=a;a=b;b=t but it could run several times
faster if the BUILD_TUPLE 2 UNPACK_SEQUENCE 2 were replaced by ROT_TWO.
This is an important optimization because it changes the way we write
pure python (avoiding multiple assignment because it is slower).
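
For reference, the bytecode involved can be inspected with the dis module
(a quick sketch; under Python 2.3 the swap compiles to the BUILD_TUPLE 2 /
UNPACK_SEQUENCE 2 pair that the peephole pass would replace with ROT_TWO):

    import dis

    def swap(a, b):
        a, b = b, a
        return a, b

    dis.dis(swap)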

5) Python's startup time got much worse in Py2.3.  Several people took a
look into it but there were no clear-cut results.  This is an important
optimization because there are plenty of short running apps where python
is called to knock out a single basic task.  Especially in web apps,
frequent starts and exits can be the norm and Python's startup time can
dominate total resource utilization and response time.

6) The concept behind ceval.c's goto_fast_nextopcode is to bypass the
periodic checking for signals and occasional thread switching.  Taking
this concept to the limit, those operations could be pulled completely
out of the eval loop and only called by a handful of opcodes such as
CALL_FUNCTION (which can be of arbitrarily long duration) and
JUMP_ABSOLUTE (which appears in any non-recursive repeating code block).
The goal is not to make error checking or thread switching less
frequent; rather, the idea is to increase the number of things that can
be written in pure python without incurring tons of eval loop overhead
(I think it is enough to check the first opcode, every function call, and
every loop iteration).  This will increase the viability of writing
algorithms in pure python.  Of course, this needs to be handled
carefully, as many simple operations have the potential to trigger some
long running operations; however, some of those situations exist today
(e.g. map() can run a huge number of iterations with zero trips around
the eval loop for task switching or signal checking).

7) There are three PEPs for reducing the overhead associated with
attribute lookup.  This, IMO, is the potentially largest remaining
optimization.  More importantly, it will improve the way we code,
allowing more free use of method calls, smaller functions, and less need
to put frequent calls in a local variable (I truly hate seeing code
like:  _int=int # speed hack).
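
For anyone who hasn't seen it, the hack in question looks something like
this (a made-up example; the default argument merely pre-binds the global
lookup to a fast local):

    def to_ints(items, _int=int):       # speed hack: 'int' becomes a local
        return [_int(x) for x in items]

    def to_ints_plain(items):
        return [int(x) for x in items]  # global lookup on every iteration

Cheaper lookups would make the first spelling unnecessary.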


Aside from optimizations, there are some modules to be developed:

1) collections.heap(), a high performance Fibonacci heap implemented as
a type and taking sort style arguments (cmp=, key=, reverse=).

2) finish the statistics module (except for the docs, this is basically
done and awaiting collections.heap() to underlie nsmallest() and
nlargest()).

3) write a calculator module - done right, this will take a long time, a
lot of research, and require much public discussion (everyone will have
an opinion on this).

4) explore whether the StringIO module would perform better if based on
array objects rather than strings.



Also, there are a couple of bugfixes in the queue:

1) fix operator.isMappingType() to exclude anything that passes
operator.isSequenceType().

2) the logging module is in dire need of a quick start tutorial
(patterned after that for unittests which gives you the 90% of what you
need to know in one easy lesson) and a new method or two that
encapsulates standard usage patterns boiling it down to
start_logging_stuff_here() and okay_log_this().  Right now, the module
suffers from a near vertical learning curve and even the simplest
logging applications require more instructions than you would expect.
Otherwise, the module is extremely well engineered.  If the docs were
fixed up and a couple of methods added, it could be a runaway success
instead of a debacle.

3) the Standard Library Tour in the tutorial was wildly successful and
demands a sequel. A part II should be written that covers more of what
you need to know to get productive from the start.

4) eval() only takes real dictionaries as arguments.  A huge number of
possibilities open up if it could take dictionary look-alikes.  Previous
proposals were rejected because they cost a percentage point or two of
execution time -- why have everyone pay the price for this?  So, efforts
need to be directed at how to do this without a performance penalty -
several approaches were mentioned; none have been tried and proven.



Also, the generator expression patch is about ready for prime time.  I
have it as a priority to get it reviewed and to fixup any outstanding
issues.


Okay, that's Raymond's list.  Feel free to take one and run with it.



Raymond Hettinger


From raymond.hettinger at verizon.net  Fri Mar 19 00:45:49 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Fri Mar 19 00:48:03 2004
Subject: [Python-Dev] Joys of Optimization
Message-ID: <009601c40d75$6f5395a0$a0ba2c81@oemcomputer>

[Extract from my earlier post]
>> C:\python24\python timedictiter.py
>>
>> C:\pydev>\python23\python timedictiter.py
>> 3.44381233343 items()
>> 
>> C:\pydev>\python22\python timedictiter.py
>> 2.99887443638 items()


[Brett]
> Interesting how items() slowed down between 2.2 and 2.3 
> but is now a sliver faster than 2.2 was.

Actually, items() stayed the same between 2.2 and 2.3, it was the
underlying dictionary that changed.  For 2.3, I made mid-sized
dictionaries more sparse.  There were fewer collisions, improved lookup
time, and fewer resize operations.  The trade-off for this tune-up was
increased memory utilization and slower dictionary iteration (see
Objects/dictnotes.txt for details).

When I optimized iteration for Py2.4, we got all the speed back without
having to trade-away the benefits of sparsity.  To see the improvement,
fill a dictionary with random keys and then time "for k in keylist: k in
d".  You'll see an improvement between Py2.2 and Py2.3.  Also, there is
a similar speedup in "for k in nonkeys: k in d".  IOW, failed searches
also run faster.
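
A sketch of that experiment (illustrative only; the sizes, ranges, and the
crude time.clock() loop are arbitrary choices):

    import random, time

    keys = [random.randrange(1000000) for i in range(10000)]
    nonkeys = [random.randrange(1000000, 2000000) for i in range(10000)]
    d = {}
    for k in keys:
        d[k] = None

    def bench(seq):
        t = time.clock()
        for i in range(100):
            for k in seq:
                k in d
        return time.clock() - t

    print "hits  ", bench(keys)
    print "misses", bench(nonkeys)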


Raymond Hettinger


P.S.  If you're wondering what the length transparency checkins were all
about, look at these new timings:

C:\pydev>\python24\python timedictiter.py
0.384519519268 list(d.iterkeys())
0.377146784224 list(d.itervalues())
1.188094839 list(d.iteritems())

C:\pydev>\python23\python timedictiter.py
2.12341976902 list(d.iterkeys())
2.30316046196 list(d.itervalues())
3.10405387284 list(d.iteritems())


From python at rcn.com  Fri Mar 19 01:35:14 2004
From: python at rcn.com (Raymond Hettinger)
Date: Fri Mar 19 01:37:27 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEONJJAB.tim.one@comcast.net>
Message-ID: <00ef01c40d7c$56b31820$a0ba2c81@oemcomputer>

> [Michael Chermside]
> > ...
> > I could also point out that in those cases where it ISN'T
> > possible to use one word, the alllowercaseformat is a real
> > pain to read, especially for non-native english speakers.

[Tim]
> But all_lower_case_format isn't, and some claim it's easier for
> non-native readers to read than stuffRammedTogetherLikeThis.  I believe
> it, in part because it's easier for this native reader to read:
> WordRamming doesn't exist in real-life English.  In SmallDoses such
> affectations can be helpful to draw attention, and that's fine for class
> names, or Wiki page titles.  ButOveruseOf anyAffectation isWearying.

For the most part, the varying styles haven't been a problem for me.
The big exception is Py_RETURN_TRUE; I *never* get this one right
without seeing a nearby lookalike or looking it up in Misc/NEWS :-(


Raymond


From python at rcn.com  Fri Mar 19 01:21:09 2004
From: python at rcn.com (Raymond Hettinger)
Date: Fri Mar 19 03:32:29 2004
Subject: [Python-Dev] Rich comparisons  [Was] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEAIJKAB.tim.one@comcast.net>
Message-ID: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>

> [Guido]
> > (The best scheme is probably to use intern() but still use '==' for
> > comparisons; '==' is smart enough to skip comparing an object to
> > itself.)

[Tim]
> Well, string_richcompare() takes that shortcut, so the advice is good,
> but PyObject_RichCompare() doesn't in general (PyObject_Compare() still
> does, but that's not triggered by '==').

Hey, hey, this may be part of the answer to why my timings for equality
testing using rich comparisons run so much slower than they do with
PyObject_Compare().

Fixing this would entail a change in semantics but would be worth it if
we can all agree to it.  Essentially, I would like to insert the
following lines into PyObject_RichCompare():

	if (v == w) {
		if (op == Py_EQ)
			Py_RETURN_TRUE;
		else if (op == Py_NE)
			Py_RETURN_FALSE;
	}

The test suite runs fine, but it is possible that some existing class
defines equality in a way that sometimes returns False even when given
two identical objects.
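
For instance, a (contrived) class like this would change behavior under
the shortcut:

    class AlwaysUnequal(object):
        def __eq__(self, other):
            return False            # False even when other is self

    x = AlwaysUnequal()
    print x == x                    # False today; True with the shortcut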

I think the change is worth it -- tests for equality are ubiquitous (and
somewhat common) throughout the source.


Raymond


From python at rcn.com  Fri Mar 19 05:31:19 2004
From: python at rcn.com (Raymond Hettinger)
Date: Fri Mar 19 05:33:37 2004
Subject: [Python-Dev] A proposal has surfaced
	oncomp.lang.pythontoredefine"is"
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELPJJAB.tim.one@comcast.net>
Message-ID: <004801c40d9d$5181bca0$a0ba2c81@oemcomputer>

[Tim]
> Someday Guido has to figure out a way to convince Python programmers
> that they really are allowed to write their own functions when they find
> a describable behavior that isn't built in <wink>.

Would that also preclude a new decorator syntax when plain-ole function
notation and assignment already does the trick (and would have to be
kept around for backwards compatibility)?


Raymond Hettinger


From mwh at python.net  Fri Mar 19 07:01:31 2004
From: mwh at python.net (Michael Hudson)
Date: Fri Mar 19 07:01:40 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer> (Raymond
	Hettinger's message of "Fri, 19 Mar 2004 01:21:09 -0500")
References: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>
Message-ID: <2mad2dnizo.fsf@starship.python.net>

"Raymond Hettinger" <python@rcn.com> writes:

>> [Guido]
>> > (The best scheme is probably to use intern() but still use '==' for
>> > comparisons; '==' is smart enough to skip comparing an object to
>> > itself.)
>
> [Tim]
>> Well, string_richcompare() takes that shortcut, so the advice is good,
> but
>> PyObject_RichCompare() doesn't in general (PyObject_Compare() still
> does,
>> but that's not triggered by '==').
>
> Hey, hey, this may be part of the answer to why my timings for equality
> testing using rich comparisons run so much slower than they do with
> PyObject_Compare().

How much is 'so much'?  When I recently implemented rich comparisons
for floats, there was no change in performance that I could discern
(apart from comparing floats to integers which sped up by ~30%).

> Fixing this would entail a change in semantics but would be worth it if
> we can all agree to it.

Bzzt!, then:

>>> float('nan')
nan
>>> _ == _
False

while this is well into the realm of 'platform specific accidents', it
is somewhat desirable.  And will happen in 2.4 on linux, OS X and
Windows as things currently stand (so at a guess 90%+ of Python
installs).

[...]

> I think the change is worth it -- tests for equality are ubiquitous
> (and somewhat common) throughout the source.

But how often are they between identical objects?

Cheers,
mwh

-- 
  If I had wanted your website to make noise I would have licked
  my finger and rubbed it across the monitor.
                           -- signature of "istartedi" on slashdot.org

From skip at pobox.com  Fri Mar 19 10:03:55 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 19 10:04:07 2004
Subject: [Python-Dev] unexpected reload() behavior
Message-ID: <16475.3035.435868.901154@montanaro.dyndns.org>


In working on the doc change for <http://python.org/sf/919099> I came across
this statement:

    The names in the module namespace are updated to point to any new or
    changed objects. Names of unchanged objects, or of objects no longer
    present in the new module, remain pointing at the old objects.

(The bit about names of unchanged objects still pointing to the old objects
is incorrect.  I'll fix that.)

Not believing that old objects remained after the reload() I wrote a short
test:

    a = 5
    b = 7
    c = (1,2,3)

imported it, modified it to

    a = 9
    c = (1,2,3)

then reloaded it.  I was surprised to find that reloadtst.b did indeed still
exist:

    >>> import reloadtst
    >>> dir(reloadtst)
    ['__builtins__', '__doc__', '__file__', '__name__', 'a', 'b', 'c']
    >>> # edit reloadtst.py
    ...
    >>> reload(reloadtst)
    <module 'reloadtst' from 'reloadtst.py'>
    >>> dir(reloadtst)
    ['__builtins__', '__doc__', '__file__', '__name__', 'a', 'b', 'c']

It seems counterintuitive to me that reloadtst.b should still be defined.
Is that behavior intentional or accidental?

Skip

From pinard at iro.umontreal.ca  Fri Mar 19 10:07:19 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Fri Mar 19 10:08:14 2004
Subject: [Python-Dev] (class) module names clarification
In-Reply-To: <200403190133.i2J1XG22004875@cosc353.cosc.canterbury.ac.nz>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CD@USAHM010.amer.corp.eds.com>
	<200403190133.i2J1XG22004875@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040319150719.GA32052@alcyon.progiciels-bpi.ca>

[Greg Ewing]
> "Jewett, Jim J" <jim.jewett@eds.com>:

> > 	from userlist import UserList
> > 	from stringio import StringIO

> That's actually an excellent argument for this convention.
> It would have been good to have socket.Socket, too.

All these would be good news indeed.

On the other hand, while I think Guido is right in saying it was a
mistake to use hairy capitalisation in module names, he could go a bit
further and declare that good module names should ideally _not_ be
simple English words, which users would likely keep for themselves,
as simple variable names.  In that vein, `userlist', `stringio' and
`heapq' are good names for a module, while `string' and `socket' are
less welcome.  It would be so nice writing `socket = [...].Socket()'!

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From FBatista at uniFON.com.ar  Fri Mar 19 10:13:04 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Fri Mar 19 10:14:35 2004
Subject: [Python-Dev] Changes to PEP 327: Decimal data type
Message-ID: <A128D751272CD411BC9200508BC2194D03383772@escpl.tcp.com.ar>

Guido van Rossum wrote:

#- We're not going to rename existing modules that violate PEP 8.

Of course we won't.

But somebody (with better English usage than I, :) must rewrite PEP 8 to be
sharp, clear, and concise. PEP 8 should not just encourage; it should give a
precise guide.

One example of a paragraph to rewrite is the one that tells us how to write
method names. When I read that section, it wasn't clear to me whether to use
Decimal.fromFloat() or Decimal.from_float() (this whole thread wouldn't have
been so long otherwise).

.	Facundo




From jim.jewett at eds.com  Fri Mar 19 10:14:47 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 10:15:06 2004
Subject: [Python-Dev] (not) redefining is
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CF@USAHM010.amer.corp.eds.com>

(me):
>> There is an idiom (I've seen it more in Lisp than in python) 
>> of creating a fresh object to act as a sentinel.

Greg Ewing:
> In those rare cases where None is also a legitimate value, it's not
> hard to manufacture some unique object and test for it with 'is'. You
> just have to remember that strings and integers are not necessarily
> unique, which I don't think is too hard to keep in mind.

That is the (occasionally surprising) state today.

Since there is no intention of ever actually mutating the object,
it is common (if mistaken) to assume that any new object -- even
an immutable -- should work.

If we changed "is", it would be even more difficult to create 
a guaranteed unique object.  Fortunately, Guido rejected the 
wholesale change.

I'm still a bit worried about a comment (which I can no longer 
find) suggesting a change for {}, so that

	>>> {} is {}  
      True

I don't see this in 2.3, and would be very surprised (and 
disappointed) if it started to happen in 2.4, precisely because 
dictionaries are mutable objects that should work.  Even worse, 

	>>> [] is []
	False

was the recommended idiom; the difference between [] and {} as 
unique objects is far from clear.  I personally like (and have 
used) Barry's suggestion of a new named object().  Unfortunately, 
object() is longer to type, not as backwards compatible, and 
places too much emphasis on what the sentinel *is* rather than what 
it *isn't*.
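
For concreteness, the idiom looks something like this (a made-up example:
a unique marker tested with 'is', for use when None is a legitimate value):

    _missing = object()

    def find_key(d, value, default=_missing):
        for k, v in d.items():
            if v == value:
                return k
        if default is _missing:
            raise ValueError("value not found")
        return default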

-jJ

From jim.jewett at eds.com  Fri Mar 19 10:21:48 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 10:22:06 2004
Subject: [Python-Dev] (old) module names
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D0@USAHM010.amer.corp.eds.com>

Guido van Rossum:

>> Does that mean that SimpleHTTPServer would now be ...

> Creating a hierarchy for future stuff is a different discussion.
> We're not going to rename existing modules that violate PEP 8.

Obviously, backwards compatibility means that we shouldn't remove 
the current name any more than we should remove the string module.

There is an option to just expose the old interface (name) as an 
alias (like the string module does).

Would it be OK to move the existing modules if a forward was used,
or would it still be bad because it would encourage writing new
code that couldn't run under older python versions?

-jJ

From edloper at gradient.cis.upenn.edu  Fri Mar 19 10:29:40 2004
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Fri Mar 19 10:28:16 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <E1B4Iho-00040J-3X@mail.python.org>
References: <E1B4Iho-00040J-3X@mail.python.org>
Message-ID: <405B11E4.1040505@gradient.cis.upenn.edu>

Michael Hudson wrote:
>  >>> float('nan')
>  nan
>  >>> _ == _
>  False

This means that 'nan' is no longer a well-behaved dictionary key:

     >>> x = {float('nan'):0}
     >>> x[float('nan')] = 1
     >>> print x
     {nan: 0, nan: 1}

Even worse, we get different behavior if we use the "same copy" of nan:

     >>> nan = float('nan')
     >>> x = {nan:0}
     >>> x[nan] = 1
     >>> print x
     {nan: 1}

If we *really* want nan==nan to be false, then it seems like we have to 
say that nan is unhashable.  I'm also disturbed by the fact that cmp() 
has something different to say about their equality:

     >>> cmp(float('nan'), float('nan'))
     0

-Edward


From barry at python.org  Fri Mar 19 10:30:26 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 10:30:37 2004
Subject: [Python-Dev] (old) module names
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D0@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D0@USAHM010.amer.corp.eds.com>
Message-ID: <1079710225.8897.32.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 10:21, Jewett, Jim J wrote:

> Would it be OK to move the existing modules if a forward was used,
> or would it still be bad because it would encourage writing new
> code that couldn't run under older python versions?

There's been talk for years about reorganizing the standard library,
with an eye toward ensuring backward compatibility.  I don't think any
of that has ever even gotten to the PEP stage, which would surely be
required.  Too bad there's no opportunity like an upcoming sprint or
something that could address this... oh wait. :)

Actually, I would guess that much of the standard library reorg would
really dovetail with an update of many of the existing modules.  Think
email-sig and the email package, and hopefully what the web-sig is
working on.  If those sigs take smaller bites out of the standard lib
reorg, then over time, we'll migrate the useful parts of the library to
the new naming conventions and utilize other PEPs for deprecating and
removing the old stuff eventually.

-Barry



From jim.jewett at eds.com  Fri Mar 19 10:33:12 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 10:34:05 2004
Subject: [Python-Dev] funcdef grammar production
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D1@USAHM010.amer.corp.eds.com>

Greg Ewing:
> Jim Jewett:

>> There is value in keeping the grammar documentation as 
>> close as possible to the actual coded grammar.

> Yes, but the question is whether that value outweighs the 
> disadvantage of making the documented grammar hard to follow,

> ... use a higher level formalism. The ideas involved are not
> inherently difficult to formalise, it's just that BNF seems
> to be too low-level  ...

> Railroad diagrams might be just what we need here. 

> http://www.cosc.canterbury.ac.nz/~greg/python/paramlist.png

That works for me now that I already know what it should say.
I'm not sure it would have worked on my first reading.

I'm also not sure how I could verify my understanding.
I won't pretend that I have a BNF tool handy, but I 
understand, in principle, how I could get one.  I do not
see any good way to verify a picture.  If the picture 
was generated based on some textual format -- maybe.

I certainly would not want to go to the effort of verifying
my understanding of the documented behavior only to discover
that the actual behavior was different, because they were
written separately.

-jJ

From vinay_sajip at red-dove.com  Fri Mar 19 10:54:02 2004
From: vinay_sajip at red-dove.com (Vinay Sajip)
Date: Fri Mar 19 10:51:41 2004
Subject: [Python-Dev] Some changes to logging
Message-ID: <008301c40dca$67e78ec0$652b6992@alpha>

I've had feedback from numerous sources that the logging package is harder
to configure than it needs to be, for the common use case of a simple script
which needs to log to a file. I propose to change the convenience function
basicConfig(), which is currently the one-shot convenience function for
simple scripts to use. The function signature change is backward-compatible.
The proposed new interface is:

def basicConfig(**kwargs):
    """
    Do basic configuration for the logging system.

    This function does nothing if the root logger already has handlers
    configured. It is a convenience method intended for use by simple
    scripts to do one-shot configuration of the logging package.

    The default behaviour is to create a StreamHandler which writes to
    sys.stderr, set a formatter using the BASIC_FORMAT format string, and
    add the handler to the root logger.

    A number of optional keyword arguments may be specified, which can alter
    the default behaviour.

    filename  Specifies that a FileHandler be created, using the specified
              filename, rather than a StreamHandler.
    filemode  Specifies the mode to open the file, if filename is specified
              (if filemode is unspecified, it defaults to "a").
    format    Use the specified format string for the handler.
    level     Set the root logger level to the specified level.
    stream    Use the specified stream to initialize the StreamHandler. Note
              that this argument is incompatible with 'filename' - if both
              are present, 'stream' is ignored.

    Note that you could specify a stream created using open(filename, mode)
    rather than passing the filename and mode in. However, it should be
    remembered that StreamHandler does not close its stream (since it may be
    using sys.stdout or sys.stderr), whereas FileHandler closes its stream
    when the handler is closed.
    """

If any of you have comments/suggestions about this proposed change,
please let me know, otherwise I'll go ahead with this change.

Regards,

Vinay



From nas-python at python.ca  Fri Mar 19 11:52:48 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 11:52:53 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <2mad2dnizo.fsf@starship.python.net>
References: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>
	<2mad2dnizo.fsf@starship.python.net>
Message-ID: <20040319165248.GA29631@mems-exchange.org>

On Fri, Mar 19, 2004 at 12:01:31PM +0000, Michael Hudson wrote:
> >>> float('nan')
> nan
> >>> _ == _
> False

Would it help if each NaN instance was a separate object?  The above
code would still return True but code like:

    compute_something() == compute_something_else('nan')

would always return False if they both returned NaN as a result of
math operations.

  Neil

From nas-python at python.ca  Fri Mar 19 11:54:38 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 11:54:41 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <20040319165248.GA29631@mems-exchange.org>
References: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>
	<2mad2dnizo.fsf@starship.python.net>
	<20040319165248.GA29631@mems-exchange.org>
Message-ID: <20040319165438.GB29631@mems-exchange.org>

On Fri, Mar 19, 2004 at 11:52:48AM -0500, Neil Schemenauer wrote:
>     compute_something() == compute_something_else('nan')

Should be:

    compute_something() == compute_something_else()

Sorry.

  Neil

From ark-mlist at att.net  Fri Mar 19 12:19:11 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 12:19:21 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079635620.4350.2.camel@anthem.wooz.org>
Message-ID: <001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>

> Sure, but it's long been recommended against doing stuff like:
> 
> try:
>   foo()
> except 'quit':
>   pass
> 
> def foo():
>   raise 'quit'
> 
> This doesn't seem like it's something broken that needs fixing.  Once
> you understand what's going on, it makes sense.

Indeed -- once you understand what's going on, it makes sense.

But here's a meta-comment, derived mostly from long experience with the C++
community.  There is lots of stuff in C++ that bites new users all the time:
They see a usage, make an incorrect guess about what it means, and their
programs don't work.  Or they don't realize they need to do something, and
their programs fail in ways that escape testing.

For example, every C++ programmer with more than a little experience knows
that if you're ever intending to inherit from a class, you had better give
that class a virtual destructor.  This statement is not always true, but
it's true enough of the time that there's no point in thinking about
exceptions to it.

Nevertheless, C++ programmers inherit from classes without virtual
destructors all the time.  The most benign problem that results from doing so
is a memory leak, and of course memory leaks go undetected most of the time.

Now, in the C++ world, I realize that there is no chance whatsoever of
changing the language either to make destructors virtual by default, or to
require that a base class have a virtual destructor.  One reason is that the
C++ standard library actually contains classes that inherit from others
without virtual destructors--because that part of the library makes use of
one of the exceptions.  Nevertheless, I believe that this particular C++
feature is defined in a particularly hazardous way, and if I had the power
to turn back the clock and define it differently, I would do so.


Analogously, I think that the way Python defines "is" is hazardous.  I do
not think it is nearly as big a hazard as C++'s treatment of virtual
destructors, but I still think it's a hazard.  The reason is similar:  Using
it makes it easy to write programs that appear to work by coincidence.  The
"raise 'quit'" example is a good one, because it will always work on an
implementation that interns string literals, so no amount of testing will
reveal the problem.

The reason for that, I think, is that there are really three kinds of
"equality" that make sense, and only two of them are reasonably available:

	Value equality (==, which might be user-defined, because
		the concept of "value" can be user-defined);

	Object identity ("is")

	Object equivalence (no easy way of expressing it today)

By "object equivalence", I mean mutual substitutability--which is the same
as identity for mutable objects, but not for immutable ones.

The hazard comes about because there are objects that are equivalent but not
identical, and there are contexts (such as the raise/except example above)
in which it is logical to ask for object equivalence, but what you actually
get is object identity.  Moreover, the way one asks about object identity is
so terse that one is tempted to use it even if one does not know exactly
what it means.

The original author of the proposal on comp.lang.python was clearly
disturbed at getting object identity when equivalence would have made more
sense, and was proposing to change things as follows:

	x == y		Value equality

	x is y		Object equivalence

	id(x) == id(y)	Object identity

I can completely understand why this change might be considered too large to
make to an existing language--just like changing the way C++ handles virtual
destructors would be--but nevertheless I think it's interesting to consider
the abstract merits of an idea such as this one, perhaps for the mythical
Python 3.0.

To put it another way:  If the three operations above existed, I am finding
it hard to think of many cases in which the third of these operations would
be more useful than the second, which suggests that the second should be
easier to express.





PS:  Java has exactly the same problem, except that there, == means object
identity.  I believe that it is a common beginner's trap in Java to apply ==
to strings and expect a useful result.



From mcherm at mcherm.com  Fri Mar 19 12:35:20 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar 19 12:35:59 2004
Subject: [Python-Dev] (old) module names
Message-ID: <1079717720.405b2f589c436@mcherm.com>

> Would it be OK to move the existing modules if a forward was used,
> or would it still be bad because it would encourage writing new
> code that couldn't run under older python versions?

I think it would be a bad idea. The profusion (batteries included!)
of modules in Python is already confusing enough; creating a significant
number of new modules which were all duplicates of existing modules
would just create mass confusion. And the only thing it would "fix"
would be a convention on capitalization - it's just not worth it!

On the other hand, the idea has been floated before that we should
reorganize the standard libraries into some sort of a hierarchy. If
someone were to come up with a useful plan for doing so (and that's
a BIG if), then renaming the modules to follow standard naming
conventions at the same time would be a good idea. But only because
we would be renaming them ANYWAY to create the hierarchy.

-- Michael Chermside


From ark-mlist at att.net  Fri Mar 19 12:42:39 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 12:42:35 2004
Subject: [Python-Dev] A proposal has surfaced on comp.lang.python to
	redefine"is"
In-Reply-To: <BB8A764C.795333E3@mail.google.com>
Message-ID: <001f01c40dd9$92dabe90$6402a8c0@arkdesktop>

> I would say that Python is served well by the two equality predicates
> it has, that it is impossible to please everyone, and that users
> should get used to writing the predicate they want if it is not one of
> the builtins.

Without disagreeing with your statement, I can also say this:

The fact that "is" is so easy to use encourages some programmers to use it
when they would be better off with a different predicate that is much less
readily available.  If "is" represented this other predicate instead, most
programs that use it would be better off. 


From barry at python.org  Fri Mar 19 12:55:25 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 12:55:42 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>
References: <001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>
Message-ID: <1079718923.8897.83.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 12:19, Andrew Koenig wrote:

> The original author of the proposal on comp.lang.python was clearly
> disturbed at getting object identity when equivalence would have made more
> sense, and was proposing to change things as follows:
> 
> 	x == y		Value equality
> 
> 	x is y		Object equivalence
> 
> 	id(x) == id(y)	Object identity
> 
> I can completely understand why this change might be considered too large to
> make to an existing language--just like changing the way C++ handles virtual
> destructors would be--but nevertheless I think it's interesting to consider
> the abstract merits of an idea such as this one, perhaps for the mythical
> Python 3.0.

I understand your argument, and have some sympathy for it, but I don't
think that 'is' should be the way to spell object equivalence.  If
someone asks "Tim is Barry" I would expect they want an identity test,
not an equivalence test, otherwise they might not get the geezin' Python
hacker they wanted even though as bass players and Britney Spears
fanatics, we're completely substitutable.

IIRC, id() has problems on Jython also, so relying on that for identity
doesn't seem right either.

Also, since we don't really know what "object equivalence" means in a
practical sense, I don't know what we'd be spelling.  But if we do
figure it out (and can explain it in a way people can understand it),
and find that it's an important enough concept to want to express, we'd
need a Different Way to spell it in Python.  To me, "is" asks "are these
two things the same thing" not "could these two things serve the same
purpose".  An English word (not mathematical symbol) was chosen for this
concept for a reason.

Maybe we need a 'like' keyword-let to spell equivalence:

   >>> "hello" is like 'hello'
   True

-Barry

PS: 'raise "quit"' doesn't really enter into this for two reasons. 
First, string exceptions are going away.  Second, there's no 'is'
there.  If string exceptions were to stick around, then it might make
sense to redefine the match criteria in terms of object equivalence. 
But that's a moot point.



From fumanchu at amor.org  Fri Mar 19 12:55:42 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 19 12:57:27 2004
Subject: [Python-Dev] redefining is
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F22@opus.amorhq.net>

Andrew Koenig wrote:
>8
> The original author of the proposal on comp.lang.python was clearly
> disturbed at getting object identity when equivalence would 
> have made more
> sense, and was proposing to change things as follows:
> 
> 	x == y		Value equality
> 
> 	x is y		Object equivalence
> 
> 	id(x) == id(y)	Object identity
> 
> I can completely understand why this change might be 
> considered too large to make to an existing language--
> just like changing the way C++ handles virtual
> destructors would be--but nevertheless I think it's 
> interesting to consider the abstract merits of an
> idea such as this one, perhaps for the mythical
> Python 3.0.
> 
> To put it another way:  If the three operations above 
> existed, I am finding it hard to think of many cases
> in which the third of these operations would
> be more useful than the second, which suggests that the 
> second should be easier to express.

Wonderfully well-stated, Andrew; thanks for taking the time. You've
moved this topic out of the realm of "crackpot ideas" for me. ;)

Given the above, it seems reasonable to me that, rather than change the
behavior of "is", one should introduce a new operator (or function,
etc.). That is, instead of the above, I would expect:

    Value equality: x == y

    Object identity: x is y
                 or: id(x) == id(y)

    Object equivalence: x equiv y		
                    or: equiv(x, y)

The reasons being:

1. Backward compatibility, par for the course.

2. "A is B" in English is quite specifically about identity, not
equivalence. Can you think of a case where this is not so? If you can,
then I'd bet it's because "mutual substitutability" is always true for
identical objects--the equivalence is a side-effect of being identical.
Note that I'm not talking about "A is a B" or other statements. I also
assume, for Python, that if id(x) == id(y), then (x equiv y) would also
be True.

Now if we can only find a short English word that means "mutually
substitutable". ;)



Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From mcherm at mcherm.com  Fri Mar 19 12:59:00 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar 19 12:59:10 2004
Subject: [Python-Dev] redefining is
Message-ID: <1079719140.405b34e4c67e9@mcherm.com>

Andrew Koenig writes:
> The reason for that, I think, is that there are really three kinds of
> "equality" that make sense, and only two of them are reasonably available:
> 
> 	Value equality (==, which might be user-defined, because
> 		the concept of "value" can be user-defined);
> 
> 	Object identity ("is")
> 
> 	Object equivalence (no easy way of expressing it today)
> 
> By "object equivalence", I mean mutual substitutability--which is the same
> as identity for mutable objects, but not for immutable ones.

You'll have to help me out here... I just don't get it.

Starting from your definition of object equivalence, let us divide
all objects into two classes: mutable and immutable. Actually, instead
I'm going to follow Martin v. Löwis' example[1] and divide all objects 
into identity objects, mutable values, and immutable values.

Considering the meaningful concepts for each category:

  Identity Objects can be (meaningfully) compared by:

    * Object Identity
        To see if these are "the same object". Also useful
        for low-level memory stuff according to Tim.
        (just use '==', although 'is' would work too)

  Mutable Values can be (meaningfully) compared by:

    * Value Equality
        To see if these represent the same value.
        (just use '==')
    * Object Identity
        Whether two references will STAY equal if one is changed.
        Same as "mutual substitutability" or "object equivalence".
        According to Tim, this is also handy for low-level memory
        stuff.
        (just use 'is')

  Immutable Values can be (meaningfully) compared by:

    * Value Equality
        To see if these represent the same value. Same as
        "mutual substitutability" or "object equivalence".
        (just use '==')
    * Object Identity
        Whether two objects are actually using the same memory
        location. Useful ONLY for low-level memory stuff a la Tim.
        (just use 'is')

Seems to me as if there are no more than TWO meanings for any given
type of object, and that we provide two comparison operators in each
case. So what's missing?

-- Michael Chermside

[1] http://mail.python.org/pipermail/python-dev/2004-February/042579.html


From ark-mlist at att.net  Fri Mar 19 13:03:24 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 13:03:20 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079718923.8897.83.camel@anthem.wooz.org>
Message-ID: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>


> I understand your argument, and have some sympathy for it, but I don't
> think that 'is' should be the way to spell object equivalence.  If
> someone asks "Tim is Barry" I would expect they want an identity test,
> not an equivalence test, otherwise they might not get the geezin' Python
> hacker they wanted even though as bass players and Britney Spears
> fanatics, we're completely substitutable.

If Tim and Barry are mutable, then an identity test is what they get.

> IIRC, id() has problems on Jython also, so relying on that for identity
> doesn't seem right either.

Interesting.

> Also, since we don't really know what "object equivalence" means in a
> practical sense, I don't know what we'd be spelling.  But if we do
> figure it out (and can explain it in a way people can understand it),
> and find that it's an important enough concept to want to express, we'd
> need a Different Way to spell it in Python.

I think I know what object equivalence means in a practical sense:
It means the same thing as object identity unless both objects are
immutable, in which case it is true if and only if the objects have the same
type and all of their corresponding attributes are (recursively) equivalent.
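
In other words, something along these lines (a rough sketch that knows
about only a few built-in immutable types; it illustrates the definition
rather than proposing an implementation):

    def equivalent(x, y):
        """Mutual substitutability."""
        if x is y:
            return True
        immutables = (int, long, float, complex, str, unicode, tuple)
        if type(x) is not type(y) or not isinstance(x, immutables):
            return False
        if isinstance(x, tuple):
            if len(x) != len(y):
                return False
            for a, b in zip(x, y):
                if not equivalent(a, b):
                    return False
            return True
        return x == y

With this, ([], []) == ([], []) is True but the two tuples are not
equivalent, while two tuples sharing the same mutable elements are.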

> To me, "is" asks "are these
> two things the same thing" not "could these two things serve the same
> purpose".  An English word (not mathematical symbol) was chosen for this
> concept for a reason.

Understood.  The thing is that some people expect expressions such as

	"Hello" is "Hello"

to yield True under all circumstances, and they don't.

> Maybe we need a 'like' keyword-let to spell equivalence:
> 
>    >>> "hello" is like 'hello'
>    True

Maybe.

> PS: 'raise "quit"' doesn't really enter into this for two reasons.
> First, string exceptions are going away.  Second, there's no 'is'
> there.  If string exceptions were to stick around, then it might make
> sense to redefine the match criteria in terms of object equivalence.
> But that's a moot point.

Out of curiosity, why are string exceptions going away?  Just to insist that
people give explicit types to their exceptions?


From barry at python.org  Fri Mar 19 13:05:51 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 13:06:04 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079719140.405b34e4c67e9@mcherm.com>
References: <1079719140.405b34e4c67e9@mcherm.com>
Message-ID: <1079719550.8897.88.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 12:59, Michael Chermside wrote:

>   Identity Objects can be (meaningfully) compared by:
> 
>     * Object Identity
>         To see if these are "the same object". Also useful
>         for low-level memory stuff according to Tim.
>         (just use '==', although 'is' would work too)

Using == for identity objects is the wrong thing.  We should discourage
tests like "if obj == None" in favor of "if obj is None".

-Barry



From ark-mlist at att.net  Fri Mar 19 13:28:39 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 13:28:36 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079719140.405b34e4c67e9@mcherm.com>
Message-ID: <003001c40de0$006d3950$6402a8c0@arkdesktop>


> You'll have to help me out here... I just don't get it.

Glad to oblige :-)

> Starting from your definition of object equivalence, let us divide
> all objects into two classes: mutable and immutable. Actually, instead
> I'm going to follow Martin v. Löwis' example[1] and divide all objects
> into identity objects, mutable values, and immutable values.

If I understand the example correctly, every mutable object is also an
identity object, because you can distinguish between two mutable objects by
changing one of them and seeing if the other changes.  So the interesting
category of identity objects may well be the immutable ones.

> Considering the meaningful concepts for each category:

>   Identity Objects can be (meaningfully) compared by:

>     * Object Identity
>         To see if these are "the same object". Also useful
>         for low-level memory stuff according to Tim.
>         (just use '==', although 'is' would work too)

Surely identity objects can meaningfully be compared by value as well.  If
you like, it makes sense to ask whether two objects, even if they are
identity objects, have the same state.

>   Mutable Values can be (meaningfully) compared by:
> 
>     * Value Equality
>         To see if these represent the same value.
>         (just use '==')
>     * Object Identity
>         Whether two references will STAY equal if one is changed.
>         Same as "mutual substitutability" or "object equivalence".
>         According to Tim, this is also handy for low-level memory
>         stuff.
>         (just use 'is')

Yes.

>   Immutable Values can be (meaningfully) compared by:

>     * Value Equality
>         To see if these represent the same value. Same as
>         "mutual substitutability" or "object equivalence".
>         (just use '==')

No, it's not the same as mutual substitutability:

	x = ([], [])
	y = ([], [])

Here, x and y are immutable objects that happen to have mutable attributes.
Those objects are equal, but not substitutable.

>     * Object Identity
>         Whether two objects are actually using the same memory
>         location. Useful ONLY for low-level memory stuff a la Tim.
>         (just use 'is')

Yes.

> Seems to me as if there are no more than TWO meanings for any given
> type of object, and that we provide two comparison operators in each
> case. So what's missing?

Consider a slight variation on the example above:

	a = []
	b = []
	x1 = (a, b)
	y1 = (a, b)

Now: x1 and y1 are mutually substitutable; x and y are equal but not
mutually substitutable, and x, y, x1, and y1 are all distinct objects.

So I think there are three kinds of comparison, not just two.


From ark-mlist at att.net  Fri Mar 19 13:30:09 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 13:30:06 2004
Subject: [Python-Dev] (not) redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CF@USAHM010.amer.corp.eds.com>
Message-ID: <003101c40de0$35ff0f80$6402a8c0@arkdesktop>

> If we changed "is", it would be even more difficult to create
> a guaranteed unique object.

Why?  object() would still do the trick.


From theller at python.net  Fri Mar 19 13:31:16 2004
From: theller at python.net (Thomas Heller)
Date: Fri Mar 19 13:31:29 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079719550.8897.88.camel@anthem.wooz.org> (Barry Warsaw's
	message of "Fri, 19 Mar 2004 13:05:51 -0500")
References: <1079719140.405b34e4c67e9@mcherm.com>
	<1079719550.8897.88.camel@anthem.wooz.org>
Message-ID: <brmsvgcr.fsf@python.net>

Barry Warsaw <barry@python.org> writes:

> On Fri, 2004-03-19 at 12:59, Michael Chermside wrote:
>
>>   Identity Objects can be (meaningfully) compared by:
>> 
>>     * Object Identity
>>         To see if these are "the same object". Also useful
>>         for low-level memory stuff according to Tim.
>>         (just use '==', although 'is' would work too)
>
> Using == for identity objects is the wrong thing.  We should discourage
> tests like "if obj == None" in favor of "if obj is None".

The problem (if there is any) with 'is' is that it exposes
implementation details, therefore it should not be used unless one really
knows what one is doing.

And 'if obj is None' gains performance by relying on one of these.

So I would consider 'if obj == None' correct, but unoptimized code.

Thomas


From casey at zope.com  Fri Mar 19 13:41:51 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 19 13:45:27 2004
Subject: [Python-Dev] Re: redefining is
References: <1079719140.405b34e4c67e9@mcherm.com>
	<003001c40de0$006d3950$6402a8c0@arkdesktop>
Message-ID: <20040319134151.42562989.casey@zope.com>

On Fri, 19 Mar 2004 13:28:39 -0500
"Andrew Koenig" <ark-mlist@att.net> wrote:
[..]
> > Seems to me as if there are no more than TWO meanings for any given
> > type of object, and that we provide two comparison operators in each
> > case. So what's missing?
> 
> Consider a slight variation on the example above:
> 
> 	a = []
> 	b = []
> 	x1 = (a, b)
> 	y1 = (a, b)
> 
> Now: x1 and y1 are mutually substitutable; x and y are equal but not
> mutually substitutable, and x, y, x1, and y1 are all distinct objects.
> 
> So I think there are three kinds of comparison, not just two.

So let's see if I can rephrase this:

Objects are equivalent if they are the same type and their states are
the same.

Objects are interchangeable if they are equivalent and their states will
always be the same (i.e., changes to one are always reflected in the
other), or if they are equivalent and immutable.

Objects are identical if they are physically the same (in the sense of
'is' currently)

There is also a fourth type:

Objects are equal if ob1.__eq__(ob2) returns true.

-Casey



From casey at zope.com  Fri Mar 19 13:54:07 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 19 13:57:13 2004
Subject: [Python-Dev] Re: redefining is
References: <1079719140.405b34e4c67e9@mcherm.com>
	<1079719550.8897.88.camel@anthem.wooz.org>
	<brmsvgcr.fsf@python.net>
Message-ID: <20040319135407.1348d00a.casey@zope.com>

On Fri, 19 Mar 2004 19:31:16 +0100
Thomas Heller <theller@python.net> wrote:

> Barry Warsaw <barry@python.org> writes:
> 
> > On Fri, 2004-03-19 at 12:59, Michael Chermside wrote:
> >
> >>   Identity Objects can be (meaningfully) compared by:
> >> 
> >>     * Object Identity
> >>         To see if these are "the same object". Also useful
> >>         for low-level memory stuff according to Tim.
> >>         (just use '==', although 'is' would work too)
> >
> > Using == for identity objects is the wrong thing.  We should
> > discourage tests like "if obj == None" in favor of "if obj is None".
> 
> The problem (if there is any) with 'is' is that it exposes
> implementation details; therefore it should not be used unless one
> really knows what one is doing.
> 
> And 'if obj is None' gains performance by relying on one of these.
> 
> So I would consider 'if obj == None' correct, but unoptimized code.

The problem is that 'obj == None' is not the assertion you want to make
usually. 'obj == None' means "obj claims it is equal to None", whereas
'obj is None' means 'obj is the None object'. The latter is a much more
stringent assertion than the former which relies on the particular
implementation of obj.

-Casey


From atul.kshirsagar at firstlogic.com  Fri Mar 19 12:35:17 2004
From: atul.kshirsagar at firstlogic.com (Atul Kshirsagar)
Date: Fri Mar 19 14:07:25 2004
Subject: [Python-Dev] Python 2.3 problem with extention-embedding in
	multi-threaded and multi sub-interpreter environment
Message-ID: <095A307E4B8A1541B29248EABD4D4D4005E1CDF8@exchange-lax1.firstlogic.com>

I am embedding Python in my C++ application. I am using Python *2.3.2* with
a C++ extension DLL in a multi-threaded environment. I am using SWIG-1.3.19
to generate the C++-to-Python interface.

To explain it in detail:
1. Python initialization [Py_Initialize()] and finalization [Py_Finalize()]
   are done in the *main* thread.
2. For each new thread I create a separate sub-interpreter
   [Py_NewInterpreter()].
3. Before executing any Python script, the extension module is imported
   with PyRun_String("import myModule"...).
4. Each thread executes *multiple* Python scripts using PyEval_EvalCode(),
   using the class objects in my extension DLL.
5. Each sub-interpreter is destroyed [Py_EndInterpreter()] at the end of
   that particular thread.

What I am observing: with multiple threads running as described above, as
soon as one of these threads finishes, the other running threads start
getting a "TypeError: 'NoneType' object is not callable" error on the
methods called on class objects in the extension module.

The same code *works fine* with Python 2.2.2.

I have found these links more or less talking about the same problem
migrating from 2.2 to 2.3.
http://mail.python.org/pipermail/python-dev/2003-September/038237.html
http://mail.python.org/pipermail/python-list/2004-February/206851.html
http://mail.python.org/pipermail/python-list/2004-January/204040.html

My *guess* is that the module's global variables are zapped to None when
one thread finishes, and this error is generated when another thread tries
to access them from its Python script (step 4). But it sometimes *works*,
presumably when the other thread is still at step 3, because importing the
module re-initializes the globals holding the type information.

I tried using reload(myModule) to work around the problem, but that
generates a big memory leak every time it is called.

Is this a known issue with the 2.3 interpreter? Or did the way embedding
should be done in a multi-threaded, multi-sub-interpreter environment
change for 2.3?

Can anybody help?
Thanks,
Atul


From walter.doerwald at livinglogic.de  Fri Mar 19 13:52:54 2004
From: walter.doerwald at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri Mar 19 14:07:30 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079719140.405b34e4c67e9@mcherm.com>
References: <1079719140.405b34e4c67e9@mcherm.com>
Message-ID: <405B4186.9020800@livinglogic.de>

Michael Chermside wrote:

> Andrew Koenig writes:
> 
>>The reason for that, I think, is that there are really three kinds of
>>"equality" that make sense, and only two of them are reasonably available:
>>
>>	Value equality (==, which might be user-defined, because
>>		the concept of "value" can be user-defined);

And there might be more than one user.

>>	Object identity ("is")
>>
>>	Object equivalence (no easy way of expressing it today)
>>
>>By "object equivalence", I mean mutual substitutability--which is the same
>>as identity for mutable objects, but not for immutable ones.

But this mutual substitutability is context dependent.
"is" guarantees that both objects are mutually substitutable in
*any* context. "==" defines the mutual substitutability for the
"default" context. Any other context is application dependent,
so the comparison function should be defined by the application.

Bye,
    Walter Dörwald



From theller at python.net  Fri Mar 19 14:06:01 2004
From: theller at python.net (Thomas Heller)
Date: Fri Mar 19 14:09:56 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319135407.1348d00a.casey@zope.com> (Casey Duncan's
	message of "Fri, 19 Mar 2004 13:54:07 -0500")
References: <1079719140.405b34e4c67e9@mcherm.com>
	<1079719550.8897.88.camel@anthem.wooz.org> <brmsvgcr.fsf@python.net>
	<20040319135407.1348d00a.casey@zope.com>
Message-ID: <d678odwm.fsf@python.net>

Casey Duncan <casey@zope.com> writes:

> On Fri, 19 Mar 2004 19:31:16 +0100
> Thomas Heller <theller@python.net> wrote:
>
>> Barry Warsaw <barry@python.org> writes:
>> 
>> > On Fri, 2004-03-19 at 12:59, Michael Chermside wrote:
>> >
>> >>   Identity Objects can be (meaningfully) compared by:
>> >> 
>> >>     * Object Identity
>> >>         To see if these are "the same object". Also useful
>> >>         for low-level memory stuff according to Tim.
>> >>         (just use '==', although 'is' would work too)
>> >
>> > Using == for identity objects is the wrong thing.  We should
>> > discourage tests like "if obj == None" in favor of "if obj is None".
>> 
>> The problem (if there is any) with 'is' is that it exposes
>> implementation details; therefore it should not be used unless one
>> really knows what one is doing.
>> 
>> And 'if obj is None' gains performance by relying on one of these.
>> 
>> So I would consider 'if obj == None' correct, but unoptimized code.
>
> The problem is that 'obj == None' is not the assertion you want to make
> usually. 'obj == None' means "obj claims it is equal to None", whereas
> 'obj is None' means 'obj is the None object'. The latter is a much more
> stringent assertion than the former which relies on the particular
> implementation of obj.

Ok, so I propose to add an isNone operator or builtin function =:)

Thomas


From ark-mlist at att.net  Fri Mar 19 14:17:08 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 14:17:05 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319134151.42562989.casey@zope.com>
Message-ID: <000101c40de6$c655d6d0$6402a8c0@arkdesktop>

> Objects are equivalent if they are the same type and their states are
> the same.
> 
> Objects are interchangeable if they are equivalent and their states will
> always be the same (i.e., changes to one are always reflected in the
> other), or if they are equivalent and immutable.

I haven't distinguished between equivalent and interchangeable.  I think
that most people would consider "equivalent" and "interchangeable" to mean
the same thing.

> Objects are identical if they are physically the same (in the sense of
> 'is' currently)

Yes.

> There is also a fourth type:
> 
> Objects are equal if ob1.__eq__(ob2) returns true.

Right.


From barry at python.org  Fri Mar 19 14:16:52 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 14:17:09 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319135407.1348d00a.casey@zope.com>
References: <1079719140.405b34e4c67e9@mcherm.com>
	<1079719550.8897.88.camel@anthem.wooz.org> <brmsvgcr.fsf@python.net>
	<20040319135407.1348d00a.casey@zope.com>
Message-ID: <1079723811.8897.98.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 13:54, Casey Duncan wrote:

> The problem is that 'obj == None' is not the assertion you want to make
> usually. 'obj == None' means "obj claims it is equal to None", whereas
> 'obj is None' means 'obj is the None object'. The latter is a much more
> stringent assertion than the former which relies on the particular
> implementation of obj.

Exactly.

-Barry



From tjreedy at udel.edu  Fri Mar 19 14:18:30 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri Mar 19 14:18:48 2004
Subject: [Python-Dev] Re: A proposal has surfaced on comp.lang.python
	toredefine"is"
References: <BB8A764C.795333E3@mail.google.com>
	<001f01c40dd9$92dabe90$6402a8c0@arkdesktop>
Message-ID: <c3fh22$6ra$1@sea.gmane.org>


"Andrew Koenig" <ark-mlist@att.net> wrote in message
news:001f01c40dd9$92dabe90$6402a8c0@arkdesktop...
> > I would say that Python is served well by the two equality predicates
> > it has, that it is impossible to please everyone, and that users
> > should get used to writing the predicate they want if it is not one of
> > the builtins.

+1

> Without disagreeing with your statement, I can also say this:
>
> The fact that "is" is so easy to use encourages some programmers to use it
> when they would be better off with a different predicate that is much less
> readily available.  If "is" represented this other predicate instead, most
> programs that use it would be better off.

Until someone writes, tests, and publishes an equiv function, and others
verify its usefulness, that strikes me as a speculative hypothesis.
Concrete code would also be clearer to me than the natural language
definitions I have seen.

Terry J. Reedy




From ark-mlist at att.net  Fri Mar 19 14:19:39 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 14:19:34 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <brmsvgcr.fsf@python.net>
Message-ID: <000201c40de7$207a50f0$6402a8c0@arkdesktop>


> The problem (if there is any) with 'is' is that it exposes
> implementation details; therefore it should not be used unless one really
> knows what one is doing.

Yes.  Also, I think that in general, operations that should be used only by
people who really know what they're doing should be harder to express.

> And 'if obj is None' gains performance by relying on one of these.

> So I would consider 'if obj == None' correct, but unoptimized code.

Actually, it's incorrect, because obj could be of a class that redefines ==
to yield True in that case.



From mcherm at mcherm.com  Fri Mar 19 14:25:47 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar 19 14:25:53 2004
Subject: [Python-Dev] redefining is
Message-ID: <1079724347.405b493b61aab@mcherm.com>

Barry writes:
> Using == for identity objects is the wrong thing.  We should discourage
> tests like "if obj == None" in favor of "if obj is None".

I hear you, but I disagree. There is a good reason why the default
implementation of '==' compares by object identity.

I don't care much whether we use "if obj is None" rather than
"if obj == None", since the PARTICULAR case of comparing with None
is mostly just an idiom. But if you have two different identity
objects and want to compare them, then I would prefer

    if currentCustomer == desiredCustomer:

to

    if currentCustomer is desiredCustomer:

However, either one works, and I won't gripe about your using 'is'
if you prefer it. I _WOULD_ gripe if you overrode __eq__ to raise
an exception in order to force me to use 'is' not '==' on your
objects (not that you ever suggested doing so).

-- Michael Chermside


From barry at python.org  Fri Mar 19 14:39:41 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 14:39:53 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079724347.405b493b61aab@mcherm.com>
References: <1079724347.405b493b61aab@mcherm.com>
Message-ID: <1079725180.8897.111.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 14:25, Michael Chermside wrote:

> I hear you, but I disagree. There is a good reason why the default
> implementation of '==' compares by object identity.
> 
> I don't care much whether we use "if obj is None" rather than
> "if obj == None", since the PARTICULAR case of comparing with None
> is mostly just an idiom. 

I think it's more than that.  Python says there's only one None object,
and when I'm testing for, e.g. the return value from dict.get(), I'm
specifically testing that the object returned is that singleton object. 
By using an identity test, I'm avoiding any possibility that someone
slipped me a subversive (or buggy) object that happens to claim it's
equal to None when it is not None.

When I use "if obj is None" I really, truly want an identity test.  I
don't think == is appropriate for that.
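
A small sketch of the kind of "subversive" object I mean (purely
illustrative):

    >>> class Sneaky(object):
    ...     def __eq__(self, other):
    ...         return True          # claims equality with everything
    ...
    >>> obj = Sneaky()
    >>> obj == None
    True
    >>> obj is None
    False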

> But if you have two different identity
> objects and want to compare them, then I would prefer
> 
>     if currentCustomer == desiredCustomer:
> 
> to
> 
>     if currentCustomer is desiredCustomer:

It depends on what you want compared.  If you're happy with any old
arbitrary claims of equality that the objects can make about themselves,
then fine.  If you're asking, "do I have the exact object that I want"
then it's not.

> However, either one works, and I won't gripe about your using 'is'
> if you prefer it. I _WOULD_ gripe if you overrode __eq__ to raise
> an exception in order to force me to use 'is' not '==' on your
> objects (not that you ever suggested doing so).

Hey, we're all consenting adults here (you high school kids, look away!
:).

-Barry



From nas-python at python.ca  Fri Mar 19 14:47:37 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 14:47:42 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>
References: <1079635620.4350.2.camel@anthem.wooz.org>
	<001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>
Message-ID: <20040319194737.GA31012@mems-exchange.org>

On Fri, Mar 19, 2004 at 12:19:11PM -0500, Andrew Koenig wrote:
> Analogously, I think that the way Python defines "is" is hazardous.  I do
> not think it is nearly as big a hazard as C++'s treatment of virtual
> destructors, but I still think it's a hazard.  The reason is similar:  Using
> it makes it easy to write programs that appear to work by coincidence.

Making 'is' do something other than comparing object identity is not
going to help, IMHO.  If you don't understand Python's object model
then you are going to get into trouble anyhow.

> The "raise 'quit'" example is a good one, because it will always
> work on an implementation that interns string literals, so no
> amount of testing will reveal the problem.

Not a good example.  String exceptions are deprecated.

> By "object equivalence", I mean mutual substitutability--which is the same
> as identity for mutable objects, but not for immutable ones.

So what does it mean for immutable objects?

> The hazard comes about because there are objects that are equivalent but not
> identical, and there are contexts (such as the raise/except example above)
> in which it is logical to ask for object equivalence, but what you actually
> get is object identity.

Your raise/except example is bad.  Do you have another one in which
people could not just use '==' instead of 'is'?

  Neil

From barry at python.org  Fri Mar 19 14:48:54 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 14:49:09 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>
References: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>
Message-ID: <1079725734.8897.121.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 13:03, Andrew Koenig wrote:

> If Tim and Barry are mutable, then an identity test is what they get.

I won't speak for Tim, but my mutability is, er, mutable. :)

> I think I know what object equivalence means in a practical sense:
> It means the same thing as object identity unless both objects are
> immutable, in which case it is true if and only if the objects have the same
> type and all of their corresponding attributes are (recursively) equivalent.

I'd be really interested to hear what some of our teachers think about
this (not just the definition above, but the whole is-vs-== issue).  To
me, is vs. == is a very intuitive notion, but I'm wondering if this
trips up newbies, and whether something like the above distinction would
help.

> Understood.  The thing is that some people expect expressions such as
> 
> 	"Hello" is "Hello"
> 
> to yield True under all circumstances, and they don't.

Yep, and I don't mean to downplay that confusion, because teaching why
that isn't the correct idiom to use gets into somewhat deep issues.  Not
identity versus equality, but stuff like interning (what it is and why
we have it).

> > Maybe we need a 'like' keyword-let to spell equivalence:
> > 
> >    >>> "hello" is like 'hello'
> >    True
> 
> Maybe.

Someone other than me will have to write the PEP for this. :)

> > PS: 'raise "quit"' doesn't really enter into this for two reasons.
> > First, string exceptions are going away.  Second, there's no 'is'
> > there.  If string exceptions were to stick around, then it might make
> > sense to redefine the match criteria in terms of object equivalence.
> > But that's a moot point.
> 
> Out of curiosity, why are string exceptions going away?  Just to insist that
> people give explicit types to their exceptions?

Probably, originally, for many of the reasons stated here.  That people
expected equality tests in the except clause and were tripped up by the
identity test performed.  I don't remember if changing the except test
was ever suggested though.

-Barry



From casey at zope.com  Fri Mar 19 14:46:19 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 19 14:49:43 2004
Subject: [Python-Dev] Re: redefining is
References: <20040319134151.42562989.casey@zope.com>
	<000101c40de6$c655d6d0$6402a8c0@arkdesktop>
Message-ID: <20040319144619.1901e9ac.casey@zope.com>

On Fri, 19 Mar 2004 14:17:08 -0500
"Andrew Koenig" <ark-mlist@att.net> wrote:

> > Objects are equivalent if they are the same type and their states
> > are the same.
> > 
> > Objects are interchangeable if they are equivalent and their states
> > will always be the same (i.e., changes to one are always reflected
> > in the other), or if they are equivalent and immutable.
> 
> I haven't distinguished between equivalent and interchangeable.  I
> think that most people would consider "equivalent" and
> "interchangeable" to mean the same thing.

I hadn't either until I read your variation.

It seems to me that given:

a = ([], [])
b = ([], [])

That a and b are equivalent (their states are the same), but they are
not interchangeable, since a may diverge from b by mutating one differently
from the other.

The key of course is how you define state equality. In my view, state
equality means that if you were to serialize the objects to byte
streams which neutrally represented the entirety of their state,
equivalent objects would have identical serializations.

That says nothing about the relationship between a and b (they may be
the same object or completely unrelated in any way). Interchangeable
says that the objects are equivalent now and are related in such a way
that they will always be equivalent.

Now whether a separation of equivalence and interchangeability is useful
beyond mental masturbation remains to be seen. It seems to me that from
a pragmatic perspective, interchangeability is easier to implement,
especially for mutable and composite objects. Of course for immutable
simple objects they mean the same thing, and equivalence is determined
by directly comparing values in memory.
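
A rough sketch of that notion of state equality (pickle output is not a
canonical serialization, so treat this as an illustration only, not a
reliable test):

    import pickle

    def same_state(a, b):
        # equivalent in the sense above: identical neutral serializations
        return pickle.dumps(a) == pickle.dumps(b)

    a = ([], [])
    b = ([], [])
    assert same_state(a, b)          # equivalent right now
    a[0].append(42)
    assert not same_state(a, b)      # so they were never interchangeable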

-Casey


From mcherm at mcherm.com  Fri Mar 19 14:55:44 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar 19 14:55:47 2004
Subject: [Python-Dev] redefining is
Message-ID: <1079726144.405b50407c85e@mcherm.com>

Andrew Koenig writes:
> If I understand the example correctly, every mutable object is also an
> identity object, because you can distinguish between two mutable objects by
> changing one of them and seeing if the other changes.

Nope, that's not what I mean. Check out 
http://mail.python.org/pipermail/python-dev/2004-February/042579.html
for Martin's explanation, which is probably better than mine, but here's
my simple summary:

  Identity Objects: Even if all fields were the same, two instances
    would be considered "different". With Employee objects, if two
    employees both have the name "John Smith", they're still different.

  Value Objects: Object identity is not really important for
    equality. The purpose of these objects is to represent particular
    values. So long as all fields are equal, we consider two such 
    values equal. So two lists are the same if both have the same
    contents. Some value objects (like lists) are mutable, in which
    case the identity MAY be important to distinguish whether the
    objects will remain equal if one is modified; others (like
    strings) are immutable, in which case identity becomes an
    implementation detail (and in some cases like small ints and
    interned strings, cpython takes advantage of this).


You demonstrate case 1:
> 	x = ([], [])
> 	y = ([], [])
> Here, x and y are immutable objects that happen to have mutable attributes.
> Those objects are equal, but not substitutable.

versus case 2:
> 	a = []
> 	b = []
> 	x1 = (a, b)
> 	y1 = (a, b)
> Now: x1 and y1 are mutually substitutable; x and y are equal but not
> mutually substitutable, and x, y, x1, and y1 are all distinct objects.

Interesting. The key here is that "mutable" is used two ways... to mean
that the "top-level-object" (the tuple in your example) is not mutable
and to mean that the entire structure is not mutable. So if I get this
right, you are saying that with these objects:

    >>> v = ([42],)
    >>> w = v
    >>> x = (v[0],)
    >>> y = ([42],)
    >>> z = ([999],)

The '==' operator is useful for distinguishing z from all the others.
The 'is' operator is useful because it distinguishes x from v and w
  (we usually don't care, but it DOES make a difference in memory use).
But no operator exists to distinguish y from v, w, and x. By your
  terminology, v, w, and x are "mutually substitutable", but y is not
  because while it is equal NOW, it might no longer be equal if we
  modified the list.

Whew.... I hadn't realized it was quite so complex. Looking closely
at this, I am beginning to fear that there are not merely _3_ meaningful
forms of equivalence, but far more than that. Of course, the paper
that Peter Norvig referred us to earlier
(http://www.nhplace.com/kent/PS/EQUAL.html) said exactly that. Well,
thanks for clarifying.

-- Michael Chermside


From nas-python at python.ca  Fri Mar 19 14:56:36 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 14:56:41 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <000101c40de6$c655d6d0$6402a8c0@arkdesktop>
References: <20040319134151.42562989.casey@zope.com>
	<000101c40de6$c655d6d0$6402a8c0@arkdesktop>
Message-ID: <20040319195636.GB31012@mems-exchange.org>

On Fri, Mar 19, 2004 at 02:17:08PM -0500, Andrew Koenig wrote:
> I haven't distinguished between equivalent and interchangeable.  I
> think that most people would consider "equivalent" and
> "interchangeable" to mean the same thing.

I have no idea what you mean by "equivalent" or "interchangeable".

  Neil

From ark-mlist at att.net  Fri Mar 19 15:02:54 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:02:49 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <405B4186.9020800@livinglogic.de>
Message-ID: <000201c40ded$2abe6eb0$6402a8c0@arkdesktop>

> But this mutual substitutability is context dependent.
> "is" guarantees that both objects are mutually substitutable in
> *any* context. "==" defines the mutual substitutability for the
> "default" context. Any other context is application dependent,
> so the comparison function should be defined by the application.

I disagree:  There are some contexts that are already written into the
Python language--a fact that gives those contexts a status beyond mere
application dependence.  In particular, the language says that when you use
a particular string literal in a program, you may or may not get the same
object, but if you get two different objects, there will be no way to
distinguish those objects from each other without examining their identity.

That is not a notion I made up, nor is it dependent on any particular
application.  It's already part of Python.

So what I'm claiming is that there should be a way of asking:  Given two
objects, is there any way to distinguish them aside from their identity?  I
further claim that in many cases, people mistakenly use "is" as a way of
asking this question, partly because there is no easy way to ask it now.


From ark-mlist at att.net  Fri Mar 19 15:03:39 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:03:35 2004
Subject: [Python-Dev] Re: A proposal has surfaced on
	comp.lang.pythontoredefine"is"
In-Reply-To: <c3fh22$6ra$1@sea.gmane.org>
Message-ID: <000301c40ded$46676d10$6402a8c0@arkdesktop>

> Until someone writes, tests, and publishes an equiv function, and others
> verify its usefulness, that strikes me as a speculative hypothesis.
> Concrete code would also be clearer to me than the natural language
> definitions I have seen.

I would do so if I knew an easy way to determine whether an object is
mutable.


From nas-python at python.ca  Fri Mar 19 15:04:30 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 15:04:37 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>
References: <1079718923.8897.83.camel@anthem.wooz.org>
	<002501c40ddc$7933ca60$6402a8c0@arkdesktop>
Message-ID: <20040319200430.GC31012@mems-exchange.org>

On Fri, Mar 19, 2004 at 01:03:24PM -0500, Andrew Koenig wrote:
> The thing is that some people expect expressions such as
> 
> 	"Hello" is "Hello"
> 
> to yield True under all circumstances, and they don't.

Is that really the case or do they use 'is' because they think that
it will make their program go faster?  I would expect that novice
users would envision that statement creating two strings.

  Neil

From nospam at barrys-emacs.org  Fri Mar 19 15:04:44 2004
From: nospam at barrys-emacs.org (Barry Scott)
Date: Fri Mar 19 15:06:07 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319135407.1348d00a.casey@zope.com>
References: <1079719140.405b34e4c67e9@mcherm.com>
	<1079719550.8897.88.camel@anthem.wooz.org>
	<brmsvgcr.fsf@python.net> <20040319135407.1348d00a.casey@zope.com>
Message-ID: <6.0.3.0.2.20040319200009.023849a8@torment.chelsea.private>

This is why you cannot mess with "is". We replaced all the "== None"
with "is None" in our code to avoid tracebacks when the __eq__ method
fails on a None. Yes, there is a bug in the __eq__ code, but in almost
all cases I don't want the __eq__ code run at all.

Barry

At 19-03-2004 18:54, Casey Duncan wrote:
> > So I would consider 'if obj == None' correct, but unoptimized code.
>
>The problem is that 'obj == None' is not the assertion you want to make
>usually. 'obj == None' means "obj claims it is equal to None", whereas
>'obj is None' means 'obj is the None object'. The latter is a much more
>stringent assertion than the former which relies on the particular
>implementation of obj.
>
>-Casey



From ark-mlist at att.net  Fri Mar 19 15:08:45 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:08:43 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040319194737.GA31012@mems-exchange.org>
Message-ID: <000401c40ded$fc2463b0$6402a8c0@arkdesktop>

> > By "object equivalence", I mean mutual substitutability--which is the
> same
> > as identity for mutable objects, but not for immutable ones.
> 
> So what does it mean for immutable objects?

Two immutable objects are equivalent if and only if they have the same type
and all of their attributes are (recursively) equivalent.

> Your raise/except example is bad.  Do you have another one in which
> people could not just use '==' instead of 'is'?

x = ([], [])
y = ([], [])

Here, x == y is true, but x and y are not equivalent.  We can prove this
inequivalence by executing x[0].append(42) and noting that y[0] does not
change.

a = []
b = []
x = (a, b)
y = (a, b)

Here, x == y is true, and x and y are equivalent.
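
Spelled out as an interactive session:

    >>> x = ([], [])
    >>> y = ([], [])
    >>> x == y
    True
    >>> x[0].append(42)
    >>> y[0]                  # y did not change, so they were not equivalent
    []
    >>> a = []
    >>> b = []
    >>> x = (a, b)
    >>> y = (a, b)
    >>> x[0].append(42)
    >>> y[0]                  # the change shows through; x and y are equivalent
    [42]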


From aahz at pythoncraft.com  Fri Mar 19 15:12:52 2004
From: aahz at pythoncraft.com (Aahz)
Date: Fri Mar 19 15:12:57 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>
References: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>
Message-ID: <20040319201252.GA27225@panix.com>

On Fri, Mar 19, 2004, Raymond Hettinger wrote:
>
> 3) I had re-coded several modules in C with the goal of making pure
> python more viable for certain high performance apps (for instance,
> implementing heapq and bisect in C made them more viable as components
> of superfast code written in pure python).  If the threading module were
> re-coded in C, then it would benefit a broad class of multi-threaded
> applications making it more likely that Python becomes the language of
> choice for those apps.

The last time you brought this up, there was considerable disagreement
about how appropriate this would be in general.  I am not at all sure,
for example, that coding ``threading`` in C would bring enough
improvement to justify the loss of readability.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From nas-python at python.ca  Fri Mar 19 15:15:05 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 15:15:32 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000201c40ded$2abe6eb0$6402a8c0@arkdesktop>
References: <405B4186.9020800@livinglogic.de>
	<000201c40ded$2abe6eb0$6402a8c0@arkdesktop>
Message-ID: <20040319201505.GD31012@mems-exchange.org>

On Fri, Mar 19, 2004 at 03:02:54PM -0500, Andrew Koenig wrote:
> I disagree:  There are some contexts that are already written into the
> Python language--a fact that gives those contexts a status beyond mere
> application dependence.  In particular, the language says that when you use
> a particular string literal in a program, you may or may not get the same
> object, but if you get two different objects, there will be no way to
> distinguish those objects from each other without examining their identity.

sys.getrefcount() :-)

> That is not a notion I made up, nor is it dependent on any particular
> application.  It's already part of Python.

Okay, but I don't see why that implementation detail is important.

> So what I'm claiming is that there should be a way of asking:  Given two
> objects, is there any way to distinguish them aside from their identity?

Why do you need to ask that question?  Further more, why is it
important enough to require a builtin operator?

  Neil

From ark-mlist at att.net  Fri Mar 19 15:17:50 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:17:47 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079726144.405b50407c85e@mcherm.com>
Message-ID: <000501c40def$409fdd20$6402a8c0@arkdesktop>

> Andrew Koenig writes:
> > If I understand the example correctly, every mutable object is also an
> > identity object, because you can distinguish between two mutable objects by
> > changing one of them and seeing if the other changes.

> Nope, that's not what I mean. Check out
> http://mail.python.org/pipermail/python-dev/2004-February/042579.html
> for Martin's explanation, which is probably better than mine, but here's
> my simple summary:
> 
>   Identity Objects: Even if all fields were the same, two instances
>     would be considered "different". With Employee objects, if two
>     employees both have the name "John Smith", they're still different.

In which case every mutable object is always an identity object.  I did not
say that every identity object is mutable.  Where is my misunderstanding?

>   Value Objects: Object identity is not really important for
>     equality. The purpose of these objects is to represent particular
>     values. So long as all fields are equal, we consider two such
>     values equal. So two lists are the same if both have the same
>     contents. Some value objects (like lists) are mutable, in which
>     case the identity MAY be important to distinguish whether the
>     objects will remain equal if one is modified; others (like
>     strings) are immutable, in which case identity becomes an
>     implementation detail (and in some cases like small ints and
>     interned strings, cpython takes advantage of this).

Exactly.  When a value object is immutable, identity is an implementation
detail.  If two value objects have the same value (which I can define more
rigorously if needed), they are equivalent, and identity is an
implementation detail.

What I'm saying, I think, is that it would be nice to have a comparison
similar to "is" that ignores identity when it is an implementation detail
but not otherwise.

> You demonstrate case 1:
> > 	x = ([], [])
> > 	y = ([], [])
> > Here, x and y are immutable objects that happen to have mutable attributes.
> > Those objects are equal, but not substitutable.
> 
> versus case 2:
> > 	a = []
> > 	b = []
> > 	x1 = (a, b)
> > 	y1 = (a, b)
> > Now: x1 and y1 are mutually substitutable; x and y are equal but not
> > mutually substitutable, and x, y, x1, and y1 are all distinct objects.
> 
> Interesting. The key here is that "mutable" is used two ways... to mean
> that the "top-level-object" (the tuple in your example) is not mutable
> and to mean that the entire structure is not mutable. So if I get this
> right, you are saying that with these objects:
> 
>     >>> v = ([42],)
>     >>> w = v
>     >>> x = (v[0],)
>     >>> y = ([42],)
>     >>> z = ([999],)

Whenever I have used "immutable" in this discussion, I have meant "top-level
immutable".  So I consider all tuples to be immutable, even though they may
refer to mutable objects.

> The '==' operator is useful for distinguishing z from all the others.

Yes, for the current values of v, w, x, y, and z.

> The 'is' operator is useful because it distinguishes x from v and w
>   (we usually don't care, but it DOES make a difference in memory use).

Yes.  It is also potentially dangerous because it distinguishes x from v and
w.

> But no operator exists to distinguish y from v, w, and x. By your
>   terminology, v, w, and x are "mutually substitutable", but y is not
>   because while it is equal NOW, it might no longer be equal if we
>   modified the list.

Correct.

> Whew.... I hadn't realized it was quite so complex. Looking closely
> at this, I am beginning to fear that there are not merely _3_ meaningful
> forms of equivalence, but far more than that. Of course, the paper
> that Peter Norvig referred us to earlier
> (http://www.nhplace.com/kent/PS/EQUAL.html) said exactly that. Well,
> thanks for clarifying.

There are certainly more than three meaningful forms of equivalence.  I
claim, however, that these three forms already have special status in the
language, because it is implementation-defined whether two occurrences of
the same string literal refer to the same object.



From jeremy at alum.mit.edu  Fri Mar 19 15:16:34 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri Mar 19 15:19:17 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <20040319201252.GA27225@panix.com>
Message-ID: <IOEAJCCLBLFDOCOBOEGNOEFNCKAA.jeremy@alum.mit.edu>

> On Fri, Mar 19, 2004, Raymond Hettinger wrote:
> >
> > 3) I had re-coded several modules in C with the goal of making pure
> > python more viable for certain high performance apps (for instance,
> > implementing heapq and bisect in C made them more viable as components
> > of superfast code written in pure python).  If the threading module were
> > re-coded in C, then it would benefit a broad class of multi-threaded
> > applications making it more likely that Python becomes the language of
> > choice for those apps.
>
> The last time you brought this up, there was considerable disagreement
> about how appropriate this would be in general.  I am not at all sure,
> for example, that coding ``threading`` in C would bring enough
> improvement to justify the loss of readability.

+1

Jeremy


From ark-mlist at att.net  Fri Mar 19 15:20:12 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:20:52 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319195636.GB31012@mems-exchange.org>
Message-ID: <000601c40def$957a1f90$6402a8c0@arkdesktop>


> > I haven't distinguished between equivalent and interchangeable.  I
> > think that most people would consider "equivalent" and
> > "interchangeable" to mean the same thing.
> 
> I have no idea what you mean by "equivalent" or "interchangeable".

In general, when I say that x and y are equivalent (or interchangeable), I
mean that in any context in which x appears, y can be used instead with the
same effect.

In the context of this particular discussion, I am talking about two objects
being equivalent (in the sense above) in any program that does not directly
determine the identity of those objects.


From ark-mlist at att.net  Fri Mar 19 15:22:25 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:22:21 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <6.0.3.0.2.20040319200009.023849a8@torment.chelsea.private>
Message-ID: <000701c40def$e4fe4370$6402a8c0@arkdesktop>

> This is why you cannot mess with "is". We replaced all the "== None"
> with "is None" in our code to avoid tracebacks when the __eq__ method
> fails on a None. yes there is a bug in the __eq__ code, but I don't
> want the __eq__ code run at all in almost all cases.

The proposal we're talking about would not change the meaning of
"x is None".  For that matter, it would not change the meaning of "x is y"
for any singleton y.


From pinard at iro.umontreal.ca  Fri Mar 19 15:23:19 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Fri Mar 19 15:24:17 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <1079725734.8897.121.camel@anthem.wooz.org>
References: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>
	<1079725734.8897.121.camel@anthem.wooz.org>
Message-ID: <20040319202319.GA3140@alcyon.progiciels-bpi.ca>

[Barry Warsaw]

> > > Maybe we need a 'like' keyword-let to spell equivalence:

> > >    >>> "hello" is like 'hello'
> > >    True

> > Maybe.

> Someone other than me will have to write the PEP for this. :)

"for this" ?

You mean, adding the singleton Maybe between True and False? :-)

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From barry at python.org  Fri Mar 19 15:29:24 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 15:30:56 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040319202319.GA3140@alcyon.progiciels-bpi.ca>
References: <002501c40ddc$7933ca60$6402a8c0@arkdesktop>
	<1079725734.8897.121.camel@anthem.wooz.org>
	<20040319202319.GA3140@alcyon.progiciels-bpi.ca>
Message-ID: <1079728163.8897.131.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 15:23, François Pinard wrote:
> [Barry Warsaw]
> 
> > > > Maybe we need a 'like' keyword-let to spell equivalence:
> 
> > > >    >>> "hello" is like 'hello'
> > > >    True
> 
> > > Maybe.
> 
> > Someone other than me will have to write the PEP for this. :)
> 
> "for this" ?
> 
> You mean, adding the singleton Maybe between True and False? :-)

:)

-Barry



From ark-mlist at att.net  Fri Mar 19 15:33:41 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:33:36 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040319201505.GD31012@mems-exchange.org>
Message-ID: <000801c40df1$77be5be0$6402a8c0@arkdesktop>

> Okay, but I don't see why that implementation detail is important.

It is important because it causes programs to behave differently on
different implementations.

> > So what I'm claiming is that there should be a way of asking:  Given two
> > objects, is there any way to distinguish them aside from their identity?

> Why do you need to ask that question?  Further more, why is it
> important enough to require a builtin operator?

It certainly doesn't *require* a builtin operator.  I do think, however,
that the proposed comparison is more useful than "is" in most contexts in
which programmers use "is" today.  In particular, programs that use "is" can
easily contain bugs that testing on a particular implementation can never
reveal, and using the proposed comparison instead makes such bugs much less
likely (and perhaps even impossible).
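
A concrete illustration, relying on CPython implementation details (other
implementations, and even other ways of entering the same code, may give
different results):

    >>> a = 256; b = 256
    >>> a is b                # True only because CPython caches small ints
    True
    >>> a = int("257"); b = int("257")
    >>> a is b                # equal values, distinct objects
    False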


From ark-mlist at att.net  Fri Mar 19 15:34:49 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:34:45 2004
Subject: [Python-Dev] Rich comparisons  [Was] redefining is
In-Reply-To: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>
Message-ID: <000901c40df1$a0761870$6402a8c0@arkdesktop>

> The test suite runs fine, but it is possible that some existing class
> defines equality in a way that sometimes returns False even when given
> two identical objects.

I believe that IEEE floating-point requires x == x to be False when x is
NaN.



From jim.jewett at eds.com  Fri Mar 19 15:39:01 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 15:40:08 2004
Subject: [Python-Dev] todo (was: Joys of Optimization)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D8@USAHM010.amer.corp.eds.com>


Is there some central location where these ideas (and any known
problems) are collected?  Perhaps the Request For Enhancement
tracker at sourceforge?

I have a feeling that most of my questions have been asked
before, but I don't know which ones, or where to find the
answers.

> If the threading module were re-coded in C, 

Does the python portion of threading really add much overhead?  
The obvious costs are for thread switches and calls back and 
forth between python and C.  Changing threading code to C might 
just move that cost from threading to the actual thread, without 
actually speeding up the program much.

> The peephole optimizer in compile.c ... however, they can't go
> in until the line renumberer is done.  For example, the code a,b=b,a
> currently runs slower than t=a;a=b;b=t but it could run several times
> faster if the BUILD_TUPLE 2 UNPACK_SEQUENCE 2 were replaced by ROT_TWO.

What is the cheapest fake NOOP?  Define a pass function at compile time
and call it for a NOOP?  Jump/jump-back?  Or are those just too ugly
to use?

> 5) Python's startup time got much worse in Py2.3.

Is there an easy way to create a daemon Python (or a not-quite-
unloaded library, like MS Office), so that calls to Python after
the first can just start a new thread in the already-loaded
interpreter?  I think one of the Apache plugins does something
like this, but perhaps it has other problems that keeps it out of CVS?

> 4) eval() only takes real dictionaries as arguments.  ...  Previous
> proposals were rejected because they cost a percentage point or two of
> execution time 

Is it reasonable to make the funky environment pay a penalty?  For
instance, "process()" or "evaluate()" instead of eval.  These could
even be in a module that needs to be imported.  But if you're changing
dictionaries, you may also want to change the reader, or ensure that
the implementation does not add builtins, or ... so it might be worth
a PEP rather than just a weekend coding.

-jJ

From jcarlson at uci.edu  Fri Mar 19 15:38:45 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri Mar 19 15:43:08 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <000101c40de6$c655d6d0$6402a8c0@arkdesktop>
References: <20040319134151.42562989.casey@zope.com>
	<000101c40de6$c655d6d0$6402a8c0@arkdesktop>
Message-ID: <20040319120617.FB67.JCARLSON@uci.edu>

> I haven't distinguished between equivalent and interchangeable.  I think
> that most people would consider "equivalent" and "interchangeable" to mean
> the same thing.

No.

Equivalency at this point in time is different from being equivalent at
all points in time in the future.

Equivalent now is something we can test easily with the current '=='.

Equivalent at all times in the future (what I would consider
interchangeable) at a minimum requires testing object identities.  This
seems to be going a bit beyond what 'is' is doing right now, as shown
by:

>>> a = []
>>> b = []
>>> x1 = (a,b)
>>> y1 = (a,b)
>>> x1 is y1
False

'is' seems to be doing a shallow check of object identity (I may be
wrong).

Testing whether or not two objects will be identical in the future would
likely require a single-level id check along with being able to test
whether an object is mutable or not.  For built-in structures (lists,
tuples, dictionaries, strings...), this isn't so bad; we know what is
immutable or not.  A seemingly reasonable algorithm for determining
whether two objects are interchangeable is as follows...

#list possibly not complete.
_immutables = (int, str, float, tuple, type(None))
immutables = dict([(i, None) for i in _immutables])
def interchangeable(a, b):
    if type(a) not in immutables or type(b) not in immutables:
        return False
    if type(a) is type(b) and type(a) is tuple:
        #are immutable, but may contain objects whose
        #identity may be important for the future
        ia = map(id, a)
        ib = map(id, b)
        return ia == ib
    #are immutable but aren't a container
    return a == b


For user-defined classes, unless an interface is designed so that
classes can say, "I am mutable" or "I am immutable", and there are ways
to get their 'contents' (if such a thing makes sense), then testing
potential interchangeability is a moot point, and is (currently)
impossible.

 - Josiah


From ark-mlist at att.net  Fri Mar 19 15:55:44 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 15:57:07 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319120617.FB67.JCARLSON@uci.edu>
Message-ID: <000b01c40df4$8c9569c0$6402a8c0@arkdesktop>

> Equivalency at this point in time is different from being equivalent at
> all points in time in the future.

If you like.

> Equivalent now, is something we can test easily with the current '=='.

Not really, at least not in the presence of user-defined ==.

> Equivalent at all times in the future (what I would consider
> interchangeable) requires minimally testing object identities.  This
> seems to be going a bit beyond what 'is' is doing right now, as shown
> by:
> 
> >>> a = []
> >>> b = []
> >>> x1 = (a,b)
> >>> y1 = (a,b)
> >>> x1 is y1
> False
> 
> 'is' seems to be doing a shallow check of object identity (I may be
> wrong).

No, you're right.

> Testing whether or not two objects will be identical in the future would
> likely require a single-level id check along with being able to test
> whether an object is mutable or not.  For built-in structures (lists,
> tuples, dictionaries, strings...), this isn't so bad, we know what is
> immutable or not.  Seemingly a reasonable algorithm for determining
> whether two objects are interchangeable is as follows...
> 
> #list possibly not complete.
> _immutables = (int, str, float, tuple, type(None))

Certainly not complete, as it doesn't include long or bool.

> immutables = dict([(i, None) for i in _immutables])
> def interchangeable(a, b):
>     if type(a) not in immutables or type(b) not in immutables:
>         return False
>     if type(a) is type(b) and type(a) is tuple:
>         #are immutable, but may contain objects whose
>         #identity may be important for the future
>         ia = map(id, a)
>         ib = map(id, b)
>         return ia == ib

That's not quite what I had in mind: You're testing for element identity
rather than interchangeability.  Perhaps something more like this:

        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if not interchangeable(x, y):
                return False
        return True

>     #are immutable but aren't a container
>     return a == b

Also, this code does not take ImmutableSet into account.

> For user-defined classes, unless an interface is designed so that
> classes can say, "I am mutable" or "I am immutable", and there are ways
> to get their 'contents' (if such a thing makes sense), then testing
> potential interchangeability is a moot point, and is (currently)
> impossible.

Correct.  As things stand today, two distinct objects of user-defined
classes should never be considered interchangeable.
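
Pulling the fragments above together, a rough consolidated sketch (Python 2
of this era; the tuple of "atoms" is illustrative only, and ImmutableSet
handling is deliberately left out):

    # not a complete list of immutable types; illustrative only
    _immutable_atoms = (int, long, float, complex, bool, str, unicode,
                        type(None))

    def interchangeable(a, b):
        if a is b:
            return True                  # identity always suffices
        if type(a) is not type(b):
            return False
        if isinstance(a, _immutable_atoms):
            return a == b                # value fully determines these objects
        if isinstance(a, tuple):
            # immutable container: interchangeable iff elements are, pairwise
            if len(a) != len(b):
                return False
            for x, y in zip(a, b):
                if not interchangeable(x, y):
                    return False
            return True
        # ImmutableSet would need similar handling; everything else,
        # including user-defined classes, falls back to identity only
        return False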



From fdrake at acm.org  Fri Mar 19 15:56:41 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri Mar 19 15:57:12 2004
Subject: [Python-Dev] new Python CVS committer
Message-ID: <200403191556.41199.fdrake@acm.org>

I've added Phillip Eby to the list of Python developers at SourceForge so he 
can add some interesting distutils extensions to the sandbox.  We'll be 
looking at using at least some of these as a basis for improving distutils 
itself.

Welcome, Phillip!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From pje at telecommunity.com  Fri Mar 19 16:16:20 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 19 16:10:48 2004
Subject: [Python-Dev] new Python CVS committer
In-Reply-To: <200403191556.41199.fdrake@acm.org>
Message-ID: <5.1.0.14.0.20040319161118.03949ec0@mail.telecommunity.com>

At 03:56 PM 3/19/04 -0500, Fred L. Drake, Jr. wrote:
>I've added Phillip Eby to the list of Python developers at SourceForge so he
>can add some interesting distutils extensions to the sandbox.  We'll be
>looking at using at least some of these as a basis for improving distutils
>itself.
>
>Welcome, Phillip!

Thanks, Fred.  'setuptools' is now checked in under 
python/nondist/sandbox/setuptools; you can install it and run its test 
suite via 'python setup.py test'.

Also, if nobody has any objections to my patch for SF bug #743267, maybe I 
should apply that too, at least on the head.  I'm not sure my CVS-Zen is 
strong enough yet to try backporting to the 2.3 bugfix branch, though.  :)


From mcherm at mcherm.com  Fri Mar 19 16:20:02 2004
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri Mar 19 16:20:06 2004
Subject: [Python-Dev] redefining is
Message-ID: <1079731202.405b64023b078@mcherm.com>

I (Michael Chermside) wrote:
>   Identity Objects: Even if all fields were the same, two instances
>     would be considered "different". With Employee objects, if two
>     employees both have the name "John Smith", they're still different.
>
>   Value Objects: Object identity is not really important for
>     equality. The purpose of these objects is to represent particular
>     values. So long as all fields are equal, we consider two such
>     values equal. So two lists are the same if both have the same
>     contents. Some value objects (like lists) are mutable, in which
>     case the identity MAY be important to distinguish whether the
>     objects will remain equal if one is modified; others (like
>     strings) are immutable, in which case identity becomes an
>     implementation detail (and in some cases like small ints and
>     interned strings, cpython takes advantage of this).

Andrew Koenig replies:
> In which case every mutable object is always an identity object.  I did not
> say that every identity object is mutable.  Where is my misunderstanding?

An example of a mutable object which is NOT an identity object is list.

If we have two lists:

    >>> a = [1, 2, 3]
    >>> b = [1, 2, 3]

We want this:

    >>> a == b
    True

to return True. That's because when testing the "equality" of two lists
we only care about their VALUES. It is true that a cannot be substituted
for b in all situations because if we change a the change won't be
reflected in b. But for NOW, they're still equal, because their VALUES
are equal.

An example of an identity object would be an Employee object in a payroll
system.

    >>> class Employee(object):
    ...     def __init__(self, name, department, salary):
    ...         self.name = name
    ...         self.department = department
    ...         self.salary = salary
    ...
    >>> myCubemate = Employee('John Smith', 'IT', 60000)
    >>> guyAcrossHall = Employee('John Smith', 'IT', 60000)
    >>> myCubemate == guyAcrossHall
    False

Despite the fact that my cubemate has the same name as the guy
across the hall... despite the fact that the two objects have
EXACTLY the same value for EACH AND EVERY field, we STILL want
to have the '==' test return False. Because for Employee objects,
two objects are only considered equal if they are the exact same
object. Experienced database designers know that whenever they see
an Identity Object, they should create an "employee_id" field (or its
equivalent) which is guaranteed to be unique, in order to
disambiguate. But in object systems we often just use the object
identity to provide that "identity" which relational databases
lack.

Once again, thanks to Martin for introducing me to this very
useful distinction (although I've done other reading about it
since then) in a discussion over appropriate definitions for 
__eq__ and __hash__.

-- Michael Chermside


From bob at redivi.com  Fri Mar 19 16:24:01 2004
From: bob at redivi.com (Bob Ippolito)
Date: Fri Mar 19 16:20:30 2004
Subject: [Python-Dev] Re: A proposal has surfaced on
	comp.lang.pythontoredefine"is"
In-Reply-To: <000301c40ded$46676d10$6402a8c0@arkdesktop>
References: <000301c40ded$46676d10$6402a8c0@arkdesktop>
Message-ID: <BDF19BFE-79EB-11D8-B3A6-000A95686CD8@redivi.com>


On Mar 19, 2004, at 3:03 PM, Andrew Koenig wrote:

>> Until someone writes, tests, and publishes an equiv function, and 
>> others
>> verify its usefulness, that strikes me as a speculative hypothesis.
>> Concrete code would also be clearer to me than the natural language
>> definitions I have seen.
>
> I would do so if I knew an easy way to determine whether an object is
> mutable.

If something like PEP 246 (Object Adaptation) were adopted with an 
implementation such as PyProtocols, it would be rather easy to tag 
types or objects as mutable or immutable.

untested example


# mutability.py
#
import protocols

__all__ = ['IImmutable', 'is_mutable']

class IImmutable(protocols.Interface):
    pass

def is_mutable(obj, _mutable_sentinel=object()):
    return protocols.adapt(obj, IImmutable,
                           default=_mutable_sentinel) is _mutable_sentinel

def _setup():
    # don't pollute the module namespace
    for typ in (int, long, str, unicode, frozenset):
        protocols.declareImplementation(typ, instancesProvide=IImmutable)
_setup()



Of course, with this particular implementation, new immutable types 
would declare their immutability explicitly, and subtypes of immutable 
types that are now mutable would explicitly declare that they do not 
provide immutability.  Also note that something like this requires no 
changes to existing code at all.
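
An equally untested usage sketch (assuming declareImplementation also
accepts an instancesDoNotProvide argument for the opt-out case; the Point
classes are made up):

import protocols
from mutability import IImmutable, is_mutable

print is_mutable(42)    # False: int was declared immutable in _setup()
print is_mutable([])    # True: lists were never declared, so adapt()
                        # returns the sentinel

class Point(object):    # a brand new immutable type
    def __init__(self, x, y):
        self.x, self.y = x, y
protocols.declareImplementation(Point, instancesProvide=IImmutable)
print is_mutable(Point(1, 2))            # False

class MutablePoint(Point):  # a mutable subtype explicitly opts back out
    pass
protocols.declareImplementation(MutablePoint,
                                instancesDoNotProvide=IImmutable)
print is_mutable(MutablePoint(1, 2))     # True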

-bob


From jim.jewett at eds.com  Fri Mar 19 13:42:16 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 16:36:51 2004
Subject: [Python-Dev] redefining is
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D3@USAHM010.amer.corp.eds.com>


Robert Brewer:

>    Value equality: x == y

>    Object equivalence: x equiv y		
>                    or: equiv(x, y)

>    Object identity: x is y
>                 or: id(x) == id(y)

> Now if we can only find a short English word that means 
> "mutually substitutable". ;)

So identity means the same object, and implies equivalence.

Equivalence means they will always have the same value, even
if you do something to one and not the other.  This implies
(current value) equality.

equality just means that they have the same value *now*.

Since equivalence is a stronger form of equality, why not
just use "===".  

Anything that is === will also be ==, but the extra character
will mark it as special.  If the mark isn't strong enough,
perhaps "=~="; in math the ~ often modifies equality to mean
"not identical, but close enough".

equal_forever would also work, and be explicit.  equal_forever
has the additional advantage that it could be written as a 
function rather than an operator during a trial period.
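
Something along these lines might do for the trial (a rough sketch; it
treats only a handful of builtin types as immutable and falls back to
identity for everything else):

def equal_forever(a, b):
    immutables = (int, long, bool, float, complex, str, unicode, type(None))
    if type(a) is tuple and type(b) is tuple:
        # tuples are equal forever only if their elements are
        return len(a) == len(b) and \
               not [1 for x, y in zip(a, b) if not equal_forever(x, y)]
    if type(a) in immutables and type(b) in immutables:
        return a == b
    return a is b

# equal_forever((1, 'a'), (1, 'a'))  -> True
# equal_forever([1], [1])            -> False (the lists could diverge)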

-jJ

From ark-mlist at att.net  Fri Mar 19 16:50:01 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 16:49:56 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D3@USAHM010.amer.corp.eds.com>
Message-ID: <000301c40dfc$21f65d10$6402a8c0@arkdesktop>

> So identity means the same object, and implies equivalence.

Yes.

> Equivalence means they will always have the same value, even
> if you do something to one and not the other.  This implies
> (current value) equality.

Yes, unless equality is defined weirdly.  For example, IEEE floating-point
NaN is supposed to be unequal to itself.

> equality just means that they have the same value *now*.

I'm not sure the implication goes the other way.  Consider a string-like
type for which == ignores case.  Then equivalence implies equality, but not
the other way.

> Since equivalence is a stronger form of equality, why not
> just use "===".

Seems plausible.

> Anything that is === will also be ==, but the extra character
> will mark it as special.  If the mark isn't strong enough,
> perhaps "=~="; in math the ~ often modifies equality to mean
> "not identical, but close enough".

I wish.  But unfortunately sometimes x == x is False.

> equal_forever would also work, and be explicit.  equal_forever
> has the additional advantage that it could be written as a
> function rather than an operator during a trial period.

Indeed.  I'd prefer something shorter, though, such as equiv.


From casey at zope.com  Fri Mar 19 16:58:35 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 19 17:01:55 2004
Subject: [Python-Dev] Re: redefining is
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D3@USAHM010.amer.corp.eds.com>
	<000301c40dfc$21f65d10$6402a8c0@arkdesktop>
Message-ID: <20040319165835.683290f9.casey@zope.com>

On Fri, 19 Mar 2004 16:50:01 -0500
"Andrew Koenig" <ark-mlist@att.net> wrote:

> > So identity means the same object, and implies equivalence.
> 
> Yes.
> 
> > Equivalence means they will always have the same value, even
> > if you do something to one and not the other.  This implies
> > (current value) equality.
> 
> Yes, unless equality is defined weirdly.  For example, IEEE
> floating-point NaN is supposed to be unequal to itself.
> 
> > equality just means that they have the same value *now*.
> 
> I'm not sure the implication goes the other way.  Consider a
> string-like type for which == ignores case.  Then equivalence implies
> equality, but not the other way.
> 
> > Since equivalence is a stronger form of equality, why not
> > just use "===".
> 
> Seems plausible.
> 
> > Anything that is === will also be ==, but the extra character
> > will mark it as special.  If the mark isn't strong enough,
> > perhaps "=~="; in math the ~ often modifies equality to mean
> > "not identical, but close enough".
> 
> I wish.  But unfortunately sometimes x == x is False.

Right, == is application defined. Whether 'a == b' is true has no bearing
on whether 'a is b' or whether a and b are equivalent. On those grounds I
think '===' might be misleading. I'm not convinced that equivalence
needs an operator at all, although I know Andrew would prefer that it be
as easy to use as 'is'. I think 'equivalent(a, b)' or somesuch would
suffice for me.

-Casey


From pedronis at bluewin.ch  Fri Mar 19 17:07:30 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Fri Mar 19 17:03:02 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000801c40df1$77be5be0$6402a8c0@arkdesktop>
References: <20040319201505.GD31012@mems-exchange.org>
Message-ID: <5.2.1.1.0.20040319230147.02860ff0@pop.bluewin.ch>

At 15:33 19.03.2004 -0500, Andrew Koenig wrote:
> > Okay, but I don't see why that implementation detail is important.
>
>It is important because it causes programs to behave differently on
>different implementations.
>
> > > So what I'm claiming is that there should be a way of asking:  Given two
> > > objects, is there any way to distinguish them aside from their identity?
>
> > Why do you need to ask that question?  Further more, why is it
> > important enough to require a builtin operator?
>
>It certainly doesn't *require* a builtin operator.  I do think, however,
>that the proposed comparison is more useful than "is" in most contexts in
>which programmers use "is" today.  In particular, programs that use "is" can
>easily contain bugs that testing on a particular implementation can never
>reveal, and using the proposed comparison instead makes such bugs much less
>likely (and perhaps even impossible).

maybe, OTOH I suspect that people most puzzled by

 >>> a = 1
 >>> b = 1
 >>> a is b
True
 >>> a = 99
 >>> b = 99
 >>> a is b
True
 >>> a = 101
 >>> b = 101
 >>> a is b
False

really simply expect there to be just one 1 and 99 and 101, not an 
half-a-page definition of 'is' or some such involving the notion of 
(im)mutability.












From ark-mlist at att.net  Fri Mar 19 17:09:50 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 17:09:45 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <5.2.1.1.0.20040319230147.02860ff0@pop.bluewin.ch>
Message-ID: <000401c40dfe$e680ee00$6402a8c0@arkdesktop>

> >It certainly doesn't *require* a builtin operator.  I do think, however,
> >that the proposed comparison is more useful than "is" in most contexts in
> >which programmers use "is" today.  In particular, programs that use "is"
> >caneasily contain bugs that testing on a particular implementation can
> >never reveal, and using the proposed comparison instead makes such bugs
> much less likely (and perhaps even impossible).


> maybe, OTOH I suspect that people most puzzled by

>  >>> a = 1
>  >>> b = 1
>  >>> a is b
> True
>  >>> a = 99
>  >>> b = 99
>  >>> a is b
> True
>  >>> a = 101
>  >>> b = 101
>  >>> a is b
> False
> 
> really simply expect there to be just one 1 and 99 and 101, not an
> half-a-page definition of 'is' or some such involving the notion of
> (im)mutability.

I think we are agreeing with each other.  Because if "is" were redefined in
the way that was proposed, we would have the following behavior on every
implementation:

	>>> a = 1
	>>> b = 1
	>>> a is b
	True
	>>> a = 99
	>>> b = 99
	>>> a is b
	True
	>>> a = 101
	>>> b = 101
	>>> a is b
	True

regardless of whether id(a)==id(b).


From ark-mlist at att.net  Fri Mar 19 17:13:59 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 17:13:54 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <200403181708.i2IH8Cj10283@guido.python.org>
Message-ID: <000701c40dff$7ab6d300$6402a8c0@arkdesktop>

> Sorry, if you're using *any* immutable value there and expecting it to
> be a unique object, you're cruisin' for a bruisin', so to speak.  The
> language spec explicitly *allows* but does not *require* the
> implementation to cache and reuse immutable values.

Ay, there's the rub.

Aren't you saying that using "is" to compare immutables is always broken,
unless you know that the immutable values are singletons?



From pedronis at bluewin.ch  Fri Mar 19 17:29:50 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Fri Mar 19 17:25:07 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000401c40dfe$e680ee00$6402a8c0@arkdesktop>
References: <5.2.1.1.0.20040319230147.02860ff0@pop.bluewin.ch>
Message-ID: <5.2.1.1.0.20040319231747.00a64d68@pop.bluewin.ch>

At 17:09 19.03.2004 -0500, Andrew Koenig wrote:
> > >It certainly doesn't *require* a builtin operator.  I do think, however,
> > >that the proposed comparison is more useful than "is" in most contexts in
> > >which programmers use "is" today.  In particular, programs that use "is"
> > >caneasily contain bugs that testing on a particular implementation can
> > >never reveal, and using the proposed comparison instead makes such bugs
> > much less likely (and perhaps even impossible).
>
>
> > maybe, OTOH I suspect that people most puzzled by
>
> >  >>> a = 1
> >  >>> b = 1
> >  >>> a is b
> > True
> >  >>> a = 99
> >  >>> b = 99
> >  >>> a is b
> > True
> >  >>> a = 101
> >  >>> b = 101
> >  >>> a is b
> > False
> >
> > really simply expect there to be just one 1 and 99 and 101, not an
> > half-a-page definition of 'is' or some such involving the notion of
> > (im)mutability.
>
>I think we are agreeing with each other.

nope, my point is that the friendly "fits everybody's brain" solution would
be to keep the definition of 'is' as is, but force some classes of value
objects to be singletons for each value, and this cannot be made to work.
So we cannot win.

We are left with the gain of killing potential bugs related to naive usage
of 'is', but I don't think that's worth the spec complexity.


>Because if "is" were redefined in
>the way that was proposed, we would have the following behavior on every
>implementation:
>
>         >>> a = 1
>         >>> b = 1
>         >>> a is b
>         True
>         >>> a = 99
>         >>> b = 99
>         >>> a is b
>         True
>         >>> a = 101
>         >>> b = 101
>         >>> a is b
>         True

yes, but through a far more complex model, and I still don't know whether
they would then expect there to be, e.g., just one (1, 1) or many.




From nas-python at python.ca  Fri Mar 19 17:28:39 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Fri Mar 19 17:28:47 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000801c40df1$77be5be0$6402a8c0@arkdesktop>
References: <20040319201505.GD31012@mems-exchange.org>
	<000801c40df1$77be5be0$6402a8c0@arkdesktop>
Message-ID: <20040319222839.GF31012@mems-exchange.org>

On Fri, Mar 19, 2004 at 03:33:41PM -0500, Andrew Koenig wrote:
> It certainly doesn't *require* a builtin operator.  I do think,
> however, that the proposed comparison is more useful than "is" in
> most contexts in which programmers use "is" today.

Are you saying that most instances of "is" in current Python code
are incorrect?  If not, what do you mean by more useful?

  Neil

From jim.jewett at eds.com  Fri Mar 19 17:30:50 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 19 17:31:54 2004
Subject: [Python-Dev] why string exceptions are bad
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3DC@USAHM010.amer.corp.eds.com>


>> Out of curiosity, why are string exceptions going away?  

> Probably, originally, for many of the reasons stated here.  
> That people expected equality tests in the except clause
> and were tripped up by the identity test performed. 

Trying to catch (some, but not all) exceptions from another 
module can be difficult if the module raises strings.  At
best, you catch everything, test with == (even though "is" 
seemed to work in your tests), and maybe reraise.

If they start raising

	"Error:  You requested 7 but the quota is 6"

you have to regex match.

If the "same" error is raised several places (or by several
different modules), there will eventually be a typo.

	"Error: That was not a string"
	"Error: That was not a string."
	"Error: That was mot a string."
	"Error  That was not a string."
	"Error: That wasn't a string"
	"ERROR: That was not a string"

A class can be defined in a single place (and imported);
a typo in the 47th raise statement will show up as an immediate
NameError instead of a silent runtime bug.
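
For instance (a tiny sketch, with made-up names):

class QuotaError(Exception):
    pass

def request(n, quota=6):
    if n > quota:
        raise QuotaError("you requested %d but the quota is %d" % (n, quota))

try:
    request(7)
except QuotaError, e:       # caught by class, not by matching message text
    print "quota problem:", e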

-jJ

From ark-mlist at att.net  Fri Mar 19 17:40:57 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Fri Mar 19 17:40:51 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040319222839.GF31012@mems-exchange.org>
Message-ID: <000e01c40e03$3f0d6630$6402a8c0@arkdesktop>

> > It certainly doesn't *require* a builtin operator.  I do think,
> > however, that the proposed comparison is more useful than "is" in
> > most contexts in which programmers use "is" today.

> Are you saying that most instances of "is" in current Python code
> are incorrect?  If not, what do you mean by more useful?

I strongly suspect that most instances of "is" in current Python code would
not change their meaning, because most instances of "is" use either
singletons or mutable objects.  However, I also think that a number of uses
of "is" that are currently incorrect, sometimes in subtle ways, would become
useful.

As things stand, I think I reluctantly agree that it's too big a change to
consider, because I can certainly imagine programs that might break.
Nevertheless, I still wish that expressions such as "x is 'foo'" did not
silently differ in outcome from one implementation to another.


From barry at python.org  Fri Mar 19 17:41:54 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 17:42:07 2004
Subject: [Python-Dev] why string exceptions are bad
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3DC@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3DC@USAHM010.amer.corp.eds.com>
Message-ID: <1079736113.11701.14.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 17:30, Jewett, Jim J wrote:

> A class can be defined in a single place (and imported); 
> a typo in the 47th raise statement will show up as a syntax
> error instead of a runtime bug.

Yes, but I'll just note that this is "safe":

foo.py
------
Error = 'Bogus Error'

def foo():
	...
	raise Error


bar.py
------
import foo

try:
	foo.foo()
except foo.Error:
	# oops!


That works because you're raising and catching the same object.  String
exceptions still suck though. :)

-Barry



From pje at telecommunity.com  Fri Mar 19 17:57:09 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 19 17:51:28 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000e01c40e03$3f0d6630$6402a8c0@arkdesktop>
References: <20040319222839.GF31012@mems-exchange.org>
Message-ID: <5.1.0.14.0.20040319174828.03692430@mail.telecommunity.com>

At 05:40 PM 3/19/04 -0500, Andrew Koenig wrote:

>Nevertheless, I still wish that expressions such as "x is 'foo'" did not
>silently differ in outcome from one implementation to another.

The part that drives me nuts about this discussion is that in my view, "x 
is 'foo'" has the *same* outcome on all implementations.  That is, it's 
true if x refers to that exact string object.

The thing that's different from one implementation to the next is whether 
there's any chance in hell of x being that same 'foo' string.  But to me, 
that 'foo' string looks like a *newly created* string, so to the naive 
glance there's no possible way that it could be the same object.  In other 
words, it looks like a bad expression to use in the first place: one that's 
guaranteed to be false, except by accident of implementation.

So, I have trouble understanding how it is that somebody could get to a 
place where they think that using 'is' for strings and numbers is a good 
idea in the first place.  But then, I read the entire Python language 
reference (and a good chunk of the library reference) before I tried 
writing even a single line of Python code, so I can imagine that my 
perspective on this might not be the most common one.  :)


From barry at python.org  Fri Mar 19 18:04:19 2004
From: barry at python.org (Barry Warsaw)
Date: Fri Mar 19 18:04:58 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <5.1.0.14.0.20040319174828.03692430@mail.telecommunity.com>
References: <20040319222839.GF31012@mems-exchange.org>
	<5.1.0.14.0.20040319174828.03692430@mail.telecommunity.com>
Message-ID: <1079737458.11701.37.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 17:57, Phillip J. Eby wrote:
> At 05:40 PM 3/19/04 -0500, Andrew Koenig wrote:
> 
> >Nevertheless, I still wish that expressions such as "x is 'foo'" did not
> >silently differ in outcome from one implementation to another.
> 
> The part that drives me nuts about this discussion is that in my view, "x 
> is 'foo'" has the *same* outcome on all implementations.  That is, it's 
> true if x refers to that exact string object.
> 
> The thing that's different from one implementation to the next is whether 
> there's any chance in hell of x being that same 'foo' string.  But to me, 
> that 'foo' string looks like a *newly created* string, so to the naive 
> glance there's no possible way that it could be the same object.  In other 
> words, it looks like a bad expression to use in the first place: one that's 
> guaranteed to be false, except by accident of implementation.
> 
> So, I have trouble understanding how it is that somebody could get to a 
> place where they think that using 'is' for strings and numbers is a good 
> idea in the first place.

Thanks Phillip.  My sentiments exactly.

-Barry



From tim.one at comcast.net  Fri Mar 19 18:14:14 2004
From: tim.one at comcast.net (Tim Peters)
Date: Fri Mar 19 18:14:17 2004
Subject: [Python-Dev] Rich comparisons  [Was] redefining is
In-Reply-To: <000901c40df1$a0761870$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIKJKAB.tim.one@comcast.net>

[Andrew Koenig]
> I believe that IEEE floating-point requires x == x to be False when x
> is NaN.

I'll just repeat a reply I made on comp.lang.python in a related thread:


[Andrew Koenig]
>> I'm pretty sure that IEEE floating point requires NaN to be defined
>> in such a way that if x is NaN, then x == x must yield False.

Yes, but ... a conforming implementation of the standard requires all sorts
of comparison operators, but it's not a language standard and doesn't have
anything to say about how languages spell access to any of its facilities.
So, e.g., there must be *some* way to spell an equality operator that
doesn't consider a NaN to "be equal" to itself, but that doesn't need to be
spelled "==" (not even in a language that has an "==" comparison operator).
For example, it's fine if it needs to be spelled

    ieee.equal_considering_nans_unequal_and_unexceptional(x, y)

[Erik Max Francis]
> That's correct.  Python itself doesn't do this, though:

All Python behavior wrt IEEE gimmicks is a platform-dependent accident --
nothing is promised.

> Python 2.3.3 (#1, Dec 22 2003, 23:44:26) [GCC 3.2.3] on linux2
> >>> a = 1e3000
> >>> a
> inf
> >>> a/a
> nan
> >>> n = a/a
> >>> n == n
> True

While on Windows 2.3.3, the output looks very different, and the final
result is False.

For Python 2.4, Michael Hudson has patched things so that Python's 6 float
relationals deliver the same results the platform C compiler delivers for
the spelled-the-same-way-in-C float relationals.  There's still no
cross-platform guarantee about what those will do (because C makes no
guarantees here either), but at least NaN == NaN will be false on both gcc
and MSVC7-based Pythons (MSVC6-based Pythons will say NaN == NaN is true
then, but right now there are no plans to distribute an MSVC6-based Python
2.4).


From jcarlson at uci.edu  Fri Mar 19 18:41:00 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri Mar 19 18:44:52 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <000b01c40df4$8c9569c0$6402a8c0@arkdesktop>
References: <20040319120617.FB67.JCARLSON@uci.edu>
	<000b01c40df4$8c9569c0$6402a8c0@arkdesktop>
Message-ID: <20040319143356.FB6A.JCARLSON@uci.edu>

> > Equivalency at this point in time is different from being equivalent at
> > all points in time in the future.
> 
> If you like.

I guess it is all in how we interpret the word "equivalent", but then
again, that is what the remainder of the message was about: differentiating
between being equivalent now and always being equivalent, and how we
can (relatively) easily implement 'always equivalent'.


> > Equivalent now, is something we can test easily with the current '=='.
> 
Not really, at least not in the presence of user-defined ==.

I should have said that I was only really referring to built-in types.
I mistakenly believed that my later comment about mutable and immutable
user-defined classes being impossible to deal with clarified this.  I'll
be more explicit next time.


[snip]
> > Testing whether or not two objects will be identical in the future would
> > likely require a single-level id check along with being able to test
> > whether an object is mutable or not.  For built-in structures (lists,
> > tuples, dictionaries, strings...), this isn't so bad, we know what is
> > immutable or not.  Seemingly a reasonable algorithm for determining
> > whether two objects are interchangeable is as follows...
> Certainly not complete, as it doesn't include long or bool.

Taking your changes into account, and fixing a case when two mutable
objects are passed, but are the same object (also removing the
requirement for a sane id())...

#list possibly not complete.
_immutables = (int, str, float, tuple, long, bool, complex, type(None))
immutables = dict([(i, None) for i in _immutables])
def interchangeable(a, b):
    if type(a) not in immutables or type(b) not in immutables:
        return a is b
    if type(a) is type(b) and type(a) is tuple:
        # are immutable, but may contain objects whose
        # identity may be important for the future
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if not interchangeable(x, y):
                return False
        return True
    # are immutable but aren't a container
    return a == b
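
A few quick checks against the version above:

print interchangeable((1, 'a'), (1, 'a'))  # True: immutable all the way down
print interchangeable((1, []), (1, []))    # False: the lists could diverge
print interchangeable([1], [1])            # False: distinct mutable objects
x = [1]
print interchangeable(x, x)                # True: the very same object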

> That's not quite what I had in mind: You're testing for element identity
> rather than interchangeability.  Perhaps something more like this:

> 		if len(a) != len(b):
> 			return False
> 		for x, y in zip(a, b):
> 			if not interchangeable(x, y):
> 				return False
> 		return True

This extends interchangeable() sufficiently: you test equality (==) for
immutables that are not containers, test identity for anything mutable,
and only do the full recursive interchangeability test for tuples.


> Also, this code does not take ImmutableSet into account.
...
> Correct.  As things stand today, two distinct objects of user-defined
> classes should never be considered interchangeable.

Technically, ImmutableSet is a user-defined class.  Considering that we
can call __setstate__ on it, it is /far from/ immutable and shouldn't even
be handled by interchangeable(); the new version tests mutable objects for
identity using 'is', which I believe is sufficient.

 - Josiah


From tim.one at comcast.net  Sat Mar 20 00:35:06 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 20 00:35:09 2004
Subject: [Python-Dev] RE: Rich comparisons  [Was] redefining is
In-Reply-To: <00de01c40d7a$5ed36520$a0ba2c81@oemcomputer>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEJJJKAB.tim.one@comcast.net>

[Tim]
>> Well, string_richcompare() takes that shortcut, so the advice is
>> good, but PyObject_RichCompare() doesn't in general
>> (PyObject_Compare() still does, but that's not triggered by '==').

[Raymond Hettinger]
> Hey, hey, this may be part of the answer to why my timings for
> equality testing using rich comparisions run so much slower than they
> do with PyObject_Compare().

Except I expect that depends on exactly what your timing tests compare.  A
special case for object identity isn't a pure win, since it slows every
non-identical case (by the time to determine they're not identical, and
branch over the early exit).

For strings specifically it appeared to be a win, because Python's
implementation works hard to create interned strings in common cases, and
does so largely so that this trick *can* pay off.

But for floats it's hard to imagine this special case wouldn't be a net
loss -- Python almost never reuses float objects.

>>> float(0) is float(0)
False
>>>

> Fixing this would entail a change in semantics but would be worth it
> if we can all agree to it.
>
> Essentially, I would like to insert the following lines into
> PyObject_RichCompare():
>
> 	if (v == w) {
> 		if (op == Py_EQ)
> 			Py_RETURN_TRUE;
> 		else if (op == Py_NE)
> 			Py_RETURN_FALSE;
> 	}

Then, e.g., you make virtually all real-life float comparisons slower.

> The test suite runs fine, but it is possible that some existing class
> defines equality in a way that sometimes returns False even when given
> two identical objects.
>
> I think the change is worth it -- tests for equality are ubiquitous
> (and somewhat common) throughout the source.

It's not the number of equality tests that matters, it's the ratio of the
number of "==" and "!=" tests that actually compare identical comparands, to
the total number of all rich comparisons executed.  Besides the change to
semantics, the patch would be a speed loss for everything in the latter set
minus the former.


From tim.one at comcast.net  Sat Mar 20 00:40:00 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 20 00:40:24 2004
Subject: [Python-Dev] (not) redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CF@USAHM010.amer.corp.eds.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJKJKAB.tim.one@comcast.net>

[Jewett, Jim J]
> ...
> I'm still a bit worried about a comment (which I can no longer
> find) suggesting a change for {}, so that
>
> 	>>> {} is {}
>       True
>
> I don't see this in 2.3, and would be very surprised (and
> disappointed) if it started to happen in 2.4, precisely because
> dictionaries are mutable objects that should work.

Won't happen.  Indeed, the language Reference manual guarantees this (and
always has).


From tim.one at comcast.net  Sat Mar 20 01:20:18 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 20 01:20:21 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000201c40de7$207a50f0$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>

[Andrew Koenig]
> ...
> Also, I think that in general, operations that should be used only
> by people who really know what they're doing should be harder to
> express.

Yet another argument for removing floating point <0.898 wink>.

While "is" serves a useful purpose in Python, I wouldn't mind if it were
harder to spell.  There have indeed been serious bugs due to misuse of "is",
even in Python's standard library.  The most notorious is that, until rev
1.15, asynchat.py used

    buf is ''

in a conditional and got away with it for 3 years.  This one is especially
sucky because I'm sure the author knew better, but was obsessed with speed
so talked himself into believing that CPython represents the empty string
uniquely.  It doesn't -- but it's not easy to find a counterexample!

>>> a = ""
>>> b = ""
>>> a is b
True
>>> "abc"[:0] is "def"[:0] is a is "" is str(u"") is ""+"" is ""*0
True
>>> import sys
>>> sys.stdin.read(0) is a
True
>>> import array
>>> buf = array.array('c', "")
>>> buf.tostring() is a
True
>>>

If you give up <wink>, see the checkin comment for asynchat.py rev 1.15.


From ark-mlist at att.net  Sat Mar 20 01:35:22 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Sat Mar 20 01:35:17 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>
Message-ID: <000001c40e45$8591e580$6402a8c0@arkdesktop>

> While "is" serves a useful purpose in Python, I wouldn't mind if it were
> harder to spell.

... and I wouldn't mind it if there were a way of testing for
substitutability that were as easy to spell as "is" -- in fact, I wouldn't
mind if it were spelled "is" even though I realize it's probably impractical
to do so.


[and I agree with you that floating-point arithmetic is intrinsically
hazardous]


From tim.one at comcast.net  Sat Mar 20 01:39:42 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 20 01:39:44 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000701c40dff$7ab6d300$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJNJKAB.tim.one@comcast.net>

[Guido]
>> Sorry, if you're using *any* immutable value there and expecting it
>> to be a unique object, you're cruisin' for a bruisin', so to speak.
>> The language spec explicitly *allows* but does not *require* the
>> implementation to cache and reuse immutable values.

[Andrew]
> Ay, there's the rub.
>
> Aren't you saying that using "is" to compare immutables is always
> broken, unless you know that the immutable values are singletons?

No, because it's still <wink> the purpose of "is" to expose the
implementation.  Examples of system-level uses were given earlier; for
another, if I'm doing a crawl over gc.get_referrers(x) trying to account for
all the references to the object currently bound to x (this is something I
do often when trying to track suspected memory leaks), object identity is
the only notion of equivalence that's relevant, and it doesn't matter
whether that object is a singleton, or mutable, nor any other property
of x's type or behavior -- the only thing about x that matters in this
context is its identity, and locating which other objects refer to exactly
it.  If I know that references to the string "<oid at 00000001>" are growing
over time, I'll in fact add an "is" test against that string to the guts of
Zope's test.py leak-detection scaffolding.  It's necessary to be able to
spell this in Python -- although it's not necessary (IMO) to be able to
spell it with two letters.
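
A stripped-down version of that kind of crawl (the real scaffolding does
more bookkeeping; the function name here is made up):

import gc

def show_referrers(x):
    # identity is the only test that matters here: we want the objects
    # holding a reference to *this exact* object, not to equal copies
    for referrer in gc.get_referrers(x):
        print type(referrer), id(referrer)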


From agthorr at barsoom.org  Sat Mar 20 02:38:58 2004
From: agthorr at barsoom.org (Agthorr)
Date: Sat Mar 20 02:39:03 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>
References: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>
Message-ID: <20040320073858.GA6748@barsoom.org>

On Fri, Mar 19, 2004 at 12:21:33AM -0500, Raymond Hettinger wrote:
> 1) collections.heap(), a high performance Fibonacci heap implemented as
> a type and taking sort style arguments (cmp=, key=, reverse=).

I'd be willing to implement a high-performance heap, although I'd like
to start a discussion about the right sort of heap to implement.
Fibonacci heaps have good worst-case amortized behavior, but they are
complicated to implement and have large hidden constants.

Searching the research literature for a few hours, Pairing Heaps seem
like the way to go.  They are very fast in practice, and their
amortized asymptotic bounds are nearly as good as the Fibonacci Heap.
The only difference is in the Decrease Key operation, where the
Pairing Heaps bound is somewhere between Theta(log log n) and O(log
n)--the tight bound is still an open research problem.

The interface to any high-performance heap would be the same, so it
makes sense to implement a reasonably straightforward heap now.
The implementation could always be changed to Fibonacci Heaps later if
it turns out that there's a pressing need for everyone to have a heap
implementation that has better performance when you're performing
millions of Decrease Key operations...

If there's rough consensus for this, I'd start by making a Python
implementation, test code, and documentation.  Once that's done, I'd
write the high-performance C implementation.
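
To make the idea concrete, here's a tiny pure-Python sketch of the
two-pass pairing-heap core (illustrative only: no decrease-key, no
cmp/key/reverse arguments, and obviously not the proposed C type):

class PairingHeap(object):
    # each node is [value, list_of_child_nodes]
    def __init__(self):
        self._root = None

    def _meld(self, a, b):
        # the heap with the smaller root adopts the other as a child
        if a is None:
            return b
        if b is None:
            return a
        if a[0] <= b[0]:
            a[1].append(b)
            return a
        b[1].append(a)
        return b

    def push(self, value):
        self._root = self._meld(self._root, [value, []])

    def pop(self):
        if self._root is None:
            raise IndexError('pop from an empty heap')
        value, children = self._root
        # two-pass pairing: merge adjacent pairs left to right, then
        # fold the survivors back together right to left
        paired = []
        for i in range(0, len(children) - 1, 2):
            paired.append(self._meld(children[i], children[i + 1]))
        if len(children) % 2:
            paired.append(children[-1])
        root = None
        while paired:
            root = self._meld(paired.pop(), root)
        self._root = root
        return value

h = PairingHeap()
for n in [5, 3, 8, 1]:
    h.push(n)
print h.pop(), h.pop(), h.pop(), h.pop()    # 1 3 5 8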

-- Dan Stutzbach

From claird at lairds.com  Fri Mar 19 22:04:49 2004
From: claird at lairds.com (Cameron Laird)
Date: Sat Mar 20 06:56:40 2004
Subject: [Python-Dev] Portability:  update and oblique apology
Message-ID: <E1B4Wn3-00022a-00@ns.purdue.org>

Back in January, I instigated a rather long thread <URL:
http://mail.python.org/pipermail/python-dev/2004-January/041607.html >
that, inevitably, touched on several topics:  keeping Python going on
minor platforms, how to patch, threading varieties, and so on.  I wrote
'bout how I really, really wanted to "normalize" HP-UX, Irix, and a few
other Unixes.

Then I disappeared.

I know what *I* think of people who promise and don't deliver.  Life
became busy and ... well, here we are now.  I still have an interest
in HP-UX et al., I'll have good access to them at least through the
end of the year, and I understand Python generation better than be-
fore:  these are all good things.  I've got too much going on to lead
any effort, but I'll pitch in when I can make a difference.

I wonder, though, what *does* make a difference.  While I have all
sorts of ideas about how to smooth out rough edges, maybe HP-UX is
already good enough.  Maybe zero more newcomers will ever care about
HP-UX, or old HP-UX releases.

So, this is what I want:  be aware that I'm still motivated to help.
Accept my apology for the long silence.  And please let me know if a
specific need for portability improvements turns up (as in, "IBM 
offered XXX if we just do a better job of supporting AIX", or, "I've
got a customer that likes our product, but we need help assuring
them it'll have a trouble-free future under OpenBSD" or ...).

It'll be an indeterminate interval 'fore I turn up here again.

From barry at python.org  Sat Mar 20 08:30:13 2004
From: barry at python.org (Barry Warsaw)
Date: Sat Mar 20 08:30:28 2004
Subject: [Python-Dev] Some changes to logging
In-Reply-To: <008301c40dca$67e78ec0$652b6992@alpha>
References: <008301c40dca$67e78ec0$652b6992@alpha>
Message-ID: <1079789412.13834.0.camel@anthem.wooz.org>

On Fri, 2004-03-19 at 10:54, Vinay Sajip wrote:

>     stream    Use the specified stream to initialize the StreamHandler. Note
>               that this argument is incompatible with 'filename' - if both
>               are present, 'stream' is ignored.

Perhaps an exception should be raised instead?

Other than that +1.

-Barry



From pinard at iro.umontreal.ca  Sat Mar 20 08:45:07 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Sat Mar 20 09:59:46 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>
References: <000201c40de7$207a50f0$6402a8c0@arkdesktop>
	<LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>
Message-ID: <20040320134507.GB2320@titan.progiciels-bpi.ca>

[Tim Peters]

>     buf is ''

There is a question which traversed my mind recently.  While I would
never use `is' between ordinary strings, I might be tempted to use `is'
between explicitly interned strings, under the hypothesis that, for
example:

    a = intern('a b')
    b = intern('a b')
    a is b

dependably yields `True'.  However, I remember a thread in this forum in
which it was said that strings might be un-interned on the fly -- but do
not remember the
details; I presume it was for strings which the garbage collector may
not reach anymore.

There are a few real-life cases where speed considerations would invite
programmers to use `is' over `==' for strings, given they all get
interned to start with so the speed-up could be gleaned later.  The fact
that `is' exposes the implementation is quite welcome in such cases.
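
For example, something like this (hypothetical names; the trick only pays
off if every token really was intern()ed first):

KWD_IF = intern('if')
KWD_ELSE = intern('else')

def classify(token):
    # `is' on interned strings is just a pointer comparison
    if token is KWD_IF:
        return 'conditional'
    elif token is KWD_ELSE:
        return 'alternative'
    return 'other'

print classify(intern('if'))      # 'conditional'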

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From pinard at iro.umontreal.ca  Sat Mar 20 08:32:22 2004
From: pinard at iro.umontreal.ca (=?iso-8859-1?Q?Fran=E7ois?= Pinard)
Date: Sat Mar 20 09:59:51 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>
References: <000201c40de7$207a50f0$6402a8c0@arkdesktop>
	<LNBBLJKPBEHFEDALKOLCGEJMJKAB.tim.one@comcast.net>
Message-ID: <20040320133222.GA2320@titan.progiciels-bpi.ca>

[Tim Peters]

> While "is" serves a useful purpose in Python, I wouldn't mind if it were
> harder to spell.

I think I would mind if `is' was harder to spell. `is' and `is not'
are perfect as they stand, for me at least, and I do not remember ever
having problems with these.  The notation is clear, short and sweet.  I
mainly use `is' or `is not' for my own built objects, and systematically
for the `None' object (but never for `False' or `True').  I know what is
going on then.  My feeling is that `is' only gets problematic when one
tries to abuse it, so the problem might be more a question of attitude,
than design.  You cannot always force an attitude through design.

The user-aimed documentation often makes all the difference.  When the
documentation tells me from the start what I should know, and helps me
at sorting good and bad attitudes, it saves me from trouble and grief.

-- 
François Pinard   http://www.iro.umontreal.ca/~pinard

From aleksander.piotrowski at nic.com.pl  Sat Mar 20 10:40:44 2004
From: aleksander.piotrowski at nic.com.pl (Aleksander Piotrowski)
Date: Sat Mar 20 10:40:29 2004
Subject: [Python-Dev] struct.pack() on 64bit machine
Message-ID: <20040320154036.GC6046@metawers.nic.com.pl>

Hi

I'm running Python on OpenBSD/sparc64 (SUN Ultra 10) and have a question
about the struct module.

On this machine struct.pack() gives me:

>>> struct.pack('l', 1)
'\x00\x00\x00\x00\x00\x00\x00\x01'
>>> struct.pack('i', 1)
'\x00\x00\x00\x01'

On an i386 box I have:

>>> struct.pack('l', 1)
'\x01\x00\x00\x00'
>>> struct.pack('i', 1)
'\x01\x00\x00\x00'
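
I guess struct.calcsize shows the same size difference directly,
presumably:

>>> struct.calcsize('l')     # native size: 8 on LP64 sparc64, 4 on i386
8
>>> struct.calcsize('=l')    # '=' keeps native byte order but standard size
4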

Because of this, the OpenBSD port has the following patch:

$OpenBSD: patch-Lib_test_test_fcntl_py,v 1.1.1.1 2003/12/31 17:38:33 sturm Exp $
--- Lib/test/test_fcntl.py.orig	2003-12-31 12:13:00.000000000 +0100
+++ Lib/test/test_fcntl.py	2003-12-31 12:14:14.000000000 +0100
@@ -22,9 +22,13 @@ if sys.platform.startswith('atheos'):
 
 if sys.platform in ('netbsd1', 'Darwin1.2', 'darwin',
                     'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5',
-                    'bsdos2', 'bsdos3', 'bsdos4',
-                    'openbsd', 'openbsd2', 'openbsd3'):
+                    'bsdos2', 'bsdos3', 'bsdos4'):
     lockdata = struct.pack('lxxxxlxxxxlhh', 0, 0, 0, fcntl.F_WRLCK, 0)
+elif sys.platform in ['openbsd', 'openbsd2', 'openbsd3']:
+    if sys.maxint == 2147483647:
+        lockdata = struct.pack('lxxxxlxxxxlhh', 0, 0, 0, fcntl.F_WRLCK, 0)
+    else:
+        lockdata = struct.pack('ixxxxixxxxihh', 0, 0, 0, fcntl.F_WRLCK, 0)
 elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']:
     lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0)
 elif sys.platform in ['os2emx']:


Is it OK? It should work like this on a 64-bit machine, right?

Thanks,

Alek
-- 
- Calm down, Petra. I will get our children back - Groszek promised. - And
I will kill Achilles while I'm at it. Soon, dear. Before I die.
- That's great - she praised him. - Because later it would be much harder for you.
 -- Orson Scott Card, Teatr Cieni

From bac at OCF.Berkeley.EDU  Sat Mar 20 10:49:21 2004
From: bac at OCF.Berkeley.EDU (Brett Cannon)
Date: Sat Mar 20 10:49:29 2004
Subject: [Python-Dev] python-dev Summary for 2004-03-01 through 2004-03-15
	[rough draft]
Message-ID: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>

I am hoping to send this summary out on the first day of PyCon (Wed.).
Hopefully I wasn't too tired when I wrote this on the airplane.


-----------------------------------


=====================
Summary Announcements
=====================
Still looking for a summer job or internship programming.  If you know of
one, please let me know.

Ever since I first had to type Martin v. L|o_diaeresis|wis' name, I have
had issues with Unicode in the summary.  When I realized there was a
problem I thought it was Vim changing my Unicode in some way since I would
notice problems when I reopened the file in TextEdit, OS X's included text
editor that I have always used for writing the summaries (and no, I am not
about to use Vim to do this nor Emacs; spoiled by real-time spelling and
it is just the way I do it).  Well, I was wrong.  Should have known Vim
was not the issue.

Turned out that TextEdit was opening the text files later assuming the
wrong character encoding.  When I forced it to open all files as UTF-8 I
no longer had problems.  This also explains the weird MIME-quoted issues I
had earlier that Aahz pointed out to me since I was just copying from
TextEdit into Thunderbird_, my email client, without realizing TextEdit
was not reading the text properly.  So I thought I finally solved my
problem.  Ha!  Not quite.

Turned out to be a slight issue with how the email is generated by the
tool chain we use to maintain the python.org web site.  This is in no
way the web team's fault since I have unique requirements for the
Summaries.  But without having to do some recoding of ht2html_ in order to
specify the text encoding, I wasn't sure how I should handle this.

Docutils to the rescue.  Turns out that there is a directive_ for handling
Unicode specifically.  That is why you see ``|o_diaeresis|`` in the reST
version of the summary but the HTML version shows |o_diaeresis| (for
people reading this in reST, I realize it looks the same for you; just
look at the HTML to see it).  For those of you wondering what the text is
meant to represent, it is an "o" with a diaeresis.  In simple terms it is
an "o" with two dots on top of it going horizontally and it represented by
0x00F6 in Unicode (while a program in OS X may have helped cause this
headache, the OS's extensive Unicode support, specifically its Special
Characters menu option and bonus Unicode info is very well done and a
great help with getting the Unicode-specific info for the character).
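
(In reST terms, the substitution is defined with the unicode directive,
something like ``.. |o_diaeresis| unicode:: 0x00F6``, if I am reading the
directive documentation correctly.)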

I am planning to consistently do this in the Summaries.  While it might
make certain names harder to read for the people who read this in reST, it
doesn't butcher people's names regardless of what version you use, and I
take that as the higher priority.

And here is a question of people who read the Summaries on a regular
basis: would you get any benefit in having new summaries announced in the
`python.org RSS feed`_?  Since this entails one extra, small step in each
summary I am asking people to email me to let me know if this would in any
way make their lives easier.  So please let me know if knowing when a new
summary is out by way of the RSS feed would be beneficial to you or if
just finding from `comp.lang.python`_ or `comp.lang.python.announce`_ is
enough.

I actually wrote this entire summary either in the airport or on the
flight to DC for PyCon (thank goodness for emergency aisles; my 6'6" frame
would be in much more pain than it is otherwise) and thus on Spring Break!
I am hoping to use this as a turning point in doing the Summaries on a
semi-monthly basis again.  We will see if Spring quarter lets me stick to
that (expecting a lighter load with less stress next quarter).

.. _Thunderbird: http://www.mozilla.org/projects/thunderbird/
.. _ht2html: http://ht2html.sf.net/
.. _directive: http://docutils.sourceforge.net/spec/rst/directives.html
.. _python.org RSS feed: http://www.python.org/channews.rdf
.. _PyCon: http://www.pycon.org/

=========
Summaries
=========
--------
PEP news
--------
PEP 309 ("Partial Function Application") has been rewritten.

PEP 318 got a ton of discussion, to the point of warranting its own
summary: `PEP 318 and the discussion that will never end`_.
PE 327, which is the spec for the Decimal module in the CVS sandbox, got
an update.

Contributing threads:
  - `PEP 309 re-written
<http://mail.python.org/pipermail/python-dev/2004-March/042865.html>`__
  - `Changes to PEP 327: Decimal data type
<http://mail.python.org/pipermail/python-dev/2004-March/043155.html>`__


-----------------------------------------------
Playing willy-nilly with stack frames for speed
-----------------------------------------------
A patch to clean up the allocation and remove the freelist (stack frames
not in use that could be used for something else) was proposed.  Of course
it would have been applied immediately if there wasn't a catch: recursive
functions slowed down by around 20%.

A way to get around this was proposed, but it would clutter up the code
which was being simplified in the first place.  Guido said he would rather
have that than have recursive calls take a hit.

Then a flurry of posts came about discussing other ways to try to speed up
stack allocation.

Contributing threads:
  - `scary frame speed hacks
<http://mail.python.org/pipermail/python-dev/2004-March/042871.html>`__
  - `reusing frames
<http://mail.python.org/pipermail/python-dev/2004-March/042914.html>`__

----------------------------------------------
PEP 318 and the discussion that will never end
----------------------------------------------
Just looking at the number of contributing threads to this summary should
give you an indication of how talked about this PEP became.  In case you
don't remember the discussion `last time`_, this PEP covers
function/method(/class?) decorators: having this::

  def foo() [decorate, me]: pass

be equivalent to::

  def foo(): pass
  foo = decorate(me(foo))

What most of the discussion came down to was syntax and the order of
application.  As of this moment it has come down to either the syntax used
above or putting the brackets between the function/method name and the
parameters.  Guido spoke up and said he liked the latter syntax (which is
used by Quixote_).  People, though, pointed out that while the syntax
works for a single argument, adding a long list starts to separate the
parameter tuple farther and farther from the function/method name.  There
was at least one other syntax proposal but it was shot down quickly.
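
The latter (Quixote-style) placement would look roughly like::

  def foo [decorate, me] (): pass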

Order of application has not been settled either.  Some want the order to
be like in the example: the way you see it is how it would look if you
wrote out the code yourself.  Others, though, want the reverse order:
``me(decorate(foo))``.  This would put the application order in the order
the values in the brackets are listed.

In the end it was agreed the PEP needed to be thoroughly rewritten which
has not occurred yet.

.. _last time:
2004-02-01_2004-02-29.html#function-decoration-and-all-that-jazz
.. _Quixote: http://www.mems-exchange.org/software/quixote/

Contributing threads:
  - `Pep 318 - new syntax for wrappers
<http://mail.python.org/pipermail/python-dev/2004-March/042902.html>`__
  - `new syntax (esp for wrappers)
<http://mail.python.org/pipermail/python-dev/2004-March/042906.html>`__
  - `PEP 318 - function/method/class decoration
<http://mail.python.org/pipermail/python-dev/2004-March/042917.html>`__
  - `(Specific syntax of) PEP 318 - function/method/class
<http://mail.python.org/pipermail/python-dev/2004-March/042947.html>`__
  - `PEP 318 - generality of list; restrictions on elements
<http://mail.python.org/pipermail/python-dev/2004-March/042951.html>`__
  - `PEP 318 needs a rewrite
<http://mail.python.org/pipermail/python-dev/2004-March/042975.html>`__
  - `Python-Dev Digest, Vol 8, Issue 20
<http://mail.python.org/pipermail/python-dev/2004-March/043033.html>`__
  - `PEP 318 trial balloon (wrappers)
<http://mail.python.org/pipermail/python-dev/2004-March/043076.html>`__
  - `funcdef grammar production
<http://mail.python.org/pipermail/python-dev/2004-March/043142.html>`__


----------------------------------------
Compiler optimization flags for the core
----------------------------------------
The topic of compiler flags that are optimal for Python came up when
Raymond Hettinger announced his new LIST_APPEND opcode (discussed later in
`Optimizing: Raymond Hettinger's personal crack`_).  This stemmed from the
fact that the bytecode has not been touched in a while.  This generated a
short discussion on the magic that is caches and how the eval loop always
throws a fit when it gets played with.  One suggestion was to rework some
opcodes to use other opcodes instead in order to remove the original
opcodes entirely from the eval loop.  But it was pointed out it would be
better to just factor out the C code to functions so that they are just
brought into the cache less often instead of incurring the overhead of
more loops through the eval loop.

This then led AM Kuchling to state that he was planning to give a
lightning talk at PyCon_ about various compiler optimization flags he
tried out on Python.  It looks like compiling Python/ceval.c with -Os
(optimize for space) and everything else with -O3 gave the best results
with gcc 3.  This sparked the idea of more architecture-dependent
compiler optimizations which would be set when 'configure' was run and
detected the hardware of the system.

In the end no code was changed in terms of the compiler optimizations.

Contributing threads:
  - `New opcode to simplifiy/speedup list comprehensions
<http://mail.python.org/pipermail/python-dev/2004-March/042924.html>`__
  - `Who cares about the performance of these opcodes?
<http://mail.python.org/pipermail/python-dev/2004-March/042990.html>`__


------------------------------------------------------
Take using Python as a calculator to a whole new level
------------------------------------------------------
I remember once there was a thread on `comp.lang.python`_ about how to
tell when you had an addiction to Python.  One of the points was when you
start to use Python as your calculator (something I admit to openly; using
the 'sum' built-in is wonderful for quick addition when I would have used
a Scheme interpreter).  Well, Raymond Hettinger had the idea of adding a
'calculator' module that would provide "'pretty good' implementations of
things found on low to mid-range calculators like my personal favorite,
the hp32sII student scientific calculator".
functionality the HP calculator has that he would like to see as a module.

Beyond sparking some waxing about calculators, and the HP 32sII especially
(I used a TI-82 back in high school and junior college so I won't even
bother summarizing the nostalgic daydreaming on HP calculators), the
discussion focused mainly on what functionality to provide and the
accuracy of the calculations.  The former topic focused on what would be
reasonable and relatively easy to implement without requiring a
mathematician to write in order to be correct or fast.

The topic of accuracy, though, was not as clear-cut.  First there was the
issue of whether using the in-development Decimal module would be the smart
thing to do.  The consensus was to use Decimal since floating-point, even with
IEEE 754 in place, is not accurate enough for something that wants to be
as accurate as an actual calculator.  Then discussions on the precision of
accuracy came up.  It seemed like it would be important to have a level of
precision kept above the expected output precision to make sure any
rounding errors and such would be kept to a minimum.

Raymond is going to write a PEP outlining the module.

Contributing threads:
  - `calculator module
<http://mail.python.org/pipermail/python-dev/2004-March/043003.html>`__


------------------------
dateutil module proposed
------------------------
Gustavo Niemeyer offered to integrate his dateutil_ module into the
stdlib.  Discussion of how it should tie into datetime and whether all of
it or only some of its functionality should be brought in transpired.
As of right now the discussion is still going on.

.. _dateutil: https://moin.conectiva.com.br/DateUtil

Contributing threads:
  - `dateutil
<http://mail.python.org/pipermail/python-dev/2004-March/043054.html>`__


----------------------------------------------
Optimizing: Raymond Hettinger's personal crack
----------------------------------------------
Raymond Hettinger, the speed nut that he is, added a new opcode to Python
to speed up list comprehensions by around 35%.  But his addiction didn't
stop there.

Being the dealer of his own drug of choice, Raymond got his next fix by
improving on iterations for dictionaries (this is, of course, after all of
his work on the list internals).  As always, thanks goes to Raymond for
putting in the work to make sure the CPython interpreter beats the Parrot_
interpreter by that much more come `OSCON 2004`_ and the `Pie-thon`_
contest.

And, at Hye-Shik Chang's request, Raymond listed off his list of things to
do to feed his addiction so he doesn't go into withdrawal any time in the
future.  Most of them are fairly involved and would make great
personal/research projects.

.. _Parrot: http://www.parrotcode.org/
.. _OSCON 2004: http://conferences.oreillynet.com/os2004/
.. _Pie-thon: http://www.hole.fi/jajvirta/weblog/20040108T2001.html

Contributing threads:
  - `New opcode to simplifiy/speedup list comprehensions
<http://mail.python.org/pipermail/python-dev/2004-March/042924.html>`__
  - `Joys of Optimization
<http://mail.python.org/pipermail/python-dev/2004-March/043097.html>`__


From ark-mlist at att.net  Sat Mar 20 10:56:54 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Sat Mar 20 10:56:50 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJNJKAB.tim.one@comcast.net>
Message-ID: <001401c40e93$f7c0de10$6402a8c0@arkdesktop>

> No, because it's still <wink> the purpose of "is" to expose the
> implementation.

There I go again, thinking like a user instead of an implementor :-)

> It's necessary to be able to spell this in Python -- although it's
> not necessary (IMO) to be able to spell it with two letters.

I think we agree.


From bac at OCF.Berkeley.EDU  Sat Mar 20 11:09:16 2004
From: bac at OCF.Berkeley.EDU (Brett Cannon)
Date: Sat Mar 20 11:09:24 2004
Subject: [Python-Dev] Two possible fixes for isinstance nested tuples
Message-ID: <Pine.SOL.4.58.0403200753010.24101@death.OCF.Berkeley.EDU>

Bug #858016 brought up the issue of deeply nested tuples of classes
segfaulting isinstance and issubclass.  There is a patch with the bug
report that adds a check to make sure the tuple is completely shallow.

For Python 2.3, though, that doesn't work since it breaks code.  So I have
coded up something that calls Py_GetRecursionLimit and makes sure the
depth never goes past that.
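
In pure-Python terms the check amounts to something like this (a sketch
only; the real patch does it in C against Py_GetRecursionLimit, and the
helper name is made up):

import sys

def isinstance_checked(obj, classinfo, depth=0):
    # bound the recursion instead of letting nested tuples of classes
    # blow the C stack
    if depth > sys.getrecursionlimit():
        raise RuntimeError('nesting of isinstance() arg 2 is too deep')
    if isinstance(classinfo, tuple):
        for item in classinfo:
            if isinstance_checked(obj, item, depth + 1):
                return True
        return False
    return isinstance(obj, classinfo)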

The question is which solution to use for Python 2.4.  I did some
benchmarking on the code as-is and with the recursion check, and the
performance difference is no worse than 2% (granted this was before I made
the recursion functions static, but that should just help performance).
Would people rather do the recursion check or force people to use a
shallow tuple in 2.4?

-Brett

From python at rcn.com  Sat Mar 20 12:14:51 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar 20 12:17:16 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <20040320073858.GA6748@barsoom.org>
Message-ID: <003c01c40e9e$e4734e00$a0b52c81@oemcomputer>

[Agthorr]
> Searching the research literature for a few hours, Pairing Heaps seem
> like the way to go.  They are very fast in practice, and their
> amortized asymptotic bounds are nearly as good as the Fibonacci Heap.
> The only difference is in the Decrease Key operation, where the
> Pairing Heaps bound is somewhere between Theta(log log n) and O(log
> n)--the tight bound is still an open research problem.
> 
> The interface to any high-performance heap would be the same, so it
> makes sense to implement a reasonably straightforward heap now.
> The implementation could always be changed to Fibonacci Heaps later if
> it turns out that there's a pressing need for everyone to have a heap
> implementation that has better performance when you're performing
> millions of Decrease Key operations...

That is reasonable.


> If there's rough concensus for this, I'd start by making a Python
> implementation, test code, and documentation.  Once that's done, I'd
> write the high-performance C implementation.

Along the way, send me periodic code updates and I'll load them
into the sandbox.  That way, everyone can experiment with it and make
course corrections as necessary.
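
(To make the interface discussion concrete, here's a toy sketch -- the
class name, the arguments, and the heapq-based internals are my own
assumptions, not the eventual design:)

    import heapq
    import itertools

    class Heap(object):
        """Prototype of the interface only; not a pairing/Fibonacci heap."""

        def __init__(self, iterable=(), key=None):
            self._key = key if key is not None else (lambda item: item)
            self._counter = itertools.count()   # tie-breaker for equal keys
            self._heap = []
            for item in iterable:
                self.push(item)

        def push(self, item):
            entry = (self._key(item), next(self._counter), item)
            heapq.heappush(self._heap, entry)

        def pop(self):
            return heapq.heappop(self._heap)[-1]

        def __len__(self):
            return len(self._heap)

    h = Heap(["pear", "fig", "apple"], key=len)
    print(h.pop())   # 'fig' -- smallest key comes out first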


Raymond Hettinger



From perky at i18n.org  Sat Mar 20 13:05:09 2004
From: perky at i18n.org (Hye-Shik Chang)
Date: Sat Mar 20 13:05:17 2004
Subject: [Python-Dev] Joys of Optimization
In-Reply-To: <20040320073858.GA6748@barsoom.org>
References: <006e01c40d72$1d729d60$a0ba2c81@oemcomputer>
	<20040320073858.GA6748@barsoom.org>
Message-ID: <20040320180509.GA31222@i18n.org>

On Fri, Mar 19, 2004 at 11:38:58PM -0800, Agthorr wrote:
> On Fri, Mar 19, 2004 at 12:21:33AM -0500, Raymond Hettinger wrote:
> > 1) collections.heap(), a high performance Fibonacci heap implemented as
> > a type and taking sort style arguments (cmp=, key=, reverse=).
> 
> I'd be willing to implement a high-performance heap, although I'd like
> to start a discussion about the right sort of heap to implement.
> Fibonacci heaps have good worst-case amortized behavior, but they are
> complicated to implement and have large hidden constants.
> 
[snip]
> 
> If there's rough concensus for this, I'd start by making a Python
> implementation, test code, and documentation.  Once that's done, I'd
> write the high-performance C implementation.
> 

JFYI, I wrote a simple python wrapper[1] for a BSD-licensed fibonacci
heap library[2] about 2 years ago.  My wrapper implementation isn't good
enough to adapt for the collections module, but it may be useful to
hack around with for testing the object interface and flow designs for
our new collections.heap().  :)

[1] http://people.linuxkorea.co.kr/~perky/fibheap-0.2.tar.gz
    (This wrapper package bundled C library source of [2].)
[2] http://resnet.uoregon.edu/~gurney_j/jmpc/fib.html

Cheers,
Hye-Shik

From kbk at shore.net  Sat Mar 20 16:08:35 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Sat Mar 20 16:08:42 2004
Subject: [Python-Dev] Weekly Python Bug/Patch Summary
Message-ID: <200403202108.i2KL8ZxZ021428@hydra.localdomain>


Patch / Bug Summary
___________________

Patches :  250 open ( -1) /  2340 closed ( +8) /  2590 total ( +7)
Bugs    :  746 open ( +7) /  3927 closed ( +6) /  4673 total (+13)
RFE     :  128 open ( +0) /   123 closed ( +0) /   251 total ( +0)

New / Reopened Patches
______________________

tarfile.py enhancements  (2004-03-17)
       http://python.org/sf/918101  opened by  Lars Gustäbel 

XHTML support for webchecker.py  (2004-03-17)
       http://python.org/sf/918212  opened by  Chris Herborth 

simple "is" speedup   (2004-03-17)
CLOSED http://python.org/sf/918462  opened by  Skip Montanaro 

PEP-008: Update Naming Conventions  (2004-03-18)
CLOSED http://python.org/sf/919256  opened by  Kurt B. Kaiser 

API Ref: PyErr_SetInterrupt not Obsolete  (2004-03-19)
       http://python.org/sf/919299  opened by  Kurt B. Kaiser 

email/__init__.py: Parse headersonly  (2004-03-20)
       http://python.org/sf/920037  opened by  Chris Mayo 

Patch submission for #876533 (potential leak in ensure_froml  (2004-03-20)
       http://python.org/sf/920211  opened by  Brian Leair 

Patches Closed
______________

dict type concat function  (2004-03-16)
       http://python.org/sf/917095  closed by  troy_melhase

list.__setitem__(slice) behavior  (2004-01-08)
       http://python.org/sf/873305  closed by  jbrandmeyer

Create a freelist for dictionaries  (2004-03-14)
       http://python.org/sf/916251  closed by  rhettinger

PEP-008: Update Naming Conventions  (2004-03-18)
       http://python.org/sf/919256  closed by  kbk

license inconsistency  (2003-08-26)
       http://python.org/sf/795531  closed by  tim_one

simple "is" speedup  (2004-03-17)
       http://python.org/sf/918462  closed by  rhettinger

hmac.HMAC.copy() speedup  (2004-02-12)
       http://python.org/sf/895445  closed by  tim_one

Add a 'isotime' format to standard logging  (2003-07-08)
       http://python.org/sf/767600  closed by  tim_one

New / Reopened Bugs
___________________

imaplib: incorrect quoting in commands  (2004-03-16)
       http://python.org/sf/917120  opened by  Tony Meyer 

Allow HTMLParser to continue after a parse error  (2004-03-16)
CLOSED http://python.org/sf/917188  opened by  Frank Vorstenbosch 

urllib doesn't correct server returned urls  (2004-03-17)
       http://python.org/sf/918368  opened by  Rob Probin 

hasattr()'s return type  (2004-03-18)
CLOSED http://python.org/sf/918371  opened by  [N/A] 

popen2 returns (out, in) not (in, out)  (2004-03-18)
       http://python.org/sf/918710  opened by  Yotam Medini 

shutil.move can destroy files   (2004-03-18)
       http://python.org/sf/919012  opened by  Jeff Epler 

Error with random module  (2004-03-18)
CLOSED http://python.org/sf/919014  opened by  Gaëtan 

Update docs on Reload()  (2004-03-18)
CLOSED http://python.org/sf/919099  opened by  David MacQuigg 

Math Division Problem?  (2004-03-19)
CLOSED http://python.org/sf/919498  opened by  GrASP 

os.rename() silently overwrites files  (2004-03-19)
       http://python.org/sf/919605  opened by  Jozef Behran 

Python configured with --disable-unicode fails tests, more  (2004-03-19)
       http://python.org/sf/919614  opened by  Fred L. Drake, Jr. 

Can't create Carbon.File.FSSpec for non-existent file  (2004-03-19)
       http://python.org/sf/919776  opened by  has 

Bugs Closed
___________

Allow HTMLParser to continue after a parse error  (2004-03-16)
       http://python.org/sf/917188  closed by  doerwalter

All URL funcs mishandle file://localhost  (2002-12-07)
       http://python.org/sf/649962  closed by  mike_j_brown

hasattr()'s return type  (2004-03-17)
       http://python.org/sf/918371  closed by  rhettinger

Error with random module  (2004-03-18)
       http://python.org/sf/919014  closed by  rhettinger

Update docs on Reload()  (2004-03-18)
       http://python.org/sf/919099  closed by  montanaro

Math Division Problem?  (2004-03-19)
       http://python.org/sf/919498  closed by  sjoerd


From python at rcn.com  Sat Mar 20 16:14:58 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar 20 16:17:12 2004
Subject: [Python-Dev] RE: [Python-checkins] python/dist/src/Python ceval.c,
	2.383, 2.384
In-Reply-To: <E1B4mgi-0004zY-2P@sc8-pr-cvs1.sourceforge.net>
Message-ID: <000201c40ec0$671ed4c0$e229c797@oemcomputer>

> Modified Files:
> 	ceval.c
> Log Message:
> A 2% speed improvement with gcc on low-endian machines.  My guess is
> that this new pattern for NEXTARG() is detected and optimized as a
> single (*short) loading.

It is possible to verify that guess by looking at the generated
assembler.

There are other possible reasons.  One is that the negative array
offsets don't compile well into a native addressing mode of
base+offset*wordsize.  I have seen and proven that is the case in other
parts of the code base.  The other reason for the speedup is that
pre-incrementing the pointer prevented the lookup from being done in
parallel (i.e. a sequential dependency was present).

If the latter reason is a true cause, then part of the checkin is
counter-productive.  The change to PREDICTED_WITH_ARG introduces a
pre-increment in addition to the post-increment.  Please run another
timing with and without the change to PREDICTED_WITH_ARG.  I suspect the
old way ran faster.  Also, the old way will always be faster on
big-endian machines and would be faster on machines with less
sophisticated compilers (and possibly slower on MSVC++ if it doesn't
automatically generate a load short).  Another consideration is that
loading a short may perform quite differently on other architectures,
because even (two-byte) alignment only occurs half of the time.

Summary:  +1 on the changes to NEXT_ARG and EXTENDED_ARG; 
          -1 on the change to PREDICTED_WITH_ARG.


Raymond Hettinger



>   #define PREDICTED(op)		PRED_##op: next_instr++
> ! #define PREDICTED_WITH_ARG(op)	PRED_##op: oparg = (next_instr[2]<<8) + \
> ! 				next_instr[1]; next_instr += 3
> 
>   /* Stack manipulation macros */
> --- 660,664 ----
> 
>   #define PREDICTED(op)		PRED_##op: next_instr++
> ! #define PREDICTED_WITH_ARG(op)	PRED_##op: next_instr++; oparg = OPARG(); next_instr += OPARG_SIZE


From guido at python.org  Sat Mar 20 16:18:15 2004
From: guido at python.org (Guido van Rossum)
Date: Sat Mar 20 16:18:23 2004
Subject: [Python-Dev] Two possible fixes for isinstance nested tuples
In-Reply-To: Your message of "Sat, 20 Mar 2004 08:09:16 PST."
	<Pine.SOL.4.58.0403200753010.24101@death.OCF.Berkeley.EDU> 
References: <Pine.SOL.4.58.0403200753010.24101@death.OCF.Berkeley.EDU> 
Message-ID: <200403202118.i2KLIFA19152@guido.python.org>

> Bug #858016 brought up the issue of deeply nested tuples of classes
> segfaulting isinstance and issubclass.  There is a patch with the bug
> report that adds a check to make sure the tuple is completely shallow.
> 
> For Python 2.3, though, that doesn't work since it breaks code.  So I have
> coded up something that calls Py_GetRecursionLimit and makes sure the
> depth never goes past that.
> 
> The question is which solution to use for  Python 2.4 .  I did some
> benchmarking on as-is and with the recursion check and the performance
> difference is no worse than 2% (granted this was before I made the
> recursion functions static, but that should  just help performance).
> Would people rather do the recursion check or force people to use a
> shallow tuple in 2.4?

I mentioned in person to Brett to use the same solution in 2.4 as in
2.3.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From python at rcn.com  Sat Mar 20 16:26:51 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar 20 16:29:04 2004
Subject: [Python-Dev] Timings [Was: Python-checkins] python/dist/src/Python
	ceval.c, 2.383, 2.384
In-Reply-To: <E1B4mgi-0004zY-2P@sc8-pr-cvs1.sourceforge.net>
Message-ID: <000501c40ec2$0fef45c0$e229c797@oemcomputer>

> A 2% speed improvement with gcc on low-endian machines.  My guess is
> that this new pattern for NEXTARG() is detected and optimized as a
> single (*short) loading.

One other thought:  When testing these kinds of optimizations, do not use
PyStone!

Unfortunately, part of its design is to time a long, empty for-loop and
net that timing out of the total.  At best, that means that improvements
to for-loops don't show up on PyStone.

At worst, PyStone gives results *opposite* of reality.  Because of cache
effects, empty for-loops run a tad faster than for-loops that do
something.
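
(If it helps, a quick sketch of the kind of direct measurement I mean
instead -- the statement and the repeat counts are arbitrary:)

    import timeit

    # time the construct the patch actually touches, rather than
    # inferring it from PyStone's loop-overhead subtraction
    t = timeit.Timer("for i in r: pass", "r = range(1000)")
    print(min(t.repeat(repeat=3, number=10000)))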


Raymond Hettinger


From guido at python.org  Sat Mar 20 16:29:35 2004
From: guido at python.org (Guido van Rossum)
Date: Sat Mar 20 16:29:43 2004
Subject: [Python-Dev] unexpected reload() behavior
In-Reply-To: Your message of "Fri, 19 Mar 2004 09:03:55 CST."
	<16475.3035.435868.901154@montanaro.dyndns.org> 
References: <16475.3035.435868.901154@montanaro.dyndns.org> 
Message-ID: <200403202129.i2KLTZI19271@guido.python.org>

> Not believing that old objects remained after the reload() I wrote a short
> test:
> 
>     a = 5
>     b = 7
>     c = (1,2,3)
> 
> imported it, modified it to
> 
>     a = 9
>     c = (1,2,3)
> 
> then reloaded it.  I was surprised to find that reloadtst.b did indeed still
> exist:
> 
>     >>> import reloadtst
>     >>> dir(reloadtst)
>     >>> dir(reloadtst)
>     ['__builtins__', '__doc__', '__file__', '__name__', 'a', 'b', 'c']
>     >>> # edit reloadtst.py
>     ...
>     >>> reload(reloadtst)
>     <module 'reloadtst' from 'reloadtst.py'>
>     >>> dir(reloadtst)
>     ['__builtins__', '__doc__', '__file__', '__name__', 'a', 'b', 'c']
> 
> It seems counterintuitive to me that reloadtst.b should still be defined.
> Is that behavior intention or accidental?

Intentional.  A module's __dict__ is not emptied when the reloaded
module is executed.  This allows code like this (which I have written)
that preserves a cache across reload() calls:

    try:
        cache
    except NameError:
        cache = {}

--Guido van Rossum (home page: http://www.python.org/~guido/)

From python at rcn.com  Sat Mar 20 16:36:20 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sat Mar 20 16:38:34 2004
Subject: [Python-Dev] RE: [Python-checkins] python/dist/src/Objects
	frameobject.c, 2.77, 2.78
In-Reply-To: <E1B4njh-00085q-JG@sc8-pr-cvs1.sourceforge.net>
Message-ID: <000601c40ec3$631a5c20$e229c797@oemcomputer>

> memset() with small memory sizes just kill us.

If you know what the break-even point is on both GCC and MSVC++, then
also see if the same improvement would apply to the dictionary
initialization code macro called by PyDict_New().  If it looks
favorable, be sure to time it on Brett's Mac as well as a PC.


Raymond Hettinger


From guido at python.org  Sat Mar 20 16:43:44 2004
From: guido at python.org (Guido van Rossum)
Date: Sat Mar 20 16:43:51 2004
Subject: [Python-Dev] why string exceptions are bad
In-Reply-To: Your message of "Fri, 19 Mar 2004 17:30:50 EST."
	<B8CDFB11BB44D411B8E600508BDF076C1E96D3DC@USAHM010.amer.corp.eds.com> 
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3DC@USAHM010.amer.corp.eds.com>
Message-ID: <200403202143.i2KLhii19394@guido.python.org>

> Trying to catch (some, but not all) exceptions from another 
> module can be difficult if the module raises strings.  At
> best, you catch everything, test with == (even though "is" 
> seemed to work in your tests), and maybe reraise.
> 
> If they start raising
> 
> 	"Error:  You requested 7 but the quota is 6"
> 
> you have to regex match.
> 
> If the "same" error is raised several places (or by several
> different modules), there will eventually be a typo.
> 
> 	"Error: That was not a string"
> 	"Error: That was not a string."
> 	"Error: That was mot a string."
> 	"Error  That was not a string."
> 	"Error: That wasn't a string"
> 	"ERROR: That was not a string"
> 
> A class can be defined in a single place (and imported); 
> a typo in the 47th raise statement will show up as a syntax
> error instead of a runtime bug.

There's a serious lack of understanding here of how string exceptions
were supposed to be used originally.

String exceptions are compared by object identity.  This was done by
design to force you to give the exception a name, as in:

  Error = "That was not a string"

  ...

  raise Error

  ...

And in another module:

  try:
      ...
  except mymod.Error:
      ...

In addition, the recommended convention was to use "modname.excname"
for the string, e.g.

  Error = "mymod.Error"

But this is all moot now that they are deprecated.
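
(The class-based spelling -- just for contrast, nothing new here --
keeps the single point of definition without relying on identity:)

    class Error(Exception):
        """Defined in one place; other modules catch the class."""
        pass

    try:
        raise Error("you requested 7 but the quota is 6")
    except Error:
        pass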

--Guido van Rossum (home page: http://www.python.org/~guido/)

From arigo at tunes.org  Sat Mar 20 16:57:48 2004
From: arigo at tunes.org (Armin Rigo)
Date: Sat Mar 20 17:01:04 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Python ceval.c,
	2.383, 2.384
In-Reply-To: <000201c40ec0$671ed4c0$e229c797@oemcomputer>
References: <E1B4mgi-0004zY-2P@sc8-pr-cvs1.sourceforge.net>
	<000201c40ec0$671ed4c0$e229c797@oemcomputer>
Message-ID: <20040320215748.GA20718@vicky.ecs.soton.ac.uk>

Hello Raymond,

On Sat, Mar 20, 2004 at 04:14:58PM -0500, Raymond Hettinger wrote:
> > A 2% speed improvement with gcc on low-endian machines.  My guess is
> > that this new pattern for NEXTARG() is detected and optimized as a
> > single (*short) loading.
> 
> It is possible to verify that guess by looking at the generated
> assembler.

I was too hasty in checking this in.  That guess is actually false.

The speed-up was due (as usual) to random changes in performance when anything
around there changes, and cannot be reliably reproduced.  I cancelled that
change altogether.


Armin


From pete at shinners.org  Sat Mar 20 20:50:52 2004
From: pete at shinners.org (Pete Shinners)
Date: Sat Mar 20 20:51:20 2004
Subject: [Python-Dev] Re: bundle pychecker with python [was "Re: A proposal
	has surfaced..."]
In-Reply-To: <405904D8.2000203@vanderbilt.edu>
References: <E1B3mQy-0006ty-Hc@mail.python.org>
	<405904D8.2000203@vanderbilt.edu>
Message-ID: <c3isdt$pd$1@sea.gmane.org>

Doug Holton wrote:
> A. Add a command line switch for the python executable that enables 
> pychecker.  The next time someone complains about some error that the 
> python compiler didn't catch, just tell them to run "python -strict" or 
> something to that effect.

I hope someone is implementing this. While they are at it, consider adding a 
"-check" flag to python as well. This would run like normal pychecker does 
now, but without requiring the extra "stub" executable. On the other hand, 
perhaps this is "overloading" the python executable too much?

I find pychecker invaluable, especially when helping newbies get onto their 
python legs. My one hope was that pychecker would be able to check code 
without actually "importing" (aka, executing) the checked file. This would 
also help with running pychecker on scripts that have a "shebang" but no
".py".

But I'd rather see it in the standard distribution as-is now than wait
another release to rework it.


From tim.one at comcast.net  Sun Mar 21 01:26:10 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sun Mar 21 01:26:11 2004
Subject: [Python-Dev] RE: Rich comparisons  [Was] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEJJJKAB.tim.one@comcast.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENOJKAB.tim.one@comcast.net>

[Raymond Hettinger]
> ...
> Essentially, I would like to insert the following lines into
> PyObject_RichCompare():
>
> 	if (v == w) {
> 		if (op == Py_EQ)
> 			Py_RETURN_TRUE;
> 		else if (op == Py_NE)
> 			Py_RETURN_FALSE;
> 	}

At the sprint today, Armin Rigo pointed out that this specific kind of
change is a non-starter:  rich comparisons aren't constrained to return
bools.  In fact, that's primarily why they were introduced, so that the
array extensions to Python could have (e.g.)

    array1 == array2  # assuming same shapes

return an array of true/false values (recording elementwise equality of
corresponding array elements).  "array1 is array2" can't cause a special
case returning a scalar then.

So that's right out.

OTOH, the changes we've made to recursive compares in 2.4 are causing Armin
some grief, particularly wrt dicts that may be reachable from themselves.
Doing, e.g.,

>>> d = {}
>>> d[1] = d
>>> d == d

returns True in 2.3.3, but raises

    RuntimeError: maximum recursion depth exceeded in cmp

in current CVS.

While PyObject_RichCompare() can't make any assumption about the return
type, PyObject_RichCompareBool() can, and inserting

    x is y
implies
    (x == y) is True

in the latter remains possible.
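
(Here's a rough Python rendering -- my own sketch, not the C code -- of
what inserting that shortcut into PyObject_RichCompareBool() amounts to:)

    import operator

    _OPS = {"==": operator.eq, "!=": operator.ne,
            "<": operator.lt, "<=": operator.le,
            ">": operator.gt, ">=": operator.ge}

    def rich_compare_bool(x, y, op):
        # the bool form must return a scalar anyway, so identity can
        # short-circuit equality here without breaking array-style
        # rich comparisons
        if x is y:
            if op == "==":
                return True
            if op == "!=":
                return False
        return bool(_OPS[op](x, y))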

Armin and Guido and I agreed it's OK if

>>> x = some_NaN
>>> x == x
True
>>>

happened as a result.  Also that we're *willing* to insist that "x is y"
implies "x == y" regardless of type in PyObject_RichCompareBool().  But
since "x == x" doesn't normally invoke PyObject_RichCompareBool, the
willingness to infer equality from identity in the latter may have no effect
on NaN comparisons.

WRT the NaN business, Python doesn't have a real 754 story now, and what
we've done so far in 2.4 to back into a teensy part of one by contrived
accident isn't worth preserving.  If we were serious about this, I'd argue
until everyone gave up <0.4 wink> that "naive comparisons" involving NaNs
should raise an exception, and that the 32(*) possible 754 comparisons be
supplied via a wordier spelling (there are 16 depending on which subset of
{less, equal, greater, unordered} you're asking about, and then each of
those comes in two flavors, one that raises an exception if at least one
comparand is a NaN, and another that doesn't -- "==", "<", "<=", ..., would
then map to 6 of the first flavor, leaving 26 that could only be gotten via
a wordier spelling).

I'd like Armin to share enough details here about the cases causing him
grief now so that we can all guess whether that kind of change to
PyObject_RichCompareBool would be enough to make life happy again.



(*) Note that not all 32 possibilities are actually useful; e.g., one
    of them always returns True, another always False.  Similarly,
    when unordered isn't a possible outcome, there are 6 non-empty
    proper subsets of {less, equal, greater}, and that's where the 6
    comparison operators "we're used to" come from (the {less, greater}
    subset being the "not equal" operator, which is screamingly
    obvious from its deprecated "<>" spelling).


From just at letterror.com  Sun Mar 21 02:58:36 2004
From: just at letterror.com (Just van Rossum)
Date: Sun Mar 21 02:58:04 2004
Subject: [Python-Dev] 
	Re: [Python-checkins] python/dist/src/Lib site.py, 1.58, 1.59
In-Reply-To: <E1B4o47-0000Y3-6C@sc8-pr-cvs1.sourceforge.net>
Message-ID: <r01050400-1026-8F9CB3987B0D11D8AC32003065D5E7E4@[10.0.0.23]>

This checkin is not correct: the change in behavior from 2.2 to 2.3 was
done for a reason (PEP 302), and should not be reversed. See also
http://python.org/sf/693255

Just

bcannon@users.sourceforge.net wrote:

> Update of /cvsroot/python/python/dist/src/Lib
> In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv2040/Lib
> 
> Modified Files:
>   site.py 
> Log Message:
> Remove non-existent paths.
> 
> 
> Index: site.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/site.py,v
> retrieving revision 1.58
> retrieving revision 1.59
> diff -C2 -d -r1.58 -r1.59
> *** site.py   20 Mar 2004 21:08:17 -0000  1.58
> --- site.py   20 Mar 2004 21:31:32 -0000  1.59
> ***************
> *** 80,84 ****
>       # paths.
>       dir, dircase = makepath(dir)
> !     if not dircase in _dirs_in_sys_path:
>           L.append(dir)
>           _dirs_in_sys_path[dircase] = 1
> --- 80,84 ----
>       # paths.
>       dir, dircase = makepath(dir)
> !     if not dircase in _dirs_in_sys_path and os.path.exists(dir):
>           L.append(dir)
>           _dirs_in_sys_path[dircase] = 1
> 
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins@python.org
> http://mail.python.org/mailman/listinfo/python-checkins
> 

From skip at pobox.com  Sun Mar 21 11:16:36 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 21 11:16:43 2004
Subject: [Python-Dev] unexpected reload() behavior
In-Reply-To: <200403202129.i2KLTZI19271@guido.python.org>
References: <16475.3035.435868.901154@montanaro.dyndns.org>
	<200403202129.i2KLTZI19271@guido.python.org>
Message-ID: <16477.49124.217064.997041@montanaro.dyndns.org>


    >> It seems counterintuitive to me that reloadtst.b should still be
    >> defined.  Is that behavior intention or accidental?

    Guido> Intentional.  A module's __dict__ is not emptied when the
    Guido> reloaded module is executed.  This allows code like this (which I
    Guido> have written) that preserves a cache across relaod() calls:

    Guido>     try:
    Guido>         cache
    Guido>     except NameError:
    Guido>         cache = {}

Thanks.  I saw that in the doc shortly after posting.  I hope you don't mind
that I just added that example to the doc.

Skip


From skip at pobox.com  Sun Mar 21 11:19:24 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 21 11:19:33 2004
Subject: [Python-Dev] 
	Re: [Python-checkins] python/dist/src/Misc NEWS, 1.949, 1.950
In-Reply-To: <E1B4o47-0000Y6-Cf@sc8-pr-cvs1.sourceforge.net>
References: <E1B4o47-0000Y6-Cf@sc8-pr-cvs1.sourceforge.net>
Message-ID: <16477.49292.556398.536042@montanaro.dyndns.org>


    bcannon> + - site.py now removes paths that do not exist.
    bcannon> + 

I thought someone (Just?) mentioned a reason to leave the pythonNM.zip file
in sys.path.

Skip


From skip at pobox.com  Sun Mar 21 11:20:39 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 21 11:20:49 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Objects
	tupleobject.c, 2.86, 2.87
In-Reply-To: <E1B4o7c-0000ls-2V@sc8-pr-cvs1.sourceforge.net>
References: <E1B4o7c-0000ls-2V@sc8-pr-cvs1.sourceforge.net>
Message-ID: <16477.49367.403242.229270@montanaro.dyndns.org>


    arigo> memset() hunt continuing.  This is a net win.

Maybe we should cast this as a _Py_MEMSET macro to make it more obvious
what's been done?

Skip

From skip at pobox.com  Sun Mar 21 11:25:58 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 21 11:26:12 2004
Subject: [Python-Dev] Re: bundle pychecker with python [was "Re: A proposal
	has surfaced..."]
In-Reply-To: <c3isdt$pd$1@sea.gmane.org>
References: <E1B3mQy-0006ty-Hc@mail.python.org>
	<405904D8.2000203@vanderbilt.edu> <c3isdt$pd$1@sea.gmane.org>
Message-ID: <16477.49686.905266.214075@montanaro.dyndns.org>


    Pete> My one hope was that pychecker would be able to check code without
    Pete> actually "importing" (aka, executing) the checked file. This would
    Pete> also help running pychecker on scripts that have a "shebang" but
    Pete> no ".py"

Look at PyChecker 2 in the PyChecker CVS.

Skip

From greg at cosc.canterbury.ac.nz  Sun Mar 21 17:07:13 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 21 17:07:36 2004
Subject: [Python-Dev] (not) redefining is
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3CF@USAHM010.amer.corp.eds.com>
Message-ID: <200403212207.i2LM7DN5012181@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> I personally like (and have used) Barry's suggestion of a new named
> object().  Unfortunately, object() is longer to type, not as backwards
> compatible, and places too much emphasis on what the sentinel *is*
> rather than what it *isn't*.

Perhaps there should be a built-in type called 'sentinel'
for people to create unique objects from?

The implementation could be very simple:

  sentinel = object

:-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tdelaney at avaya.com  Sun Mar 21 17:18:21 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar 21 17:18:29 2004
Subject: [Python-Dev] Rich comparisons
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE014C6E13@au3010avexu1.global.avaya.com>

> From: Edward Loper
> 
> If we *really* want nan==nan to be false, then it seems like 
> we have to 
> say that nan is unhashable.

Hmm - I suppose we could make a subclass of float specifically for NaN, and have that be a singleton. The float constructor could check for a NaN result, and return this singleton.

I'm worried that there might be a slowdown though.
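
(Something along these lines is what I'm picturing -- a sketch only; the
details are my assumptions, not a worked-out patch:)

    class _NaN(float):
        _instance = None

        def __new__(cls):
            # funnel every NaN through one shared instance
            if cls._instance is None:
                cls._instance = float.__new__(cls, "nan")
            return cls._instance

    nan = _NaN()
    print(nan is _NaN())   # True: a single shared NaN object
    print(nan == nan)      # still False under ==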

> I'm also disturbed by the fact 
> that cmp() 
> has something different to say about their equality:
> 
>      >>> cmp(float('nan'), float('nan'))
>      0

Hmm - I guess it should technically throw an exception. However, I'd be happy if it returned -1. Would this affect the stability of sorting?

Tim Delaney

From greg at cosc.canterbury.ac.nz  Sun Mar 21 17:54:38 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 21 18:06:44 2004
Subject: [Python-Dev] Re: redefining is
In-Reply-To: <20040319144619.1901e9ac.casey@zope.com>
Message-ID: <200403212254.i2LMscJY012247@cosc353.cosc.canterbury.ac.nz>

Casey Duncan <casey@zope.com>:

> The key of course it how you define state equality. In my view state
> equality means that if you were to serialize the objects to byte
> streams, which neutrally represented the entirety of the state,
> equivilant objects would have identical serializations.

But you need to decide what you mean by the object's "state"
before you can decide how to serialize it. So appealing to
serialization doesn't help here at all.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 21 19:22:41 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 21 19:23:07 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000501c40def$409fdd20$6402a8c0@arkdesktop>
Message-ID: <200403220022.i2M0MfKi012398@cosc353.cosc.canterbury.ac.nz>

Andrew Koenig <ark-mlist@att.net>:

> There are certainly more than three meaningful forms of equivalence.  I
> claim, however, that these three forms already have special status in the
> language, because it is implementation-defined whether two occurrences of
> the same string literal refer to the same object.

But this special status is only spelled out for certain
fully-immutable objects, for which your "substitutability" relation is
equivalent to "==".

In all of this, I've yet to see a single use case put forward where it
would be *wrong* to use "==" instead of "substitutability", let alone
one frequently enough encountered to be worth adding a new operator or
changing the semantics of an existing one.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 21 19:26:45 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 21 19:26:58 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <001d01c40dd6$4bc9df20$6402a8c0@arkdesktop>
Message-ID: <200403220026.i2M0QjvG012409@cosc353.cosc.canterbury.ac.nz>

Andrew Koenig <ark-mlist@att.net>:

> Nevertheless, I believe that this particular C++
> feature is defined in a particularly hazardous way

C++ is chock-full of features like that. Sometimes I think
it's constructed *entirely* from features that are hazardous
in some way... :-(

If-you're-planning-a-career-in-C++-programming-consider-
chainsaw-juggling-instead-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 21 19:36:46 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 21 19:36:55 2004
Subject: [Python-Dev] funcdef grammar production
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3D1@USAHM010.amer.corp.eds.com>
Message-ID: <200403220036.i2M0akMC012432@cosc353.cosc.canterbury.ac.nz>

"Jewett, Jim J" <jim.jewett@eds.com>:

> That works for me now that I already know what it should say.
> I'm not sure it would have worked on my first reading.

The question is whether it would work any better than an equivalently
complete and accurate BNF definition.

My main point is that railroad diagrams are more powerful than BNF
productions, because they're not constrained to fitting everything
into a hierarchy -- effectively they can contain 'gotos'.

Also I think there's something appealingly intuitive about them --
there's very little meta-notation that has to be explained, and kept
separated somehow from the non-meta-notation that's being described.

> I do not see any good way to verify a picture.  If the picture was
> generated based on some textual format -- maybe.

The diagram could be generated from a textual formalism, which I
suppose could be verified against something else. But is that really a
big issue? The Language Reference contains heaps and heaps of English
prose that nobody complains can't be automatically verified. Putting
some of that into diagrams instead of words can't make things any
worse.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From jiwon at softwise.co.kr  Sun Mar 21 21:00:20 2004
From: jiwon at softwise.co.kr (jiwon)
Date: Sun Mar 21 21:01:47 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
Message-ID: <026901c40fb1$824ed910$d70aa8c0@jiwon>

IIRC, most people agreed that iterables in the generator expression need
to be pre-computed (including myself). But there's a recent issue about
that.

First, if we precompute iterables in the generator expression, its
behavior inevitably becomes different from that of the list
comprehension. For instance, compare the following.

 >>> list((x,y) for x in iter('abcd') for y in iter('abcd'))
[('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]

>>> [(x,y) for x in iter('abcd') for y in iter('abcd')]
[('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'a'), ('b', 'b'),
('b', 'c'), ('b', 'd'), ('c', 'a'), ('c', 'b'), ('c', 'c'), ('c', 'd'),
('d', 'a'), ('d', 'b'), ('d', 'c'), ('d', 'd')]

c.f)
>>> a = iter("abcd"); b = iter("abcd"); [(x,y) for x in a for y in b]
[('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]

Also, like arigo commented in the sf patch, the things we are looping over
may depend on other stuff from the generator expression itself.

>>> [x for l in [[1,2],[3,4]] for x in l]
[1, 2, 3, 4]
>>> (y for m in [[1,2],[3,4]] for y in m)
NameError: name 'm' is not defined

More comments are in sf,
http://sourceforge.net/tracker/?func=detail&aid=872326&group_id=5470&atid=305470
Comments after 2004-03-19 22:37 (posted by quiver) are about that issue.
Do we need to drop precomputation of iterables?

Regards, Jiwon.


From gmccaughan at synaptics-uk.com  Mon Mar 22 04:35:59 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Mon Mar 22 04:37:50 2004
Subject: [Python-Dev] python-dev Summary for 2004-03-01 through 2004-03-15
	[rough draft]
In-Reply-To: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>
References: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>
Message-ID: <200403220935.59606.gmccaughan@synaptics-uk.com>

On Saturday 2004-03-20, Brett Cannon wrote:

> ----------------------------------------------
> PEP 318 and the discussion that will never end
> ----------------------------------------------
> Just looking at the number of contributing threads to this summary should
> give you an indication of how talked about this PEP became.  In case you
> don't remember the discussion `last time`_, this PEP covers
> function/method(/class?) decorators: having this::
> 
>   def foo() [decorate, me]: pass
> 
> be equivalent to::
> 
>   def foo(): pass
>   foo = decorate(me(foo))

<nitpick>
Although, as you say, there was some disagreement about what order
of application is best, I think there was a very strong preference
for the opposite order to the one you've given here.
</nitpick>

> ------------------------------------------------------
> Take using Python as a calculator to a whole new level
> ------------------------------------------------------
...
> The topic of accuracy, though, was not as clear-cut.  First the issue of
> whether to use the in-development Decimal module would be the smart thing
> to do.  The consensus was to use Decimal since floating-point, even with
> IEEE 754 in place, is not accurate enough for something that wants to be
> as accurate as an actual calculator.  Then discussions on the precision of
> accuracy came up.  It seemed like it would be important to have a level of
> precision kept above the expected output precision to make sure any
> rounding errors and such would be kept to a minimum.

<nitpick>
I didn't see any consensus that Decimal should be used. "Ordinary"
operations (arithmetic, cos, exp, etc) in IEEE 754 double-precision
are a lot more accurate than the displayed precision, or even the
internal precision, on typical calculators. (It's possible that
some such calculators do their internal calculations in IEEE doubles
these days; I don't know.)
</nitpick>

-- 
g


From guido at python.org  Mon Mar 22 17:37:25 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 22 17:37:36 2004
Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Doc/lib
	libprofile.tex, 1.43, 1.44
In-Reply-To: Your message of "Mon, 22 Mar 2004 12:13:27 PST."
	<E1B5Vnb-0003M2-Gr@sc8-pr-cvs1.sourceforge.net> 
References: <E1B5Vnb-0003M2-Gr@sc8-pr-cvs1.sourceforge.net> 
Message-ID: <200403222237.i2MMbP609122@guido.python.org>

Thanks!

Another convenience would be nice: if profile.py had command line
options to either specify the output file (-f ...) or to change the
sort order.  The default sort order isn't very useful...

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Mon Mar 22 18:50:48 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 22 18:51:12 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
In-Reply-To: <026901c40fb1$824ed910$d70aa8c0@jiwon>
Message-ID: <200403222350.i2MNomqO015374@cosc353.cosc.canterbury.ac.nz>

jiwon <jiwon@softwise.co.kr>:

> IIRC, most people agreed on that iterables in the generator expression need
> to be pre-computed (including myself). But there's a recent issue about
> that.

If we're capturing the value of free variables, is there
any need to pre-compute the iterators?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From guido at python.org  Mon Mar 22 19:33:36 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 22 19:33:44 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
In-Reply-To: Your message of "Tue, 23 Mar 2004 11:50:48 +1200."
	<200403222350.i2MNomqO015374@cosc353.cosc.canterbury.ac.nz> 
References: <200403222350.i2MNomqO015374@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403230033.i2N0Xaw09792@guido.python.org>

> > IIRC, most people agreed on that iterables in the generator expression need
> > to be pre-computed (including myself). But there's a recent issue about
> > that.
> 
> If we're capturing the value of free variables, is there
> any need to pre-compute the iterators?

A free variable might be a function that itself references globals (or
at least nonlocals) that might change.

There is also the issue of exceptions: I'd expect exceptions in the
toplevel (first) iterable to be raised immediately, and not when the
first element of the iterator is requested.
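
(Concretely -- a made-up example, with the eager-outermost behavior
assumed:)

    def broken():
        raise IOError("no data")

    try:
        gen = (x for x in broken())   # raises here, immediately
    except IOError:
        print("error surfaced at creation, not at the first next()")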

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Mon Mar 22 20:19:30 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 22 20:22:31 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
In-Reply-To: <200403230033.i2N0Xaw09792@guido.python.org>
Message-ID: <200403230119.i2N1JU6N015538@cosc353.cosc.canterbury.ac.nz>

Guido:

> A free variable might be a function that itself references globals (or
> at least nonlocals) that might change.
> 
> There is also the issue of exceptions:

Obviously there will be semantic differences, but the question is
whether the consequences of them are serious enough to be worth
treating the outermost iterator differently from the others.

I'm disturbed by the number of special rules we seem to be needing to
make up in the name of getting generator expressions to DWIM. First we
have free variables getting captured, which is unprecedented anywhere
else; now we have some iterators being treated more equally than
others.

I'm getting an "architecture smell" here. Something is wrong
somewhere, and I don't think we're tinkering in the right place to fix
it properly.

I think I have an idea where the right place is, but I'll leave
that to another post. I suspect the idea isn't going to go down
well...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From pje at telecommunity.com  Mon Mar 22 21:13:31 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar 22 21:08:07 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
In-Reply-To: <200403230119.i2N1JU6N015538@cosc353.cosc.canterbury.ac.nz>
References: <200403230033.i2N0Xaw09792@guido.python.org>
Message-ID: <5.1.0.14.0.20040322210424.01eb02b0@mail.telecommunity.com>

At 01:19 PM 3/23/04 +1200, Greg Ewing wrote:
>Guido:
>
> > A free variable might be a function that itself references globals (or
> > at least nonlocals) that might change.
> >
> > There is also the issue of exceptions:
>
>Oviously there will be semantic differences, but the question is
>whether the consequences of them are serious enough to be worth
>treating the outermost iterator differently from the others.
>
>I'm disturbed by the number of special rules we seem to be needing to
>make up in the name of getting generator expressions to DWIM. First we
>have free variables getting captured, which is unprecedented anywhere
>else; now we have some iterators being treated more equally than
>others.

Maybe I'm confused, but isn't this special treatment of iterators being 
done *instead of* free variable capture?  My understanding of the original 
proposal was that semantically, this:

     gen = (x for x in y)

should be translated as if you had written this instead:

    def gen(y=y) [lambda x:x()]:
        for x in y:
            yield x

Now, if y is itself a generator expression, e.g.:

    gen = (x for x in (y for y in z))

then a nested translation logically reads:

    def g1(z=z) [lambda x:x()]:
        for y in z:
            yield y

    def gen(_1=g1):
        for x in _1:
            yield x

Meanwhile, for multi-generator expressions, it seems the translation from:

    gen = (x,y for x in a for y in b)

should be:

    def gen(a=a,b=b):
        for x in a:
            for y in b:
                yield x,y

All of these read to me like a straight conversion of the generator 
expression to a generator function that is immediately invoked.


From guido at python.org  Mon Mar 22 21:48:25 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 22 21:48:38 2004
Subject: [Python-Dev] An issue recently brought up in patch #872326
	(generator expression)
In-Reply-To: Your message of "Tue, 23 Mar 2004 13:19:30 +1200."
	<200403230119.i2N1JU6N015538@cosc353.cosc.canterbury.ac.nz> 
References: <200403230119.i2N1JU6N015538@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403230248.i2N2mPp10048@guido.python.org>

> Guido:
> > A free variable might be a function that itself references globals (or
> > at least nonlocals) that might change.
> > 
> > There is also the issue of exceptions:

Greg:
> Oviously there will be semantic differences, but the question is
> whether the consequences of them are serious enough to be worth
> treating the outermost iterator differently from the others.
> 
> I'm disturbed by the number of special rules we seem to be needing to
> make up in the name of getting generator expressions to DWIM. First we
> have free variables getting captured, which is unprecedented anywhere
> else; now we have some iterators being treated more equally than
> others.
> 
> I'm getting an "architecture smell" here. Something is wrong
> somewhere, and I don't think we're tinkering in the right place to fix
> it properly.

I'm not disagreeing -- I was originally vehemently against the idea of
capturing free variables, but Tim gave an overwhelming argument that
whenever it made a difference that was the desired semantics.

But I always assumed that the toplevel iterable would be different.

> I think I have an idea where the right place is, but I'll leave
> that to another post. I suspect the idea isn't going to go down
> well...

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tim.one at comcast.net  Mon Mar 22 22:54:14 2004
From: tim.one at comcast.net (Tim Peters)
Date: Mon Mar 22 22:54:12 2004
Subject: [Python-Dev] An issue recently brought up in patch
	#872326(generator expression)
In-Reply-To: <200403230248.i2N2mPp10048@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>

[Greg Ewing]
>> ...
>> I'm disturbed by the number of special rules we seem to be needing to
>> make up in the name of getting generator expressions to DWIM.

Yup.

>> First we have free variables getting captured, which is unprecedented
>> anywhere else;

Not sure that's it.  In some sense it's also arbitrary that Python decides
in

    def f(x=DEFAULT_X_VALUE):
        ...

to capture the binding of DEFAULT_X_VALUE (which is a free vrbl so far as
the defn of f is concerned) at the time f is defined and reuse it on each
call; it would have been possible to define the language to use whatever
binding is current each time f is called.
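
(Spelled out -- a tiny runnable version of the same point:)

    DEFAULT_X_VALUE = 1

    def f(x=DEFAULT_X_VALUE):
        return x

    DEFAULT_X_VALUE = 2
    print(f())   # 1: the default captured the binding at def time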

Outside of explicitly burying something inside a lambda body, there's no
precedent in Python for "delaying" evaluation of anything in a Python
expression either.  So generator expressions aren't like any non-lambda
expressions in Python either:  the time at which their guts get evaluated
can be arbitrarily far removed from the time the statement holding the guts
gets executed.

>> now we have some iterators being treated more equally
>> than others.
>>
>> I'm getting an "architecture smell" here. Something is wrong
>> somewhere, and I don't think we're tinkering in the right place to
>> fix it properly.

When scopes and lifetimes get intricate, it's easier to say what you mean in
Scheme (which is a good argument for not letting scopes and lifetimes get
intricate <wink>).

[Guido]
> I'm not disagreeing -- I was originally vehemently against the idea of
> capturing free variables, but Tim gave an overwhelming argument that
> whenever it made a difference that was the desired semantics.

Na, and that's a big part of the problem we're having here:  I didn't make
an argument, I just trotted out non-insane examples.  They all turned out to
suck under "delay evaluation" semantics, to such an extent that wholly
delayed evaluation appeared to be a downright foolish choice.  But decisions
driven *purely* by use cases can be mysterious.  I was hoping to see many
more examples, on the chance that a clarifying principle would reveal
itself.

> But I always assumed that the toplevel iterable would be different.

Jiwon's example certainly suggests that it must be.  But why <0.7 wink>?


From tim.one at comcast.net  Tue Mar 23 00:28:29 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 00:28:29 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000001c40e45$8591e580$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHJJLAB.tim.one@comcast.net>

[Andrew Koenig]
> ... and I wouldn't mind it if there were a way of testing for
> substitutability that were as easy to spell as "is" -- in fact, I
> wouldn't mind if it were spelled "is" even though I realize it's
> probably impractical to do so.

It's definitely impractical to do so in the 2.3 line.  Looking beyond that,
I'm not sure we have real use cases for substitutability.  It seemed to boil
down to:

    x substitutable y =
        x is y  if x is mutable, or a mutable object is reachable
                from x, or y is mutable, or a mutable object is
                reachable from y
        x == y  otherwise (x and y are immutable, and only immutable
                objects are reachable from them)

Is that right?  It's possible I'd find that indispensable if I already had
it, but it doesn't seem likely.  What would you use it for?
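
(For concreteness, here's my reading of that in executable form -- the
helper only knows about a few builtin types, so it's a sketch, not a
definition:)

    def fully_immutable(obj):
        # immutable leaves
        if isinstance(obj, (int, float, complex, str, bool, type(None))):
            return True
        # immutable containers count only if their contents do too
        if isinstance(obj, (tuple, frozenset)):
            return all(fully_immutable(item) for item in obj)
        return False

    def substitutable(x, y):
        if fully_immutable(x) and fully_immutable(y):
            return x == y
        return x is y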


From sc0rp at hot.pl  Tue Mar 23 09:54:08 2004
From: sc0rp at hot.pl (Jacek Trzmiel)
Date: Tue Mar 23 09:54:16 2004
Subject: [Python-Dev] Chaining seq1.pop().extend(seq2) does give wrong result
Message-ID: <40604F90.C54B4AA0@hot.pl>


Hi,

$ python
Python 2.3.2 (#1, Dec  5 2003, 03:04:50) 
[GCC 3.3.3 [FreeBSD] 20031106] on freebsd5
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> stack = [[1], [2]]
>>> fields = [3]
>>> out = stack.pop()
>>> out.extend(fields)
>>> print out
[2, 3]
>>> 
>>> stack = [[1], [2]]
>>> fields = [3]
>>> out = stack.pop().extend(fields)
>>> print out
None
>>> 

Shouldn't those two give identical result?

Best regards,
Jacek.


From Paul.Moore at atosorigin.com  Tue Mar 23 10:00:07 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Tue Mar 23 10:00:34 2004
Subject: [Python-Dev] Chaining seq1.pop().extend(seq2) does give wrong
	result
Message-ID: <16E1010E4581B049ABC51D4975CEDB8802C09DCA@UKDCX001.uk.int.atosorigin.com>

From: Jacek Trzmiel

> Shouldn't those two give identical result?

No, because extend() does not return the modified list, but instead returns None (as you see). This question is actually more appropriate for python-list@python.org (also available as the newsgroup comp.lang.python). The python-dev list is for discussion of changes to the Python language.

If you find a genuine bug, you are best reporting it via the SourceForge bug tracker, as simply posting to the mailing list can result in it being lost. In this case, however, there is no bug, so that is not appropriate.

Regards,
Paul.

From sjoerd at acm.org  Tue Mar 23 10:04:34 2004
From: sjoerd at acm.org (Sjoerd Mullender)
Date: Tue Mar 23 10:04:40 2004
Subject: [Python-Dev] Chaining seq1.pop().extend(seq2) does give wrong
	result
In-Reply-To: <40604F90.C54B4AA0@hot.pl>
References: <40604F90.C54B4AA0@hot.pl>
Message-ID: <40605202.2030302@acm.org>

Jacek Trzmiel wrote:
> Hi,
> 
> $ python
> Python 2.3.2 (#1, Dec  5 2003, 03:04:50) 
> [GCC 3.3.3 [FreeBSD] 20031106] on freebsd5
> Type "help", "copyright", "credits" or "license" for more information.
> 
>>>>stack = [[1], [2]]
>>>>fields = [3]
>>>>out = stack.pop()
>>>>out.extend(fields)
>>>>print out
> 
> [2, 3]
> 
>>>>stack = [[1], [2]]
>>>>fields = [3]
>>>>out = stack.pop().extend(fields)
>>>>print out
> 
> None
> 
> 
> Shouldn't those two give identical result?

No.  somelist.extend() changes somelist but doesn't return a value (in 
other words, it returns None, which is exactly what you're seeing).

Also, this is not really appropriate for python-dev, but rather for 
python-list, I would think.

-- 
Sjoerd Mullender <sjoerd@acm.org>

From pedronis at bluewin.ch  Tue Mar 23 10:15:05 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Tue Mar 23 10:10:23 2004
Subject: [Python-Dev] An issue recently brought up in patch
	#872326(generator expression)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>
References: <200403230248.i2N2mPp10048@guido.python.org>
Message-ID: <5.2.1.1.0.20040323161203.028fe080@pop.bluewin.ch>

At 22:54 22.03.2004 -0500, Tim Peters wrote:

> > But I always assumed that the toplevel iterable would be different.
>
>Jiwon's example certainly suggests that it must be.  But why <0.7 wink>?

(not that this is deep) It seems it's the only expression in the whole
thing that can (not that it should) be naturally computed without
starting to consume some iterable.


From tim.one at comcast.net  Tue Mar 23 11:06:24 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 11:06:23 2004
Subject: [Python-Dev] Re: [Python-checkins]
	python/dist/src/Objectstupleobject.c, 2.86, 2.87
In-Reply-To: <16477.49367.403242.229270@montanaro.dyndns.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEJOJLAB.tim.one@comcast.net>

     arigo> memset() hunt continuing.  This is a net win.

[Skip]
> Maybe we should cast this as a _Py_MEMSET macro to make it more
> obvious what's been done?

An inline memset loop is quite clear on its own, so it's fine by me if
inlined-for-speed memset loops are simply commented as to why they're being
done inline.  A comment at the site is evidence that someone actually
thought about it, and maybe even tried it both ways; using a macro without
comment might well be due to monkey-see monkey-do, or inappropriate
copy-&-paste.  IOW, I think defining a macro would make it less obvious.
Comments require more effort, and in these cases that's good.


From skip at pobox.com  Tue Mar 23 11:24:23 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar 23 11:24:34 2004
Subject: [Python-Dev] order of decorator application?
Message-ID: <16480.25783.932424.143499@montanaro.dyndns.org>


(I'm working on PEP 318...)

Is there a consensus on the order of application for decorators?  Should

    def func(a,b) [d1,d2]:
        pass

be equivalent to

    def func(a,b):
        pass
    func = d1(d2(func))

or

    def func(a,b):
        pass
    func = d2(d1(func))

or is it a coin toss?  I'm having trouble finding coherent discussion on
this topic in the archives.  Pointers appreciated.

Thx,

Skip

From guido at python.org  Tue Mar 23 11:28:33 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 23 11:28:39 2004
Subject: [Python-Dev] order of decorator application?
In-Reply-To: Your message of "Tue, 23 Mar 2004 10:24:23 CST."
	<16480.25783.932424.143499@montanaro.dyndns.org> 
References: <16480.25783.932424.143499@montanaro.dyndns.org> 
Message-ID: <200403231628.i2NGSXX11614@guido.python.org>

> (I'm working on PEP 318...)
> 
> Is there a concensus on the order of application for decorators?  Should
> 
>     def func(a,b) [d1,d2]:
>         pass
> 
> be equivalent to
> 
>     def func(a,b):
>         pass
>     func = d1(d2(func))
> 
> or
> 
>     def func(a,b):
>         pass
>     func = d2(d1(func))
> 
> or is it a coin toss?  I'm having trouble finding coherent discussion on
> this topic in the archives.  Pointers appreciated.

I think the consensus was d2(d1(func)).  I.e. application from left to
right -- just like all lists are evaluated L2R in Python.
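
(A tiny check of that reading, using today's call syntax -- the
decorators here are just stand-ins:)

    def d1(f):
        f.tags = getattr(f, "tags", []) + ["d1"]
        return f

    def d2(f):
        f.tags = getattr(f, "tags", []) + ["d2"]
        return f

    def func(a, b):
        pass
    func = d2(d1(func))   # what [d1, d2] would mean

    print(func.tags)      # ['d1', 'd2'] -- d1 was applied first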

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Mar 23 11:42:29 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 23 11:43:31 2004
Subject: [Python-Dev] order of decorator application?
In-Reply-To: <16480.25783.932424.143499@montanaro.dyndns.org>
Message-ID: <5.1.1.6.0.20040323112805.02e782f0@telecommunity.com>

At 10:24 AM 3/23/04 -0600, Skip Montanaro wrote:

>(I'm working on PEP 318...)
>
>Is there a concensus on the order of application for decorators?  Should
>
>     def func(a,b) [d1,d2]:
>         pass
>
>be equivalent to
>
>     def func(a,b):
>         pass
>     func = d1(d2(func))

No.


>or
>
>     def func(a,b):
>         pass
>     func = d2(d1(func))

Almost.  :)  Try this phrasing, which provides its own justification for 
the interpretation:

def func(a,b):
     pass

for decorator in [d1,d2]:
     func = decorator(func)


This is still "almost", because 'func' should only be bound *once*, at the 
conclusion of the operation.  IOW, more like:

def _temp(a,b):
     pass

_temp.__name__ = "func"

for decorator in [d1,d2]:
     _temp = decorator(_temp)

func = _temp

In practice, the compiler can and should unroll the loop, keep the 
intermediate values on the stack, and have the right function name assigned 
from the start.  But the specification *should* require that 'func' not be 
rebound until the operation is concluded.  This allows multi-function 
objects to be accumulated, such as for properties, generic functions, and 
so on, without losing the "previous" definition.  For example, I should be 
able to do something like:

    def some_attr(self) [getter]:
        ...

    def some_attr(self,value) [setter]:
        ...

And have the first 'some_attr' value in effect when 'setter()' is called on 
the second 'some_attr'.  This is also important for creating generic 
functions, or doing signature overloading for interfacing with languages 
like C++ or Java.
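
A by-hand simulation of that accumulation, using only the builtin
property and a temporary name so the first 'some_attr' stays visible
while the second definition is decorated (the 'getter'/'setter'
decorators above are hypothetical; this is roughly what the
bind-only-at-the-end rule would give them automatically):

    class C(object):
        def some_attr(self):
            return self._x
        some_attr = property(some_attr)               # the 'getter' step

        def _temp(self, value):
            self._x = value
        some_attr = property(some_attr.fget, _temp)   # the 'setter' step
        del _temp

    c = C()
    c.some_attr = 42
    print c.some_attr    # 42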


From skip at pobox.com  Tue Mar 23 12:02:06 2004
From: skip at pobox.com (Skip Montanaro)
Date: Tue Mar 23 12:02:48 2004
Subject: [Python-Dev] PEP 318 - posting draft
Message-ID: <16480.28046.500125.309554@montanaro.dyndns.org>


Here's the current state of PEP 318.  I have to take a break from this to
get some real work done and may not get back to it in a major way for
a while.  I received a significant rewrite of Kevin Smith's most recent
version from Jim Jewett and incorporated much of what he wrote, modifying a
lot of it along the way, but have still not digested everything he sent me.

I've tried to reflect the consensus which seems to be emerging on the
python-dev list, though I suspect I've done a poor job of that.  The
discussions there and on comp.lang.python have ranged far and wide and thus
resist summary in a finite amount of time.  I recommend interested
comp.lang.python readers spend some time in the python-dev archives for
February and March if they find major fault with the current state of the
proposal.

If you post corrections or comments to either list I should see them.

Skip Montanaro

------------------------------------------------------------------------------
PEP: 318
Title: Function/Method Decorator Syntax
Version: $Revision: 1.5 $
Last-Modified: $Date: 2004/03/23 16:41:17 $
Author: Kevin D. Smith <Kevin.Smith@theMorgue.org>, 
        Jim Jewett <jimjjewett@users.sourceforge.net>,
        Skip Montanaro <skip@pobox.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 05-Jun-2003
Python-Version: 2.4
Post-History: 09-Jun-2003, 10-Jun-2003, 27-Feb-2004, 23-Mar-2004


Abstract
========
    
The current method for declaring class and static methods is awkward 
and can lead to code that is difficult to understand.  Ideally, these 
transformations should be made at the same point in the code where the 
declaration itself is made.  This PEP introduces new syntax for 
transformations of a declaration.


Motivation
==========

The current method of applying a transformation to a function or
method places the actual translation after the function body.  For
large functions this separates a key component of the function's
behavior from the definition of the rest of the function's external
interface.  For example::

    def foo(self):
        perform method operation
    foo = classmethod(foo)

This becomes less readable with longer methods.  It also seems less
than pythonic to name the function three times for what is
conceptually a single declaration.  A solution to this problem is to
move the transformation of the method closer to the method's own
declaration.  While the new syntax is not yet final, the intent is to
replace::

    def foo(cls):
        pass
    foo = synchronized(lock)(foo)
    foo = classmethod(foo)

with an alternative that places the decoration in the function's
declaration::

    def foo(cls) using [synchronized(lock), classmethod]:
        pass

Background
==========

There is general agreement that syntactic support is preferable to the
current state of affairs.  Guido mentioned `syntactic support for
decorators`_ in his DevDay keynote presentation at the `10th Python
Conference`_, though `he later said`_ it was only one of several
extensions he proposed there "semi-jokingly".  `Michael Hudson raised
the topic`_ on ``python-dev`` shortly after the conference,
attributing the bracketed syntax to an earlier proposal on
``comp.lang.python`` by `Gareth
McCaughan`_.

.. _syntactic support for decorators: http://www.python.org/doc/essays/ppt/python10/py10keynote.pdf
.. _10th python conference: http://www.python.org/workshops/2002-02/
.. _michael hudson raised the topic: http://mail.python.org/pipermail/python-dev/2002-February/020005.html
.. _he later said: http://mail.python.org/pipermail/python-dev/2002-February/020017.html
.. _gareth mccaughan: http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=slrna40k88.2h9o.Gareth.McCaughan%40g.local

Design Goals
============

The new syntax should

*  work for arbitrary wrappers, including user-defined callables and
   the existing builtins ``classmethod()`` and ``staticmethod()``

*  work with multiple wrappers per definition

*  make it obvious what is happening; at the very least it should be 
   obvious that new users can safely ignore it when writing their own 
   code

*  not make future extensions more difficult

*  be easy to type;  programs that use it are expected to use it very 
   frequently

*  not make it more difficult to scan through code quickly.  It should 
   still be easy to search for all definitions, a particular 
   definition, or the arguments that a function accepts

*  not needlessly complicate secondary support tools such as
   language-sensitive editors and other "`toy parser tools out
   there`_"

.. _toy parser tools out there: http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=mailman.1010809396.32158.python-list%40python.org

Proposed Syntax
===============

The currently proposed syntax is::

    def func(arg1, arg2, ...) [dec1, dec2, ...]:
        pass

The decorators are near the declaration of the function's API but are
clearly secondary.  The square brackets make it possible to fairly
easily break long lists of decorators across multiple lines.

Alternate Proposals
===================

A few other syntaxes have been proposed::

    def func(arg1, arg2, ...) as dec1, dec2, ...:
        pass

The absence of brackets makes it cumbersome to break long lists of
decorators across multiple lines.  The keyword "as" doesn't have the
same meaning as its use in the ``import`` statement.

::

    def [dec1, dec2, ...] func(arg1, arg2, ...):
        pass

This form has the disadvantage that the decorators become visually
higher priority than the function name and argument list.

::

    def func [dec1, dec2, ...] (arg1, arg2, ...):
        pass

Quixote's Page Template Language uses this form, but only supports a
single decorator chosen from a restricted set.  For short lists it
works okay, but for long lists it separates the argument list from the
function name.

::

    using:
        dec1
        dec2
        ...
    def foo(arg1, arg2, ...):
        pass

The function definition is not nested within the using: block, making
it impossible to tell which objects following the block will be
decorated.  Nesting the function definition within the using: block
suggests block structure that doesn't exist.  The name ``foo`` would
actually exist in the same scope as the using: block.  Finally, it
would require the introduction of a new keyword.

Current Implementation
======================

Michael Hudson has posted a `patch`_ at Starship, which implements the
proposed syntax and left-first application of decorators::

    def func(arg1, arg2, ...) [dec1, dec2]:
        pass

is equivalent to::

    def func(arg1, arg2, ...):
        pass
    func = dec2(dec1(func))

though without the intermediate creation of a variable named ``func``.

.. _patch: http://starship.python.net/crew/mwh/hacks/meth-syntax-sugar.diff

Examples
========

Much of the discussion on ``comp.lang.python`` and the ``python-dev``
mailing list focuses on the use of the ``staticmethod()`` and
``classmethod()`` builtins.  This capability is much more powerful
than that.  This section presents some examples of use.

1. Define a function to be executed at exit.  Note that the function
   isn't actually "wrapped" in the usual sense.

::

    def onexit(f):
        import atexit
        atexit.register(f)
        return f

    def func() [onexit]:
        ...

2. Define a class with a singleton instance.  Note that once the class
   disappears enterprising programmers would have to be more creative
   to create more instances.  (From Shane Hathaway on ``python-dev``.)

::

    def singleton(cls):
        return cls()

    class MyClass [singleton]:
        ...

3. Decorate a function with release information.  (Based on an example
   posted by Anders Munch on ``python-dev``.)

::

    def release(**kwds):
        def decorate(f):
            for k in kwds:
                setattr(f, k, kwds[k])
            return f
        return decorate

    def classmethod(f) [release(versionadded="2.2",
                                author="Guido van Rossum")]:
        ...

4. Enforce function argument and return types.

::

    def accepts(*types):
        def check_accepts(f):
            def new_f(*args, **kwds):
                for (a, t) in zip(args, types):
                    assert isinstance(a, t), \
                           "arg %r does not match %s" % (a,t)
                return f(*args, **kwds)
            assert len(types) == f.func_code.co_argcount
            return new_f
        return check_accepts

    def returns(rtype):
        def check_returns(f):
            def new_f(*args, **kwds):
                result = f(*args, **kwds)
                assert isinstance(result, rtype), \
                       "return value %r does not match %s" % (result,rtype)
                return result
            return new_f
        return check_returns

    def func(arg1, arg2) [accepts(int, (int,float)),
                          returns((int,float))]:
        return arg1 * arg2

Of course, all these examples are possible today, though without the
syntactic support.

Possible Extensions
===================

The proposed syntax is general enough that it could be used on class
definitions as well::

    class foo(object) [dec1, dec2, ...]:
        class definition here

Use would likely be much less common than for function decorators.
The current patch only implements function decorators.


Copyright
=========

This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
End:


From bob at redivi.com  Tue Mar 23 12:26:21 2004
From: bob at redivi.com (Bob Ippolito)
Date: Tue Mar 23 12:24:43 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <16480.28046.500125.309554@montanaro.dyndns.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
Message-ID: <339FFDCC-7CEF-11D8-B723-000A95686CD8@redivi.com>


On Mar 23, 2004, at 12:02 PM, Skip Montanaro wrote:

> Here's the current state of PEP 318.  I have to take a break from this
> to get some real work done and may not get back to it in a major way
> for a while.  I received a significant rewrite of Kevin Smith's most
> recent version from Jim Jewett and incorporated much of what he wrote,
> modifying a lot of it along the way, but have still not digested
> everything he sent me.
>
> I've tried to reflect the consensus which seems to be emerging on the
> python-dev list, though I suspect I've done a poor job of that.  The
> discussions there and on comp.lang.python have ranged far and wide and
> thus resist summary in a finite amount of time.  I recommend interested
> comp.lang.python readers spend some time in the python-dev archives for
> February and March if they find major fault with the current state of
> the proposal.
>
> If you post corrections or comments to either list I should see them.
>
> Skip Montanaro
>
> ------------------------------------------------------------------------------
> PEP: 318
> Title: Function/Method Decorator Syntax
> Version: $Revision: 1.5 $
> Last-Modified: $Date: 2004/03/23 16:41:17 $
> Author: Kevin D. Smith <Kevin.Smith@theMorgue.org>,
>         Jim Jewett <jimjjewett@users.sourceforge.net>,
>         Skip Montanaro <skip@pobox.com>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 05-Jun-2003
> Python-Version: 2.4
> Post-History: 09-Jun-2003, 10-Jun-2003, 27-Feb-2004, 23-Mar-2004
--snip--
> with an alternative that places the decoration in the function's
> declaration::
>
>     def foo(cls) using [synchronized(lock), classmethod]:
>         pass

Shouldn't the using keyword go away, so it's consistent with the  
current implementation?

> Current Implementation
> ======================
>
> Michael Hudson has posted a `patch`_ at Starship, which implements the
> proposed syntax and left-first application of decorators::
>
>     def func(arg1, arg2, ...) [dec1, dec2]:
>         pass
>
> is equivalent to::
>
>     def func(arg1, arg2, ...):
>         pass
>     func = dec2(dec1(func))
>
> though without the intermediate creation of a variable named ``func``.
>
> .. _patch:  
> http://starship.python.net/crew/mwh/hacks/meth-syntax-sugar.diff

The current implementation is here, as far as I know:
http://starship.python.net/crew/mwh/hacks/meth-syntax-sugar-3.diff

> Possible Extensions
> ===================
>
> The proposed syntax is general enough that it could be used on class
> definitions as well::
>
>     class foo(object) [dec1, dec2, ...]:
>         class definition here
>
> Use would likely be much less than function decorators.  The current
> patch only implements function decorators.

The current patch *does* implement class decorators, with this syntax.

-bob


From jim.jewett at eds.com  Tue Mar 23 12:48:01 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 23 12:49:03 2004
Subject: [Python-Dev] order of decorator application?
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3EE@USAHM010.amer.corp.eds.com>

Skip:

> Is there a consensus on the order of application for decorators? 

I think people will accept either; the question is what counts as
"first".

In
	name = a(b(c(name)))

Is a first because it appears (and is resolved) first, or is c
first because it is applied first?

This also interacts with whether the decorators are before or
after the function name, and with whether we're modelling

	name = a(name)
	name = b(name)
	name = c(name)

or

	name = a(b(c(name)))

-jJ

From guido at python.org  Tue Mar 23 14:02:42 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 23 14:02:48 2004
Subject: [Python-Dev] An issue recently brought up in patch
	#872326(generator expression)
In-Reply-To: Your message of "Mon, 22 Mar 2004 22:54:14 EST."
	<LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net> 
References: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net> 
Message-ID: <200403231902.i2NJ2gL12076@guido.python.org>

> Not sure that's it.  In some sense it's also arbitrary that Python
> decides in
> 
>     def f(x=DEFAULT_X_VALUE):
>         ...
> 
> to capture the binding of DEFAULT_X_VALUE (which is a free vrbl so
> far as the defn of f is concerned) at the time f is defined and
> reuse it on each call; it would have been possible to define the
> language to use whatever binding is current each time f is called.

Either choice is just as arbitrary though.

> Outside of explicitly burying something inside a lambda body,
> there's no precedent in Python for "delaying" evaluation of
> anything in a Python expression either.  So generator expressions
> aren't like any non-lambda expressions in Python either: the time at
> which their guts get evaluated can be arbitrarily far removed from
> the time the statement holding the guts gets executed.

And that's what makes me feel uncomfortable with the binding
capturing.  Another thing that makes me uncomfortable:

When you write

    gens = []
    for var in range(10):
        gens.append(x+var for x in range(10))

the value of var is captured in each generator expression; but when
you write

    gens = []
    for self.var in range(10):
        gens.append(x+self.var for x in range(10))

(yes that is valid syntax!) the value of self is captured, and all
generator expressions will generate the same sequence.  If you object
to the "for self.var in" syntax, I'm sure I can come up with another
example -- the point is that we're not capturing *values*, we're
capturing *bindings*.  But since in most simple examples bindings and
values are synonymous (since in most simple examples all values are
immutable -- we like to use numbers and strings in examples for
simplicity), the difference may elude most folks until they're caught
in the trap -- just as with using a mutable object as a class variable
or default value.

OTOH it's clear that actually capturing values would make things
worse.

All this makes me lean towards getting rid of the binding capture
feature.  That way everybody will get bitten by the late binding fair
and square the first time they try it.
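
To make the trap concrete, here is a sketch of what the no-capture
(late binding) choice gives for the example above, shortened to
range(3) and written with the genexp syntax from the patch; each
generator looks var up only when it is consumed:

    gens = []
    for var in range(3):
        gens.append(x + var for x in range(3))
    # Consumed only after the loop, so every generator sees var == 2:
    print [list(g) for g in gens]    # [[2, 3, 4], [2, 3, 4], [2, 3, 4]]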

> >> now we have some iterators being treated more equally
> >> than others.
> >>
> >> I'm getting an "architecture smell" here. Something is wrong
> >> somewhere, and I don't think we're tinkering in the right place to
> >> fix it properly.
> 
> When scopes and lifetimes get intricate, it's easier to say what you
> mean in Scheme (which is a good argument for not letting scopes and
> lifetimes get intricate <wink>).
> 
> [Guido]
> > I'm not disagreeing -- I was originally vehemently against the
> > idea of capturing free variables, but Tim gave an overwhelming
> > argument that whenever it made a difference that was the desired
> > semantics.
> 
> Na, and that's a big part of the problem we're having here: I didn't
> make an argument, I just trotted out non-insane examples.  They all
> turned out to suck under "delay evaluation" semantics, to such an
> extent that wholly delayed evaluation appeared to be a downright
> foolish choice.  But decisions driven *purely* by use cases can be
> mysterious.  I was hoping to see many more examples, on the chance
> that a clarifying principle would reveal itself.

Maybe your examples were insane after all. :-)

I expect that 99.9% of all use cases for generator expressions use up
all of the generated values before there's a chance of rebinding any
of the variables, like the prototypical example:

    sum(x**2 for x in range(10))

I don't recall Tim's examples, but I find it hard to believe that
there are a lot of use cases for building lists (or other containers)
containing a bunch of generator expressions that are then used up at
some later point, where the author isn't sophisticated enough to deal
with the late binding by explicitly inserting a lambda with a default
variable binding to capture the one variable in need of capture.

Which reminds me, under binding-capturing semantics, most bindings
will be captured unnecessarily -- typically there's only a single
variable whose capture makes a difference, but we can't define a rule
that determines which variable this would be without adding syntax.
(Using the loop control variables of all containing for loops would be
one of the saner rules to try, but even this will sometimes capture
too much and sometimes not enough -- and the compiler can't know in
general what the lifetime of the generator expression will be.)

In summary, I'm strongly leaning towards not capturing *any* bindings,
Tim's examples be damned.

> > But I always assumed that the toplevel iterable would be different.
> 
> Jiwon's example certainly suggests that it must be.  But why <0.7 wink>?

Because (as someone already explained) it's independent of the iteration.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From bob at redivi.com  Tue Mar 23 14:15:14 2004
From: bob at redivi.com (Bob Ippolito)
Date: Tue Mar 23 14:11:59 2004
Subject: [Python-Dev] order of decorator application?
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D3EE@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D3EE@USAHM010.amer.corp.eds.com>
Message-ID: <6A0A28A4-7CFE-11D8-B723-000A95686CD8@redivi.com>


On Mar 23, 2004, at 12:48 PM, Jewett, Jim J wrote:

> Skip:
>
>> Is there a consensus on the order of application for decorators?
>
> I think people will accept either; the question is what counts as
> "first".
>
> In
> 	name = a(b(c(name)))
>
> Is a first because it appears (and is resolved) first, or is c
> first because it is applied first?
>
> This also interacts with whether the decorators are before or
> after the function name, and with whether we're modelling
>
> 	name = a(name)
> 	name = b(name)
> 	name = c(name)
>
> or
>
> 	name = a(b(c(name)))

I think that the majority opinion is that they should be evaluated left 
to right, which sounds sensible to me because that's how everything 
else works in Python until you start using lots of parentheses.  I 
would probably "accept" either, but I would be very disappointed if it 
wasn't left-to-right.

Either way, if you had a real strong opinion about one direction over 
the other, you could write a function that takes a list of functions 
and applies them in the order you would like them applied, eliminating 
any ambiguity.  The most obvious function to do this would probably 
look something like:

# warning: untested, I don't care if it actually works, it's close enough
def left_to_right(*args):
    def left_to_right(fn):
        return reduce(lambda a, b: b(a), args, fn)
    return left_to_right

def decorated()[left_to_right(a, b, c, d)]:
    pass

-bob


From guido at python.org  Tue Mar 23 14:12:30 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 23 14:12:38 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: Your message of "Tue, 23 Mar 2004 11:02:06 CST."
	<16480.28046.500125.309554@montanaro.dyndns.org> 
References: <16480.28046.500125.309554@montanaro.dyndns.org> 
Message-ID: <200403231912.i2NJCUt12117@guido.python.org>

>     def func [dec1, dec2, ...] (arg1, arg2, ...):
>         pass
> 
> Quixote's Page Template Language uses this form, but only supports a
> single decorator chosen from a restricted set.  For short lists it
> works okay, but for long lists it separates the argument list from the
> function name.

OTOH, for long argument lists, the "preferred" form has the reverse
problem: the decorators are separated from the function name by the
long argument list.

The question then becomes, what is more likely: long argument lists or
long lists of decorators?  I *know* that (at least in certain code
bases :) long argument lists are common.  Will long lists of
decorators ever become popular?  I think we'll have to look at
languages that already have decorators (especially C#) to see what the
future might bring.  I'll be talking to folks at PyCon about this.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Mar 23 14:38:29 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 23 14:39:32 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403231912.i2NJCUt12117@guido.python.org>
References: <Your message of "Tue, 23 Mar 2004 11:02:06 CST."
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<16480.28046.500125.309554@montanaro.dyndns.org>
Message-ID: <5.1.1.6.0.20040323142023.02590080@telecommunity.com>

At 11:12 AM 3/23/04 -0800, Guido van Rossum wrote:
> >     def func [dec1, dec2, ...] (arg1, arg2, ...):
> >         pass
> >
> > Quixote's Page Template Language uses this form, but only supports a
> > single decorator chosen from a restricted set.  For short lists it
> > works okay, but for long lists it separates the argument list from the
> > function name.
>
>OTOH, for long argument lists, the "preferred" form has the reverse
>problem: the decorators are separated from the function name by the
>long argument list.
>
>The question then becomes, what is more likely: long argument lists or
>long lists of decorators?  I *know* that (at least in certain code
>bases :) long argument lists are common.  Will long lists of
>decorators ever become popular?

That depends on how many chainable decorators people come up with.  Most of 
the built-in decorators (classmethod, staticmethod, property) are not 
chainable in any reasonable sense, as their output is a descriptor.

In PEAK I have a couple of other function->descriptor decorators, one 
descriptor->descriptor decorator, and a handful of function->function 
decorators.  Looking at the ones I have now, the longest chain I could make 
would probably be something like:

     def something(self) [
         events.taskFactory, storage.autocommit, binding.Make,
         binding.classAttr
     ]:
         pass

But that's rather silly, because I've basically said I want an asynchronous 
pseudothread to be started inside a transaction that will immediately 
commit (before the pseudothread has even done anything), and oh yeah, it
should happen only once per subclass of this class, when you first access 
the 'something' attribute on the class (as distinct from its 
instances).  Whew!  Anyway, I think it shows that a long decorator chain is 
no worse on the eyes than a long argument list.

I think that initially, the most common decorator chains will be of length 
1, with occasional "gusts" of 2 to 3.  The most likely cause of longer 
chains will be people using contract decorators, since those are probably 
the most-chainable of decorator types.  As time goes on and people come up 
with more ways to use them, the overall average length will probably creep 
upward, but I think the median will tend to stay hovering somewhere between 
1 and 3.


From theller at python.net  Tue Mar 23 14:41:38 2004
From: theller at python.net (Thomas Heller)
Date: Tue Mar 23 14:41:44 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403231912.i2NJCUt12117@guido.python.org> (Guido van
	Rossum's message of "Tue, 23 Mar 2004 11:12:30 -0800")
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403231912.i2NJCUt12117@guido.python.org>
Message-ID: <smfzfj0t.fsf@python.net>

Guido van Rossum <guido@python.org> writes:

>>     def func [dec1, dec2, ...] (arg1, arg2, ...):
>>         pass
>> 
>> Quixote's Page Template Language uses this form, but only supports a
>> single decorator chosen from a restricted set.  For short lists it
>> works okay, but for long lists it separates the argument list from the
>> function name.
>
> OTOH, for long argument lists, the "preferred" form has the reverse
> problem: the decorators are separated from the function name by the
> long argument list.
>
> The question then becomes, what is more likely: long argument lists or
> long lists of decorators?  I *know* that (at least in certain code
> bases :) long argument lists are common.  Will long lists of
> decorators ever become popular?

I don't think so, although even a single decorator may be a function
taking quite a few arguments.

Thomas


From python at rcn.com  Tue Mar 23 15:00:09 2004
From: python at rcn.com (Raymond Hettinger)
Date: Tue Mar 23 15:02:52 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <200403231902.i2NJ2gL12076@guido.python.org>
Message-ID: <00a501c41111$72472d40$4dbc958d@oemcomputer>

> All this makes me lean towards getting rid of the binding capture
> feature.  That way everybody will get bitten by the late binding fair
> and square the first time they try it.

I prefer this approach over one that has subtleties and nuances.


Raymond


From jack at performancedrivers.com  Tue Mar 23 15:36:35 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Tue Mar 23 15:36:39 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <00a501c41111$72472d40$4dbc958d@oemcomputer>
References: <200403231902.i2NJ2gL12076@guido.python.org>
	<00a501c41111$72472d40$4dbc958d@oemcomputer>
Message-ID: <20040323203635.GD25606@performancedrivers.com>

On Tue, Mar 23, 2004 at 03:00:09PM -0500, Raymond Hettinger wrote:
> 
> I prefer this approach over one that has subtleties and nuances.
> 
+1 in general, +1 QOTW in particular.


From tim.one at comcast.net  Tue Mar 23 15:39:57 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 15:39:59 2004
Subject: [Python-Dev] An issue recently brought up
	inpatch#872326(generator expression)
In-Reply-To: <00a501c41111$72472d40$4dbc958d@oemcomputer>
Message-ID: <LNBBLJKPBEHFEDALKOLCAELOJLAB.tim.one@comcast.net>

[Guido]
>> All this makes me lean towards getting rid of the binding capture
>> feature.  That way everybody will get bitten by the late binding fair
>> and square the first time they try it.

[Raymond]
> I prefer this approach over one that has subtleties and nuances.

I'm afraid it's not that easy.  There's no precedent in Python for delaying
evaluation of anything in an expression without an explicit lambda, so late
binding doesn't escape "subtleties and nuances" either -- to the contrary,
in some natural senses late binding is maximally surprising in a Python
expression.

http://mail.python.org/pipermail/python-dev/2003-October/039323.html

spells out the subtle bugs that follow from late binding of either "p" or
"pipe" in this example (which I still think "is-- or should be --an easy
application for pipelining generator expressions"):

    pipe = source
    for p in predicates:
        # add a filter over the current pipe, and call that the new pipe
        pipe = e for e in pipe if p(e)

I suppose the guts could be written:

  pipe = (e for e in (lambda pipe=pipe: pipe)() if (lambda p=p: p)()(e))

That's the ticket -- nothing subtle or nuanced about that <wink>.  I'm not
sure enough of the intended late-binding semantics to guess whether

  pipe = (e for e in (lambda pipe=pipe: pipe)() if (lambda p=p: p(e)))

would work the same way, but figure it would.

OTOH, I'm tempted to say I don't really care if generator expressions are
prone to disaster when they get non-trivial.  That would be a self-serving
but anti-social thing to say.


From jim.jewett at eds.com  Tue Mar 23 15:47:48 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 23 15:48:07 2004
Subject: [Python-Dev] PEP 318 
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D3F2@USAHM010.amer.corp.eds.com>

PEP-318 (Skip):

>>     def func [dec1, dec2, ...] (arg1, arg2, ...):
>>         pass

>> ... For short lists it works okay, but for long lists it separates
>> the argument list from the function name.

Guido van Rossum:

> OTOH, for long argument lists, the "preferred" form has the reverse
> problem: the decorators are separated from the function name by the
> long argument list.

I know it is common to look for "what would I call this with".  
I suppose staticmethod and classmethod are important because they 
change that calling signature.

Are other decorators expected to be something people would scan for?  
If not, then having them at the top of the function (so you see them 
if you're reading the whole function) is probably enough.

> ... what is more likely: long argument lists or long lists of 
> decorators?  I *know* that (at least in certain code bases :)
> long argument lists are common. 

I suspect those same code bases will pass long argument lists to 
several of the individual decorators.  Anders Munch's original 
release example used more than two keywords.  I believe the following 
example is just as reasonable as a long argument list.

Before the def, using an optional second header clause on the 
funcdef production:

    using:
        release(version="1.0",
                author="me",
                status="Well I wrote it so it works, m'kay?",
                warning="You might want to use the Foo class instead")
        provides_API_myweb
        synchronized(lock[23])
    def foo(arg1, arg2):
        pass

In the middle:

    def foo as [release(version="1.0",
                        author="me",
                        status="Well I wrote it so it works, m'kay?",
                  # Line length becomes an issue
                  warning="You might want to use the Foo class instead"),
                provides_API_myweb,
                synchronized(lock[23])] (arg1, arg2):
        pass

At the end:

    def foo(arg1, arg2) as [release(version="1.0",
                                    author="me",
                  # Line length becomes an issue
                  status="Well I wrote it so it works, m'kay?",
                  warning="You might want to use the Foo class instead"),
                            provides_API_myweb,
                            synchronized(lock[23])]:
        pass

From guido at python.org  Tue Mar 23 15:54:39 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 23 15:54:46 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: Your message of "Tue, 23 Mar 2004 15:00:09 EST."
	<00a501c41111$72472d40$4dbc958d@oemcomputer> 
References: <00a501c41111$72472d40$4dbc958d@oemcomputer> 
Message-ID: <200403232054.i2NKsd912566@guido.python.org>

> > All this makes me lean towards getting rid of the binding capture
> > feature.  That way everybody will get bitten by the late binding fair
> > and square the first time they try it.
> 
> I prefer this approach over one that has subtleties and nuances.

I was partly inspired to this position by reading a draft of Paul
Graham's new book, Hackers and Painters (which will include last
year's PyCon keynote on the 100-year language).  In one of his many
criticisms of Common Lisp (not his favorite Lisp dialect :), Paul
complains that hygienic macros are designed to take away the power
and sharp edges, but that for him the attraction of Lisp is precisely
in that power.

I think that in this particular case, late binding, with its sharp
edges, gives more power than binding capturing, and the latter, in its
attempts to smoothen the sharp edges, also takes away some power.

One way of deciding which option gives more power would be to see how
easy it is to implement the other option on top of it.  Implementing
binding capture on top of late binding is a solved problem in Python
(albeit not particularly elegant).  I have no idea how to implement
late binding under binding-capture; one would probably have to use
locals() or store the to-be-bound-late variables in a container.
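
(The solved-but-inelegant direction is the familiar default-argument
trick -- a default value is evaluated at definition time, so it freezes
the binding; roughly:)

    fs = []
    for i in range(3):
        fs.append(lambda x, i=i: x + i)   # i's current value is captured here
    print [f(10) for f in fs]             # [10, 11, 12]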

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pedronis at bluewin.ch  Tue Mar 23 16:15:27 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Tue Mar 23 16:10:39 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <200403232054.i2NKsd912566@guido.python.org>
References: <Your message of "Tue, 23 Mar 2004 15:00:09 EST."
	<00a501c41111$72472d40$4dbc958d@oemcomputer>
	<00a501c41111$72472d40$4dbc958d@oemcomputer>
Message-ID: <5.2.1.1.0.20040323221316.032f0de8@pop.bluewin.ch>

At 12:54 23.03.2004 -0800, Guido van Rossum wrote:
> > > All this makes me lean towards getting rid of the binding capture
> > > feature.  That way everybody will get bitten by the late binding fair
> > > and square the first time they try it.
> >
> > I prefer this approach over one that has subtleties and nuances.
>
>I was partly inspired to this position by reading a draft of Paul
>Graham's new book, Hackers and Painters (which will include last
>year's PyCon keynote on the 100-year language).  In one of his many
>criticisms of Common Lisp (not his favorite Lisp dialect :), Paul
>complains that hygienic macros are designed to take away the power
>and sharp edges, but that for him the attraction of Lisp is precisely
>in that power.

hmm, I'm confused: CL has non-hygienic macros, although you can work around
that using the package system or gensym; Scheme has hygienic macros.

the current list comprehension in Python e.g. behaves like an unhygienic macro:

x = 3
l= [ l*2 for x in l]
# x is not 3 anymore. 


From jeremy at alum.mit.edu  Tue Mar 23 16:30:34 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 23 16:31:45 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <5.2.1.1.0.20040323221316.032f0de8@pop.bluewin.ch>
References: <Your message of "Tue,
	23 Mar 2004 15:00:09 EST." <00a501c41111$72472d40$4dbc958d@oemcomputer>
	<00a501c41111$72472d40$4dbc958d@oemcomputer>
	<5.2.1.1.0.20040323221316.032f0de8@pop.bluewin.ch>
Message-ID: <1080077434.31191.130.camel@localhost.localdomain>

On Tue, 2004-03-23 at 16:15, Samuele Pedroni wrote:
> At 12:54 23.03.2004 -0800, Guido van Rossum wrote:
> > > > All this makes me lean towards getting rid of the binding capture
> > > > feature.  That way everybody will get bitten by the late binding fair
> > > > and square the first time they try it.
> > >
> > > I prefer this approach over one that has subtleties and nuances.
> >
> >I was partly inspired to this position by reading a draft of Paul
> >Graham's new book, Hackers and Painters (which will include last
> >year's PyCon keynote on the 100-year language).  In one of his many
> >criticisms of Common Lisp (not his favorite Lisp dialect :), Paul
> >complains that hygienic macros are designed to take away the power
> >and sharp edges, but that for him the attraction of Lisp is precisely
> >in that power.
> 
> hmm, I'm confused: CL has non-hygienic macros, although you can work around
> that using the package system or gensym; Scheme has hygienic macros.

I believe Paul is definitely a fan of CL over Scheme.  Academic: "Paul,
that macro isn't hygienic."  Paul: "So tell my mother."

> the current list comprehension in Python e.g. behaves like an unhygienic macro:
> 
> x = 3
> l= [ l*2 for x in l]
> # x is not 3 anymore. 

I guess I prefer Scheme, because I'd like to see this fixed ;-).  I
don't think it's much like a macro, though.  The expanded code is
visible at the invocation site.

Jeremy



From shane at zope.com  Tue Mar 23 17:18:37 2004
From: shane at zope.com (Shane Hathaway)
Date: Tue Mar 23 17:18:57 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHJJLAB.tim.one@comcast.net>
References: <000001c40e45$8591e580$6402a8c0@arkdesktop>
	<LNBBLJKPBEHFEDALKOLCIEHJJLAB.tim.one@comcast.net>
Message-ID: <4060B7BD.3060303@zope.com>

Tim Peters wrote:
> [Andrew Koenig]
> 
>>... and I wouldn't mind it if there were a way of testing for
>>substitutability that were as easy to spell as "is" -- in fact, I
>>wouldn't mind if it were spelled "is" even though I realize it's
>>probably impractical to do so.
> 
> 
> It's definitely impractical to do so in the 2.3 line.  Looking beyond that,
> I'm not sure we have real use cases for substitutability.  It seemed to boil
> down to:
> 
>     x substitutable y =
>         x is y  if x is mutable, or a mutable object is reachable
>                 from x, or y is mutable, or a mutable object is
>                 reachable from y
>         x == y  otherwise (x and y are immutable, and only immutable
>                 objects are reachable from them)

As I understood it, 'b' and 'c' are supposed to be substitutable in the 
following example, since 'b' and 'c' will have the same value even after 
'a' changes:

a = []
b = (a,)
c = (a,)

Shane

From raymond.hettinger at verizon.net  Tue Mar 23 18:55:55 2004
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Tue Mar 23 18:58:11 2004
Subject: [Python-Dev] Timing for Py2.4
Message-ID: <001e01c41132$62275b80$063ec797@oemcomputer>

The PEP for Py2.4 got updated and the dates were pushed back to
September.  Is there any reason that we couldn't get a first alpha out
in early June or late May? 
 
Short of generator expressions, Py2.4 is already in good shape for an
alpha.  I expect generator expressions to be hammered out shortly and
will soon be able to devote my full energy to wrapping it up. The
statistics and Decimal modules are almost ready for prime time. Unless
someone has a specific summer plan to work on another major feature
(perhaps attribute lookup), I do not expect any significant developments
May through September that would warrant holding up 2.4.
 
 
Raymond
From ark at acm.org  Tue Mar 23 20:45:03 2004
From: ark at acm.org (Andrew Koenig)
Date: Tue Mar 23 20:44:37 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <4060B7BD.3060303@zope.com>
Message-ID: <003101c41141$a219c200$39414d0c@ARKLAPTOP>

> As I understood it, 'b' and 'c' are supposed to be substitutable in the 
> following example, since 'b' and 'c' will have the same value even after 
> 'a' changes:

> a = []
> b = (a,)
> c = (a,)

Correct.


From anthony at interlink.com.au  Tue Mar 23 20:45:25 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Tue Mar 23 20:45:37 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <001e01c41132$62275b80$063ec797@oemcomputer>
References: <001e01c41132$62275b80$063ec797@oemcomputer>
Message-ID: <4060E835.50805@interlink.com.au>

Raymond Hettinger wrote:
> The PEP for Py2.4 got updated and the dates were pushed back to 
> September.  Is there any reason that we couldn't get a first alpha out 
> in early June or late May?

I _really_ want to give ast-compiler and generator expressions time to
bake. genexps in particular have the potential to have all sorts of
nasty corner cases.

Once we release 2.4.0, changing genexps to fix any previously-
unforeseen uglies is going to be much, much harder.

Anthony


-- 
Anthony Baxter     <anthony@interlink.com.au>
It's never too late to have a happy childhood.

From allison at sumeru.stanford.EDU  Tue Mar 23 21:01:05 2004
From: allison at sumeru.stanford.EDU (Dennis Allison)
Date: Tue Mar 23 21:01:14 2004
Subject: [Python-Dev] Threading, NPTL, CPU affinity, Linux 2.6, etc.
Message-ID: <Pine.LNX.4.10.10403231746590.3069-100000@sumeru.stanford.EDU>

I'm not sure if this is the right forum at this point for these issues.
I've been lurking on the list for a while now and don't remember any
traffic directed at these issues.

With NPTL the threading model changes.  It's mostly a Linux 2.6 issue.
What changes, if any, are needed to make Python stable in an NPTL
environment, and what changes might we want in order to take advantage
of the improved performance and new features?

Along the same lines, some kernels now support CPU affinity.  It's a
low-level scheduling issue, but necessary to reliably avoid problems
with the GIL.  Do we want to expose some of the controls in Python?  How
do these controls interact with processes and threads?

	-dra



From tim.one at comcast.net  Tue Mar 23 21:10:44 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 21:10:46 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <4060B7BD.3060303@zope.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEENPJLAB.tim.one@comcast.net>

[Shane Hathaway]
> As I understood it, 'b' and 'c' are supposed to be substitutable in
> the following example, since 'b' and 'c' will have the same value
> even after 'a' changes:
>
> a = []
> b = (a,)
> c = (a,)

Fair enough.  Do interesting/valuable use cases appear then?

From tim.one at comcast.net  Tue Mar 23 21:48:24 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 21:48:24 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <20040320134507.GB2320@titan.progiciels-bpi.ca>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOFJLAB.tim.one@comcast.net>

[François Pinard]
> There is question which traversed my mind recently.  While I would
> never use `is' between ordinary strings, I might be tempted to use
> `is' between explicitly interned strings, under the hypothesis that
> for example:
>
>     a = intern('a b')
>     b = intern('a b')
>     a is b
>
> dependably prints `True'.  However, I remember a thread in this forum
> by which strings might be un-interned on the fly -- but do not
> remember the details; I presume it was for strings which the garbage
> collector may not reach anymore.

Right, interned strings are no longer immortal (they used to be, but that
changed).  That can't hurt the kind of case you show above, though -- so
long as either a or b remains bound to its interned string, that string is
reachable and so won't be reclaimed by the garbage collector.

The kind of thing that can fail now is so obscure I doubt anyone was doing
it:  relying on the id() of an interned string remaining valid forever.
Like doing

    address_of_a = id(intern('a b'))

and then later assuming that

    id(some_string) == address_of_a

if and only if some_string is the interned string 'a b'.  That was
"reliable" when interned strings were immortal, but not anymore.  For
example (and it may or may not work anything like this under your Python
installation -- depends on internal vagaries):

>>> id(intern('a b'))
6973920
>>> id(intern('c d'))
6973824
>>> id(intern('e f'))
6973760
>>> id(intern('g h')) # it turns out this one is a repeat of the first
6973920
>>>

> There are a few real-life cases where speed considerations would
> invite programmers to use `is' over `==' for strings, given they all
> get interned to start with so the speed-up could be gleaned later.
> The fact that `is' exposes the implementation is quite welcome in
> such cases.

Sure, and I've done that myself in several programs too.  I like "is"!
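
A tiny sketch of that pattern -- intern once, then use 'is' (an
identity test) where '==' (a character scan) would otherwise be needed:

    tokens = [intern(t) for t in ['if', 'x', '==', 'if', 'y']]
    KW_IF = intern('if')

    count = 0
    for t in tokens:
        if t is KW_IF:       # identity test; no character comparison
            count += 1
    print count              # 2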


From tim.one at comcast.net  Tue Mar 23 22:10:18 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 22:10:35 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4060E835.50805@interlink.com.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEOGJLAB.tim.one@comcast.net>

[Raymond Hettinger]
>> The PEP for Py2.4 got updated and the dates were pushed back to
>> September.  Is there any reason that we couldn't get a first alpha
>> out in early June or late May?

[Anthony Baxter]
> I _really_ want to give ast-compiler and generator expressions time to
> bake. genexps in particular have the potential to have all sorts of
> nasty corner cases.
>
> Once we release 2.4.0, changing genexps to fix any previously-
> unforeseen uglies is going to be much, much harder.

The ast-branch compiler is likely to have its share of subtle glitches.
They may be more serious, since it looks like "the plan" for genexps now is
to say that genexps work -- whatever they happen to do <wink>.

Raymond, the 2.4 PEP was updated after a 2.4 planning meeting during the
PyCon sprints on Sunday.  There was plenty of debate there.


From guido at python.org  Tue Mar 23 22:11:21 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 23 22:11:35 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: Your message of "Tue, 23 Mar 2004 18:55:55 EST."
	<001e01c41132$62275b80$063ec797@oemcomputer> 
References: <001e01c41132$62275b80$063ec797@oemcomputer> 
Message-ID: <200403240311.i2O3BLI13254@guido.python.org>

> The PEP for Py2.4 got updated and the dates were pushed back to
> September.  Is there any reason that we couldn't get a first alpha out
> in early June or late May? 
>  
> Short of generator expressions, Py2.4 is already in good shape for an
> alpha.  I expect generator expressions to be hammered out shortly and
> will soon be able to devote my full energy to wrapping it up. The
> statistics and Decimal modules are almost ready for prime time. Unless
> someone has a specific summer plan to work on another major feature
> (perhaps attribute lookup), I do not expect any significant developments
> May through September that would warrant holding up 2.4.

I was actually hoping to also get decorators in, but I haven't decided
on the syntax yet.  It's between before or after the argument list.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Tue Mar 23 22:20:14 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar 23 22:20:41 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <16480.28046.500125.309554@montanaro.dyndns.org>
Message-ID: <200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>

> 2. Define a class with a singleton instance.
> 
>     class MyClass [singleton]:
>         ...

This example appears before you've mentioned the possibility
of applying this to classes. Maybe the "Possible Extensions"
section should appear earlier?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From jepler at unpythonic.net  Tue Mar 23 22:21:37 2004
From: jepler at unpythonic.net (Jeff Epler)
Date: Tue Mar 23 22:22:02 2004
Subject: [Python-Dev] Threading, NPTL, CPU affinity, Linux 2.6, etc.
In-Reply-To: <Pine.LNX.4.10.10403231746590.3069-100000@sumeru.stanford.EDU>
References: <Pine.LNX.4.10.10403231746590.3069-100000@sumeru.stanford.EDU>
Message-ID: <20040324032137.GA11833@unpythonic.net>

I think the answer is: who cares?

Python is programmed to the pthreads API on Linux, so as long as NPTL
isn't a buggy implementation of the standard, Python will run there.
(well, Python bugs are possible too, but rarely observed in real life)

Python has never IMO been interested in getting absolute best speed in
threading situations, favoring simplicity (and portability to non-linux,
non-pthreads environments) and a small thread API.  There's no reason an
extension couldn't be added for systems using pthreads or NPTL to tweak
these parameters, but I'd expect this to start in a third party library
somewhere (I dunno, maybe zope or numarray) and move to the Python
standard library later or never.

And then there's the whole Twisted cult, who will cut your heart out
with an event-driven spoon if they hear you've been using threads...

Jeff

From tim.one at comcast.net  Tue Mar 23 22:45:38 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 23 22:45:40 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <200403240311.i2O3BLI13254@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOJJLAB.tim.one@comcast.net>

[Guido]
> I was actually hoping to also get decorators in,

FYI, the 2.4 planning mtg assumed that.  I channeled you thusly:  "Guido
really wants to get them in, but isn't in love with a syntax yet."

> but I haven't decided on the syntax yet.

Command-line option <wink>.

> It's between before or after the argument list.

Or *is* the argument list.  The arguments will be bound to new local name
__args__, which will get magically bound to an instance of a subclass of
collections.deque, for efficient shifting of args off both ends.  We need a
subclass to allow efficent access by argument name too.

Argument names will be extracted from optional decorators whose names begin
with "__arg_":

    def sum2(staticmethod, __arg_x, __arg_y):
        return x+y
        # or
        return sum(__args__)
        # or
        return __args__.byname("x") + __args__[-1]
        # or
        return sum(__args__.pop() for dummy in __args__)

it's-easier-when-you-have-one-to-reject-without-regret-ly y'rs  - tim


From ark at acm.org  Tue Mar 23 22:47:24 2004
From: ark at acm.org (Andrew Koenig)
Date: Tue Mar 23 22:46:57 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEENPJLAB.tim.one@comcast.net>
Message-ID: <000001c41152$baca5830$29714d0c@ARKLAPTOP>

> [Shane Hathaway]
> As I understood it, 'b' and 'c' are supposed to be substitutable in
> the following example, since 'b' and 'c' will have the same value
> even after 'a' changes:
>
> a = []
> b = (a,)
> c = (a,)

> [Tim Peters]
> Fair enough.  Do interesting/valuable use cases appear then?

Depends on what you consider interesting or valuable :-)

If I understand things correctly, if I evaluate

	x = (3, 4)
	y = (3, 4)

the implementation is permitted to bind x and y to the same object, or not,
as it pleases.  If you like, all instances of (3, 4) are equivalent and the
implementation is permitted to choose any representative of that equivalence
class.

I think the interesting use case is whenever one wants to test for
membership in that equivalence class.

Now, you will probably say that == does the trick: if x is a member of the
equivalence class of (3, 4), then x == y is True if and only if y is also a
member of that equivalence class.

However, consider this:

	a = []
	x = (3, a)
	y = (3, a)

I claim that x and y are *still* members of an equivalence class in exactly
the same sense that they were members before -- aside from checking the
identity of x and y, there is no way of distinguishing them.

However, the test x == y no longer determines whether x and y are members of
the same equivalence class.  Indeed, there is no built-in test that will
determine whether x and y are in the same equivalence class.

Finally:

	x = [3, 4]
	y = [3, 4]

Now x and y are *not* in the same equivalence class, because changing one of
them doesn't change the other.  Nevertheless, x == y yields True.


So what I think is the interesting use case is this:  You evaluate two
expressions, for which the implementation is permitted to return the same
object twice or two distinct members of the same equivalence class, or
perhaps two distinct objects.  How can one determine straightforwardly
whether those objects are members of the same equivalence class?

If the objects' value is (3, 4), you can use ==.

If the objects' value is (3, a), there's no really easy way to do it.

If the objects' value is [3, 4], you can use 'is'.

I think it would be useful to have an operation that is as easy to use as
'is' that would make the determination. 
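
A rough sketch of what such an operation might look like, assuming
'immutable' is approximated by a small whitelist of scalar types plus
tuples (anything else is treated as mutable, so identity is required):

    def substitutable(x, y):
        if x is y:
            return True
        if type(x) is not type(y):
            return False
        # Immutable scalars: equal values are interchangeable.
        if isinstance(x, (int, long, float, complex, str, unicode)):
            return x == y
        # Immutable containers: interchangeable iff all their parts are.
        if isinstance(x, tuple):
            if len(x) != len(y):
                return False
            for a, b in zip(x, y):
                if not substitutable(a, b):
                    return False
            return True
        # Anything else is treated as mutable: identity is required.
        return False

    a = []
    print substitutable((3, a), (3, a))   # True
    print substitutable([3, 4], [3, 4])   # False
    print substitutable((3, 4), (3, 4))   # True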


From barry at python.org  Tue Mar 23 23:00:12 2004
From: barry at python.org (Barry Warsaw)
Date: Tue Mar 23 23:00:18 2004
Subject: [Python-Dev] PEP 292 - simpler string substitutions
Message-ID: <1080100790.32228.26.camel@geddy.wooz.org>

PEP 292 describes simpler string substitutions, i.e. $-strings.  PEP 320
says that this is going into Python 2.4.  I've had some code laying
around that implemented $-strings as a subclass of unicode instead of
what PEP 292 describes as a new string method.  The advantage to this
approach is that the feature is provided with no language changes, or
changes to the existing string classes.
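
A rough sketch of the unicode-subclass idea (not the actual
stringlib.dstring from the patch; the method name and the exact
$-escape rules here are guesses):

    import re

    class dstring(unicode):
        _pattern = re.compile(r'\$(?:\{(\w+)\}|(\w+))')

        def substitute(self, mapping):
            def convert(match):
                name = match.group(1) or match.group(2)
                return unicode(mapping[name])
            return self._pattern.sub(convert, self)

    s = dstring(u'Hello $name, welcome to ${place}!')
    print s.substitute({'name': 'Barry', 'place': 'PyCon'})
    # prints: Hello Barry, welcome to PyCon!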

I've cleaned up my code and posted a patch to SF.  I've also checked in
an updated PEP 292 that fits my implementation more closely.  Please
review the PEP and the patch:

http://www.python.org/peps/pep-0292.html

The reference implementation in the patch adds a couple of extra things
that aren't part of the PEP, but were discussed at Monday's sprint. 
Specifically:

- a top level package called 'stringlib' is added to the standard
library.

- the dstring class implementing PEP 292 is available through
stringlib.dstring

- some handy dict subclasses that work with the dstring class are also
provided.

The idea was that some of the constants and functions we want to keep
from the string module would go in the stringlib package.  That's not
essential to the PEP of course.

If this stuff is approved for Python 2.4, I'll add test cases and
documentation.

-Barry



From greg at cosc.canterbury.ac.nz  Wed Mar 24 00:37:56 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 24 00:38:04 2004
Subject: [Python-Dev] Possible resolution of generator expression variable
	capture dilemma
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>
Message-ID: <200403240537.i2O5buOM018356@cosc353.cosc.canterbury.ac.nz>

Tim Peters:

> When scopes and lifetimes get intricate, it's easier to say what you
> mean in Scheme

Interesting you should mention that. Recently in c.l.py, there was a
discussion about the semantics of for-loops and nested scopes in which
someone claimed that Scheme's scoping works differently from Python's.

It doesn't, really. What's different is that Scheme's equivalent of a
for-loop creates a new binding for the loop variable on each
iteration. This tends to result in fewer late-binding surprises,
because you get the effect of variable capture -- but due to the
semantics of the for-loop, and other common constructs which introduce
new bindings, rather than anything that goes on inside them.

Let's consider Tim's famous example:

    pipe = source
    for p in predicates:
        # add a filter over the current pipe, and call that the new pipe
        pipe = e for e in pipe if p(e)

Originally it was claimed that the values of both 'p' and 'pipe' need
to be captured for this to do what is intended. However, if the
outermost iterator is to be pre-evaluated, that takes care of 'pipe'.

Now, if Python's for-loop were to create a new binding for the loop
variable on each iteration, as in Scheme, then that would take care of
'p' as well.

So, I propose that the semantics of for-loop variables be changed to
do exactly that (and no more than that).

I'm well aware that making the loop variable local to the loop has
been suggested several times before, and (at least before Python 3.0)
rejected.

However, I'm suggesting something different. I'm *not* proposing to
make it *local* to the loop -- its name will still reside in the
enclosing namespace, and its value will still be available after the
loop finishes. In fact, if it's not referenced from any nested scope,
there will be no change in semantics at all.

What *will* change is perhaps best explained by means of the
implementation, which is very simple. If the loop variable is
referenced from a nested scope, it will be held in a cell. Now, on
each iteration, instead of replacing the contents of the cell as a
normal assignment would, we create a *new* cell and re-bind the name
to the new cell.

That's all.
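
To make the effect concrete, here is the classic closure-in-a-loop example
(the second result assumes the proposed fresh-cell-per-iteration semantics):

    def make():
        fs = []
        for i in range(3):
            fs.append(lambda: i)   # each lambda closes over the cell holding i
        return [f() for f in fs]

    print make()   # today: [2, 2, 2]
                   # with a fresh cell per iteration: [0, 1, 2]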

An advantage of this approach is that *all* forms of nested scope
(lambda, def, etc.) would benefit, not just generator expressions. I
suspect it would eradicate most of the few remaining uses for the
default-argument hack, for instance (which nested scopes were supposed
to do, but didn't).

Is there a list of Tim's wacky examples anywhere, so we can check how
many of them this would solve?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 24 00:42:58 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 24 00:43:08 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <200403232054.i2NKsd912566@guido.python.org>
Message-ID: <200403240542.i2O5gwqE018363@cosc353.cosc.canterbury.ac.nz>

Guido van Rossum <guido@python.org>:

> I think that in this particular case, late binding, with its sharp
> edges, gives more power than binding capturing, and the latter, in its
> attempts to smoothen the sharp edges, also takes away some power.

Also, it only takes away *some* of the sharp edges, and
rather arbitrarily chosen ones at that, so you still have
to be on the lookout for sharp edges whenever you use it.
Rather like having a sword that's only sharp on two edges
instead of three...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From mollitor at earthlink.net  Wed Mar 24 00:46:37 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Wed Mar 24 00:45:20 2004
Subject: [Python-Dev] generator expression syntax
Message-ID: <9DADA9CC-7D56-11D8-B3B6-000393100E1A@earthlink.net>


Hi,

If I may, I would like to make a comment about the generator expression 
syntax.

The idea of generator expressions seems good, but the proposed syntax
seems a little wrong to me.

First, the syntax is too dependent on the parentheses.  To my mind, 
this is a fourth
meaning for parentheses.  (1) Parentheses group and associate 
expressions:
"(a * (b + c))", "if (a and b)".  (2) Parentheses construct tuples: 
"(1, 2, 3)", "()", "('a',)".
(3) Parentheses enclose argument lists (arguably a special case of 
tuple-constructor):
"def f(a, b, c)", "obj.dump()", "class C (A, B)".  And now (4*) 
generator expressions:
"(x*x for x in list)".  I realize that in some sense the parentheses 
are not part of the
expression syntax (since we wouldn't require "sum((x * x for x in 
list))"), but they are
mandatory nonetheless because you can't have "a = x*x for x in list".  
This seems
like it's stretching a familiar construct too far.

Second, it looks like a "tuple comprehension".  The list comprehension 
construct
yields a list.  A generator expression looks like it should yield a 
tuple, but it doesn't.
In fact, the iterator object that is returned is not even 
subscriptable.  While

	def f(arg):
		for a in arg:
			print a
	f(x*x for x in (1,2,3))

will work as expected,

	def g(arg):
		print arg[1:]
	g(x*x for x in (1,2,3))

will not.

Third, it seems Lisp-ish or Objective-C-ish and not Pythonic to me.  I 
realize that is just a
style thing, but that's the flavor I get.

Fourth, it seems like this variable binding thing will trip people up 
because it is not obvious
that a function is being defined.  Lambdas have variable binding 
issues, but that is obviously
a special construct.  The current generator expression construct is too 
deceptively simple
looking for its own good, in my opinion.  My (admittedly weak) 
understanding of the variable
binding issue is that the behavior of something like

	a = (1,2,3)
	b = (x*x for x in a)
	a = (7,8,9)
	for c in b:
		print c

is still up in the air.  It seems that either way it is resolved will 
be confusing for various reasons.


OK, I'm not completely sure if this will work to everyone's satisfaction, 
but here is my proposal:  replace
the

	(x*x for x in a)

construct with

	lambda: yield x*x for x in a

CONS

  - "Ick, lambdas"
  -  It's longer

PROS

  - Lambdas are funky, but they are explicitly funky: look up 'lambda' 
in the index and go to that
section of the book

  - Use the variable binding rules of lambdas and people will be as happy 
with that as they are with
lambdas in the first place (for better or worse)

- No new meaning for parentheses is introduced

- The grammar change is straightforward (I think):

	replace

		lambdef: 'lambda' [varargslist] ':' test

	with

		lambdef: 'lambda' ( varargslist ':' test | ':' ( test | 'yield' test 
gen_for ))

	or

		lambdef: 'lambda' ( varargslist ':' test | ':' ( test | 'yield' test 
[gen_for] ))

(The last variant would allow a single element generator to be 
specified.  Maybe not terribly useful,
but as useful as

	def f(a): yield a

I suppose)

So here would be the recasting of some of examples in PEP 289:

	sum(lambda: yield x*x for x in range(10))

	d = dict (lambda: yield x, func(k) for k in keylist)

	sum(lambda: yield a.myattr for a in data)

	g = lambda: yield x**2 for x in range(10)
	print g.next()

	reduce(operator.add, lambda: yield x**2 for x in range(10))

	lambda: yield for x in (1, 2, 3)   # assuming we don't use list_for 
instead of gen_for in the grammar, I guess

	# Now if we use lambda behavior, then I don't think we would have free 
variable binding, so
	x = 0
	g = lambda:yield x for c in "abc" # The 'c' variable would not be 
exported
	x = 1
	print g.next()	# prints 1 (current x), not 0 (captured x)
	x = 2
	print g.next()     # would it print 2 now?  Obviously I don't have a 
firm grasp on the bytecode implementation

	# I think the following would work, too
	for xx in lambda: yield x*x for x in range(10):
		print xx

	# If so, how's this for decadent
	for i,a in lambda: yield i,list[i] for i in range(len(list)):
		print i, a


I hope this provided some food for constructive thought.


Robert Mollitor


From jcarlson at uci.edu  Wed Mar 24 02:02:31 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed Mar 24 02:06:03 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000001c41152$baca5830$29714d0c@ARKLAPTOP>
References: <LNBBLJKPBEHFEDALKOLCEENPJLAB.tim.one@comcast.net>
	<000001c41152$baca5830$29714d0c@ARKLAPTOP>
Message-ID: <20040323224817.E8BD.JCARLSON@uci.edu>

[snip]

> perhaps two distinct objects.  How can one determine straightforwardly
> whether those objects are members of the same equivalence class?

You were there and helped point out some of the limitations of the
original code I posted, which now has a _straightforward_ recursive
implementation here:
http://mail.python.org/pipermail/python-dev/2004-March/043357.html

It holds up to the equivalence class requirements and is easy to use. As
far as I can tell, it would be relatively straightforward to convert it
into a C implementation and give it some sort of keyword name. Whether
the conversion and keyword makes sense in the future, or whether a
function is sufficient, I'll leave to others.

 - Josiah


From andersjm at dancontrol.dk  Wed Mar 24 04:02:56 2004
From: andersjm at dancontrol.dk (Anders J. Munch)
Date: Wed Mar 24 04:03:06 2004
Subject: [Python-Dev] Possible resolution of generator expression
	variablecapture dilemma
References: <200403240537.i2O5buOM018356@cosc353.cosc.canterbury.ac.nz>
Message-ID: <006c01c4117e$cfa67280$f901a8c0@hirsevej.dk>

From: "Greg Ewing" <greg@cosc.canterbury.ac.nz>
>
>What's different is that Scheme's equivalent of a
> for-loop creates a new binding for the loop variable on each
> iteration. This tends to result in fewer late-binding surprises,
> because you get the effect of variable capture -- but due to the
> semantics of the for-loop, and other common constructs which introduce
> new bindings, rather than anything that goes on inside them.

I had a similar thought - to early-bind loop variables and late-bind
everything else, for exactly the reason you give.

Alas it doesn't do the job because it fails to handle loop-local
derivatives.  A simple change to Tim's example:

     pipe = source
     for p in predicates:
         filter = p.filter
         # add a filter over the current pipe, and call that the new pipe
         pipe = e for e in pipe if filter(e)

Now we can do something clever to fix p, but then we really should do
the same thing for filter.  But how can the interpreter know that
filter is a loop-local derivative of the loop variable?  That would
require an uncomfortable amount of dwim/magic.

- Anders



From gmccaughan at synaptics-uk.com  Wed Mar 24 04:37:45 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Wed Mar 24 04:37:54 2004
Subject: [Python-Dev] generator expression syntax
In-Reply-To: <9DADA9CC-7D56-11D8-B3B6-000393100E1A@earthlink.net>
References: <9DADA9CC-7D56-11D8-B3B6-000393100E1A@earthlink.net>
Message-ID: <200403240937.45265.gmccaughan@synaptics-uk.com>

On Wednesday 2004-03-24 05:46, Robert Mollitor wrote:

> The idea of generator expressions seems good, but the proposed syntax
> seems a little wrong to me.
> 
> First, the syntax is too dependent on the parentheses.  To my mind, 
> this is a fourth meaning for parentheses.  (1) Parentheses group and
> associate expressions: "(a * (b + c))", "if (a and b)".  (2) Parentheses
> construct tuples: "(1, 2, 3)", "()", "('a',)". (3) Parentheses enclose
> argument lists (arguably a special case of tuple-constructor):
> "def f(a, b, c)", "obj.dump()", "class C (A, B)".  And now (4*)
> generator expressions: "(x*x for x in list)".  I realize that in some
> sense the parentheses are not part of the expression syntax (since we
> wouldn't require "sum((x * x for x in list))"), but they are mandatory
> nonetheless because you can't have "a = x*x for x in list".  This seems
> like it's stretching a familiar construct too far.

Actually 4 and 2 are very similar. It's *commas* that construct tuples,
not parentheses; the parentheses are just necessary in some contexts
to shield the tuple construction from other things going on nearby.
Requiring parens around a generator expression is just like requiring
parens around a single-element tuple expression.
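
A trivial illustration:

    t = 1, 2, 3           # the commas build the tuple; no parens needed here
    print len((1, 2, 3))  # the parens only shield the commas from being
                          # read as separate arguments to len()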

> Second, it looks like a "tuple comprehension".  The list comprehension
> construct yields a list.  A generator expression looks like it should yield
> a tuple, but it doesn't.

OK, so generator expressions should be wrapped in the same punctuation
that's used to write explicit generators. Er, except that there isn't
any.

> Third, it seems Lisp-ish or Objective-C-ish and not Pythonic to me.
> I realize that is just a style thing, but that's the flavor I get.

There are two separate statements here: (1) it seems Lisp-ish or
ObjC-ish, and (2) it doesn't seem Pythonic. #1 isn't a problem
in itself; are you deducing #2 from #1 or claiming that it's true
in its own right? (I don't see either #1 or #2 myself, but I find
#1 easier to sympathize with than #2.)

> Fourth, it seems like this variable binding thing will trip people up
> because it is not obvious that a function is being defined.  Lambdas
> have variable binding issues, but that is obviously a special construct.

It's not clear to me that lambdas are any more obviously a special
construct than generator expressions. Are you sure it's not just
that you're used to lambdas and know that they're special, but aren't
yet used to generator expressions?

> OK, I'm not completely sure if this will work to everyone's satisfaction, 
> but here is my proposal:  replace
> the
> 
> 	(x*x for x in a)
> 
> construct with
> 
> 	lambda: yield x*x for x in a

I predict that to a good first approximation this one won't work
to *anyone's* satisfaction.

> 
> CONS
> 
>   - "Ick, lambdas"
>   -  It's longer

  - lambda doesn't mean "generator expression" or "thing involving
    variable binding" or anything like it; it means "anonymous
    function"
  - it introduces a magical new thing that you can do *only* in
    lambdas and not in named functions
  - The feature of ordinary generators that this trades on (namely
    that a generator definition makes something that you call in
    order to get an iterable thing) is probably the single most
    obscure point about generators
  - the variable binding features that lambdas already have aren't
    the same ones that are being proposed for generator expressions
  - it's as ugly as, ummm, something very ugly

> PROS
> 
>   - Lambdas are funky, but they are explicitly funky: look up 'lambda' 
> in the index and go to that section of the book

This "pro" would equally justify overloading "if" or inventing
a keyword "wombiquangle" for this purpose.

>   - Use the variable binding rules of lambdas and people will be as happy 
> with that as they are with lambdas in the first place (for better or worse)

The use cases of lambdas and of generator expressions are not
the same. Why should the appropriate variable binding rules
be the same? And why, therefore, should people be equally
happy with them?

> So here would be the recasting of some of examples in PEP 289:
> 
> 	sum(lambda: yield x*x for x in range(10))
> 
> 	d = dict (lambda: yield x, func(k) for k in keylist)
[etc]

I can't believe you typed all those in and looked at them and
still like this idea :-).

> 	# I think the following would work, too
> 	for xx in lambda: yield x*x for x in range(10):
> 		print xx
> 
> 	# If so, how's this for decadent
> 	for i,a in lambda: yield i,list[i] for i in range(len(list)):
> 		print i, a

You think this is a *good* thing? Compare these, with their
misleading initial segments "for something in lambda:" and all,
with

    for xx in (x*x for x in range(10)):
        print xx

    for i,a in ((i,list[i]) for i in range(len(list))):
        print i,a

which, for all their allegedly superfluous parentheses
and allegedly weird variable binding behaviour, I can
at least read.

-- 
g


From guido at python.org  Wed Mar 24 06:37:12 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 06:37:21 2004
Subject: [Python-Dev] generator expression syntax
In-Reply-To: Your message of "Wed, 24 Mar 2004 00:46:37 EST."
	<9DADA9CC-7D56-11D8-B3B6-000393100E1A@earthlink.net> 
References: <9DADA9CC-7D56-11D8-B3B6-000393100E1A@earthlink.net> 
Message-ID: <200403241137.i2OBbCg14158@guido.python.org>

> 	lambda: yield x*x for x in a

Sorry, this was considered and rejected ages ago.

Please don't open the syntactic discussion on generator expressions;
we've had it already and you're unlikely to unearth anything new.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From mollitor at earthlink.net  Wed Mar 24 06:48:50 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Wed Mar 24 06:47:31 2004
Subject: [Python-Dev] generator expression syntax
In-Reply-To: <200403241137.i2OBbCg14158@guido.python.org>
Message-ID: <37C64320-7D89-11D8-9FD4-000393100E1A@earthlink.net>


On Wednesday, March 24, 2004, at 06:37 AM, Guido van Rossum wrote:

>> 	lambda: yield x*x for x in a
>
> Sorry, this was considered and rejected ages ago.
>
> Please don't open the syntactic discussion on generator expressions;
> we've had it already and you're unlikely to unearth anything new.

OK.  Sorry.  I didn't look much deeper than the PEP and recent 
discussion.


Robert Mollitor



From guido at python.org  Wed Mar 24 08:30:08 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 08:30:37 2004
Subject: [Python-Dev] Possible resolution of generator expression variable
	capture dilemma
In-Reply-To: Your message of "Wed, 24 Mar 2004 17:37:56 +1200."
	<200403240537.i2O5buOM018356@cosc353.cosc.canterbury.ac.nz> 
References: <200403240537.i2O5buOM018356@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403241330.i2ODU8714717@guido.python.org>

> However, I'm suggesting something different. I'm *not* proposing to
> make it *local* to the loop -- its name will still reside in the
> enclosing namespace, and its value will still be available after the
> loop finishes. In fact, if it's not referenced from any nested scope,
> there will be no change in semantics at all.
> 
> What *will* change is perhaps best explained by means of the
> implementation, which is very simple. If the loop variable is
> referenced from a nested scope, it will be held in a cell. Now, on
> each iteration, instead of replacing the contents of the cell as a
> normal assignment would, we create a *new* cell and re-bind the name
> to the new cell.

If I had known Scheme 15 years ago I might have considered this -- it
is certainly an interesting approach.  I'll have to ponder this a lot
more before accepting it now; the potential consequences seem
enormous.  (It also pretty much forces this particular way of
implementing nested scopes.)

I think it's definitely out of the question before 3.0; it's too
subtle a change to too fundamental a construct.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From FBatista at uniFON.com.ar  Wed Mar 24 08:30:57 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 24 08:32:39 2004
Subject: [Python-Dev] Timing for Py2.4
Message-ID: <A128D751272CD411BC9200508BC2194D033837A8@escpl.tcp.com.ar>

Raymond Hettinger wrote:

#- The PEP for Py2.4 got updated and the dates were pushed back to
#- September.  Is there any reason that we couldn't get a first alpha
#- out in early June or late May? 
#- ...
#- The statistics and Decimal modules are almost ready for prime time.

I have almost finished the test cases for the Decimal module.  I still need to
write the Context test cases and get the PEP accepted (I already asked David
Goodger how to call for a revision), and then I can complete the test cases.

Once the test cases are finished (and reviewed by the community in the
meantime), I can start fixing the Decimal module itself.

With all this said, I think the dates you proposed are all right.

.	Facundo

From guido at python.org  Wed Mar 24 08:55:49 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 08:56:03 2004
Subject: [Python-Dev] redefining is
In-Reply-To: Your message of "Tue, 23 Mar 2004 22:47:24 EST."
	<000001c41152$baca5830$29714d0c@ARKLAPTOP> 
References: <000001c41152$baca5830$29714d0c@ARKLAPTOP> 
Message-ID: <200403241355.i2ODtnR14773@guido.python.org>

> If the objects' value is (3, 4), you can use ==.
> 
> If the objects' value is (3, a), there's no really easy way to do it.
> 
> If the objects' value is [3, 4], you can use 'is'.
> 
> I think it would be useful to have an operation that is as easy to
> use as 'is' that would make the determination.

You've said that a few times.  The concept is clear and easily
implemented: add a special method that defaults to 'is' but is done
differently for tuples, numbers and strings.  But I'm not sure
what having this check available would buy me.  Can you show some
application that would actually use this in a way that can't be easily
done without a language addition?
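
(For concreteness, the kind of default behavior I have in mind -- a sketch
only, not a proposed implementation:)

    def same_value(a, b):
        # identity for mutable objects, structural check for immutables
        if a is b:
            return True
        if type(a) is not type(b):
            return False
        if isinstance(a, (int, long, float, complex, str, unicode)):
            return a == b
        if isinstance(a, tuple):
            if len(a) != len(b):
                return False
            for x, y in zip(a, b):
                if not same_value(x, y):
                    return False
            return True
        return False    # lists, dicts, etc.: identity only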

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Wed Mar 24 09:12:14 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 09:12:54 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4060E835.50805@interlink.com.au>
References: <001e01c41132$62275b80$063ec797@oemcomputer>
	<4060E835.50805@interlink.com.au>
Message-ID: <16481.38718.160156.28191@montanaro.dyndns.org>


    Anthony> I _really_ want to give ast-compiler and generator expressions
    Anthony> time to bake. genexps in particular have the potential to have
    Anthony> all sorts of nasty corner cases.

Are these both already in CVS and used by default?

Skip

From ark at acm.org  Wed Mar 24 09:15:16 2004
From: ark at acm.org (Andrew Koenig)
Date: Wed Mar 24 09:14:41 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <200403241355.i2ODtnR14773@guido.python.org>
Message-ID: <000401c411aa$6e34e7d0$8228000a@ARKLAPTOP>

> You've said that a few times.  

Yes I did, but I was asked again so I answered again :-)

> While the concept is clear and easily
> implemented: by adding a special method that defaults to 'is' but is
> done differently for tuples, numbers and strings.  But I'm not sure
> what having this check available would buy me.  Can you show some
> application that would actually use this in a way that can't be easily
> done without a language addition?

You said you wanted to exchange some more email during the conference, but
this is getting silly :-)  Let's find some time to talk about it.


From skip at pobox.com  Wed Mar 24 09:17:31 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 09:17:49 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
Message-ID: <16481.39035.450700.708157@montanaro.dyndns.org>


    >> 2. Define a class with a singleton instance.
    >> 
    >> class MyClass [singleton]:
    >> ...

    Greg> This example appears before you've mentioned the possibility of
    Greg> applying this to classes. May be the "Possible Extensions" section
    Greg> should appear earlier?

Actually, it should be deleted and the PEP updated.  Michael's latest patch
apparently supports class decorators, so it's no longer an extension.

Skip

From skip at pobox.com  Wed Mar 24 09:20:50 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 09:20:54 2004
Subject: [Python-Dev] PEP 292 - simpler string substitutions
In-Reply-To: <1080100790.32228.26.camel@geddy.wooz.org>
References: <1080100790.32228.26.camel@geddy.wooz.org>
Message-ID: <16481.39234.24534.844612@montanaro.dyndns.org>


    Barry> If this stuff is approved for Python 2.4, I'll add test cases and
    Barry> documentation.

Holding dstrings for ransom, are we? ;-)

nobody-move-or-the-drummer-gets-it-ly, y'rs,

Skip

From guido at python.org  Wed Mar 24 09:28:55 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 09:29:01 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: Your message of "Wed, 24 Mar 2004 08:17:31 CST."
	<16481.39035.450700.708157@montanaro.dyndns.org> 
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz> 
	<16481.39035.450700.708157@montanaro.dyndns.org> 
Message-ID: <200403241428.i2OEStL14897@guido.python.org>

> Actually, it should be deleted and the PEP updated.  Michael's latest patch
> apparently supports class decorators, so it's no longer an extension.

But while you're at it, please add to the PEP that I don't like the
idea of class decorators.  I think they don't solve a real problem,
unlike function decorators, and I think that making classes and
functions more similar just adds confusion.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From bob at redivi.com  Wed Mar 24 10:00:53 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 24 09:58:20 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241428.i2OEStL14897@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
Message-ID: <0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>


On Mar 24, 2004, at 9:28 AM, Guido van Rossum wrote:

>> Actually, it should be deleted and the PEP updated.  Michael's latest 
>> patch
>> apparently supports class decorators, so it's no longer an extension.
>
> But while you're at it, please add to the PEP that I don't like the
> idea of class decorators.  I think they don't solve a real problem,
> unlike function decorators, and I think that making classes and
> functions more similar just adds confusion.

I disagree.  There's definitely a use case for something less permanent 
than a metaclass that gets a reference to a class immediately after 
creation but before it ends up in the module namespace.  For example, 
ZopeX3 interfaces and PyProtocols interfaces both use sys._getframe() 
introspection to add a temporary metaclass so that they can declare 
that a class supports (or does not support) a particular set of 
interfaces from the class body itself.  Using the [] syntax to decorate 
the class would deprecate these sorts of nasty hacks.

-bob


From pje at telecommunity.com  Wed Mar 24 10:37:37 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 10:32:25 2004
Subject: [Python-Dev] Possible resolution of generator expression
	variable capture dilemma
In-Reply-To: <200403240537.i2O5buOM018356@cosc353.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>
Message-ID: <5.1.0.14.0.20040324101456.03a35130@mail.telecommunity.com>

At 05:37 PM 3/24/04 +1200, Greg Ewing wrote:
>An advantage of this approach is that *all* forms of nested scope
>(lambda, def, etc.) would benefit, not just generator expressions. I
>suspect it would eradicate most of the few remaining uses for the
>default-argument hack, for instance (which nested scopes were supposed
>to do, but didn't).

Wow.  I haven't spent a lot of time thinking through possible holes, 
but my initial reaction to this is that it'd be a big plus.  It's always 
counterintuitive to me that I can't define a function or lambda expression 
in a for loop without having to first create a function that returns a 
function.

But then...  what about while loops?  I think it'd be confusing if I 
changed between a for and a while loop (either direction), and the 
semantics of nested function definitions changed.  Indeed, I think you'd 
need to make this work for *all* variables rebound inside *all* loops that 
contain a nested scope, not just a for loop's index variable.

Would this produce any incompatibilities?  Right now, if you define a 
function inside a loop, intending to call it from outside the loop, your 
code doesn't work in any sane way.  If you define one that's to be 
called from inside the loop, it will work the same way...  unless you're 
rebinding variables after the definition, but before the call point.

So, it does seem that there could be some code that would change its 
semantics.  Such code would have to define a function inside a loop, and 
reference a variable that is rebound inside the loop, but after the 
function is defined.  E.g.:

for x in 1,2,3:
     def y():
         print z
     z = x * 2
     y()

Today, this code prints 2, 4, and 6, but under the proposed approach it 
would presumably get an unbound local error.

So, I think the trick here would be figuring out how to specify this in 
such a way that it makes sense for its intended use while not fouling 
up code that works today.  Reallocating cells at the top of the loop might 
work:

for x in 1,2,3:
     def y(): print z
     z = x * 2
     def q(): print z
     z = x * 3
     y()
     q()

This code will now print 3,3,6,6,9,9, and would do the same under the 
proposed approach.  What *doesn't* work is invoking a previous definition 
after modifying a local:

for x in 1,2,3:
     z = x * 3
     if x>1:
         y()
     def y():
         print z
     z = x * 2

Today, this prints 6,9, but under the proposed semantics it would print 4,6.

Admittedly, I am hard-pressed to imagine an actual use case for this 
pattern of code execution, but if anybody's done it, their code would break.

Unfortunately, this means that even your comparatively modest proposal 
(only 'for' loops, and only the index variable) can have this same issue, 
if the loop index variable is being rebound.  This latter pattern (define 
in one iteration, invoke in a later one) will change its meaning under such 
a capture scheme.


From aahz at pythoncraft.com  Wed Mar 24 10:37:07 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 24 10:37:22 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <001e01c41132$62275b80$063ec797@oemcomputer>
References: <001e01c41132$62275b80$063ec797@oemcomputer>
Message-ID: <20040324153707.GA10363@panix.com>

On Tue, Mar 23, 2004, Raymond Hettinger wrote:
>
> The PEP for Py2.4 got updated and the dates were pushed back to
> September.  Is there any reason that we couldn't get a first alpha out
> in early June or late May? 
>  
> Short of generator expressions, Py2.4 is already in good shape for an
> alpha.  I expect generator expressions to be hammered out shortly and
> will soon be able to devote my full energy to wrapping it up. The
> statistics and Decimal modules are almost ready for prime time. Unless
> someone has a specific summer plan to work on another major feature
> (perhaps attribute lookup), I do not expect any significant developments
> May through September that would warrant holding up 2.4.

There's also PEP 328, but that should be easy to implement once I write
up all the feedback and Guido Pronounces.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From guido at python.org  Wed Mar 24 10:40:07 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 10:40:15 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: Your message of "Wed, 24 Mar 2004 10:00:53 EST."
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com> 
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org> 
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com> 
Message-ID: <200403241540.i2OFe7w15819@guido.python.org>

> > But while you're at it, please add to the PEP that I don't like the
> > idea of class decorators.  I think they don't solve a real problem,
> > unlike function decorators, and I think that making classes and
> > functions more similar just adds confusion.
> 
> I disagree.  There's definitely a use case for something less permanent 
> than a metaclass that gets a reference to a class immediately after 
> creation but before it ends up in the module namespace.  For example, 
> ZopeX3 interfaces and PyProtocols interfaces both use sys._getframe() 
> introspection to add a temporary metaclass so that they can declare 
> that a class supports (or does not support) a particular set of 
> interfaces from the class body itself.  Using the [] syntax to decorate 
> the class would deprecate these sorts of nasty hacks.

But the use cases are certainly very *different* than those for
function/method decorators, and should be spelled out separately.

For example, I'd like to see you show in much more detail how class
decorators could be used to replace those Zope3 hacks.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Wed Mar 24 10:56:10 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 10:56:23 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241428.i2OEStL14897@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
Message-ID: <16481.44954.360333.551250@montanaro.dyndns.org>


    >> Actually, it should be deleted and the PEP updated.  Michael's latest
    >> patch apparently supports class decorators, so it's no longer an
    >> extension.

    Guido> But while you're at it, please add to the PEP that I don't like
    Guido> the idea of class decorators.  I think they don't solve a real
    Guido> problem, unlike function decorators, and I think that making
    Guido> classes and functions more similar just adds confusion.

I think this use case is rather elegant:

    def singleton(cls):
        return cls()

    class Foo [singleton]:
        ...

Skip

From skip at pobox.com  Wed Mar 24 10:59:10 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 10:59:20 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241540.i2OFe7w15819@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<200403241540.i2OFe7w15819@guido.python.org>
Message-ID: <16481.45134.726894.939016@montanaro.dyndns.org>

    Bob> For example, ZopeX3 interfaces and PyProtocols interfaces both use
    Bob> sys._getframe() introspection to add a temporary metaclass so that
    Bob> they can ...

    Guido> But the use cases are certainly very *different* than those for
    Guido> function/method decorators, and should be spelled out separately.

    Guido> For example, I'd like to see you show in much more detail how
    Guido> class decorators could be used to replace those Zope3 hacks.

Bob, if you can come up with an example or two that won't consume infinite
space, I'd be happy to add it/them to the Examples section of the PEP.

Skip

From guido at python.org  Wed Mar 24 11:07:25 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 11:08:03 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: Your message of "Wed, 24 Mar 2004 09:56:10 CST."
	<16481.44954.360333.551250@montanaro.dyndns.org> 
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org> 
	<16481.44954.360333.551250@montanaro.dyndns.org> 
Message-ID: <200403241607.i2OG7QZ15893@guido.python.org>

> I think this use case is rather elegant:
> 
>     def singleton(cls):
>         return cls()
> 
>     class Foo [singleton]:
>         ...

And how would this be better than

    class Foo(singleton):
        ...

(with a suitable definition of singleton, which could just be 'object'
AFAICT from your example)?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jack at performancedrivers.com  Wed Mar 24 11:16:51 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Wed Mar 24 11:16:57 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
Message-ID: <20040324161651.GA8588@performancedrivers.com>

On Wed, Mar 24, 2004 at 10:00:53AM -0500, Bob Ippolito wrote:
> 
> On Mar 24, 2004, at 9:28 AM, Guido van Rossum wrote:
> 
> >>Actually, it should be deleted and the PEP updated.  Michael's latest 
> >>patch
> >>apparently supports class decorators, so it's no longer an extension.
> >
> >But while you're at it, please add to the PEP that I don't like the
> >idea of class decorators.  I think they don't solve a real problem,
> >unlike function decorators, and I think that making classes and
> >functions more similar just adds confusion.
> 
> I disagree.  There's definitely a use case for something less permanent 
> than a metaclass that gets a reference to a class immediately after 
> creation but before it ends up in the module namespace.

I would use it with factories as (ex 1):

class SubCollect(Collect) [register(db=4, name='newsletter')]: pass

instead of (ex 2):

class SubCollect(Collect): pass
register(SubCollect, db=4, name='newsletter')

or the metaclass [ab]use (ex 3):

class Collect(object):
  __metaclass__ = register_anyone_who_derives_from_us

class SubCollect(Collect):
  db = 4              # metaclass looks for cls.db
  name = 'newsletter' # and cls.name


I actually moved from using #2 to #3 in real life.
#2 is typo prone, and if you omit the register() call you're SOL.
It also has the same distance problem as the 'staticmethod' call,
it is hard to see the call because it is always after the class
definition.

I would prefer doing it the #1 way.  It doesn't modify the class
(it just registers it with a Factory) but it puts the meta information
about the class registration in a place that is easy to see and understand
without littering the __dict__ with meta info like #3.  Decoration
makes it very easy to see which classes register and which don't.
The metaclass way requires an ugly DO_NOT_REGISTER_ME = 1 for intermediate
classes that the factory shouldn't care about.

The decorator is the only one that makes it really obvious what
we're doing.

The end calls look like

guy_cls = find_collect_class(db=4, name='newsletter')
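
(For reference, register() in the decorator spelling (#1) is just a
factory-registration decorator along these lines -- a sketch; the registry and
key scheme here are made up:)

    _registry = {}

    def _key(keys):
        items = keys.items()
        items.sort()
        return tuple(items)

    def register(**keys):
        def decorate(cls):
            _registry[_key(keys)] = cls   # record it; the class is untouched
            return cls
        return decorate

    def find_collect_class(**keys):
        return _registry[_key(keys)]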

-jackdied


From aahz at pythoncraft.com  Wed Mar 24 11:17:19 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 24 11:17:35 2004
Subject: [Python-Dev] An issue recently brought up in
	patch#872326(generator expression)
In-Reply-To: <200403240542.i2O5gwqE018363@cosc353.cosc.canterbury.ac.nz>
References: <200403232054.i2NKsd912566@guido.python.org>
	<200403240542.i2O5gwqE018363@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040324161719.GA15004@panix.com>

On Wed, Mar 24, 2004, Greg Ewing wrote:
> Guido van Rossum <guido@python.org>:
>> 
>> I think that in this particular case, late binding, with its sharp
>> edges, gives more power than binding capturing, and the latter, in its
>> attempts to smoothen the sharp edges, also takes away some power.
> 
> Also, it only takes away *some* of the sharp edges, and rather
> arbitrarily chosen ones at that, so you still have to be on the
> lookout for sharp edges whenever you use it.  Rather like having a
> sword that's only sharp on two edges instead of three...

"Understanding is a three-edged sword."
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From skip at pobox.com  Wed Mar 24 11:21:57 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 11:22:08 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241607.i2OG7QZ15893@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<16481.44954.360333.551250@montanaro.dyndns.org>
	<200403241607.i2OG7QZ15893@guido.python.org>
Message-ID: <16481.46501.518573.357666@montanaro.dyndns.org>


    >> I think this use case is rather elegant:
    >> 
    >> def singleton(cls):
    >>     return cls()
    >> 
    >> class Foo [singleton]:
    >>     ...

    Guido> And how would this be better than

    Guido>     class Foo(singleton):
    Guido>         ...

    Guido> (with a suitable definition of singleton, which could just be
    Guido> 'object' AFAICT from your example)?

"Better"?  I don't know.  Certainly different.  In the former, Foo gets
bound to a class instance.  In the latter, it would be a separate step which
you omitted:

    class Foo(singleton):
        ...
    Foo = Foo()

Skip


From guido at python.org  Wed Mar 24 11:24:43 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 11:24:50 2004
Subject: [Python-Dev] Possible resolution of generator expression variable
	capture dilemma
In-Reply-To: Your message of "Wed, 24 Mar 2004 10:37:37 EST."
	<5.1.0.14.0.20040324101456.03a35130@mail.telecommunity.com> 
References: <LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>  
	<5.1.0.14.0.20040324101456.03a35130@mail.telecommunity.com> 
Message-ID: <200403241624.i2OGOhW15952@guido.python.org>

Phillip shows some examples involving inner functions defined inside
the for loop.  But in current Python these examples work just as well
if the function is defined *outside* the loop (as long as no default
argument values are needed):

> for x in 1,2,3:
>      def y():
>          print z
>      z = x * 2
>      y()

is the same as

  def y():
      print z
  for x in 1, 2, 3:
      z = x*2
      y()

That would change under Phillip's proposal.

There are similar examples that would break under Greg's original
proposal: just use the loop variable in the function, e.g.:

  def y():
      print x*2
  for x in 1, 2, 3:
      y()

All this is just more reason to put this one off until 3.0.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From bob at redivi.com  Wed Mar 24 11:55:54 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 24 11:52:37 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <16481.45134.726894.939016@montanaro.dyndns.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<200403241540.i2OFe7w15819@guido.python.org>
	<16481.45134.726894.939016@montanaro.dyndns.org>
Message-ID: <1D0CF32F-7DB4-11D8-BB21-000A95686CD8@redivi.com>

On Mar 24, 2004, at 10:59 AM, Skip Montanaro wrote:

>     Bob> For example, ZopeX3 interfaces and PyProtocols interfaces 
> both use
>     Bob> sys._getframe() introspection to add a temporary metaclass so 
> that
>     Bob> they can ...
>
>     Guido> But the use cases are certainly very *different* than those 
> for
>     Guido> function/method decorators, and should be spelled out 
> separately.
>
>     Guido> For example, I'd like to see you show in much more detail 
> how
>     Guido> class decorators could be used to replace those Zope3 hacks.
>
> Bob, if you can come up with an example or two that won't consume 
> infinite
> space, I'd be happy to add it/them to the Examples section of the PEP.

def provides(*interfaces):
     """
     An actual, working, implementation of provides for
     the current implementation of PyProtocols.  Not
     particularly important for the PEP text.
     """
     def provides(typ):
         declareImplementation(typ, instancesProvide=interfaces)
         return typ
     return provides

class IBar(Interface):
     """Declare something about IBar here"""

class Foo(object) [provides(IBar)]:
	"""Implement something here..."""

No stack introspection, no forced superclass or metaclasses, no 
side-effects on subclasses, etc.  Foo is entirely unchanged by this 
operation (in PyProtocols at least).

-bob


From pje at telecommunity.com  Wed Mar 24 12:46:10 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 12:47:24 2004
Subject: [Python-Dev] Possible resolution of generator expression
	variable capture dilemma
In-Reply-To: <200403241624.i2OGOhW15952@guido.python.org>
References: <Your message of "Wed, 24 Mar 2004 10:37:37 EST."
	<5.1.0.14.0.20040324101456.03a35130@mail.telecommunity.com>
	<LNBBLJKPBEHFEDALKOLCKEHCJLAB.tim.one@comcast.net>
	<5.1.0.14.0.20040324101456.03a35130@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20040324123518.02eceec0@telecommunity.com>

At 08:24 AM 3/24/04 -0800, Guido van Rossum wrote:
>Phillip shows some examples involving inner functions defined inside
>the for loop.  But in current Python these examples work just as well
>if the function is defined *outside* the loop (as long as no default
>argument values are needed):
>
> > for x in 1,2,3:
> >      def y():
> >          print z
> >      z = x * 2
> >      y()
>
>is the same as
>
>   def y():
>       print z
>   for x in 1, 2, 3:
>       z = x*2
>       y()
>
>That would change under Phillip's proposal.

Hm.  I was viewing it a bit differently.  In effect, I was treating the 
loop as a new "nested scope", so a function defined within a loop would 
have different variable capture than one defined outside a loop.


>There are similar examples that would break under Greg's original
>proposal: just use the loop variable in the function, e.g.:

Right, I mentioned at the end that the issue with my proposal was also 
applicable to Greg's.


>All this is just more reason to put this one off until 3.0.

I agree, and withdraw my support for early binding in generator 
expressions, despite Tim's example.  Instead, we should have a consistent 
failure under "practicality beats purity".  That is, a straightforward 
translation of:

     pipe = source
     for p in predicates:
         # add a filter over the current pipe, and call that the new pipe
         pipe = e for e in pipe if p(e)

to:

     pipe = source
     for p in predicates:
         def __():
             for e in pipe:
                if p(e):
                    yield e
         # add a filter over the current pipe, and call that the new pipe
         pipe = __()

produces exactly the same broken behavior in today's Python.  The solution, 
of course is:

     def ifilter(pred,pipe):
         for e in pipe:
             if pred(e):
                 yield e

     pipe = source
     for p in predicates:
         # add a filter over the current pipe, and call that the new pipe
         pipe = ifilter(p,pipe)



From pje at telecommunity.com  Wed Mar 24 13:17:16 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 13:18:23 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241540.i2OFe7w15819@guido.python.org>
References: <Your message of "Wed, 24 Mar 2004 10:00:53 EST."
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
Message-ID: <5.1.1.6.0.20040324124833.02ee1490@telecommunity.com>

At 07:40 AM 3/24/04 -0800, Guido van Rossum wrote:
> > > But while you're at it, please add to the PEP that I don't like the
> > > idea of class decorators.  I think they don't solve a real problem,
> > > unlike function decorators, and I think that making classes and
> > > functions more similar just adds confusion.
> >
> > I disagree.  There's definitely a use case for something less permanent
> > than a metaclass that gets a reference to a class immediately after
> > creation but before it ends up in the module namespace.  For example,
> > ZopeX3 interfaces and PyProtocols interfaces both use sys._getframe()
> > introspection to add a temporary metaclass so that they can declare
> > that a class supports (or does not support) a particular set of
> > interfaces from the class body itself.  Using the [] syntax to decorate
> > the class would deprecate these sorts of nasty hacks.
>
>But the use cases are certainly very *different* than those for
>function/method decorators, and should be spelled out separately.

Class decorators provide a mechanism at least as powerful as metaclasses, 
but more explicit (you don't have to trace down the entire 
inheritance tree to find them, and they call attention to their specialness 
more) and more combinable.  It is *much* easier to create and chain 
co-operative class decorators, than it is to create and chain combinable 
metaclasses.

It could perhaps be argued that the difficulty with combining metaclasses 
is that Python does not automatically resolve the "inheritance of metaclass 
constraints" as described in the "Putting Metaclasses to Work".  If Python 
did this, then decorators could be implemented by adding them as base 
classes.  However, this would lead to all sorts of interesting pollution in 
__bases__ and __mro__, unless there was magic added to filter the 
decorators back out.

Indeed, we can see this today in systems like Zope's interface metaclasses, 
which have to deal with the artificial 'Interface' instance that is used 
for subclassing.  Granted, today you can use an explicit __metaclass__, but 
it's ugly and provides no way to cleanly combine metaclasses.  I suspect 
that a close look at metaclass usage in today's Python would show that most 
uses are complicated and error-prone ways to emulate class decorators!

So, speaking as an infamous metaclass hacker myself <0.5 wink>, I would say 
that if Python had to choose between having the __metaclass__ syntax and 
using class decorators, I'd go for the decorators, because they're much 
easier to write and use, and harder to screw up, due to the relative 
simplicity and absence of dead chicken-waving.

That's not to say that I'd want to abandon the underlying system of classes 
having metaclasses, just that I'd ditch the '__metaclass__  + 
__new__(meta,name,bases,dict) + super(X,meta).__new__(...)' way of spelling 
them.  Indeed, the spelling is a pretty compelling argument by 
itself.  Here's a sketch of a metaclass that emulates a class decorator:

class AbstractDecoratorClass(type):
     def __new__(meta,name,bases,cdict):
         old = super(AbstractDecoratorClass,meta).__new__(meta,name,bases,cdict)
         return meta.decorate(old)

     def decorate(klass,inputclass) [classmethod]:
         raise NotImplementedError


class MyDecoratorClass(AbstractDecoratorClass):

     def decorate(klass,inputclass) [classmethod]:
         # XXX do something here
         return inputclass


class MyDecorator(object):
     __metaclass__ = MyDecoratorClass


class Something(MyDecorator):
     # ...

Whew!  That's a ton of boilerplate, especially when you consider that it 
doesn't support parameterized decorators, pollutes __bases__/__mro__, 
requires non-trivial effort to combine with other decorators, and is 
seriously error-prone.  For example, depending on what my 'decorate' method 
does, it might break when applied to the 'MyDecorator' instance, requiring 
the addition of an 'if' block to handle that special case.


From guido at python.org  Wed Mar 24 13:32:04 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 13:32:11 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: Your message of "Wed, 24 Mar 2004 10:21:57 CST."
	<16481.46501.518573.357666@montanaro.dyndns.org> 
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<16481.44954.360333.551250@montanaro.dyndns.org>
	<200403241607.i2OG7QZ15893@guido.python.org> 
	<16481.46501.518573.357666@montanaro.dyndns.org> 
Message-ID: <200403241832.i2OIW4S16223@guido.python.org>

>     >> I think this use case is rather elegant:
>     >> 
>     >> def singleton(cls):
>     >>     return cls()
>     >> 
>     >> class Foo [singleton]:
>     >>     ...
> 
>     Guido> And how would this be better than
> 
>     Guido>     class Foo(singleton):
>     Guido>         ...
> 
>     Guido> (with a suitable definition of singleton, which could just be
>     Guido> 'object' AFAICT from your example)?
> 
> "Better"?  I don't know.  Certainly different.  In the former, Foo gets
> bound to a class instance.  In the latter, it would be a separate step which
> you omitted:
> 
>     class Foo(singleton):
>         ...
>     Foo = Foo()

Ok, so the metaclass would have to be a little different, but this can
be done with metaclasses.  (But I think that in this particular
example, declaring the instance through the class is merely
confusing. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Wed Mar 24 15:01:59 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 15:02:58 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241832.i2OIW4S16223@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<16481.44954.360333.551250@montanaro.dyndns.org>
	<200403241607.i2OG7QZ15893@guido.python.org>
	<16481.46501.518573.357666@montanaro.dyndns.org>
	<200403241832.i2OIW4S16223@guido.python.org>
Message-ID: <16481.59703.868873.805393@montanaro.dyndns.org>


    >> class Foo(singleton):
    >> ...
    >> Foo = Foo()

    Guido> Ok, so the metaclass would have to be a little different, but
    Guido> this can be done with metaclasses.

I'll take your word for it, but I wouldn't have the faintest idea how to do
that.

Skip

From martin at v.loewis.de  Wed Mar 24 15:19:22 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Wed Mar 24 15:19:26 2004
Subject: [Python-Dev] Python startup time: String objects
Message-ID: <200403242019.i2OKJMC15465@twilight.domainfactory.de>

At PyCon, I have been looking into Python startup time. 

I found that CVS-Python allocates roughly 12,000 string objects on startup,
whereas Python 2.2 only allocates 8,000 string objects. In either case,
most strings come from unmarshalling string objects, and the increase is
(probably) due to the increased number of modules loaded at startup
(up from 26 to 34).

The string objects allocated during unmarshalling are often quickly 
discarded after being allocated, as they are identifiers, and get
interned - so only the interned version of the string survives, and
the second copy is deallocated.

I'd like to change the marshal format to perform sharing of equal
strings, instead of marshalling the same identifiers multiple times.
To do so, a dictionary of strings is created on marshalling and a
list is created on unmarshalling, and a new marshal code for
string-backreference would be added.
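
To make the idea concrete, here is a toy version of the scheme in Python (the
real change would live in marshal.c, and the type codes here are invented):

    def dump_strings(strings):
        table, out = {}, []
        for s in strings:
            if s in table:
                out.append(('R', table[s]))   # back-reference by index
            else:
                table[s] = len(table)
                out.append(('S', s))          # first occurrence: full string
        return out

    def load_strings(items):
        table, result = [], []
        for kind, value in items:
            if kind == 'S':
                table.append(value)           # remember for later references
                result.append(value)
            else:
                result.append(table[value])   # reuse the shared string object
        return result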

What do you think?

Regards,
Martin

From pje at telecommunity.com  Wed Mar 24 15:27:21 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 15:28:28 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241956.i2OJu7T16447@guido.python.org>
References: <Your message of "Wed, 24 Mar 2004 13:17:16 EST."
	<5.1.1.6.0.20040324124833.02ee1490@telecommunity.com>
	<Your message of "Wed, 24 Mar 2004 10:00:53 EST."
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<5.1.1.6.0.20040324124833.02ee1490@telecommunity.com>
Message-ID: <5.1.1.6.0.20040324152355.0201fcc0@telecommunity.com>

At 11:56 AM 3/24/04 -0800, Guido van Rossum wrote:
>Can you reformulate that as an addendum to the PEP?  It really needs
>good motivation for class decorators.
>
>--Guido van Rossum (home page: http://www.python.org/~guido/)

Are you sure it should be an addendum, as opposed to just being folded into 
the PEP's various sections?  For example, the 'singleton' example is 
already in the Examples section, so should it just be an addition to the motivation?

OTOH, I guess I could see putting brief mentions in those sections, and 
then having an appendix for a detailed explanation of how much harder it is 
to do decorator-like things with metaclasses.

And on the other other hand, maybe citing my email in the Python-dev 
archive would suffice in place of that appendix.  :)

Anyway, let me know which way I should go.


From aleaxit at yahoo.com  Wed Mar 24 15:41:05 2004
From: aleaxit at yahoo.com (Alex Martelli)
Date: Wed Mar 24 15:41:26 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <16481.59703.868873.805393@montanaro.dyndns.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403241832.i2OIW4S16223@guido.python.org>
	<16481.59703.868873.805393@montanaro.dyndns.org>
Message-ID: <200403242141.05544.aleaxit@yahoo.com>

On Wednesday 24 March 2004 09:01 pm, Skip Montanaro wrote:
>     >> class Foo(singleton):
>     >> ...
>     >> Foo = Foo()
>
>     Guido> Ok, so the metaclass would have to be a little different, but
>     Guido> this can be done with metaclasses.
>
> I'll take your word for it, but I wouldn't have the faintest idea how to do
> that.

Perhaps something like...:

class metaSingleton(type):
    def __new__(cls, classname, classbases, classdict):
        result = type.__new__(cls, classname, classbases, classdict)
        if classbases:
            return result()
        else:
            return result

class singleton:
    __metaclass__ = metaSingleton


class Foo(singleton):
    pass

print 'foo is', Foo, 'of type', type(Foo)


which outputs:

foo is <__main__.Foo object at 0x403f968c> of type <class '__main__.Foo'>

...?  (As to _why_ one would want such semantics, I dunno -- Singleton being 
most definitely _not_ my favourite DP, anyway...:-)


Alex


From bob at redivi.com  Wed Mar 24 15:53:04 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 24 15:49:44 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403241832.i2OIW4S16223@guido.python.org>
References: <16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<16481.44954.360333.551250@montanaro.dyndns.org>
	<200403241607.i2OG7QZ15893@guido.python.org>
	<16481.46501.518573.357666@montanaro.dyndns.org>
	<200403241832.i2OIW4S16223@guido.python.org>
Message-ID: <3F0FBF4D-7DD5-11D8-BB21-000A95686CD8@redivi.com>


On Mar 24, 2004, at 1:32 PM, Guido van Rossum wrote:

>>>> I think this use case is rather elegant:
>>>>
>>>> def singleton(cls):
>>>>     return cls()
>>>>
>>>> class Foo [singleton]:
>>>>     ...
>>
>>     Guido> And how would this be better than
>>
>>     Guido>     class Foo(singleton):
>>     Guido>         ...
>>
>>     Guido> (with a suitable definition of singleton, which could just 
>> be
>>     Guido> 'object' AFAICT from your example)?
>>
>> "Better"?  I don't know.  Certainly different.  In the former, Foo 
>> gets
>> bound to a class instance.  In the latter, it would be a separate 
>> step which
>> you omitted:
>>
>>     class Foo(singleton):
>>         ...
>>     Foo = Foo()
>
> Ok, so the metaclass would have to be a little different, but this can
> be done with metaclasses.  (But I think that this in particular
> example, declaring the instance through the class is merely
> confusing. :-)

Fine, but try doing singleton *and something else that needs a 
metaclass* without first composing every 
metaclass-supported-class-decorator combination you want to use a 
priori.

-bob


From pje at telecommunity.com  Wed Mar 24 15:49:48 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 15:51:05 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403242141.05544.aleaxit@yahoo.com>
References: <16481.59703.868873.805393@montanaro.dyndns.org>
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<200403241832.i2OIW4S16223@guido.python.org>
	<16481.59703.868873.805393@montanaro.dyndns.org>
Message-ID: <5.1.1.6.0.20040324154636.0205e4c0@telecommunity.com>

At 09:41 PM 3/24/04 +0100, Alex Martelli wrote:

>..?  (As to _why_ one would want such semantics, I dunno -- Singleton being
>most definitely _not_ my favourite DP, anyway...:-)

I definitely don't like singletons either, but there are some kinds of 
functionality that are *already* singletons, like it or not, and when 
putting a wrapper around them, one is forced to have another 
singleton.  For example, peak.events provides an event-driven wrapper 
around 'signal' functionality, and it's implemented by a singleton because 
-- like it or not -- Python's signal handling interface supports only one 
handler per signal.


From skip at pobox.com  Wed Mar 24 15:51:24 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 24 15:51:36 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <5.1.1.6.0.20040324152355.0201fcc0@telecommunity.com>
References: <Your message of "Wed, 24 Mar 2004 13:17:16 EST."
	<5.1.1.6.0.20040324124833.02ee1490@telecommunity.com>
	<Your message of "Wed, 24 Mar 2004 10:00:53 EST."
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<200403240320.i2O3KEre018150@cosc353.cosc.canterbury.ac.nz>
	<16481.39035.450700.708157@montanaro.dyndns.org>
	<200403241428.i2OEStL14897@guido.python.org>
	<0C01C98F-7DA4-11D8-BB21-000A95686CD8@redivi.com>
	<5.1.1.6.0.20040324124833.02ee1490@telecommunity.com>
	<5.1.1.6.0.20040324152355.0201fcc0@telecommunity.com>
Message-ID: <16481.62668.701997.462879@montanaro.dyndns.org>


    >> Can you reformulate that as an addendum to the PEP?  It really needs
    >> good motivation for class decorators.

    Phillip> Are you sure it should be an addendum, as opposed to just being
    Phillip> folded into the PEP's various sections?  For example, the
    Phillip> 'singleton' example is already in the Examples section, so
    Phillip> should it just be an addition to the motivation?

I should point out that at the time I was editing the current version I
wasn't aware that Michael Hudson's latest patch supported class
decorations.  I think it would be worthwhile to discuss both types of
decorators on an equal footing.

Skip

From pje at telecommunity.com  Wed Mar 24 16:09:45 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 16:10:56 2004
Subject: [Python-Dev] PEP 318 - posting draft
In-Reply-To: <200403242141.05544.aleaxit@yahoo.com>
References: <16481.59703.868873.805393@montanaro.dyndns.org>
	<16480.28046.500125.309554@montanaro.dyndns.org>
	<200403241832.i2OIW4S16223@guido.python.org>
	<16481.59703.868873.805393@montanaro.dyndns.org>
Message-ID: <5.1.1.6.0.20040324160631.024b2e50@telecommunity.com>

At 09:41 PM 3/24/04 +0100, Alex Martelli wrote:

>Perhaps something like...:
>
>class metaSingleton(type):
>     def __new__(cls, classname, classbases, classdict):
>         result = type.__new__(cls, classname, classbases, classdict)
>         if classbases:
>             return result()
>         else:
>             return result

By the way, there's a bug in the above implementation: it can't be safely 
mixed with other metaclasses, because it calls 'type' directly instead of 
using 'super()'.
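
Untested, but something along these lines should cooperate with other
metaclasses:

class metaSingleton(type):
    def __new__(cls, classname, classbases, classdict):
        # delegate via super() so any other metaclass in the MRO gets its turn
        result = super(metaSingleton, cls).__new__(
            cls, classname, classbases, classdict)
        if classbases:
            return result()
        return result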


From eppstein at ics.uci.edu  Wed Mar 24 17:12:48 2004
From: eppstein at ics.uci.edu (David Eppstein)
Date: Wed Mar 24 17:18:40 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
References: <LNBBLJKPBEHFEDALKOLCEENPJLAB.tim.one@comcast.net>
	<000001c41152$baca5830$29714d0c@ARKLAPTOP>
	<20040323224817.E8BD.JCARLSON@uci.edu>
Message-ID: <eppstein-FEACF2.14124824032004@sea.gmane.org>

In article <20040323224817.E8BD.JCARLSON@uci.edu>,
 Josiah Carlson <jcarlson@uci.edu> wrote:

> > perhaps two distinct objects.  How can one determine straightforwardly
> > whether those objects are members of the same equivalence class?
> 
> You were there and helped point out some of the limitations of the
> original code I posted, which now has a _straightforward_ recursive
> implementation here:
> http://mail.python.org/pipermail/python-dev/2004-March/043357.html

Heh, another use for the class variant of PEP 318.  Josiah's code 
depends on knowing which classes have immutable instances, using a 
hardcoded set of builtin types.  With PEP318, one could do

class foo [immutable]:
    ...

with an appropriate definition of immutable that either decorates the 
class object or adds to the set of known immutables.  Perhaps also with 
code to catch and warn against obvious attempts at mutation of foos...
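
For instance, a minimal sketch of the registry flavour (the names are made
up, and with today's syntax you'd spell it foo = immutable(foo) after the
class body):

from sets import Set

known_immutables = Set([int, long, float, complex, str, unicode, tuple])

def immutable(cls):
    # hypothetical class decorator: record that instances of cls are immutable
    known_immutables.add(cls)
    return cls

def is_immutable(obj):
    return type(obj) in known_immutables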

-- 
David Eppstein                      http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science


From jcarlson at uci.edu  Wed Mar 24 18:37:19 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed Mar 24 18:41:37 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <eppstein-FEACF2.14124824032004@sea.gmane.org>
References: <20040323224817.E8BD.JCARLSON@uci.edu>
	<eppstein-FEACF2.14124824032004@sea.gmane.org>
Message-ID: <20040324152821.BF92.JCARLSON@uci.edu>

> > You were there and helped point out some of the limitations of the
> > original code I posted, which now has a _straightforward_ recursive
> > implementation here:
> > http://mail.python.org/pipermail/python-dev/2004-March/043357.html
> 
> Heh, another use for the class variant of PEP 318.  Josiah's code 
> depends on knowing which classes have immutable instances, using a 
> hardcoded set of builtin types.  With PEP318, one could do
> 
> class foo [immutable]:
>     ...
> 
> with an appropriate definition of immutable that either decorates the 
> class object or adds to the set of known immutables.  Perhaps also with 
> code to catch and warn against obvious attempts at mutation of foos...

One could even include the disclaimer that any code that modifies an
instance that is supposed to be immutable is inherently broken and is
not supported.  Of course the testing before and after method calls and
attribute access would be difficult, if not impossible with current
Python.

 - Josiah


From greg at cosc.canterbury.ac.nz  Wed Mar 24 18:53:00 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 24 18:53:10 2004
Subject: [Python-Dev] generator expression syntax
In-Reply-To: <200403240937.45265.gmccaughan@synaptics-uk.com>
Message-ID: <200403242353.i2ONr02g019604@cosc353.cosc.canterbury.ac.nz>

Gareth McCaughan <gmccaughan@synaptics-uk.com>:

> It's not clear to me that lambdas are any more obviously a special
> construct than generator expressions.

If the absence of lambda is considered to make it look too
straightforward, we could always introduce a new mysterious-looking
keyword, and even throw in a colon for good measure...

  g = (gamma: x*x for x in stuff)

:-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 24 20:43:15 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 24 20:43:31 2004
Subject: [Python-Dev] Possible resolution of generator expression variable
	capture dilemma
In-Reply-To: <006c01c4117e$cfa67280$f901a8c0@hirsevej.dk>
Message-ID: <200403250143.i2P1hFpu020040@cosc353.cosc.canterbury.ac.nz>

"Anders J. Munch" <andersjm@dancontrol.dk>:

> A simple change to Tim's example:
> 
>      pipe = source
>      for p in predicates:
>          filter = p.filter
>          # add a filter over the current pipe, and call that the new pipe
>          pipe = e for e in pipe if filter(e)

I was afraid someone would say something like that. :-)

This is a good point. Clearly there are cases where you want
new-binding behaviour on something that is not a loop variable. But
equally there are cases where you definitely *don't* want it, and it
seems like a bad idea to have the compiler try to guess which is
which.

Once again, Scheme has the advantage here, because it's always
explicit about whether a new binding is being created or an existing
one changed. So perhaps what we want is an explicit way of creating a
new binding.  Maybe something like

  let <var> = <expr>

Using this, the augmented Tim example becomes

   pipe = source
   for p in predicates:
     let filter = p.filter
     let pipe = e for e in pipe if filter(e)

(By using 'let' on 'pipe' as well, this will work even if the
first iterator is not pre-computed, although pre-computing it
might still be desirable for other reasons to be debated
separately. If it is pre-computed, the second 'let' would not be
necessary in this example.)

Now, if we're being explicit about things, it would make sense to be
explicit about what happens to the loop variable as well.  I suggest

  for each <var> in <expr>:

as a new-binding version of the for loop. The original Tim
example is then

     pipe = source
     for each p in predicates:
         let pipe = e for e in pipe if p(e)

An objection to all this might be that it won't do anything to reduce
surprises, because you have to already be aware of the possibility of
surprises and put lets and eaches in the right places to avoid them.

But that's no worse than the way things are, where you have to be
aware of the possibility of surprises and do awkward things with
lambdas or extra function defs to avoid them.

The difference is that there will be a simple and straightforward way
of avoiding the surprises, rather than an obscure and hard to read
one.
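
(For the record, the 'extra function def' workaround I mean looks something
like this, using the parenthesised genexp syntax and the names from the
example above; the def exists purely to give each filter its own binding:)

def filtered(pipe, filter):
    # the extra def freezes 'pipe' and 'filter' here and now, instead of
    # leaving them as free variables looked up when the result is consumed
    return (e for e in pipe if filter(e))

pipe = source
for p in predicates:
    pipe = filtered(pipe, p.filter)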

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From bob at redivi.com  Wed Mar 24 21:02:12 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 24 20:58:50 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <20040324152821.BF92.JCARLSON@uci.edu>
References: <20040323224817.E8BD.JCARLSON@uci.edu>
	<eppstein-FEACF2.14124824032004@sea.gmane.org>
	<20040324152821.BF92.JCARLSON@uci.edu>
Message-ID: <6EABFDCA-7E00-11D8-BB21-000A95686CD8@redivi.com>

On Mar 24, 2004, at 6:37 PM, Josiah Carlson wrote:

>>> You were there and helped point out some of the limitations of the
>>> original code I posted, which now has a _straightforward_ recursive
>>> implementation here:
>>> http://mail.python.org/pipermail/python-dev/2004-March/043357.html
>>
>> Heh, another use for the class variant of PEP 318.  Josiah's code
>> depends on knowing which classes have immutable instances, using a
>> hardcoded set of builtin types.  With PEP318, one could do
>>
>> class foo [immutable]:
>>     ...
>>
>> with an appropriate definition of immutable that either decorates the
>> class object or adds to the set of known immutables.  Perhaps also 
>> with
>> code to catch and warn against obvious attempts at mutation of foos...
>
> One could even include the disclaimer that any code that modifies an
> instance that is supposed to be immutable is inherently broken and is
> not supported.  Of course the testing before and after method calls and
> attribute access would be difficult, if not impossible with current
> Python.

it could also be simply:

class foo [provides(Immutability)]:
     pass

or...

declareImplements(int, instancesImplement=(Immutability,))

Basically what you want, when you are declaring type information, is a 
way to find the type information.  You don't really know nor care if 
it's actually part of the type object.  It's actually better that it's 
not, such as when you are declaring stuff about types that you can't 
change -- like int, long, etc.  It just so happens that PyProtocols (or 
something like it, but I demonstrate PyProtocols syntax) already takes 
care of this for you.

-bob


From pje at telecommunity.com  Wed Mar 24 21:32:56 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 24 21:27:12 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <6EABFDCA-7E00-11D8-BB21-000A95686CD8@redivi.com>
References: <20040324152821.BF92.JCARLSON@uci.edu>
	<20040323224817.E8BD.JCARLSON@uci.edu>
	<eppstein-FEACF2.14124824032004@sea.gmane.org>
	<20040324152821.BF92.JCARLSON@uci.edu>
Message-ID: <5.1.0.14.0.20040324212030.03024830@mail.telecommunity.com>

At 09:02 PM 3/24/04 -0500, Bob Ippolito wrote:

>it could also be simply:
>
>class foo [provides(Immutability)]:
>     pass
>
>or...
>
>declareImplements(int, instancesImplement=(Immutability,))
>
>Basically what you want, when you are declaring type information, is a way 
>to find the type information.  You don't really know nor care if it's 
>actually part of the type object.  It's actually better that it's not, 
>such as when you are declaring stuff about types that you can't change -- 
>like int, long, etc.  It just so happens that PyProtocols (or something 
>like it, but I demonstrate PyProtocols syntax) already takes care of this 
>for you.

Technically, if/when PyProtocols supports decorators, it'll probably be 
spelled:

     class foo [
         protocols.instances_provide(Immutability)
     ]:
         ...

or maybe:

     class foo [
         protocols.implements(Immutability)
     ]:
         ...

to distinguish from the class *itself* providing immutability.  I've been 
trying to promote a Python-wide terminology that distinguishes between 
"providing X" (being an object that does X), "supporting X" (either 
providing X or being adaptable to X), and "implementing X" (being a type 
whose instances provide X).  Zope X3 and PyProtocols both conform to this 
terminology in their APIs, at least with respect to provides vs. implements.

I do see a use for a 'provides()' decorator, though, in relation to 
functions.  E.g.:

def something(x,y) [protocols.provides(ICallableWithTwoArgs)]:
     ...

Too bad there's no way to do this with modules, which are the only 
remaining thing PyProtocols uses stack inspection to annotate.  But I can't 
even begin to guess how one would create a module decorator syntax.  :)


From guido at python.org  Wed Mar 24 22:29:22 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 24 22:29:31 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: Your message of "Wed, 24 Mar 2004 21:19:22 +0100."
	<200403242019.i2OKJMC15465@twilight.domainfactory.de> 
References: <200403242019.i2OKJMC15465@twilight.domainfactory.de> 
Message-ID: <200403250329.i2P3TMB17371@guido.python.org>

> At pycon, I have been looking into Python startup time. 
> 
> I found that CVS-Python allocates roughly 12,000 string objects on
> startup, whereas Python 2.2 only allocates 8,000 string objects. In
> either case, most strings come from unmarshalling string objects,
> and the increase is (probably) due to the increased number of
> modules loaded at startup (up from 26 to 34).

But is this really where the time goes?  On my home box (~11K
pystones/second) I can allocate 12K strings in 17 msec.

> The string objects allocated during unmarshalling are often quickly 
> discarded after being allocated, as they are identifiers, and get
> interned - so only the interned version of the string survives, and
> the second copy is deallocated.
> 
> I'd like to change the marshal format to perform sharing of equal
> strings, instead of marshalling the same identifiers multiple times.
> To do so, a dictionary of strings is created on marshalling and a
> list is created on unmarshalling, and a new marshal code for
> string-backreference would be added.
> 
> What do you think?

Feels like a rather dicey incompatible change to marshal, and rather a
lot of work unless you know it is going to make a significant change.
It seems that marshalling would have to become a two-pass thing,
unless you want to limit that dict/list to function scope, in which
case I'm not sure it'll make much of a difference.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From nbastin at opnet.com  Wed Mar 24 22:36:06 2004
From: nbastin at opnet.com (Nick Bastin)
Date: Wed Mar 24 22:36:14 2004
Subject: [Python-Dev] Patch to speed up python 2.4 startup time
Message-ID: <8C9FDFAE-7E0D-11D8-8113-000393CBDF94@opnet.com>

I've put a patch up at sourceforge (#922881) which is a somewhat hackish 
change in warnings.py that gets back about 50% of the startup time we 
lost between 2.2.2 and 2.3.x, while avoiding the import deadlock 
problem with delayed import.  The general feeling that I get from the 
developers that I've surveyed is that it sounds/looks good, but I 
wanted some more people to take a look at it.  I'm also working on a 
plan for solving the import deadlock problem in the long term 
(per-module locking - more on that later).

This patch does pass all tests.

The timings are below:

PYTHON STARTUP TIME - 100 INVOCATIONS (no site.py)

-------------------------------------
Python 2.2.2 release tree, 9:06, 4/24/2004
-------------------------------------
3.450u 2.200s 0:05.96 94.7%     0+0k 0+0io 0pf+0w
3.410u 2.270s 0:05.96 95.3%     0+0k 0+0io 0pf+0w
3.400u 2.340s 0:05.92 96.9%     0+0k 0+0io 0pf+0w
-------------------------------------------------
3.420u 2.270s 0:05.95
-------------------------------------------------

-------------------------------------
Current 2.4 tree, 21:30, 04/23/2004
-------------------------------------
5.410u 4.310s 0:10.50 92.5%     0+0k 0+0io 0pf+0w
5.680u 4.270s 0:10.49 94.8%     0+0k 0+0io 0pf+0w
5.720u 4.330s 0:10.52 95.5%     0+0k 0+0io 0pf+0w
-------------------------------------------------
5.603u 4.303s 0:10.50
-------------------------------------------------

-------------------------------------
Current 2.4 with warnings.py patch
-------------------------------------
4.380u 3.450s 0:08.23 95.1%     0+0k 0+0io 0pf+0w
4.420u 3.410s 0:08.31 94.2%     0+0k 0+0io 0pf+0w
4.580u 3.190s 0:08.24 94.2%     0+0k 0+0io 0pf+0w
-------------------------------------------------
4.460u 3.350s 0:08.26
-------------------------------------------------

--
Nick


From tim.one at comcast.net  Wed Mar 24 23:27:54 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 24 23:28:03 2004
Subject: [Python-Dev] redefining is
In-Reply-To: <000001c41152$baca5830$29714d0c@ARKLAPTOP>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFPJMAB.tim.one@comcast.net>

[Andrew Koenig]
> ...
> If I understand things correctly, if I evaluate
>
> 	x = (3, 4)
> 	y = (3, 4)
>
> the implementation is permitted to bind x and y to the same object,
> or not, as it pleases.

Right.

> If you like, all instances of (3, 4) are equivalent and the
> implementation is permitted to chose any representative of that
> equivalence class.

Yup.

> I think the interesting use case is whenever one wants to test for
> membership in that equivalence class.

Which equivalence class?  If it's the equivalence class of objects the
implementation is permitted to bind to y, then that's the set of 2-tuples
containing 3 at index 0 and 4 at index 1.  I don't believe the language
definition is actually clear about whether, e.g., (3L, 4L) is in that class,
although I'm sure it's Guido's intent that (3L, 4L) is not in that class.

> Now, you will probably say that == does the trick: if x is a member
> of the equivalence class of (3, 4), then x == y is True if and only
> if y is also a member of that equivalence class.

== ain't it.  For example, it could be that (x, x) == (3, 4), where x is an
instance of a perverse user-defined class that arranges to compare equal to
3 or 4 on alternating __eq__ calls.  Less strained, (3+0j, 4+0j) == (3, 4),
but the implementation can't return the former here either.

> However, consider this:
>
> 	a = []
> 	x = (3, a)
> 	y = (3, a)
>
> I claim that x and y are *still* members of an equivalence class in
> exactly the same sense that they were members before --

Well, they may well be members of *an* equivalence class, depending on how
the equivalence relation is defined.  If you're claiming that they're
members of "the" equivalence class Python's implementation is free to draw
from ("exactly the same sense" seems to imply that's your claim), then I
think Guido would disagree:  I believe Guido will say that

    x is y

cannot be true after the code snippet above.  And we really have to ask him
that, because I don't think the "Objects, values and types" section in the
language reference manual is clear enough to nail it.

> aside from checking the identity of x and y, there is no way of
> distinguishing them.

There are potentially other ways apart from checking "x is y" or "id(x) ==
id(y)" directly.  Like they may have different refcounts (which CPython
exposes), gc.get_referrers() may return different lists, even
gc.get_referents() may return different objects (depending on whether the
same instance of 3 appears in both).

> However, the test x == y no longer determines whether x and y are
> members of the same equivalence class.

But == never did, and I remain fuzzy on whether you're trying to capture a
Python-defined equivalence class, or a more-or-less arbitrary equivalence
class of your own invention.

> Indeed, there is no built-in test that will determine whether x and
> y are in the same equivalence class.

I believe that.  What I don't know is why I'd want to write Python code that
can determine it.

> Finally:
>
> 	x = [3, 4]
> 	y = [3, 4]
>
> Now x and y are *not* in the same equivalence class, because changing
> one of them doesn't change the other.  Nevertheless, x == y yields
> True.
>
>
> So what I think is the interesting use case is this:  You evaluate two
> expressions, for which the implementation is permitted to return the
> same object twice or two distinct members of the same equivalence
> class, or perhaps two distinct objects.  How can one determine
> straightforwardly whether those objects are members of the same
> equivalence class?

I don't have a use case for answering that very question -- that's what I
keep asking about.

> If the objects' value is (3, 4), you can use ==.

Nope (see above).

> If the objects' value is (3, a), there's no really easy way to do it.

Not even if it's (3, 4).

> If the objects' value is [3, 4], you can use 'is'.

Yes.

> I think it would be useful to have an operation that is as easy to
> use as 'is' that would make the determination.

Why <wink>?


From tim.one at comcast.net  Thu Mar 25 00:13:47 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 25 00:13:52 2004
Subject: [Python-Dev] Possible resolution of generator expression
	variablecapture dilemma
In-Reply-To: <200403241330.i2ODU8714717@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGEJMAB.tim.one@comcast.net>

[Guido]
> If I had known Scheme 15 years ago I might have considered this --
> ...

Yes, but you would have rejected it in favor of Python's original 3-level
scoping then anyway.  That was an experiment worth trying!  What you would
have learned from Scheme 15 years ago is that combining nested scopes with
indefinite extents is:

   - subtle
   - the subtleties are crucial
   - intent can't be guessed

So Scheme did a very Pythonic thing here, requiring the programmer to
identify intended scope explicitly (Scheme never guesses about this -- there
are no inferences about intended scope, only explicit scope declarations).

Python didn't need that for its original scoping scheme, and it was no sin
then to leave it out.  The more Python tries to accommodate delayed code
blocks (generator expressions are an instance of that, and it's not the
"generator" part that's creating the problems), though, the more users will
struggle with the lack of explicit scoping.  The "delayed binding" guess
usually works in the simplest examples (but so does the "early binding"
guess), while nested "let" blocks are usually absent in the simplest Scheme
functions.  People never leave well enough alone, though, so I hope
generator expressions don't become another "minor convenience" that's viewed
as a "minor annoyance" a year later.


From blunck at gst.com  Thu Mar 25 00:21:21 2004
From: blunck at gst.com (Christopher Blunck)
Date: Thu Mar 25 00:21:25 2004
Subject: [Python-Dev] Snake-Pit: Cataloging Python builds on different
	platforms
Message-ID: <20040325052121.GA21451@homer.gst.com>

Hi all-

<Disclaimer>
Apologies in advance if this is off topic for python-dev.  I spoke with
Brett Cannon today at PyCON regarding this topic and he said to build something,
and bring it up with pydev when I had something.  I have something so I'm
bringing it up.
</Disclaimer>


Regarding snake-pit . . .

Python is a great cross-platform language.  It runs on all kinds of different
operating systems and architectures.  Staying on top of new operating systems
and architectures is challenging - just ask any of the people that managed
the snake farm.  However, I believe that it's crucially important for us to
know what platforms the latest Python runs on.  I'm specifically referring to
the HEAD tag in the python cvs module on sourceforge.

While at PyCON today I looked around and was really impressed by the variety
of architectures and operating systems that attendees ran on their laptops.  I
thought that if this many people ran this wide a variety on their laptops, 
there may be even more platforms that python enthusiasts run at home.  And
maybe they'd be willing to share some CPU cycles to help the Python 
development effort.

Although low, there are barriers to entry for building Python from source.
You have to know the CVSROOT, know how to check out the code, know which
directory the code resides in, and know how to build it.

Snake-pit is a simple shell script that does all of this for you - it 
checks out the Python source code anonymously, builds it, tests it, and
produces pretty output:
[chris@titan snake-pit]$ ./build.sh
Processing branch: main
Downloading code... done.
Building: clean, configure, compile.
Testing... done.
Results:
            Failed:  1
           Skipped:  32
  Skips Unexpected:  2
            Passed:  239
  
            Failed:  test_repr


What I'd like to do is begin catalog'ing this information on a nightly basis
on different platforms.  The goal is:  at any given time, we'll know which
platforms Python has been compiled and tested on.  After the information
has been catalog'ed, we can do a variety of transformations of it - we can
post it on a webpage or mail it to a listserve (if an error occurs).


I'm writing to solicit feedback for what information would be useful to capture
from a Python build.  So far I'm planning on capturing:
  Platform Category (Linux, BSD, Solaris)
  Distribution (RedHat, FreeBSD, Solaris)
  Version (9.0, 5.2.1, 9)
  Kernel (2.4.22-1.2174, ??, ??)
  SMP?

I'm interested in any feedback (positive and negative) that anybody would be
willing to provide.  Is this fool's gold, or is it something that could be
useful?  If useful, how could it be *very* useful?


I'll be at PyCON tomorrow, but if you have comments and don't want to talk in
person the mailing list will be fine.


-c

-- 
 00:00:00  up 114 days, 13:44, 25 users,  load average: 0.85, 0.54, 0.35

From tim.one at comcast.net  Thu Mar 25 00:30:44 2004
From: tim.one at comcast.net (Tim Peters)
Date: Thu Mar 25 00:30:49 2004
Subject: [Python-Dev] Possible resolution of generator expression
	variablecapture dilemma
In-Reply-To: <200403250143.i2P1hFpu020040@cosc353.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGFJMAB.tim.one@comcast.net>

BTW, have we had an example of a generator expression where late binding was
semantically necessary?  The only one I recall was a strained one along the
lines of:

    sqs = (sq(i) for i in range(n))

    def sq(i):
        return i**2

Late binding was needed then because the generator expression referenced a
name not yet bound when the g-e was evaluated.

Is the popular plan now that this "will work", but provided that "n" is
bound before the g-e is evaluated?  That is, this works:

    n = 10
    sqs = (sq(i) for i in range(n))
    def sq(i): ...
    print list(sqs)

but this doesn't work:

    sqs = (sq(i) for i in range(n))
    n = 10  # swapped with the line above
    def sq(i): ...
    print list(sqs)

and this uses different bindings for the two lexical instances of "n" in the
g-e:

    n = 10
    sqs = (sq(i+n) for i in range(n))  # 10 used in range(n), 20 in i+n
    n = 20
    def sq(i): ...
    print list(sqs)

?

Not wondering how bad it can get, just wondering how bad it starts <wink>.


From python at rcn.com  Thu Mar 25 02:34:10 2004
From: python at rcn.com (Raymond Hettinger)
Date: Thu Mar 25 02:36:37 2004
Subject: [Python-Dev] Possible resolution of generator
	expressionvariablecapture dilemma
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEGEJMAB.tim.one@comcast.net>
Message-ID: <00e601c4123b$90cad980$10b99d8d@oemcomputer>

> People never leave well enough alone, though, so I hope
> generator expressions don't become another "minor convenience" that's
> viewed
> as a "minor annoyance" a year later.

I do not view genexps as "convenient" short-forms for real generators.
Instead, they represent memory/resource friendly versions of list comps
which have been fabulously successful and have helped move us away from
map/lambda forms.  I believe they will make the language more fluid,
cohesive, and expressive.

Once genexps are in, I think everyone will be pleasantly surprised at
how readily the new gizmos work together and how easily they mesh with
what was there before.  Behind the scenes genexps will offer the added
benefit of scalability, handling growing volumes of data without
clobbering memory.  So please, let's don't get any wild ideas about
throwing the baby out with the bathwater.

In deciding on semantics, we would all rather trade away some power in
exchange for greater clarity about what the expression will do without
having to run it.  Put another way, human interpretability is more
important than supporting weird usages.  With that criterion,
early-binding is looking to be the least nuanced approach.

wishing-i-was-in-DC-ly yours,


Raymond


From python at rcn.com  Thu Mar 25 02:45:08 2004
From: python at rcn.com (Raymond Hettinger)
Date: Thu Mar 25 02:47:29 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <20040324152821.BF92.JCARLSON@uci.edu>
Message-ID: <00eb01c4123d$18e54480$10b99d8d@oemcomputer>

> > Heh, another use for the class variant of PEP 318.  Josiah's code
> > depends on knowing which classes have immutable instances, using a
> > hardcoded set of builtin types.  With PEP318, one could do
> >
> > class foo [immutable]:
> >     ...
> >
> > with an appropriate definition of immutable that either decorates
the
> > class object or adds to the set of known immutables.  Perhaps also
with
> > code to catch and warn against obvious attempts at mutation of
foos...

This PEP is losing its innocence.

Having been repeatedly confronted with FUD arguments in other contexts,
I hate to put on that hat, but I do not think it wise to support too
many varieties of weirdness.  Heck, we've already got meta-classes for
that.


> One could even include the disclaimer that any code that modifies an
> instance that is supposed to be immutable is inherently broken and is
> not supported

<fud>
I'm sensing an unnamed code smell.
</fud>



Raymond


From jiwon at softwise.co.kr  Thu Mar 25 04:07:45 2004
From: jiwon at softwise.co.kr (jiwon)
Date: Thu Mar 25 04:09:40 2004
Subject: [Python-Dev] Possible resolution of
	generatorexpressionvariablecapture dilemma
References: <00e601c4123b$90cad980$10b99d8d@oemcomputer>
Message-ID: <05d701c41248$b5a71db0$d70aa8c0@jiwon>

[Raymond]
> In deciding on semantics, we would all rather trade away some power in
> exchange for greater clarity about what the expression will do without
> having to run it.  Put another way, human interpretability is more
> important than supporting weird usages.  With that criterion,
> early-binding is looking to be the least nuanced approach.

(If I may offer my opinion on this,) I agree that human interpretability is
more important than supporting weird - or seldom-used - usages, and that
early binding (but without any precomputation) is the clearest approach.
Furthermore, I think closures should capture variables, too, but it seems
that's too much change for now :)

Regards, jiwon.


From FBatista at uniFON.com.ar  Thu Mar 25 07:03:17 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Thu Mar 25 07:05:03 2004
Subject: [Python-Dev] Snake-Pit: Cataloging Python builds on different
	platforms
Message-ID: <A128D751272CD411BC9200508BC2194D033837BC@escpl.tcp.com.ar>

#- Python is a great cross-platform language.  It runs on all 
#- kinds of different
#- operating systems and architectures.  Staying on top of new 
#- operating systems
#- and architectures is challenging - just ask any of the 
#- people that managed
#- the snake farm.  However, I believe that it's crucially 
#- important for us to
#- know what platforms the latest Python runs on.  I'm 
#- specifically referring to
#- the HEAD tag in the python cvs module on sourceforge.

I think it's interesting to know which is the last Python that runs on a
given platform (for example: I think that on the Palm the last Python ported
is 1.5).

Don't know if it's in the scope of your project.


#- I'm writing to solicit feedback for what information would 
#- be useful to capture
#- from a Python build.  So far I'm planning on capturing:
#-   Platform Category (Linux, BSD, Solaris)
#-   Distribution (RedHat, FreeBSD, Solaris)
#-   Version (9.0, 5.2.1, 9)
#-   Kernel (2.4.22-1.2174, ??, ??)
#-   SMP?

Benchmarks? If not exhaustive, at least something that gives you an idea of
"how well it runs there". Don't know how to deal with the machine
performance differences.

.	Facundo

From martin at v.loewis.de  Thu Mar 25 08:29:29 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Thu Mar 25 08:29:40 2004
Subject: [Python-Dev] Python startup time: String objects
Message-ID: <200403251329.i2PDTT002721@twilight.domainfactory.de>

> But is this really where the time goes?  On my home box (~11K
> pystones/second) I can allocate 12K strings in 17 msec.

I have now implemented this change, and it gives 2% speed-up for
an empty source file. The change is at http://python.org/sf/923098.

> It seems that marshalling would have to become a two-pass thing,
> unless you want to limit that dict/list to function scope, in which
> case I'm not sure it'll make much of a difference.

It's actually simpler than that. Marshalling recursively marshals code
objects that have already been created, and traverses them all.
So a single dict/list is sufficient.  The change is 60 lines in marshal.c.

Regards,
Martin

From guido at python.org  Thu Mar 25 08:48:56 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 08:49:02 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: Your message of "Thu, 25 Mar 2004 14:29:29 +0100."
	<200403251329.i2PDTT002721@twilight.domainfactory.de> 
References: <200403251329.i2PDTT002721@twilight.domainfactory.de> 
Message-ID: <200403251348.i2PDmuZ18641@guido.python.org>

> > But is this really where the time goes?  On my home box (~11K
> > pystones/second) I can allocate 12K strings in 17 msec.
> 
> I have now implemented this change, and it gives 2% speed-up for
> an empty source file. The change is at http://python.org/sf/923098.
> 
> > It seems that marshalling would have to become a two-pass thing,
> > unless you want to limit that dict/list to function scope, in which
> > case I'm not sure it'll make much of a difference.
> 
> It's actually simpler than that. Marshalling recursively marshals
> code objects that have already been created, and traverses them all.
> So a single dict/list is sufficient. The change is 60 lines in
> marshal.c

Ah, I misunderstood.  The dict/list isn't marshalled separately; it
is built up during (un)marshalling.  Clever.  (Pickling does a similar
thing.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From hyperdivision at icqus.dyndns.org  Tue Mar 23 06:32:21 2004
From: hyperdivision at icqus.dyndns.org (Hyperdivision)
Date: Thu Mar 25 09:42:40 2004
Subject: [Python-Dev] Objects/bufferobject.c
Message-ID: <40602045.mail88Q1K43LU@12-214-20-149.client.mchsi.com>


i'm using gcc-cvs and i get "invalid lvalue..."
in Objects/bufferobject.c, line 46
i fix this by deleting the cast to char on the left side of =
(my c's not so sharp, so i'm not sure this is the way to fix it)
but i just wanted to point out the problem hoping someone
might know what to do with it...

hyper.~

From ncoghlan at iinet.net.au  Wed Mar 24 09:12:37 2004
From: ncoghlan at iinet.net.au (ncoghlan@iinet.net.au)
Date: Thu Mar 25 09:42:48 2004
Subject: [Python-Dev] order of decorator application?
In-Reply-To: <5.1.1.6.0.20040323112805.02e782f0@telecommunity.com>
References: <5.1.1.6.0.20040323112805.02e782f0@telecommunity.com>
Message-ID: <1080137557.406197554c823@mail.iinet.net.au>

Quoting "Phillip J. Eby" <pje@telecommunity.com>:

> At 10:24 AM 3/23/04 -0600, Skip Montanaro wrote:
> 
> >(I'm working on PEP 318...)
> >
> >
> >     def func(a,b):
> >         pass
> >     func = d2(d1(func))
> 

Another phrasing which makes L2R the 'obvious' evaluation order - don't nest 
the functions when spelling out the long form (i.e. unrolling Phillip's loop):
  def func(a,b):
    pass
  func = d1(func)
  func = d2(func)

The only way the issue is ever confused is when all the decorators are applied 
on the one line in the 'long spelling', in which case the nesting inverts the 
order of the functions. So if we refrain from ever doing that in the official 
documentation, there isn't likely to be as much confusion over the ordering.
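
A tiny runnable check that the innermost decorator really is applied first:

def d1(func):
    print 'applying d1'
    return func

def d2(func):
    print 'applying d2'
    return func

def func(a, b):
    pass

func = d2(d1(func))   # prints 'applying d1', then 'applying d2'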

Regards,
Nick.


From m_schellens at hotmail.com  Thu Mar 25 09:25:16 2004
From: m_schellens at hotmail.com (Marc Schellens)
Date: Thu Mar 25 09:42:54 2004
Subject: [Python-Dev] embedding python with numarray
Message-ID: <4062EBCC.6030805@hotmail.com>

Using Python 2.3.3,
trying to embed python with numarray,
my C++ application crashes (segmentation fault) at "import_libnumarray()",
(I c&p the macro; it's within the macro at:
PyObject *module =
PyImport_ImportModule("numarray.libnumarray");
)

when I:

    1.
       initialize python and numarray like:
    ...
       Py_Initialize();
       import_libnumarray();
    ...

    2.
       load another module like:

         PyObject* pModule = PyImport_Import(pName);


    3.
    call Py_Finalize()
    and do 1. again.

The place where it crashes is:
Objects/typeobject.c
line 3004:
if (basebase->tp_as_sequence == NULL)


If I call import_libnumarray() only once (i.e. calling Py_Initialize()
without import_libnumarray() the second time),
I cannot use numarray:

Traceback (most recent call last):
   File "/usr/local/gdl/gdl/test.py", line 5, in testfun
     print a
   File 
"/usr/local/lib/python2.3/site-packages/numarray/numarraycore.py", line 
660, in __str__
     return array_str(self)
TypeError: 'NoneType' object is not callable


Any suggestions?
How can numarray be restarted properly after a Py_Finalize() ?
Thanks,
marc


From tjreedy at udel.edu  Thu Mar 25 10:32:22 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu Mar 25 10:32:23 2004
Subject: [Python-Dev] Re: order of decorator application?
References: <5.1.1.6.0.20040323112805.02e782f0@telecommunity.com>
	<1080137557.406197554c823@mail.iinet.net.au>
Message-ID: <c3uu20$2dp$1@sea.gmane.org>


<ncoghlan@iinet.net.au> wrote in message
news:1080137557.406197554c823@mail.iinet.net.au...
> Another phrasing which makes L2R the 'obvious' evaluation order - don't
nest
> the functions when spelling out the long form (i.e. unrolling Phillip's
loop):
>   def func(a,b):
>     pass
>   func = d1(func)
>   func = d2(func)

I also regard [deco1, deco2, deco3] as defining a unix-style pipeline:

deco1 <f  | deco2 | deco3 >f

Terry J. Reedy




From jack at performancedrivers.com  Thu Mar 25 10:35:13 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Thu Mar 25 10:35:19 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <00eb01c4123d$18e54480$10b99d8d@oemcomputer>
References: <20040324152821.BF92.JCARLSON@uci.edu>
	<00eb01c4123d$18e54480$10b99d8d@oemcomputer>
Message-ID: <20040325153513.GA5089@performancedrivers.com>

On Thu, Mar 25, 2004 at 02:45:08AM -0500, Raymond Hettinger wrote:
> > > Heh, another use for the class variant of PEP 318.  Josiah's code
> > > depends on knowing which classes have immutable instances, using a
> > > hardcoded set of builtin types.  With PEP318, one could do
> > >
> > > class foo [immutable]:
> > >     ...
> > >
> > > with an appropriate definition of immutable that either decorates
> the
> > > class object or adds to the set of known immutables.  Perhaps also
> with
> > > code to catch and warn against obvious attempts at mutation of
> foos...
> 
> This PEP is losing its innocence.
> 
> Having been repeatedly confronted with FUD arguments in other contexts,
> I hate to put on that hat, but I do not think it wise to support too
> many varieties of weirdness.  Heck, we've already got meta-classes for
> that.

class decorators would make my code clearer, not weirder, because I would
all but stop using metaclasses.  If someone were talking about inheriting
decorators, _that_ would be doubling the overall weirdness.

What he wants to do is weird; here is how I use the same effect
for something that is much saner.

def rebuild_nightly(cls):
  nightly_reports.append(cls)
  return cls

# without decorators
class NewUsersReport(Report):
  # fifty lines of class body
rebuild_nightly(NewUsersReport)

# with decorators
class NewUsersReport(Report) [rebuild_nightly]:
  # fifty lines of class body

Without decorators, it is easy to lose the rebuild_nightly() call.
And then my report doesn't run.
You can do it using metaclasses, but then the magic is hidden very far
away in the Report's metaclass.  Class decorators say loudly that
the class that is about to be defined is modified or participates in X
right at the top where everyone can see it.

I currently do do this with metaclasses, because I've accidentally
left out or deleted the rebuild_nightly() call that was hidden far
away from the Report definition.  That hurts.
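
(For reference, the metaclass version I'm describing looks roughly like
this; the names are made up:)

nightly_reports = []

class ReportType(type):
    def __init__(cls, name, bases, clsdict):
        super(ReportType, cls).__init__(name, bases, clsdict)
        # the registration magic hides here, nowhere near the class body
        if bases != (object,):          # don't register the abstract base
            nightly_reports.append(cls)

class Report(object):
    __metaclass__ = ReportType

class NewUsersReport(Report):
    pass  # fifty lines of class body

# NewUsersReport is now registered without any call at the definition site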

> 
> > One could even include the disclaimer that any code that modifies an
> > instance that is supposed to be immutable is inherently broken and is
> > not supported
> 
> <fud>
> I'm sensing an unnamed code smell.
> </fud>

Some people will use decorators instead of metaclasses to do odd things.
People currently eval() docstrings, use preprocessors, and do zany things
with stack frames.  That they might use decorators for the same unsupported
'features' (like contracts) doesn't make decorators any more culpable than 
docstrings or stack frames.

My world won't collapse without decorators, but it would be better with them.

-jackdied


From jcarlson at uci.edu  Thu Mar 25 10:33:22 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Thu Mar 25 10:37:06 2004
Subject: [Python-Dev] PEP 318 (was re: redefining is)
In-Reply-To: <00eb01c4123d$18e54480$10b99d8d@oemcomputer>
References: <20040324152821.BF92.JCARLSON@uci.edu>
	<00eb01c4123d$18e54480$10b99d8d@oemcomputer>
Message-ID: <20040325072025.E2FA.JCARLSON@uci.edu>

> > > Heh, another use for the class variant of PEP 318.  Josiah's code
> > > depends on knowing which classes have immutable instances, using a
> > > hardcoded set of builtin types.  With PEP318, one could do
> > >
> > > class foo [immutable]:
> > >     ...
> > >
> > > with an appropriate definition of immutable that either decorates the
> > > class object or adds to the set of known immutables.  Perhaps also with
> > > code to catch and warn against obvious attempts at mutation of foos...
> 
> This PEP is losing its innocence.
> 
> Having been repeatedly confronted with FUD arguments in other contexts,
> I hate to put on that hat, but I do not think it wise to support too
> many varieties of weirdness.  Heck, we've already got meta-classes for
> that.

I think David was just pointing out that if one had a class decorator
that /just happened/ to do what the name suggested, it would also happen
to work nicely with the earlier code posted as to whether an object is
replaceable by a different object.


> > One could even include the disclaimer that any code that modifies an
> > instance that is supposed to be immutable is inherently broken and is
> > not supported
> 
> <fud>
> I'm sensing an unnamed code smell.
> </fud>

I've not really seen any code that makes an attempt to be read-only, but
has some hooks to allow it to be written, so I don't know if your
properly named FUD is warranted or not.  Documentation-wise, it would
make sense to insert something like 'anything that modifies an
immutable through holes in the interface is broken', if in the future
Python decides to have some sort of immutability decorator or 'provides
immutability' syntax.

I wasn't implying anything huge, just that it is a relatively minor thing that
should be documented in the future when/if it becomes available.

 - Josiah


From guido at python.org  Thu Mar 25 10:39:15 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 10:39:19 2004
Subject: [Python-Dev] Objects/bufferobject.c
In-Reply-To: Your message of "Tue, 23 Mar 2004 05:32:21 CST."
	<40602045.mail88Q1K43LU@12-214-20-149.client.mchsi.com> 
References: <40602045.mail88Q1K43LU@12-214-20-149.client.mchsi.com> 
Message-ID: <200403251539.i2PFdFI18902@guido.python.org>

> i'm using gcc-cvs and i get "invalid lvalue..."
> in Objects/bufferobject.c, line 46
> i fix this by deleting the cast to char on the left side of =
> (my c's not so sharp, so i'm not sure this is the way to fix it)
> but i just wanted to point out the problem hoping someone
> might know what to do with it...

Hm, the GCC I have likes this just fine.  It seems you are using a
bleeding edge GCC version.  Maybe you can take this up with the GCC
developers?  It could be a bug there...

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tjreedy at udel.edu  Thu Mar 25 10:40:22 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu Mar 25 10:40:25 2004
Subject: [Python-Dev] Re: embedding python with numarray
References: <4062EBCC.6030805__100.457054986869$1080225940@hotmail.com>
Message-ID: <c3uuh0$3rm$1@sea.gmane.org>


"Marc Schellens" <m_schellens@hotmail.com> wrote in message
news:4062EBCC.6030805__100.457054986869$1080225940@hotmail.com...
> Using Python 2.3.3,
> trying to embed python with numarray,
> my C++ application crashes (segmentation fault) at
"import_libnumarray()",
[snip]

This list is for discussion of future development of Python.  At this
point, your problem looks more like a current usage question, and in
particular, a third-party extension usage question, that would be more
appropriate for comp.lang.python or even a numarray users list.

Should you determine, after discussion on either forum, that there is a bug
in either Python or numarray, you can make a bug report on the appropriate
project site at SourceForge.

TJR




From mon at abaqus.com  Thu Mar 25 10:42:06 2004
From: mon at abaqus.com (Nick Monyatovsky)
Date: Thu Mar 25 10:42:13 2004
Subject: [Python-Dev] Coverage Analysis of Python Code
In-Reply-To: Bob Ippolito <bob@redivi.com>
	"Re: [Python-Dev] PEP 318 (was re: redefining is)" (Mar 24, 21:02)
References: <20040323224817.E8BD.JCARLSON@uci.edu> 
	<eppstein-FEACF2.14124824032004@sea.gmane.org> 
	<20040324152821.BF92.JCARLSON@uci.edu> 
	<6EABFDCA-7E00-11D8-BB21-000A95686CD8@redivi.com>
Message-ID: <1040325104206.ZM3688@reliant.hks.com>

Hello,

I wonder if the technical experts in this group could answer the following
question: is it possible to perform coverage analysis on Python (.py) code?

Let me describe why we are looking for this. We use Python internally, and by
now people have become quite fond of it.  So fond, in fact, that they would often prefer
to code in Python rather than C++ whenever they can. Thus, the body of the
Python code keeps growing and growing.

This is good, and this is bad. The bad part of it is that if there are errors
in the Python code, we will only find them when the execution path goes through
it.
We use pychecker, but it still does not help us catch all the errors, and many
of them slip through and still show up at run-time. Obviously, the only
defense that remains is the thorough testing. This part is mostly left at the
discretion of individual programmers. Moreover, even if they are very careful
about unit-testing individual functions, there are always unforeseen
permutations which can turn out fatal.

This is why we thought it would be really nice to have the ability to keep
track of the coverage of the ever growing body of the Python code that we use.
By this I mean only the .py/.pyc files. We have a fairly good technique for
tracking coverage in C/C++.

If such detailed coverage is not possible, is there a way, at least, to get a
brief summary of what classes/methods/functions we have defined, where they are
being defined and where they are being called from?

Any pointers will be greatly appreciated.

Thank you,  -- Nick Monyatovsky -- mon@abaqus.com

From guido at python.org  Thu Mar 25 10:41:26 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 10:42:56 2004
Subject: [Python-Dev] embedding python with numarray
In-Reply-To: Your message of "Thu, 25 Mar 2004 23:25:16 +0900."
	<4062EBCC.6030805@hotmail.com> 
References: <4062EBCC.6030805@hotmail.com> 
Message-ID: <200403251541.i2PFfQM18913@guido.python.org>

> Using Python 2.3.3,
> trying to embed python with numarray,
> my C++ application crashes (segmentation fault) at "import_libnumarray()",
> (I c&p the macro, its within the macro at:
> PyObject *module =
> PyImport_ImportModule("numarray.libnumarray");
> )
> 
> when I:
> 
>     1.
>        initalize python and numarray like:
>     ...
>        Py_Initialize();
>        import_libnumarray();
>     ...
> 
>     2 .
>        load another module like:
> 
>          PyObject* pModule = PyImport_Import(pName);
> 
> 
>     3.
>     call Py_Finalize()
>     and do 1. again.
> 
> The palce where it crashes is:
> Objects/typeobject.c
> line 3004:
> if (basebase->tp_as_sequence == NULL)
> 
> 
> If I call import_libnumarray() only once (ie calling Py_Initialize()
> without import_libnumarray() the second time,
> I cannot use numarray:
> 
> Traceback (most recent call last):
>    File "/usr/local/gdl/gdl/test.py", line 5, in testfun
>      print a
>    File 
> "/usr/local/lib/python2.3/site-packages/numarray/numarraycore.py", line 
> 660, in __str__
>      return array_str(self)
> TypeError: 'NoneType' object is not callable
> 
> 
> Any suggestions?
> How can numarray be restarted properly after a Py_Finalize() ?

This seems to be a numarray problem.  I don't have the numarray source
handy; I suggest that you bring this under the attention of the
numarray developers.  If they can't figure out what they need to fix
we're here to help.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Thu Mar 25 10:53:06 2004
From: skip at pobox.com (Skip Montanaro)
Date: Thu Mar 25 10:53:23 2004
Subject: [Python-Dev] Coverage Analysis of Python Code
In-Reply-To: <1040325104206.ZM3688@reliant.hks.com>
References: <20040323224817.E8BD.JCARLSON@uci.edu>
	<eppstein-FEACF2.14124824032004@sea.gmane.org>
	<20040324152821.BF92.JCARLSON@uci.edu>
	<6EABFDCA-7E00-11D8-BB21-000A95686CD8@redivi.com>
	<1040325104206.ZM3688@reliant.hks.com>
Message-ID: <16483.98.142605.636362@montanaro.dyndns.org>


    Nick> I wonder if the technical experts in this group could answer the
    Nick> following question: is it possible to perform coverage analysis on
    Nick> Python (.py) code?

Look at the trace module.  It can be used both as a Python module and as a
main program.
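
A minimal example of the programmatic interface (the module and entry point
names below are just placeholders):

import trace

# count executed lines, but don't echo each line as it runs
tracer = trace.Trace(count=1, trace=0)
tracer.run('import mymodule; mymodule.main()')

# write per-module *.cover files, with never-executed lines marked,
# plus a summary of coverage percentages
results = tracer.results()
results.write_results(show_missing=True, summary=True, coverdir='.')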

(Followups to python-list@python.org/comp.lang.python please.  This really a
Python usage question, not a question about developing the Python
distribution.)

Skip

From gmccaughan at synaptics-uk.com  Thu Mar 25 10:54:00 2004
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Thu Mar 25 10:54:07 2004
Subject: [Python-Dev] Objects/bufferobject.c
In-Reply-To: <200403251539.i2PFdFI18902@guido.python.org>
References: <40602045.mail88Q1K43LU@12-214-20-149.client.mchsi.com>
	<200403251539.i2PFdFI18902@guido.python.org>
Message-ID: <200403251554.00393.gmccaughan@synaptics-uk.com>

On Thursday 2004-03-25 15:39, Guido van Rossum wrote:

["Hyperdivision":]
> > i'm using gcc-cvs and i get "invalid lvalue..."
> > in Objects/bufferobject.c, line 46
> > i fix this by deleting the cast to char on the left side of =
> > (my c's not so sharp, so i'm not sure this is the way to fix it)
> > but i just wanted to point out the problem hoping someone
> > might know what to do with it...

[Guido:]
> Hm, the GCC I have likes this just fine.  It seems you are using a
> bleeding edge GCC version.  Maybe you can take this up with the GCC
> developers?  It could be a bug there...

According to the draft C9x standard (the only C standard
I have to hand), "a cast does not yield an lvalue". I think
that line should say

    *ptr = (void*)((char *)*ptr + offset);

which I assume is the intended meaning.

-- 
g


From guido at python.org  Thu Mar 25 10:57:00 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 10:57:06 2004
Subject: [Python-Dev] Coverage Analysis of Python Code
In-Reply-To: Your message of "Thu, 25 Mar 2004 10:42:06 EST."
	<1040325104206.ZM3688@reliant.hks.com> 
References: <20040323224817.E8BD.JCARLSON@uci.edu>
	<eppstein-FEACF2.14124824032004@sea.gmane.org>
	<20040324152821.BF92.JCARLSON@uci.edu>
	<6EABFDCA-7E00-11D8-BB21-000A95686CD8@redivi.com> 
	<1040325104206.ZM3688@reliant.hks.com> 
Message-ID: <200403251557.i2PFv0S18982@guido.python.org>

> I wonder if the technical experts in this group could answer the following
> question: it is possible to perform coverage analysis on python (py) code?

Yes, see the trace module.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From mon at abaqus.com  Thu Mar 25 10:58:30 2004
From: mon at abaqus.com (Nick Monyatovsky)
Date: Thu Mar 25 10:58:41 2004
Subject: [Python-Dev] Re: Snake-Pit: Cataloging Python builds on different
	platforms
In-Reply-To: Christopher Blunck <blunck@gst.com>
	"[Python-Dev] Snake-Pit: Cataloging Python builds on different
	platforms" (Mar 25, 0:21)
References: <20040325052121.GA21451@homer.gst.com>
Message-ID: <1040325105830.ZM3798@reliant.hks.com>

>------------------- Message from Christopher Blunck --------------------
>
>  Snake-pit is a simple shell script that does all of this for you - it
>  checks out the Python source code anonymously, builds it, tests it, and
>  produces pretty output:
>  [chris@titan snake-pit]$ ./build.sh
>  Processing branch: main
>  Downloading code... done.
>  Building: clean, configure, compile.
>  Testing... done.
>  Results:
>              Failed:  1
>             Skipped:  32
>    Skips Unexpected:  2
>              Passed:  239
>
>              Failed:  test_repr
>
>
>  What I'd like to do is begin catalog'ing this information on a nightly basis
>  on different platforms.  The goal is:  at any given time, we'll know which
>  platforms Python has been compiled and tested on.  After the information
>  has been catalog'ed, we can do a variety of transformations of it - we can
>  post it on a webpage or mail it to a listserve (if an error occurs).
>
>
>  I'm writing to solicit feedback for what information would be useful to
>  capture from a Python build.  So far I'm planning on capturing:
>    Platform Category (Linux, BSD, Solaris)
>    Distribution (RedHat, FreeBSD, Solaris)
>    Version (9.0, 5.2.1, 9)
>    Kernel (2.4.22-1.2174, ??, ??)
>    SMP?
>
>  I'm interested in any feedback (positive and negative) that anybody would be
>  willing to provide.  Is this fool's gold, or is it something that could be
>  useful?  If useful, how could it be *very* useful?
>
>--------------------------------------------------------

This sounds great. The other items that matter a lot are these:

Architecture:       32-bit/64-bit/Itanium/Opteron/PowerPC/Sparc/etc...
Compiler & Version: GNU/Intel/etc...
Optimization Level: min/default/max
Threaded:           yes/no
Strictness Level:   min/default/max

We may also consider what to do about extensions. If the Python core compiles
and links but most of the extensions and useful modules are "skipped", that
needs to be known. Without those modules Python might be quite unusable.

-- Nick Monyatovsky -- mon@abaqus.com

From martin at v.loewis.de  Thu Mar 25 11:28:50 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Thu Mar 25 11:28:58 2004
Subject: [Python-Dev] Objects/bufferobject.c
Message-ID: <200403251628.i2PGSoq25662@twilight.domainfactory.de>

> i'm using gcc-cvs and i get "invalid lvalue..."
> in Objects/bufferobject.c, line 46
> i fix this by deleting the cast to char on the left side of =
> (my c's not so sharp, so i'm not sure this is the way to fix it)
> but i just wanted to point out the problem hoping someone
> might know what to do with it...

Thanks for pointing that out. I just fixed it in bufferobject.c 2.24 to read

*(char **)ptr = *(char **)ptr + offset;

Regards,
Martin

From martin at v.loewis.de  Thu Mar 25 11:32:29 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Thu Mar 25 11:32:38 2004
Subject: [Python-Dev] Python startup time: String objects
Message-ID: <200403251632.i2PGWT826059@twilight.domainfactory.de>

The following message was sent by Guido van Rossum <guido@python.org> on Thu, 25 Mar 2004 05:48:56 -0800.

> Ah, I misunderstood.  The dict/list isn't marshalled separately; it
> is built up during (un)marshalling.  Clever.  (Pickling does a similar
> thing.)

Indeed; this is actually a simplified version of the pickle object
sharing.

Armin suggested to remove marshal entirely and use pickle
for storing compiled byte code. It works fine with the version
of pickle that ships with Stackless, but is unfortunately slower
than marshal.

Regards,
Martin

From guido at python.org  Thu Mar 25 13:25:18 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 13:25:38 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: Your message of "Thu, 25 Mar 2004 17:32:29 +0100."
	<200403251632.i2PGWT826059@twilight.domainfactory.de> 
References: <200403251632.i2PGWT826059@twilight.domainfactory.de> 
Message-ID: <200403251825.i2PIPI619317@guido.python.org>

> Armin suggested to remove marshal entirely and use pickle
> for storing compiled byte code. It works fine with the version
> of pickle that ships with Stackless, but is unfortunately slower
> than marshal.

Even with cPickle?

I'm still worried about your patch, because it changes all uses of
marshal, not just how code objects are marshalled.  The marshal module
*is* used by other apps...

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Thu Mar 25 14:07:21 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu Mar 25 14:02:18 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: <200403251825.i2PIPI619317@guido.python.org>
References: <Your message of "Thu, 25 Mar 2004 17:32:29 +0100."
	<200403251632.i2PGWT826059@twilight.domainfactory.de>
	<200403251632.i2PGWT826059@twilight.domainfactory.de>
Message-ID: <5.1.0.14.0.20040325140452.03091ec0@mail.telecommunity.com>

At 10:25 AM 3/25/04 -0800, Guido van Rossum wrote:
> > Armin suggested to remove marshal entirely and use pickle
> > for storing compiled byte code. It works fine with the version
> > of pickle that ships with Stackless, but is unfortunately slower
> > than marshal.
>
>Even with cPickle?
>
>I'm still worried about your patch, because it changes all uses of
>marshal, not just how code objects are marshalled.  The marshal module
>*is* used by other apps...

Perhaps adding an optional protocol argument to the dump function would do 
the trick?  The default behavior would be the current behavior, and folks 
who want the speed/compactness at the cost of compatibility would be free 
to pass the argument.
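
Something along these lines, say -- just a sketch of the proposed interface;
the version argument does not exist today:

    import marshal

    codeobject = compile("x = 1", "<example>", "exec")
    data = marshal.dumps(codeobject)          # default: today's format
    # proposed: marshal.dumps(codeobject, 1)  # opt in to the compact format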


From DavidA at activestate.com  Thu Mar 25 14:00:58 2004
From: DavidA at activestate.com (David Ascher)
Date: Thu Mar 25 14:07:05 2004
Subject: [Python-Dev] test_tcl failing...
In-Reply-To: <Pine.LNX.4.30.0403170249170.9071-100000@latte.ActiveState.com>
References: <Pine.LNX.4.30.0403170249170.9071-100000@latte.ActiveState.com>
Message-ID: <40632C6A.5050203@ActiveState.com>

If anyone here at PyCon is able to show me how the test_tcl tests are 
failing on Windows, I'd love a look -- I can't reproduce the failure on 
my boxes.

--david

PS: be careful when talking about the names of test failures in public 
(as in "Trent, is your test_tcl failing?" can sound weird when said out 
loud).


From martin at v.loewis.de  Thu Mar 25 14:49:50 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Thu Mar 25 14:50:03 2004
Subject: [Python-Dev] Python startup time: String objects
Message-ID: <200403251949.i2PJnod07229@twilight.domainfactory.de>

> Even with cPickle?

AFAIR: Yes. It was not orders of magnitude slower anymore,
but significantly slower.

> I'm still worried about your patch, because it changes all uses of
> marshal, not just how code objects are marshalled.  The marshal module
> *is* used by other apps...

The change is backwards-compatible in the sense that existing files can
be unmarshalled just fine. Problems will only arise if new marshal output
is unmarshalled by old versions, which could be solved with an option
during marshalling.

Regards,
Martin

From guido at python.org  Thu Mar 25 14:58:51 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 14:58:59 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: Your message of "Thu, 25 Mar 2004 20:49:50 +0100."
	<200403251949.i2PJnod07229@twilight.domainfactory.de> 
References: <200403251949.i2PJnod07229@twilight.domainfactory.de> 
Message-ID: <200403251958.i2PJwqk19625@guido.python.org>

> The change is backwards-compatible in the sense that existing files can
> be unmarshalled just fine. Problems will only arise if new marshal output
> is unmarshalled by old versions, which could be solved with an option
> during marshalling.

Works for me.  Make sure to update py_compile.py.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jeremy at zope.com  Thu Mar 25 15:08:28 2004
From: jeremy at zope.com (Jeremy Hylton)
Date: Thu Mar 25 15:05:37 2004
Subject: [Python-Dev] Possible resolution of generator
	expressionvariablecapture dilemma
In-Reply-To: <00e601c4123b$90cad980$10b99d8d@oemcomputer>
Message-ID: <web-3446781@digicool.com>

On Thu, 25 Mar 2004 02:34:10 -0500
 "Raymond Hettinger" <python@rcn.com> wrote:
> I do not view genexps as "convenient" short-forms for real generators.
> Instead, they represent memory/resource friendly versions of list comps
> which have been fabulously successful and have helped move us away from
> map/lambda forms.  I believe they will make the language more fluid,
> cohesive, and expressive.

It would be great to revise some existing code to use
generator expressions.  I find it hard to judge their
effect, but am inclined to agree with you.  I like list
comprehensions a lot.
 
> In deciding on semantics, we would all rather trade away some power in
> exchange for greater clarity about what the expression will do without
> having to run it.  Put another way, human interpretability is more
> important than supporting weird usages.  With that criteria,
> early-binding is looking to be least nuanced approach.

If readability counts, I think the standard binding rules
make the most sense.  The examples I've seen that depend on
differences in binding all seem like edge cases to me.  I
don't expect generator expressions in for loops will be a
common usage.  Put another way, in the cases where it
matters, I don't expect either option to be obvious or
intuitive.  If there's no obvious solution, I'd rather see
the consistent one.

Jeremy

From ndbecker2 at verizon.net  Thu Mar 25 15:07:44 2004
From: ndbecker2 at verizon.net (Neal D. Becker)
Date: Thu Mar 25 17:11:07 2004
Subject: [Python-Dev] Moving from PYTHONDIR/config to pkgconfig
Message-ID: <c3ve6f$e49$1@sea.gmane.org>

Currently, Python has its own private config system that external modules
can use, which is centered around e.g., /usr/lib/python2.3/config/.

More and more modern tools are being converted to use a standardized
pkgconfig system, at least on Linux systems.  I hope python can also be
moved in this direction.
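
For reference, the existing private mechanism is what extension builds query
through distutils today; a rough sketch (paths will of course vary):

    from distutils import sysconfig

    print sysconfig.get_python_inc()            # e.g. /usr/include/python2.3
    print sysconfig.get_config_var('LIBDIR')    # e.g. /usr/lib
    print sysconfig.get_config_var('LINKFORSHARED')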


From greg at cosc.canterbury.ac.nz  Thu Mar 25 18:42:52 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 25 18:43:35 2004
Subject: [Python-Dev] Possible resolution of
	generatorexpressionvariablecapture dilemma
In-Reply-To: <05d701c41248$b5a71db0$d70aa8c0@jiwon>
Message-ID: <200403252342.i2PNgqsL021581@cosc353.cosc.canterbury.ac.nz>

jiwon <jiwon@softwise.co.kr>:

> Furthermore, I think closures should capture variables, too, but it seems
> that's too much change for now :)

That's not an option, because it would break mutually
recursive functions.
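
A tiny sketch of why: with value capture at definition time, the first of a
pair of mutually recursive functions would have to bind a name that does not
exist yet:

    def even(n):
        return n == 0 or odd(n - 1)    # 'odd' is not defined yet at def time

    def odd(n):
        return n != 0 and even(n - 1)

Today's late binding is what makes this work.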

With regard to generator expressions, isn't Guido leaning
back towards not having any variable capture? Or is he
un-leaning-back again? This is all getting very confusing...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From guido at python.org  Thu Mar 25 19:01:38 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 19:01:46 2004
Subject: [Python-Dev] Possible resolution of
	generatorexpressionvariablecapture dilemma
In-Reply-To: Your message of "Fri, 26 Mar 2004 11:42:52 +1200."
	<200403252342.i2PNgqsL021581@cosc353.cosc.canterbury.ac.nz> 
References: <200403252342.i2PNgqsL021581@cosc353.cosc.canterbury.ac.nz> 
Message-ID: <200403260001.i2Q01ch20178@guido.python.org>

> With regard to generator expressions, isn't Guido leaning
> back towards not having any variable capture?

Right.

> Or is he un-leaning-back again? This is all getting very
> confusing...

Paul Dubois told me he thought that an example I used in my keynote
today (which would fail without capture) would be an attractive
nuisance for scientists (since it looked just like a reasonable way to
compute an outer product).  I'm not sure the argument holds, because
an outer product would typically be created as a list of lists, not a
list of generators.

The example I used was:

F = []
for i in range(10):
    F.append(x*i for x in range(10))

It would end up creating 10 generators referencing the variable i with
value 9.
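
For illustration (assuming no capture, and that the generators are only
consumed after the loop has finished):

    >>> list(F[0])
    [0, 9, 18, 27, 36, 45, 54, 63, 72, 81]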

The version using list comprehensions of course works:

F = []
for i in range(10):
    F.append([x*i for x in range(10)])

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Mar 25 19:07:31 2004
From: guido at python.org (Guido van Rossum)
Date: Thu Mar 25 19:07:39 2004
Subject: [Python-Dev] test_tcl failing...
In-Reply-To: Your message of "Thu, 25 Mar 2004 11:00:58 PST."
	<40632C6A.5050203@ActiveState.com> 
References: <Pine.LNX.4.30.0403170249170.9071-100000@latte.ActiveState.com> 
	<40632C6A.5050203@ActiveState.com> 
Message-ID: <200403260007.i2Q07Va20205@guido.python.org>

When I tried this on my WinXP box, I got this:

C:\Python24>python lib\test\regrtest.py test_tcl
test_tcl
test test_tcl failed -- Traceback (most recent call last):
  File "C:\Python24\lib\test\test_tcl.py", line 116, in testPackageRequire
    tcl.eval('package require Tclx')
TclError: can't find package Tclx

1 test failed:
    test_tcl

C:\Python24>

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Thu Mar 25 19:08:13 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu Mar 25 19:08:46 2004
Subject: [Python-Dev] Python startup time: String objects
In-Reply-To: <200403251825.i2PIPI619317@guido.python.org>
Message-ID: <200403260008.i2Q08DHq021609@cosc353.cosc.canterbury.ac.nz>

Guido:

> I'm still worried about your patch, because it changes all uses of
> marshal, not just how code objects are marshalled.

Perhaps there should be a flag to turn it on?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From martin at v.loewis.de  Fri Mar 26 08:27:18 2004
From: martin at v.loewis.de (Martin v. Löwis)
Date: Fri Mar 26 08:27:22 2004
Subject: [Python-Dev] Moving from PYTHONDIR/config to pkgconfig
Message-ID: <200403261327.i2QDRIF26898@twilight.domainfactory.de>

> Currently, Python has its own private config system that external modules
> can use, which is centered around e.g., /usr/lib/python2.3/config/.
> 
> More and more modern tools are being converted to use a standardized
> pkgconfig system, at least on Linux systems.  I hope python can also be
> moved in this direction.

Without your contribution, it is very unlikely that anything will happen.

Even with a contribution, things might not change. It is important that
old mechanisms of accessing the build infrastructure continue to work,
so any new mechanism would have to be an add-on on top of the
existing infrastructure.

Regards,
Martin

From guido at python.org  Fri Mar 26 10:10:34 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 10:10:44 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <200403261510.i2QFAZ522251@guido.python.org>

I had a discussion with Robert Mollitor and Jim Hugunin yesterday,
where we looked in more detail at how the corresponding constructs are
used in C# and Java (JDK 1.5 has decorators using an "@foo" syntax).

We came to an interesting conclusion (which Robert had anticipated):
there are two quite different groups of use cases.

One group of use cases modifies the way the method works, its
semantics.  Current Python examples are classmethod, staticmethod, and
one could easily imagine others like the PEP example of an argument
type checking decorator, and "synchronized".  (As Jim observed, even
if it isn't a good idea, it's inevitable that someone will implement
it; I've got one sitting around just in case.)  (I'm not so fond of
the argument type checker example; this is certainly not how we're
going to add optional static typing to Python.)

In Java and C#, these are *not* done using their decorator
mechanisms; instead, those things are all built into the language as
keywords (except classmethod, which has no equivalent).

The other group of use cases merely attaches extra bits of metadata to
the method, without changing its usage.  That is what most C# and Java
decorators are used for.  Examples: C# has a way to specify from which
DLL a particular function should be loaded (or something else to do
with DLLs :-); there's an annotation for obsolete methods; there's an
annotation for specifying author and version info, and other metadata
like which RFE (Request For Enhancement) requested a particular
method; and more things like that.

In Python this second group of use cases would best be served with an
implementation that uses function attributes to store the various
annotations.  So perhaps it would behoove us to provide a syntactic
notation for setting function attributes that doesn't have the problem
they currently have: you have to set them after the function body,
which is just when your attention span is waning...  (And that is what
they have in common with classmethod and friends.)

Another observation is that the annotations in the second group
*commute*: setting the "obsolete" attribute first and adding
author/version info later has the same net effect as setting the
author/version info first and then setting obsolete.  But this is not
so easy for the first group, or when mixing one of the first group
with one of the second.  (Most of the ones in the first group are
simply incompatible with all others in that group.  Classmethod and
synchronized might be written to work together, even though naive
implementations don't, since these assume that the argument is a
function object and they return a descriptor.)

So, anyway, here's a new suggestion (inspired by Robert's proposal):

(1) Put decorators of the first category early in the def statement,
    for example here:

    def foobar [classmethod] (cls, foo, bar):
        ...

(2) Invent some other notation for setting function attributes as part
    of the function *body*, before the doc string even.

For (2) I am thinking aloud here:

   def foobar(self, arg):
       @author: "Guido van Rossum"
       @deprecated
       pass

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Fri Mar 26 11:07:17 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 11:01:33 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>

At 07:10 AM 3/26/04 -0800, Guido van Rossum wrote:

>So, anyway, here's a new suggestion (inspired by Robert's proposal):
>
>(1) Put decorators of the first category early in the def statement,
>     for example here:
>
>     def foobar [classmethod] (cls, foo, bar):
>         ...
>
>(2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.
>
>For (2) I am thinking aloud here:
>
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass

Who is the metadata *for*?  Is it just for human readers and
documentation tools?  Or is it intended for machine consumption as well?

If it's for the machine, IMO it should be the same syntax, without a new 
construct.  If it's for humans and documentation tools, why shouldn't it go 
in the docstring?

I guess I don't understand why you see these use cases as so different from 
each other as to deserve different syntax, but not different enough to just 
use the docstring.  In other words, why not simply treat:

     def foobar(self, arg) [
         info(author="Guido van Rossum", deprecated=True)
     ]:
         pass

as a straightforward application of the same mechanism?  Why make people 
have to learn two mechanisms, one of which is limited to function 
attributes, and the other of which is awkward to use with multiple items?

C# and Java aren't dynamic languages - they don't treat functions or 
methods as first class objects, so of course they don't support any sort of 
transformation, and are limited to passive metadata in their attribution 
syntax.  But dynamic frameworks in Python want to be able to have "active 
attribution".  A number of people here have presented examples of things 
where they want to have decorators call a framework function to register 
the current function, method, or class in some way.  Function attributes 
don't support this - you still have to call something afterwards.

I don't believe that the interest in decorators is simply to support 
classmethod et al, or to have a way to do function attributes.  Instead, 
they're a workaround for the fact that Python function definitions aren't 
expressions.  Otherwise, we'd be doing things like this monstrosity:

     foobar = classmethod(
         def (self,arg):
             pass
     )

Obviously, I'm not arguing that Python should look like Lisp.  The current 
decorator syntax patch is *much* easier to read than wrapping an entire 
function definition in parentheses.  But the semantics that I think most 
people are asking for with decorators, is the simple Lisp-like capability 
of applying transformations to a function, but with a more Pythonic 
syntax.  That is, one where flat is better than nested, and readability 
counts.  That is the use case that decorators are intended to serve, IMO, 
and I believe that this is what most other proponents of decorators are 
after as well.

Function attributes, on the other hand, don't really address these use 
cases.  I would ask, how much field use of function attributes exists 
today?  Versus how much use of built-in and homemade decorators?  There are 
lots of the latter, and very few of the former.  I would also guess that 
where people are using function attributes, there's a good chance that 
they're using decorators as well, since they probably need to *do* 
something with the attributes.  Considering that both function attributes 
and decorators have the same overhead today for use (i.e., put them at the 
end, no special syntax available), that would suggest that the need/desire 
for decorators is much greater than the need for mere metadata annotation.


From jack at performancedrivers.com  Fri Mar 26 11:08:32 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Fri Mar 26 11:08:52 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <20040326160832.GA12983@performancedrivers.com>

On Fri, Mar 26, 2004 at 07:10:34AM -0800, Guido van Rossum wrote:
> We came to an interesting conclusion (which Robert had anticipated):
> there are two quite different groups of use cases.
>
> The other group of use cases merely attaches extra bits of metadata to
> the method, without changing its usage.  That is what most C# and Java
> decorators are used for.
I've posted some examples in the 318 threads of both kinds,
the function decorators I use tend to change behavior and
the class decorators I use tend to be meta-markup.

For class decorators I currently use a work around w/ metaclasses,
I don't decorate functions much so I just do it the old fashioned way.
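
(A generic sketch of the kind of metaclass workaround I mean for class-level
meta-markup; the names here are made up:)

    registry = []

    class MetaMarkup(type):
        # record every class created with this metaclass
        def __init__(cls, name, bases, ns):
            super(MetaMarkup, cls).__init__(name, bases, ns)
            registry.append(cls)

    class Plugin(object):
        __metaclass__ = MetaMarkup
        deprecated = False      # class-level meta-markup as plain attributes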

> So, anyway, here's a new suggestion (inspired by Robert's proposal):
> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.
> 
> For (2) I am thinking aloud here:
> 
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass

I'm agnostic on '@' but this looks like pydoc markup to me, what about

def foobar(self, arg):
  @author = "Guido van Rossum"
  @deprecated = True

People will also want to access these, so also consider how this will read:

def foobar(self, arg):
  @author = "Guido"
  @email = @author.lower() + '@python.org' # contrived example

I'd be a little worried that new users from other languages would start
using '@' as a synonym for 'this' in strange ways.  Nothing that can't
be remedied with a clue bat.

-jackdied



From bob at redivi.com  Fri Mar 26 11:18:13 2004
From: bob at redivi.com (Bob Ippolito)
Date: Fri Mar 26 11:14:50 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040326160832.GA12983@performancedrivers.com>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<20040326160832.GA12983@performancedrivers.com>
Message-ID: <2E14D97E-7F41-11D8-8824-000A95686CD8@redivi.com>


On Mar 26, 2004, at 11:08 AM, Jack Diederich wrote:

> On Fri, Mar 26, 2004 at 07:10:34AM -0800, Guido van Rossum wrote:
>> We came to an interesting conclusion (which Robert had anticipated):
>> there are two quite different groups of use cases.
>>
>> The other group of use cases merely attaches extra bits of metadata to
>> the method, without changing its usage.  That is what most C# and Java
>> decorators are used for.
> I've posted some examples in the 318 threads of both kinds,
> the function decorators I use tend to change behavior and
> the class decorators I use tend to be meta-markup.
>
> For class decorators I currently use a work around w/ metaclasses,
> I don't decorate functions much so I just do it the old fashioned way.
>
>> So, anyway, here's a new suggestion (inspired by Robert's proposal):
>> (2) Invent some other notation for setting function attributes as part
>>     of the function *body*, before the doc string even.
>>
>> For (2) I am thinking aloud here:
>>
>>    def foobar(self, arg):
>>        @author: "Guido van Rossum"
>>        @deprecated
>>        pass
>
> I'm agnostic on '@' but this looks like pydoc markup to me, what about
>
> def foobar(self, arg):
>   @author = "Guido van Rossum"
>   @deprecated = True
>
> People will also want to access these, so also consider how this will 
> read:
>
> def foobar(self, arg):
>   @author = "Guido"
>   @email = @author.lower() + '@python.org' # contrived example
>
> I'd be a little worried that new users from other languages would start
> using '@' as a synonym for 'this' in strange ways.  Nothing that can't
> be remedied with a clue bat.

Would we call that Rython or Puby?

FWIW, I agree with PJE.  I want one syntax to rule them all, I see no 
need to use two, even though there are different use cases.  Clearly we 
use foo[bar] syntax for different things depending on what foo is, and 
I haven't seen any complaints about that.

-bob


From bkc at murkworks.com  Fri Mar 26 11:31:25 2004
From: bkc at murkworks.com (Brad Clements)
Date: Fri Mar 26 11:30:23 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <4064148D.12475.C8C67ED4@localhost>

On 26 Mar 2004 at 7:10, Guido van Rossum wrote:

> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.
> 
> For (2) I am thinking aloud here:
> 
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass
> 


I met with Terry Way and some other folks last night to talk about PEP 316. I've also 
been speaking with David Goodger about developing a "recommended standard" for 
incorporating structured meta data in Python code.

DBC (PEP 316) is one use case; method and function argument and return
descriptors and type data are another use case; there's also Grouch for describing
attribute types and class relationships. And DocTest could be considered to be
"structured meta data".

Last night, Kevin Yacobson (??) said that he would prefer not to see "more docs 
stuffed in docstrings", but rather "in code", perhaps as class or method attributes as 
you describe above.

Personally, I'm not too concerned if meta data goes in docstrings, "free-floating" triple 
quoted strings scattered in code, or actual attributes of methods and functions. The 
thoughts I have about this are:

1. The meta data shouldn't be far away from the code it describes

2. .pyc files shouldn't be forced to contain this "extra" meta data either on disk or on 
subsequent use.

3. I think its desirable to be able to extract meta data from modules without importing 
them.

4. Structured meta data could be used to:

a. generate tests, either through explicit declaration (i.e. doctest) or implicitly (design 
by contract invariants and assertions combined with argument type knowledge)

b. describe method and function arguments and return values with both type and 
semantic information

c. describe attribute types and semantics

d. other meta data not yet considered.


5. IDE's could use meta data for code completion, hints and structure browsing

6. Other code analysis tools become possible, or existing ones could gain additional 
functionality

7. Meta data is not just for generating documentation

-

At PyCon I'm trying to collect use cases for "meta data in code". I haven't thought 
about how this meta data would be expressed in python code. Ideally, there would be 
a recommended "standard" for how meta data is expressed, and then a recommended 
standard for each meta data type. 


For example, it would be nice to not have to go through the hoops that Epydoc does in 
supporting 4 different documentation "standards".

Is PEP 318 going to specify how to handle arbitrary meta data, or should another PEP 
be proposed?


Am I the only one interested in this?

-- 
Brad Clements,                bkc@murkworks.com   (315)268-1000
http://www.murkworks.com                          (315)268-9812 Fax
http://www.wecanstopspam.org/                   AOL-IM: BKClements


From guido at python.org  Fri Mar 26 11:44:03 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 11:44:16 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 11:07:17 EST."
	<5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com> 
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com> 
Message-ID: <200403261644.i2QGi3822505@guido.python.org>

> >For (2) I am thinking aloud here:
> >
> >    def foobar(self, arg):
> >        @author: "Guido van Rossum"
> >        @deprecated
> >        pass
> 
> Who is the metadata *for*?  Is it just for human readers and
> documentation tools?  Or is it intended for machine consumption as well?

Doc tools are machine consumption too, right?  I'd say for both.  This
stuff should definitely be accessible at runtime (it is in C# and JDK
1.5).  And one shouldn't have to do any parsing of doc strings at run
time.  There are no uses of function attributes that *couldn't* be
replaced by putting structured information into the doc string, but
IMO that misses out on the usefulness of function attributes.
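
A trivial sketch of the kind of run-time access I mean (no docstring parsing
involved; the 'deprecated' attribute is just an example):

    import warnings

    def foobar(self, arg):
        pass
    foobar.deprecated = True

    if getattr(foobar, 'deprecated', False):
        warnings.warn("foobar is deprecated", DeprecationWarning)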

> If it's for the machine, IMO it should be the same syntax, without a
> new construct.  If it's for humans and documentation tools, why
> shouldn't it go in the docstring?

Perhaps my example was a little simplistic.  The C# examples put the
author/version info in an object; in Python, this would translate to
something like:

  def foobar(self, arg):
      @author: AuthorInfo(author="GvR", version="1.0", copyright="GPL", ...)
      @deprecated: True

I could also see using '=' instead of ':':

  def foobar(self, arg):
      @author = AuthorInfo(author="GvR", version="1.0", copyright="GPL", ...)
      @deprecated = True

> I guess I don't understand why you see these use cases as so
> different from each other as to deserve different syntax, but not
> different enough to just use the docstring.  In other words, why not
> simply treat:
> 
>      def foobar(self, arg) [
>          info(author="Guido van Rossum", deprecated=True)
>      ]:
>          pass
> 
> as a straightforward application of the same mechanism?  Why make
> people have to learn two mechanisms, one of which is limited to
> function attributes, and the other of which is awkward to use with
> multiple items?

I find the mark-up in your example about the worst possible mark-up;
in practice, these things can get quite voluminous (we're only
restricting ourselves to a single item that fits on a line because
we're used to giving minimal examples).  I would hate it if something
*semantically* significant like classmethod or synchronized had to
compete for the user's attention with several lines of structured
metadata.

> C# and Java aren't dynamic languages - they don't treat functions or
> methods as first class objects, so of course they don't support any
> sort of transformation, and are limited to passive metadata in their
> attribution syntax.  But dynamic frameworks in Python want to be
> able to have "active attribution".  A number of people here have
> presented examples of things where they want to have decorators call
> a framework function to register the current function, method, or
> class in some way.  Function attributes don't support this - you
> still have to call something afterwards.

A framework-supplied metaclass could easily be designed to look for
function attributes set with this mechanism.  (One could also imagine
a metaclass-free hybrid approach where there's a transformer placed in
the decorator list which looks for function attributes to guide its
transformation.)
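
(A minimal sketch of such a metaclass, assuming the annotation ends up as a
plain function attribute; the attribute name 'wrap' is made up:)

    class WrapAware(type):
        def __new__(meta, name, bases, ns):
            # apply any wrapper named by a function attribute,
            # e.g. func.wrap = classmethod
            for key, value in ns.items():
                wrapper = getattr(value, 'wrap', None)
                if wrapper is not None:
                    ns[key] = wrapper(value)
            return type.__new__(meta, name, bases, ns)

    class Example(object):
        __metaclass__ = WrapAware
        def greet(cls):
            return "hello from %s" % cls.__name__
        greet.wrap = classmethod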

And yes, this could also be used to declare class and static methods
(by putting something in the standard metaclass that looks for certain
function attributes, e.g. @wrap=classmethod) but I still think that
these "wrapper descriptors" are better served by a different syntax,
one that is more integrated with the function signature.

> I don't believe that the interest in decorators is simply to support
> classmethod et al, or to have a way to do function attributes.
> Instead, they're a workaround for the fact that Python function
> definitions aren't expressions.  Otherwise, we'd be doing things
> like this monstrosity:
> 
>      foobar = classmethod(
>          def (self,arg):
>              pass
>      )
> 
> Obviously, I'm not arguing that Python should look like Lisp.  The
> current decorator syntax patch is *much* easier to read than
> wrapping an entire function definition in parentheses.  But the
> semantics that I think most people are asking for with decorators,
> is the simple Lisp-like capability of applying transformations to a
> function, but with a more Pythonic syntax.  That is, one where flat
> is better than nested, and readability counts.  That is the use case
> that decorators are intended to serve, IMO, and I believe that this
> is what most other proponents of decorators are after as well.

I would like to see more examples of that use case that aren't
classmethod and aren't mutually exclusive with most other examples.
The PEP stops with the arg checker example, and that's not a very
convincing one (because it's so clumsy for that particular goal).

> Function attributes, on the other hand, don't really address these
> use cases.  I would ask, how much field use of function attributes
> exists today?  Versus how much use of built-in and homemade
> decorators?  There are lots of the latter, and very few of the
> former.

I think this isn't a fair comparison.  Python 2.2 introduced several
built-in decorators, and machinery to build your own, creating a
cottage industry for home-grown decorators.  No similar promotion was
given to function attributes when they were introduced in Python 2.1.
But using decorators as a way to set attributes (as in PEP 318 example
3) is ass-backwards.

> I would also guess that where people are using function attributes,
> there's a good chance that they're using decorators as well, since
> they probably need to *do* something with the attributes.

Depends entirely on the usage.  Maybe you're thinking of some of your
own usage?  (I'm still not familiar with PEAK, nor will I have time to
read up on it any time soon, so please don't just refer to PEAK
without being specific.)

> Considering that both function attributes and decorators have the
> same overhead today for use (i.e., put them at the end, no special
> syntax available), that would suggest that the need/desire for
> decorators is much greater than the need for mere metadata
> annotation.

I'm not convinced.  I expect that if we had good syntax for both, we
would see more use of attributes than of decorators, except perhaps in
meta-heavy frameworks like PEAK.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jim.jewett at eds.com  Fri Mar 26 11:56:04 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 26 11:57:04 2004
Subject: [Python-Dev] method decorators (PEP 318) - decorators vs transforms
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D40E@USAHM010.amer.corp.eds.com>

Guido (based on discussion with Robert Mollitor and Jim Hugunin)
wrote that there are two very different use cases.

We have been discussing transformers, which change how/what the
function does.  But many uses (such as "release" from the PEP)
are true decorators, which are really closer to:

   def foo():
      pass
   foo.attr = val

> For (2) I am thinking aloud here:

>   def foobar(self, arg):
>       @author: "Guido van Rossum"
>       @deprecated
>       pass

If I understand, the grammar production would be

	'@'+identifier [':' expression]

with expression defaulting to True.

Then

    def foobar as [classmethod, synchronized(lock23)] (args):
        @author: "Guido van Rossum"
        @version: (2,3)
        @deprecated
        @variant: @version + (@author,)

        # this next one might be asking for trouble
        @x_starts_as: @x

        @magic_number: random.randint(34) + y
        @version: @version + ("alpha", 1)

        "This is an example function"
        x = 5
        return x


would be equivalent to:

    def foobar(args):
        "This is an example function"
        x = 5
        return x

    # Evaluate the transforms first, in case they return a 
    # different object, which would not have the annotations.
    foobar = classmethod(foobar)
    foobar = synchronized(lock23)(foobar)

    # Now the annotations go on the final object.  Do we
    # need to worry about immutable objects, or is that the 
    # programmer's own fault?
    foobar.author = "Guido van Rossum"
    foobar.version = (2,3)

    # Default value is True
    foobar.deprecated = True

    # Do decorators have access to current attributes?
    # Did '@' read better than 'self' or 'this'?
    foobar.variant = foobar.version + (foobar.author,)

    # Does the attribute have access to the local scope?
    foobar.x_starts_as = foobar.x

    # Do decorators have access to the enclosing scope? 
    foobar.magic_number = random.randint(34) + y

    # Decorators don't really have to commute, do they?
    foobar.version = foobar.version + ("alpha", 1)

-jJ

From walter.doerwald at livinglogic.de  Fri Mar 26 12:04:58 2004
From: walter.doerwald at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri Mar 26 12:05:06 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261644.i2QGi3822505@guido.python.org>
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<200403261644.i2QGi3822505@guido.python.org>
Message-ID: <406462BA.60305@livinglogic.de>

Guido van Rossum wrote:

> [...]
>   def foobar(self, arg):
>       @author: AuthorInfo(author="GvR", version="1.0", copyright="GPL", ...)
>       @deprecated: True
> 
> I could also see using '=' instead of ':':
> 
>   def foobar(self, arg):
>       @author = AuthorInfo(author="GvR", version="1.0", copyright="GPL", ...)
>       @deprecated = True

For me '@' looks like something that the compiler shouldn't see.
How about:

   def foobar(self, arg):
       .author = AuthorInfo(author="GvR", version="1.0",
                            copyright="GPL", ...)
       .deprecated = True

Bye,
    Walter Dörwald



From skip at pobox.com  Fri Mar 26 12:32:41 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 26 12:32:50 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <16484.26937.904019.643044@montanaro.dyndns.org>


    Guido> (1) Put decorators of the first category early in the def
    Guido>     statement, for example here:

    Guido>     def foobar [classmethod] (cls, foo, bar):
    Guido>         ...

How is this better than afterwards?

    Guido> (2) Invent some other notation for setting function attributes as
    Guido>     part of the function *body*, before the doc string even.

    Guido> For (2) I am thinking aloud here:

    Guido>    def foobar(self, arg):
    Guido>        @author: "Guido van Rossum"
    Guido>        @deprecated
    Guido>        pass

It's a shame that there's no good way to define function attributes already.
Aside from the fact that this is different than the other form of
"decoration", it's also different than setting attributes for classes.
Rather than invent a unique syntax I'd prefer to either use a decorator
function or suffer with tacking them on at the end:

    def foobar(self, arg):
        pass
    foobar.author = "Guido van Rossum"
    foobar.deprecated = True

Using the proposed decorator syntax with the decorator after the arglist
with a little judicious indentation I can make it look sort of like what
you're after:

    def foobar(self, arg) [attributes(
        author = "Guido van Rossum"
        deprecated = True
        )]:
        pass
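
(For concreteness, the 'attributes' helper assumed above could be as simple
as this sketch:)

    def attributes(**kwds):
        # return a decorator that copies the keyword arguments onto the function
        def decorate(func):
            for name, value in kwds.items():
                setattr(func, name, value)
            return func
        return decorate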

Another possibility would be to overload def:

    def foobar(self, arg):
        def author = "Guido van Rossum"
        def deprecated = True
        pass

I don't know if the parser can handle the lookahead necessary for that
though.

Skip

From pje at telecommunity.com  Fri Mar 26 12:44:20 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 12:38:35 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261644.i2QGi3822505@guido.python.org>
References: <Your message of "Fri, 26 Mar 2004 11:07:17 EST."
	<5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
Message-ID: <5.1.0.14.0.20040326120205.02336d40@mail.telecommunity.com>

At 08:44 AM 3/26/04 -0800, Guido van Rossum wrote:

>   def foobar(self, arg):
>       @author = AuthorInfo(author="GvR", version="1.0", copyright="GPL", ...)
>       @deprecated = True

My only objection to that format, is that it looks like part of the 
function body.  It might work if moved before the colon, though, and then 
it would need some sort of bracketing.


>I find the mark-up in your example about the worst possible mark-up;
>in practice, these things can get quite voluminous (we're only
>restricting ourselves to a single item that fits on a line because
>we're used to giving minimal examples).  I would hate it if something
>*semantically* significant like classmethod or synchronized had to
>compete for the user's attention with several lines of structured
>metadata.

I can see that.  I can also see from Brad's comments that some people have 
a completely different idea of what "metadata" is than I do.  I think of 
metadata as something that will be used to drive program behavior, whereas 
what I think you and Brad are calling metadata may be more like what I 
think of as "annotation".  And I don't see an issue with having a separate 
syntax for annotation, I just don't think it's a replacement for 
program-controlling metadata.


>A framework-supplied metaclass could easily be designed to look for
>function attributes set with this mechanism.

For that to be really practical, there's going to have to be at minimum a 
user-transparent way for metaclasses to be combined.  It's hard today even 
for wizards.


>(One could also imagine
>a metaclass-free hybrid approach where there's a transformer placed in
>the decorator list which looks for function attributes to guide its
>transformation.)
>
>And yes, this could also be used to declare class and static methods
>(by putting something in the standard metaclass that looks for certain
>function attributes, e.g. @wrap=classmethod) but I still think that
>these "wrapper descriptors" are better served by a different syntax,
>one that is more integrated with the function signature.

Maybe so, although I'm still of the view (as are others) that it shouldn't 
break the symmetry of definition and invocation (i.e., 'def foo(...)' and 
'foo(...)').



> > Obviously, I'm not arguing that Python should look like Lisp.  The
> > current decorator syntax patch is *much* easier to read than
> > wrapping an entire function definition in parentheses.  But the
> > semantics that I think most people are asking for with decorators,
> > is the simple Lisp-like capability of applying transformations to a
> > function, but with a more Pythonic syntax.  That is, one where flat
> > is better than nested, and readability counts.  That is the use case
> > that decorators are intended to serve, IMO, and I believe that this
> > is what most other proponents of decorators are after as well.
>
>I would like to see more examples of that use case that aren't
>classmethod and aren't mutually exclusing with most other examples.
>The PEP stops with the arg checker example, and that's not a very
>convincing one (because it's so clumsy for that particular goal).

Okay.  A few multi-decorator idioms used in PEAK today, but rewritten using 
the patch syntax:

    def childCountMonitor(self) [events.taskFactory, binding.Make]:

The 'events.taskFactory' means that this generator will be treated as an 
event-driven pseudo-thread, and will return an 'events.Task' object 
wrapping the generator-iterator.  The 'binding.Make' means this attribute 
is a property whose value will be computed once, upon first reference to 
the attribute.  Thus, using 'self.childCountMonitor' will return
the "daemonic" pseudothread for this instance that monitors its child 
count, and it will begin running if it has not already done so.

    def syntax(self) [binding.Make, binding.classAttr]:

The 'binding.Make' means that this attribute is a once-property as 
described above, but the 'binding.classAttr' means that it will have a 
value *per class*, rather than per-instance.  In other words, the 
descriptor will be moved to the metaclass.  (This does require support from 
the metaclass, to automatically create a new metaclass for the class where 
this decorator is used).

I haven't done an exhaustive search, but I believe these are the most 
common stacking decorators I use today.  However, I would also note that 
PEAK inclines towards monolithic do-everything decorators with lots of 
keyword arguments, due to the current inconvenience of stacking them.  For 
example, this is more likely the actual spelling today:

    def childCountMonitor(self):
        ...

    childCountMonitor = binding.Make(
        events.taskFactory(childCountMonitor), uponAssembly=True
    )

which would then read either:

    def childCountMonitor(self) [
        events.taskFactory, binding.Make(uponAssembly=True)
    ]:

or maybe even better, something like:

    def childCountMonitor(self) [
        events.taskFactory, binding.Make, binding.autoAssembled
    ]:

The uponAssembly flag indicates that the attribute should be constructed as 
soon as the component has become part of a complete application 
assembly.  It might be nice to make this orthogonal to other decoration 
behaviors.

Anyway, taskFactory, Make, and classAttr are just a few of the decorators I 
use, and they get used independently quite a bit.  They're just the ones 
that are most frequently stacked.


> > Considering that both function attributes and decorators have the
> > same overhead today for use (i.e., put them at the end, no special
> > syntax available), that would suggest that the need/desire for
> > decorators is much greater than the need for mere metadata
> > annotation.
>
>I'm not convinced.  I expect that if we had good syntax for both, we
>would see more use of attributes than of decorators, except perhaps in
>meta-heavy frameworks like PEAK.

Fair enough.  I'll certainly agree that using decorators today to supply 
attributes is easier than applying function attributes directly, in cases 
where either could be used to accomplish a need.  And if there had been a 
good syntax for function attributes available, there would likely be 
several places where I'd have leveraged it in place of decorators, mostly 
to avoid the many duplicated keyword arguments among my current decorators.

I guess my main concern is that I don't want to see PEP 318 crippled by a 
syntax that only works (and then just barely!) for classmethod or another 
single-word decorator.


From pje at telecommunity.com  Fri Mar 26 12:56:43 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 12:51:00 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.26937.904019.643044@montanaro.dyndns.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>

At 11:32 AM 3/26/04 -0600, Skip Montanaro wrote:
>Using the proposed decorator syntax with the decorator after the arglist
>with a little judicious indentation I can make it look sort of like what
>you're after:
>
>     def foobar(self, arg) [attributes(
>         author = "Guido van Rossum"
>         deprecated = True
>         )]:
>         pass

Thinking about it a bit more, I can now explain why I see a problem with 
creating a second syntax for function attributes.  It has seemed to me the 
unwritten rule of Python is:

* Never create syntax if a builtin will do
* Never create a builtin if a stdlib module will do

and so on.  A decorator syntax can trivially support function attributes, 
but not the other way around.  Arguing that they should have a different 
spelling seems therefore to be a purity vs. practicality argument.

At the same time, I realize Python also generally prefers to have different 
syntax for different use cases.  But, how different are these use cases, 
really?  They're all changing the function in some way.

It's apparent Guido doesn't agree; I just wish I knew what was bothering 
him about the PEP, so I could either provide a convincing counterargument, 
or understand better why I'm wrong.  <0.5 wink>  At the moment, I'm worried 
that something in my actual use cases will scare him into rejecting the PEP 
altogether.  <0.01 wink>


From fumanchu at amor.org  Fri Mar 26 12:57:54 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 26 12:59:47 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F62@opus.amorhq.net>

Skip Montanaro wrote:
>     Guido> (2) Invent some other notation for setting function attributes as
>     Guido>     part of the function *body*, before the doc string even.
> 
>     Guido> For (2) I am thinking aloud here:
> 
>     Guido>    def foobar(self, arg):
>     Guido>        @author: "Guido van Rossum"
>     Guido>        @deprecated
>     Guido>        pass
> 
> It's a shame that there's no good way to define function attributes already.
> Aside from the fact that this is different than the other form of
> "decoration", it's also different than setting attributes for classes.
> 
> Rather than invent a unique syntax I'd prefer to either use a decorator
> function or suffer with tacking them on at the end:
> 
>     def foobar(self, arg):
>         pass
>     foobar.author = "Guido van Rossum"
>     foobar.deprecated = True

Agreed. There are enough mechanisms to alter an object's attribute dict.
I'd rather not introduce another one, preferring instead to stick to
"object.attr = value"--in this example, "foobar.deprecated = True".
Where that declaration goes is fungible.

> Using the proposed decorator syntax with the decorator after the arglist
> with a little judicious indentation I can make it look sort of like what
> you're after:
> 
>     def foobar(self, arg) [attributes(
>         author = "Guido van Rossum"
>         deprecated = True
>         )]:
>         pass
> 
> Another possibility would be to overload def:
> 
>     def foobar(self, arg):
>         def author = "Guido van Rossum"
>         def deprecated = True
>         pass

I'd still prefer a dot. How about:

    def foobar(self, arg):
        def.author = "Guido van Rossum"
        def.deprecated = True
        pass

?

However, the idea of putting such attributes at the same level as the
rest of the function block is ugly to me--newbies are going to go mad
the first few times, trying to figure out *when* such statements are
evaluated. Such statements IMO need their own "container", whether
that's a block:

    def foobar(self, arg):
        attributes:
            author = "Guido van Rossum"
            deprecated = True
        pass

...or a list:

    def foobar(self, arg) [attributes(
        author = "Guido van Rossum"
        deprecated = True)]:
        pass

...or within the declaration:

    def foobar(self, arg) having (author = "Guido van Rossum",
                                  deprecated = True):
        pass


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From jim.jewett at eds.com  Fri Mar 26 13:09:40 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 26 13:10:06 2004
Subject: [Python-Dev] decorators (not transformations) on functions vs
	classes
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D40F@USAHM010.amer.corp.eds.com>

>> I would also guess that where people are using function attributes,
>> there's a good chance that they're using decorators as well, since
>> they probably need to *do* something with the attributes.

> Depends entirely on the usage.

An attribute affects how the object looks when viewed as data.
For a function, that is all it does; it does not affect 
semantics.  For a class, it can also affect how the class acts.

If the object is just for storage/persistence, it is probably
a class rather than a function.  (Example: Values class of optparse)

    class foo:
        pass
    foo.x = 27

has some meaning.  Any subclass or instance of foo (including 
those already created) will now have an attribute of x, with 
a default value of 27.  Methods can use this value.  Other
code may (idiomatically) use the value to control its own
settings.

    def bar():
        pass
    bar.x = 27

is not so useful.  

There is no intermediate "self" scope between function locals 
and global, so bar itself can't see x.  Functions do not inherit,
so there won't be any instances or subfunctions that could be
affected.  Using a function's attributes to control another
function's execution path is surprising.

For a function, the attribute is only meaningful when viewing
the object as data rather than code.  Many programmers take
this opportunity to replace the function entirely with a class; 
others use naming conventions.  If these solutions aren't careful 
enough then you may not want to trust an attribute; you should 
run the function on known input and test the output.  (And then 
maybe cache the answer on the function, hoping that no one else
will change it, even if they would not have followed the naming
convention.)
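
For example, something along these lines (all names invented for
illustration):

    def check_claim(func, known_input, expected_output):
        verdict = (func(known_input) == expected_output)
        func.verified = verdict   # cache the answer on the function itself
        return verdict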

-jJ

From shane at zope.com  Fri Mar 26 13:17:29 2004
From: shane at zope.com (Shane Hathaway)
Date: Fri Mar 26 13:17:33 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.26937.904019.643044@montanaro.dyndns.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
Message-ID: <406473B9.6040508@zope.com>

Skip Montanaro wrote:
> It's a shame that there's no good way to define function attributes already.
> Aside from the fact that this is different than the other form of
> "decoration", it's also different than setting attributes for classes.
> Rather than invent a unique syntax I'd prefer to either use a decorator
> function or suffer with tacking them on at the end:
> 
>     def foobar(self, arg):
>         pass
>     foobar.author = "Guido van Rossum"
>     foobar.deprecated = True

It's interesting that this whole discussion is about placing 
declarations near the "def" statement, rather than after the function 
body.  It appears to be a surprisingly important detail.

Tossing out another idea:

     foobar:
         author = "Guido van Rossum"
         deprecated = True
     def foobar(self, arg):
         pass

The declaration block must be followed immediately by a class or def 
statement.  It looks elegant to me.

Shane

From fumanchu at amor.org  Fri Mar 26 13:23:25 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 26 13:25:35 2004
Subject: [Python-Dev] decorators (not transformations) on functions
	vsclasses
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F64@opus.amorhq.net>

Jewett, Jim J wrote:
>     def bar():
>         pass
>     bar.x = 27
> 
> is not so useful.  
> 
> There is no intermediate "self" scope between function locals 
> and global, so bar itself can't see x.  Functions do not inherit,
> so there won't be any instances or subfunctions that could be
> affected.  Using a function's attributes to control another
> function's execution path is surprising.

True, but you can also write:

>>> def bar():
... 	bar.x = 4
... 	
>>> bar()
>>> bar.x
4

...so bar itself can "see x", just not until the function is evaluated,
in this example. Classes, in this sense, simply differ by being
evaluated when they are declared.
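
To spell out the difference (this runs today):

    class Foo:
        x = 27            # runs now, while the class statement executes

    def bar():
        bar.x = 4         # runs only when bar() is actually called

    assert Foo.x == 27
    assert not hasattr(bar, 'x')
    bar()
    assert bar.x == 4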


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From fumanchu at amor.org  Fri Mar 26 13:36:05 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 26 13:37:56 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F66@opus.amorhq.net>

Shane Hathaway wrote:
> It's interesting that this whole discussion is about placing 
> declarations near the "def" statement, rather than after the function 
> body.  It appears to be a surprisingly important detail.
> 
> Tossing out another idea:
> 
>      foobar:
>          author = "Guido van Rossum"
>          deprecated = True
>      def foobar(self, arg):
>          pass
> 
> The declaration block must be followed immediately by a class or def 
> statement.  It looks elegant to me.

Yes, although I bet the parser would have an easier time with a specific
token to mark the block:

    predef foobar:
        author = "Guido van Rossum"
        deprecated = True
    def foobar(self, arg):
        pass

...not that I like "predef", I just couldn't think of a better one
off-hand. ;)


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From guido at python.org  Fri Mar 26 13:37:58 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 13:38:05 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 12:56:43 EST."
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com> 
References: <200403261510.i2QFAZ522251@guido.python.org>
	<200403261510.i2QFAZ522251@guido.python.org> 
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com> 
Message-ID: <200403261837.i2QIbwl22829@guido.python.org>

> It's apparent Guido doesn't agree; I just wish I knew what was
> bothering him about the PEP, so I could either provide a convincing
> counterargument, or understand better why I'm wrong.  <0.5 wink> At
> the moment, I'm worried that something in my actual use cases will
> scare him into rejecting the PEP altogether.  <0.01 wink>

Let me try to explain what bothers me.

If we were going to use this mostly for decorators spelled with a
single word, like classmethod, I would favor a syntax where the
decorator(s) are put as early as reasonable in the function
definition, in particular, before the argument list.  After seeing all
the examples, I still worry that this:

  def foobar(cls, blooh, blah) [classmethod]:

hides a more important fact for understanding it (classmethod) behind
some less important facts (the argument list).  I would much rather
see this:

  def foobar [classmethod] (cls, blooh, blah):

I agree that if this will be used for decorators with long argument
lists, putting it in front of the arguments is worse than putting it
after, but I find that in that case the current PEP favorite is also
ugly:

  def foobar (self, blooh, blah) [
      metadata(author="GvR",
               version="1.0",
               copyright="PSF",
               ...),
      deprecated,
  ]:
      for bl, oh in blooh:
          print oh(blah(bl))

I don't see a way to address both separate concerns (hiding the most
important fact after the signature, and the ugliness of long complex
lists of decorators) with a single syntactic alternative.  The two
concerns are in conflict with each other.  That's why I'm trying to
pull the proposal apart into two directions: put small decorators in
front, put large function attribute sets in the body.

(For those worried that the function attribute sets appear to belong
to the body, I point to the precedent of the docstring.  IMO the start
of the function body is a perfectly fine place for metadata about a
function.)
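
The docstring precedent, spelled out -- this already works today; the
first string literal in the body is stored as metadata rather than
executed as code:

    def foobar(self, arg):
        """This string becomes foobar.__doc__ -- metadata in the body."""
        return arg

    assert foobar.__doc__.startswith("This string")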

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Mar 26 13:41:03 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 13:41:11 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 18:04:58 +0100."
	<406462BA.60305@livinglogic.de> 
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<200403261644.i2QGi3822505@guido.python.org> 
	<406462BA.60305@livinglogic.de> 
Message-ID: <200403261841.i2QIf3q22844@guido.python.org>

[Walter Doerwald]
> For me '@' looks like something that the compiler shouldn't see.

I don't understand.  Why?  Is that what @ means in other languages?
Not in JDK 1.5 -- the compiler definitely sees it.

> How about:
> 
>    def foobar(self, arg):
>        .author = AuthorInfo(author="GvR", version="1.0",
                              copyright="GPL", ...)
>        .deprecated = True

No, I want to reserve the leading dot for attribute assignment to a
special object specified by a 'with' statement, e.g.

    with self:
        .foo = [1, 2, 3]
        .bar(4, .foo)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Mar 26 13:47:04 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 13:47:12 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 13:17:29 EST."
	<406473B9.6040508@zope.com> 
References: <200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org> 
	<406473B9.6040508@zope.com> 
Message-ID: <200403261847.i2QIl4k22897@guido.python.org>

> It appears to be a surprisingly important detail.

I'm anal about trying to find the optimal syntax given incompatible
constraints.

> Tossing out another idea:
> 
>      foobar:
>          author = "Guido van Rossum"
>          deprecated = True
>      def foobar(self, arg):
>          pass
> 
> The declaration block must be followed immediately by a class or def 
> statement.  It looks elegant to me.

The current parser can't deal with "NAME ':'" as a syntax rule,
because expressions also start with NAME.

I also don't like that you have to repeat the function name.

And this would hide the "important" decorators (e.g. classmethod)
amongst a large collection of metadata (unless you want to combine
this with the "def foo [classmethod] (cls, a, b, c):" syntax.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jim.jewett at eds.com  Fri Mar 26 13:52:10 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 26 13:53:06 2004
Subject: [Python-Dev] decorators (not transformations) on functions vsclasses
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D410@USAHM010.amer.corp.eds.com>

Regarding decorator attributes vs tranformations, I wrote:
>>     def bar():
>>         pass
>>     bar.x = 27
 
>> is not so useful.  
 
>> There is no intermediate "self" scope between function locals 
>> and global, so bar itself can't see x.

Robert Brewer:

> True, but you can also write:

	>>> def bar():
	... 	bar.x = 4
	... 	
	>>> bar()
	>>> bar.x
	4

> ...so bar itself can "see x", 

The x it sees is tied to the name rather than the object.

>>> def f():
...     print f.x
>>> f.x = 5
>>> g=f
>>> f()
5
>>> del f
>>> g()

Traceback (most recent call last):
  File "<pyshell#174>", line 1, in -toplevel-
    g()
  File "<pyshell#169>", line 2, in f
    print f.x
NameError: global name 'f' is not defined

The self namespace means that you can tie an attribute
directly to a class object (rather than to its name).

>>> class C:
    def printx(self):
        print self.x
>>> D=C
>>> C.x = 6
>>> del C
>>> d=D()
>>> d.printx()
6

From python at discworld.dyndns.org  Fri Mar 26 13:58:21 2004
From: python at discworld.dyndns.org (Charles Cazabon)
Date: Fri Mar 26 13:54:39 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261847.i2QIl4k22897@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com>
	<200403261847.i2QIl4k22897@guido.python.org>
Message-ID: <20040326185821.GB6667@discworld.dyndns.org>

Guido van Rossum <guido@python.org> wrote:
> > 
> >      foobar:
> >          author = "Guido van Rossum"
> >          deprecated = True
> >      def foobar(self, arg):
> >          pass
> 
> The current parser can't deal with "NAME ':'" as a syntax rule,
> because expressions also start with NAME.
> 
> I also don't like that you have to repeat the function name.
> 
> And this would hide the "important" decorators (e.g. classmethod)
> amongst a large collection of metadata (unless you want to combine
> this with the "def foo [classmethod] (cls, a, b, c):" syntax.

  def foobar(self, arg):
      .classmethod
      .deprecated = True
      .author = "Guido van Rossum"

      pass

?

Charles
-- 
-----------------------------------------------------------------------
Charles Cazabon                           <python@discworld.dyndns.org>
GPL'ed software available at:     http://www.qcc.ca/~charlesc/software/
-----------------------------------------------------------------------

From pje at telecommunity.com  Fri Mar 26 14:05:04 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 13:59:18 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261847.i2QIl4k22897@guido.python.org>
References: <Your message of "Fri,
	26 Mar 2004 13:17:29 EST." <406473B9.6040508@zope.com>
	<200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com>
Message-ID: <5.1.0.14.0.20040326140044.026310e0@mail.telecommunity.com>

At 10:47 AM 3/26/04 -0800, Guido van Rossum wrote:
> > Tossing out another idea:
> >
> >      foobar:
> >          author = "Guido van Rossum"
> >          deprecated = True
> >      def foobar(self, arg):
> >          pass
> >
> > The declaration block must be followed immediately by a class or def
> > statement.  It looks elegant to me.
>
>The current parser can't deal with "NAME ':'" as a syntax rule,
>because expressions also start with NAME.
>
>I also don't like that you have to repeat the function name.
>
>And this would hide the "important" decorators (e.g. classmethod)
>amongst a large collection of metadata (unless you want to combine
>this with the "def foo [classmethod] (cls, a, b, c):" syntax.

If it's only classmethod you're worried about obscuring, why not:

def foobar(class cls, otherarg, ...):
     # ...

And then move all other decorators into some kind of delimited block in the 
function body, just before the docstring.


From pje at telecommunity.com  Fri Mar 26 14:10:02 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 14:04:23 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261837.i2QIbwl22829@guido.python.org>
References: <Your message of "Fri, 26 Mar 2004 12:56:43 EST."
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
	<200403261510.i2QFAZ522251@guido.python.org>
	<200403261510.i2QFAZ522251@guido.python.org>
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
Message-ID: <5.1.0.14.0.20040326135049.029ed0b0@mail.telecommunity.com>

At 10:37 AM 3/26/04 -0800, Guido van Rossum wrote:
> > It's apparent Guido doesn't agree; I just wish I knew what was
> > bothering him about the PEP, so I could either provide a convincing
> > counterargument, or understand better why I'm wrong.  <0.5 wink> At
> > the moment, I'm worried that something in my actual use cases will
> > scare him into rejecting the PEP altogether.  <0.01 wink>
>
>Let me try to explain what bothers me.
>
>If we were going to use this mostly for decorators spelled with a
>single word, like classmethod, I would favor a syntax where the
>decorator(s) are put as early as reasonable in the function
>definition, in particular, before the argument list.  After seeing all
>the examples, I still worry that this:
>
>   def foobar(cls, blooh, blah) [classmethod]:
>
>hides a more important fact for understanding it (classmethod) behind
>some less important facts (the argument list).  I would much rather
>see this:
>
>   def foobar [classmethod] (cls, blooh, blah):

Either way is still a huge improvement over what we have now, but I 
certainly see your point.



>I agree that if this will be used for decorators with long argument
>lists, putting it in front of the arguments is worse than putting it
>after, but I find that in that case the current PEP favorite is also
>ugly:
>
>   def foobar (self, blooh, blah) [
>       metadata(author="GvR",
>                version="1.0",
>                copyright="PSF",
>                ...),
>       deprecated,
>   ]:
>       for bl, oh in blooh:
>           print oh(blah(bl))
>
>I don't see a way to address both separate concerns (hiding the most
>important fact after the signature, and the ugliness of long complex
>lists of decorators) with a single syntactic alternative.  The two
>concerns are in conflict with each other.  That's why I'm trying to
>pull the proposal apart into two directions: put small decorators in
>front, put large function attribute sets in the body.
>
>(For those worried that the function attribute sets appear to belong
>to the body, I point to the precedent of the docstring.  IMO the start
>of the function body is a perfectly fine place for metadata about a
>function.)

Okay, then how about:

     def foobar(cls,blooh, blah):
         [classmethod]
         """This is a class method"""
         # body

and
     def foobar(self,bloo,blah):
         [metadata(author="GvR",version=1.0,copyright="PSF"),
          deprecated]
         """This is deprecated"""
         # body

Okay, you're right, they're both ugly.  :)  In fact, they seem uglier than:

     def foobar(self,bloo,blah) [
         metadata(author="GvR",version=1.0,copyright="PSF"),
         deprecated
     ]:
         """This is deprecated"""
         # body

or:

     def foobar(self,bloo,blah) [
         deprecated, metadata(
             author="GvR",version=1.0,copyright="PSF"
         )
     ]:
         """This is deprecated"""
         # body

but I think this is largely a function of whitespace and other optional 
formatting choices.


From guido at python.org  Fri Mar 26 14:08:15 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 14:08:23 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 14:05:04 EST."
	<5.1.0.14.0.20040326140044.026310e0@mail.telecommunity.com> 
References: <Your message of "Fri,
	26 Mar 2004 13:17:29 EST." <406473B9.6040508@zope.com>
	<200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com> 
	<5.1.0.14.0.20040326140044.026310e0@mail.telecommunity.com> 
Message-ID: <200403261908.i2QJ8FT23018@guido.python.org>

> If it's only classmethod you're worried about obscuring, why not:
> 
> def foobar(class cls, otherarg, ...):
>      # ...

Not quite -- I'm worried about things *like* classmethod and
staticmethod, and I want to provide you with the ability to write your
own.

> And then move all other decorators into some kind of delimited block
> in the function body, just before the docstring.

Aha!  So you agree with that part.  I rest my case.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From casey at zope.com  Fri Mar 26 14:05:30 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 26 14:08:39 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<200403261644.i2QGi3822505@guido.python.org>
	<406462BA.60305@livinglogic.de>
	<200403261841.i2QIf3q22844@guido.python.org>
Message-ID: <20040326140530.4ec3f870.casey@zope.com>

On Fri, 26 Mar 2004 10:41:03 -0800
Guido van Rossum <guido@python.org> wrote:

> [Walter Doerwald]
> > For me '@' looks like something that the compiler shouldn't see.
> 
> I don't understand.  Why?  Is that what @ means in other languages?
> Not in JDK 1.5 -- the compiler definitely sees it.
> 
> > How about:
> > 
> >    def foobar(self, arg):
> >        .author = AuthorInfo(author="GvR", version="1.0",
>                               copyright="GPL", ...)
> >        .deprecated = True
> 
> No, I want to reserve the leading dot for attribute assignment to a
> special object specified by a 'with' statement, e.g.
> 
>     with self:
>         .foo = [1, 2, 3]
>         .bar(4, .foo)

Have you been reading the Prothon docs? ;^)

-Casey


From guido at python.org  Fri Mar 26 14:19:14 2004
From: guido at python.org (Guido van Rossum)
Date: Fri Mar 26 14:19:21 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: Your message of "Fri, 26 Mar 2004 14:05:30 EST."
	<20040326140530.4ec3f870.casey@zope.com> 
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<200403261644.i2QGi3822505@guido.python.org>
	<406462BA.60305@livinglogic.de>
	<200403261841.i2QIf3q22844@guido.python.org> 
	<20040326140530.4ec3f870.casey@zope.com> 
Message-ID: <200403261919.i2QJJEs23075@guido.python.org>

> > No, I want to reserve the leading dot for attribute assignment to a
> > special object specified by a 'with' statement, e.g.
> > 
> >     with self:
> >         .foo = [1, 2, 3]
> >         .bar(4, .foo)
> 
> Have you been reading the Prothon docs? ;^)

No, I've expressed this idea before, and they probably saw it.  Plus,
this is in VB so neither of us can claim to have invented it. :-)

http://freespace.virgin.net/s.cowan/vbhowto/how_to/vb_controls/with_statement.htm

(And "with" was in Pascal 30+ years ago, without the leading dot.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Fri Mar 26 14:26:39 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 26 14:26:47 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040326120205.02336d40@mail.telecommunity.com>
References: <Your message of "Fri, 26 Mar 2004 11:07:17 EST."
	<5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<5.1.0.14.0.20040326120205.02336d40@mail.telecommunity.com>
Message-ID: <16484.33775.177516.337303@montanaro.dyndns.org>


    Phillip> I guess my main concern is that I don't want to see PEP 318
    Phillip> crippled by a syntax that only works (and then just barely!)
    Phillip> for classmethod or another single-word decorator.

Ditto.  Just because we might not be able to think of all the potential uses
today doesn't mean we should lock ourselves out of potential future uses.
Who would have predicted when function docstrings were added to Python that
they would have been coopted for use as grammar productions (as John Aycock
did in SPARK)?  That's not what they were intended for, but given the
language as defined in 1998, it was a very elegant solution to the problem
at hand, IMHO.
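
For those who haven't seen it, the SPARK idiom looked roughly like this
(from memory, not the exact API): the docstring of each rule method
holds a grammar production, and the parser generator reads it back out
of __doc__ at runtime.

    class ExprParser:
        def p_expr_plus(self, args):
            ' expr ::= expr + term '
            return args[0] + args[2]

    rule = ExprParser.p_expr_plus.__doc__.strip()   # 'expr ::= expr + term'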

Skip

From skip at pobox.com  Fri Mar 26 14:33:00 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 26 14:33:10 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <DE1CF2B4FEC4A342BF62B6B2B334601E561F62@opus.amorhq.net>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561F62@opus.amorhq.net>
Message-ID: <16484.34156.466513.150290@montanaro.dyndns.org>


    Robert> However, the idea of putting such attributes at the same level
    Robert> as the rest of the function block is ugly to me--newbies are
    Robert> going to go mad the first few times, trying to figure out *when*
    Robert> such statements are evaluated. 

An excellent point.  It seems to me that @statements would have to be
restricted to occur only at the beginning of a function (immediately after
the def line or the docstring).  If not, you'll probably get people (newbies
at least) trying to write stuff like this:

    def foo(a):
        if a > 5:
            @attr: "bob"
        elif a > 0:
            @attr: "skip"
        else:
            @attr: "guido"

The other forms (via a decorator, using explicit attribute assignments after
the definition) make it explicit that these attribute assignments don't
occur in the context of function execution.

Skip

From jack at performancedrivers.com  Fri Mar 26 14:39:10 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Fri Mar 26 14:39:15 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261837.i2QIbwl22829@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<200403261510.i2QFAZ522251@guido.python.org>
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
	<200403261837.i2QIbwl22829@guido.python.org>
Message-ID: <20040326193910.GC12983@performancedrivers.com>

On Fri, Mar 26, 2004 at 10:37:58AM -0800, Guido van Rossum wrote:
> Let me try to explain what bothers me.
> 
> If we were going to use this mostly for decorators spelled with a
> single work, like classmethod, I would favor a syntax where the
> decorator(s) are put as early as reasonable in the function
> definition, in particular, before the argument list.  After seeing all
> the examples, I still worry that this:
> 
>   def foobar(cls, blooh, blah) [classmethod]:
> 
> hides a more important fact for understanding it (classmethod) behind
> some less important facts (the argument list). 

Until now the fact that the first argument was 'cls' instead of 'self'
was the signal to the user that it was a classmethod.  If there wasn't
a foobar = classmethod(foobar) a dozen lines down people would actually
be surprised and consider the declaration broken.

classmethod is currently a red-headed stepchild of a modifier, hidden
far from the definition, but people still manage.  I don't think that
because it has been neglected for so long we need to make it _more_
important than the arg list, as a reparation.
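
For reference, the current idiom looks something like this (class and
method names made up):

    class Widget:
        def from_config(cls, cfg):
            return cls()
        # ... a dozen lines of other methods ...
        from_config = classmethod(from_config)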

> I don't see a way to address both separate concerns (hiding the most
> important fact after the signature, and the ugliness of long complex
> lists of decorators) with a single syntactic alternative.  The two
> concern are in conflict with each other.  That's why I'm trying to
> pull the proposal apart into two directions: put small decorators in
> front, put large function attribute sets in the body.

I don't think the decorators are the most important part of the
declaration, so I prefer them after.  The docstring being after the
function declaration doesn't bother me, but in a previous life
spent doing C++, not having the documentation before the function/class
was a hanging offense.

> (For those worried that the function attribute sets appear to belong
> to the body, I point to the precedent of the docstring.  IMO the start
> of the function body is a perfectly fine place for metadata about a
> function.)

I think people are talking about different kinds of metadata.
Functional metadata (classmethod, register, memoize) looks good inline
with the function definition; documentation metadata (author, and pydoc
stuff like @returns) looks good after the declaration.  People are used
to seeing it there too, because that's where docstrings are.

I'm happy w/ docstrings for documentation metadata; I would only
use decorators for functional metadata (changing behavior).  So
I like them inline w/ the function def.

-jackdied


From mollitor at earthlink.net  Fri Mar 26 14:55:12 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Fri Mar 26 14:55:58 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
Message-ID: <7E7593F2-7F5F-11D8-80D6-00039314A688@earthlink.net>


On Friday, March 26, 2004, at 12:56  PM, Phillip J. Eby wrote:
> At the same time, I realize Python also generally prefers to have 
> different syntax for different use cases.  But, how different are 
> these use cases, really?  They're all changing the function in some 
> way.

Actually, not really, and that is the problem as I see it.  (I should
preface the following with "I'm not an expert and may have things
completely wrong".)

Consider

	class C:
		def m(self,a,b): pass
		m.hat =  "top"
		m.shoes = "wingtip"

		def cm(cls,a,b): pass
		cm.hat = "baseball"
		cm = classmethod(cm)
		cm.shoes = "sneakers"

Assuming this even worked (which it doesn't because classmethod has
read-only attributes), there are two objects named "cm" in this picture.
The latter one wraps the earlier one.  This means that the 'hat' and
'shoes' attributes would not be "on" the same object.  Now this might be
ok if you access the attribute through the classmethod instance because
it could redirect the 'getattr'.  However "C.cm" != "C.__dict__['cm']".
In fact, as far as I can tell, due to the way the binding occurs, once
you do the binding ("C.cm"), there is no way to get back to the
classmethod instance (the object actually stored in C's __dict__).  To
keep things sane, we probably want all function "attributes" to be on
the true function object anyway.  The wrapper object might have its own
attributes which control the binding operation, but those are separate
things.

So the main issue, I think, is that syntactically we want "transformers"
like classmethod and staticmethod (and potentially others) to be
front-and-center in the main function definition line, but we want them
to be "applied" last.  For this reason, we actually do need a new
"decoration" syntax (wherever it ends up lexically) with which to set up
any function attributes (metadata like docstrings) on the function
object itself.  This is because the 'old way' of "cm.shoes = 'sneakers'"
won't work (because it won't happen before the wrapping), though perhaps
it could be made to work if the classmethod redirects the 'setattr' to
the contained function object.
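
Concretely, here is a minimal sketch of the attribute problem, reusing
the names from the example above (the print is just to show where the
failure happens):

    class C(object):
        def cm(cls, a, b):
            pass
        cm.hat = "baseball"     # lands on the plain function object
        cm = classmethod(cm)    # the name 'cm' now refers to the wrapper

    try:
        C.__dict__['cm'].shoes = "sneakers"
    except AttributeError:
        print("the classmethod wrapper rejects new attributes")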

There is a secondary issue of how to handle chained transformers.
classmethod and staticmethod are mutually exclusive, but in general
transformers need not be.  Let's say there was a 'synchronized'
transformer that we want for class methods, too.  Not only would we want
"synchronized(classmethod(f))" to work, but we would probably want
"classmethod(synchronized(f))" to work identically.  This is not easy
unless either the implementation of each was aware of the other (or at
least the possibility of the other), or all transformers must be written
to a tight specification.  However, I admit that I don't fully
understand the whole descriptor thing yet.


Robert Mollitor


From casey at zope.com  Fri Mar 26 14:35:36 2004
From: casey at zope.com (Casey Duncan)
Date: Fri Mar 26 14:57:52 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <200403261919.i2QJJEs23075@guido.python.org>
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>
	<200403261644.i2QGi3822505@guido.python.org>
	<406462BA.60305@livinglogic.de>
	<200403261841.i2QIf3q22844@guido.python.org>
	<20040326140530.4ec3f870.casey@zope.com>
	<200403261919.i2QJJEs23075@guido.python.org>
Message-ID: <20040326143536.2f068c47.casey@zope.com>

On Fri, 26 Mar 2004 11:19:14 -0800
Guido van Rossum <guido@python.org> wrote:

> > > No, I want to reserve the leading dot for attribute assignment to
> > > a special object specified by a 'with' statement, e.g.
> > > 
> > >     with self:
> > >         .foo = [1, 2, 3]
> > >         .bar(4, .foo)
> > 
> > Have you been reading the Prothon docs? ;^)
> 
> No, I've expressed this idea before, and they probably saw it.  Plus,
> this is in VB so neither of us can claim to have invented it. :-)
> 
> http://freespace.virgin.net/s.cowan/vbhowto/how_to/vb_controls/with_statement.htm
> 
> (And "with" was in Pascal 30+ years ago, without the leading dot.)

Pascal being my first compiled language I am well aware. As for VB, well
if it's good enough for VB, then hell.. ;^) 

For some reason I thought there was a philosophical objection to 'with'
in Python. Must have been an urban myth I guess.

-Casey

From skip at pobox.com  Fri Mar 26 15:10:41 2004
From: skip at pobox.com (Skip Montanaro)
Date: Fri Mar 26 15:10:51 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040326193910.GC12983@performancedrivers.com>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
	<200403261837.i2QIbwl22829@guido.python.org>
	<20040326193910.GC12983@performancedrivers.com>
Message-ID: <16484.36417.192543.754336@montanaro.dyndns.org>


    Jack> I don't think because it has been neglected for so long we need to
    Jack> make it _more_ important that the arg list, as a reparation.

Can you say "affirmative action"? <wink>  At any rate, I agree that moving
it up somewhere near the "def" is probably good enough.  I think the
function name and the arglist should be more visually important.

I don't think we're going to wind up with everybody suddenly adding
decorations to every function they write (I have yet to use classmethod or
staticmethod).  Except in extreme cases (PEAK sort of sounds like it might
be one), most functions will be of the current type.  When I go back and
look at the beginning of a function's definition it is almost always to see
how to call it.  I'd like that to be just as easy for decorated functions as
the vanilla ones.

    Guido> (For those worried that the function attribute sets appear to
    Guido> belong to the body, I point to the precedent of the docstring.
    Guido> IMO the start of the function body is a perfectly fine place for
    Guido> metadata about a function.)

The docstring is a bit of a different beast.  They were added before general
function attributes, right?  As such, they are a partial solution to a more
general problem.  Had function attributes been available at that time the
special nature of "if the first object in the module/function/class is a
string literal, make it the docstring" wouldn't have been (as) necessary.

Skip

From python at rcn.com  Fri Mar 26 15:18:40 2004
From: python at rcn.com (Raymond Hettinger)
Date: Fri Mar 26 15:20:58 2004
Subject: [Python-Dev] RE: [Python-checkins] python/dist/src/Lib/test
	test_tcl.py, 1.3, 1.4
In-Reply-To: <E1B6syZ-0004KX-KN@sc8-pr-cvs1.sourceforge.net>
Message-ID: <00ab01c4136f$87a5bca0$52af958d@oemcomputer>

You've damaged me forever.  I used to read this as test-tee-see-ell, but
after your note I can't ignore the homonym ;-).

Raymond




From pje at telecommunity.com  Fri Mar 26 15:42:11 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Mar 26 15:36:27 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.36417.192543.754336@montanaro.dyndns.org>
References: <20040326193910.GC12983@performancedrivers.com>
	<200403261510.i2QFAZ522251@guido.python.org>
	<5.1.0.14.0.20040326124534.02336510@mail.telecommunity.com>
	<200403261837.i2QIbwl22829@guido.python.org>
	<20040326193910.GC12983@performancedrivers.com>
Message-ID: <5.1.0.14.0.20040326153506.025ecd90@mail.telecommunity.com>

At 02:10 PM 3/26/04 -0600, Skip Montanaro wrote:
>Except in extreme cases (PEAK sort of sounds like it might
>be one), most functions will be of the current type.

This is true even in PEAK.  'binding.Make' is used most often to wrap 
short, simple lambda: functions, rather than lengthy 'def' suites, and 
'events.taskFactory' is only used to wrap functions that will be run as 
pseudothreads in asynchronous, event-driven code.  'binding.classAttr' is 
really a wizard-level thing for framework builders, and so isn't seen in 
app-level code at all.

It's actually *much* more common to use *class* decorators in PEAK,
since they're used to declare interfaces and security, e.g.:

class Something(binding.Component):

     protocols.advise(
         instancesImplement=[ISomething]
     )

     security.declare(
         someAttr=SomePermission,
         otherAttr=OtherPermission
     )

     ...

So, if the PEP 318 class decorator syntax were available, these would move 
into an explicit declaration block, rather than just being inline function 
calls.


From fumanchu at amor.org  Fri Mar 26 15:51:41 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Fri Mar 26 15:53:35 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F6D@opus.amorhq.net>

Skip Montanaro wrote:
>     Guido> (For those worried that the function attribute 
> sets appear to
>     Guido> belong to the body, I point to the precedent of 
> the docstring.
>     Guido> IMO the start of the function body is a perfectly 
> fine place for
>     Guido> metadata about a function.)
> 
> The docstring is a bit of a different beast.  They were added 
> before general
> function attributes, right?  As such, they are a partial 
> solution to a more
> general problem.  Had function attributes been available at 
> that time the
> special nature of "if the first object in the 
> module/function/class is a
> string literal, make it the docstring" wouldn't have been 
> (as) necessary.

I can just picture the article:

"Unifying functions and classes in Python 2.5"

Introduction

"Python 2.5 introduces the first phase of "function/class unification".
This is a series of changes to Python intended to remove most of the
differences between functions and classes. Perhaps the most obvious one
is the restriction against declaring attributes of functions (such as
docstrings and other metadata) in a 'def' statement..."

:D


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From mike at nospam.com  Fri Mar 26 15:57:26 2004
From: mike at nospam.com (Mike Rovner)
Date: Fri Mar 26 15:57:27 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <200403261510.i2QFAZ522251@guido.python.org><16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com>
Message-ID: <c425fh$5ke$1@sea.gmane.org>

Given that attributes are part of the function object's dictionary,
and defined at function definition time like the docstring,
why not:

def func(atr):
  {author="Guido", deprecated=True}
  '''doc'''
  pass

Presumably {attributes} will be like "doc": only allowed immediately after
def (interchangeably with "doc") and do no harm in other places.

Regards,
Mike




From jim.jewett at eds.com  Fri Mar 26 16:17:05 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 26 16:18:04 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>

Guido van Rossum:

> ... for decorators spelled with a single word, like classmethod,
> ... as early as reasonable in the function definition

>    def foobar(cls, blooh, blah) [classmethod]:

> hides a more important fact for understanding it (classmethod) behind
> some less important facts (the argument list).  I would much rather
> see this:

>    def foobar [classmethod] (cls, blooh, blah):

One alternative placed it earlier still, in a second header clause.

    using:
        classmethod
    def foobar(cls, blooh, blah):
        pass

This isn't as good for single short decorators, but it does scale up
better than anything which adds directly to the def clause.

    using:
        metadata(author="GvR",
                 version="1.0",
                 copyright="PSF",
                 ...)
        deprecated
    def foobar (self, blooh, blah):
        for bl, oh in blooh:
            print oh(blah(bl))

It still risks hiding a few transformations in a large group
of annotations.

Using both the header and the suite creates more separation,
but it is an option.

    using classmethod:
        deprecated        
        metadata(author="GvR",
                 version="1.0",
                 copyright="PSF",
                 ...)
    def foobar (cls, blooh, blah):
        for bl, oh in blooh:
            print oh(blah(bl))

> I want to reserve the leading dot for attribute assignment to a
> special object specified by a 'with' statement, e.g.

>    with self:
>        .foo = [1, 2, 3]
>        .bar(4, .foo)

Would the about-to-be-defined object be available when specifying 
that special object?   (I've used "this" to separate it from the
enclosing class that "self" would refer to.)

    with this:
        .note = [1,2,3]
    def foo(args):
        pass

or even

    with proxy_dict(this):
        .note = [1,2,3]
    def foo(args):
        pass

How important is the annotation/transformation distinction?

    with this:
        .note = [1,2,3]
    using:
        classmethod
    def foo(args):
        pass

Phillip J. Eby:

> Okay, then how about:

>     def foobar(cls,blooh, blah):
>         [classmethod]
>         """This is a class method"""
>         # body

The above is already legal syntax, it just doesn't do much.  It 
does keep the """ """ from being a docstring, which in turn keeps 
it from being optimized out.

    def foobar(args):
        [1,2,3]
        """Here's proof you stole my code in a careless manner!"""
        pass

Breaking that particular usage might be amusing, but I'm not so 
happy about having to look farther and farther ahead to guess
where the magic ends.  @classmethod at least says "This part is
still magic."

From mollitor at earthlink.net  Fri Mar 26 16:32:14 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Fri Mar 26 16:32:21 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>


On Friday, March 26, 2004, at 10:10  AM, Guido van Rossum wrote:
> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.
>
> For (2) I am thinking aloud here:
>
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass

Ages ago I used to think there was some type of internationalization
problem with '@', but I guess its use in Java, not to mention in URLs,
belies that notion.  Still, you might want to save that token for some
future operator that can be used amongst other operators in an
expression.

(Also thinking out loud)  How about

	def foobar (self, arg):
		:doc """This is my doc string."""
		:author "Guido van Rossum"
		:deprecated
		pass

Syntactically, I believe this could work, though you would only want it
for funcdef and classdef and probably not in single-line suites.  So
(maybe):

	def_suite: simple_stmt | NEWLINE INDENT decoration* stmt+ DEDENT
	decoration: ':' test NEWLINE

The hand waving justification is that while a ':' at the end of a line
or in the middle means "here comes the good stuff", at the beginning of
the line it could mean "but first here is an aside".


Robert Mollitor



From jcarlson at uci.edu  Fri Mar 26 16:48:07 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri Mar 26 16:52:05 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040326193910.GC12983@performancedrivers.com>
References: <200403261837.i2QIbwl22829@guido.python.org>
	<20040326193910.GC12983@performancedrivers.com>
Message-ID: <20040326120520.11A5.JCARLSON@uci.edu>

Personally, I believe the discussion over PEP 318 has wandered from
implementation and practical purity into a syntactical nightmare.

We went from a single decorator syntax that worked for all kinds of
decorators, to another decorator syntax...that still works for all kinds
of decorators.  Wait, decorators haven't changed...but something else
has.

In addition to the proposed decorator syntax, now there is this other
syntax that seems to want to be attached to 318 (which Guido seems to
want to signify with '@'; others have suggested '.' and ':') for
inserting metadata attributes into the function body.  This syntax
doesn't allow if, while, etc. statements, and really could be considered
a subset of Python.

I just read Mike Rovner's idea of allowing a dictionary in the body,
just before or after the docstring, to update the function's __dict__
attribute. If there existed an in-body metadata syntax, the dict literal
fits the current docstring syntax.

PEP 318 is about general function and class decorator syntax, which most
everyone has basically agreed upon (some list just before or after the
arguments), and this metadata-specific extension, I believe, should be
its own PEP (regardless of which idea is favored, and regardless of
whether it is a good idea).


 - Josiah

P.S. If dict literals are the future syntax of metadata, should
functions get implicit __slots__ with a list literal, ignoring all
metadata updates with unlisted slots? <wink>


From rfinn at opnet.com  Fri Mar 26 17:01:57 2004
From: rfinn at opnet.com (Russell Finn)
Date: Fri Mar 26 17:02:08 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>
References: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>
Message-ID: <32F83457-7F71-11D8-AF1F-000393C96926@opnet.com>

Mike Rovner wrote:
> Given that attributes are part of function object dictionary,
> and defined at function definition time like docstring,
> why not:
>
> def func(atr):
>   {author="Guido", deprecated=True}
>   '''doc'''
>   pass

I came up with nearly the same idea after reading Guido's original 
post, except that I recommend using the existing dictionary literal 
syntax:

def func (args):
     { author: "Guido", deprecated: True}
     '''doc'''
     pass

Perhaps this was just a typo in Mike's post.

-- Russell

-- 
Russell S. Finn
Principal Software Engineer, Software Architecture and Design
OPNET Technologies, Inc.


From mike at nospam.com  Fri Mar 26 17:21:38 2004
From: mike at nospam.com (Mike Rovner)
Date: Fri Mar 26 17:21:38 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>
	<32F83457-7F71-11D8-AF1F-000393C96926@opnet.com>
Message-ID: <c42adc$8pk$1@sea.gmane.org>

Russell Finn wrote:
> Perhaps this was just a typo in Mike's post.

Oops. Thank you for pointing that out.

The idea was making it a dict literal. It's backward compatible, it's
already familiar, and it is a reminder that the function's dictionary
is being changed.

Mike





From walter at livinglogic.de  Fri Mar 26 17:28:49 2004
From: walter at livinglogic.de (=?iso-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri Mar 26 17:28:53 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261841.i2QIf3q22844@guido.python.org>
References: <200403261841.i2QIf3q22844@guido.python.org>
Message-ID: <1121.213.6.243.82.1080340129.squirrel@isar.livinglogic.de>

Guido van Rossum sagte:

> [Walter Doerwald]
>> For me '@' looks like something that the compiler shouldn't see.
>
> I don't understand.  Why?  Is that what @ means in other languages? Not
> in JDK 1.5 -- the compiler definitely sees it.

@ has no meaning in current Python and is seldom used in normal text, so
it seems to be perfect for an escape character that is used by a
documentation extractor or preprocessor, i.e. for markup that is somehow
"orthogonal" to the program source. But as part of a normal Python source
it feels like a wart.

>> How about:
>>
>>    def foobar(self, arg):
>>        .author = AuthorInfo(author="GvR", version="1.0",
>                              copyright="GPL", ...)
>>        .deprecated = True
>
> No, I want to reserve the leading dot for attribute assignment to a
> special object specified by a 'with' statement, e.g.
>
>    with self:
>        .foo = [1, 2, 3]
>        .bar(4, .foo)

I know, but inside a function the leading dot could default to function
attribute assigment in the absence of a with statement.

That makes me wonder, what a leading dot should mean at class or module
scope.

Bye,
   Walter Dörwald




From jim.jewett at eds.com  Fri Mar 26 17:36:43 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Fri Mar 26 17:37:08 2004
Subject: [Python-Dev] annotations and PEP 316
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D414@USAHM010.amer.corp.eds.com>

I've already run into problems when my docstrings get 
longer than a few lines -- the function signature and 
the function body are too far apart.

doctest makes this worse.

PEP 316 is another good idea -- that makes docstrings
even longer.

As Skip pointed out, docstrings are a special case; 
if we were starting fresh, it might make sense to 
create new attributes for docstrings (__doc__), 
contracts, extra tests to run, longer documentation,
etc, rather than cramming them all into the plaintext
first string.

Annotation decorators would support this, but would
definitely be too long for squeezing in mid-def.  It
would also make the "real transforms get lost" problem
worse, if the two types of decorators were not 
separated.

Based on an example from PEP 316

    class circbuf:

        note:
            dbc = """
                  pre: not self.is_empty()
                  post[self.g, self.len]:
                      __return__ == self.buf[__old__.self.g]
                      self.len == __old__.self.len - 1
                  """
            changed = (2004, 3, 26, 17, 9, 47, 4, 86, 0)
        using:
            synchronized(self.lock)
        def get(self):
            """Pull an entry from a non-empty circular buffer."""

I like leaving the current declaration alone; it lets me copy and
paste to the calling code.  But the separation works with any 
proposed transformer syntax - and some that were rejected for not
scaling.  The key is pulling out annotations (which *will* get long) 
in hopes of leaving a short list of true transformers.

Note that while Design By Contract is intended to be executable, it
is also intended to be something that the user can optimize out.
Attaching a second entry point does not require replacing the original
object;  DBC-aware tracing tools can look for the DBC entry point
attribute at least as easily as they can parse docstrings.  
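
A rough sketch of what I mean by attaching a second entry point as an
attribute (every name here is invented for illustration):

    def get(buf):
        """Pull an entry from a non-empty circular buffer."""
        return buf.pop(0)

    def get_checked(buf):
        assert len(buf) > 0        # the "pre" part of the contract
        return get(buf)

    get.dbc_entry_point = get_checked   # DBC-aware tools can look for this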

I suspect many of the other "non-annotation" wrappers could be reduced
to an annotation in a similar manner.

-jJ

From jacobs at theopalgroup.com  Fri Mar 26 18:25:26 2004
From: jacobs at theopalgroup.com (Kevin Jacobs)
Date: Fri Mar 26 18:25:29 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261841.i2QIf3q22844@guido.python.org>
References: <5.1.0.14.0.20040326104109.01ec7a70@mail.telecommunity.com>	<200403261644.i2QGi3822505@guido.python.org>
	<406462BA.60305@livinglogic.de>
	<200403261841.i2QIf3q22844@guido.python.org>
Message-ID: <4064BBE6.20404@theopalgroup.com>

Guido van Rossum wrote:

>[Walter Doerwald]
>  
>
>>For me '@' looks like something that the compiler shouldn't see.
>>    
>>
>
>I don't understand.  Why?  Is that what @ means in other languages?
>Not in JDK 1.5 -- the compiler definitely sees it.
>  
>

Gack!  @ is ugly as sin to me.  Why not allow function attributes as
keyword arguments using the decorator syntax?  e.g.,

  def foo(self, a) [classmethod, precondition=predicate1,
                    postcondition=predicate2,
                    bnf='FOO := left op right']:
      pass


Thoughts?

-Kevin


From pf_moore at yahoo.co.uk  Fri Mar 26 18:48:48 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Fri Mar 26 18:48:38 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <7jx7maov.fsf@yahoo.co.uk>

Guido van Rossum <guido@python.org> writes:

> We came to an interesting conclusion (which Robert had anticipated):
> there are two quite different groups of use cases.
>
> One group of use cases modifies the way the method works, its
> semantics.
[...]
> The other group of use cases merely attaches extra bits of metadata to
> the method, without changing its usage.

I don't see these as two distinct use cases for the decorator
mechanism. I would personally never use decorators for your second
case (it's possible to do so, but that's a different matter). For me,
your second use case is a use case for function attributes, not for
decorator syntax.

So in my view, your second use case is orthogonal to PEP 318, and
should not impact on it at all.

> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.

As I say, I think this is a red herring. Sure, having a specialised
notation for setting function attributes is a notion that has been
round for a long time. But any plausible implementation of "decorators
of the first category" is going to leave open the possibility of a
decorator like

    def attributes(**kw):
        def decorator(f):
            for k, v in kw.iteritems():
                setattr(f, k, v)
            return f
        return decorator
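
With the proposed syntax that would be applied as
"def foobar(self, arg) [attributes(author='GvR', deprecated=True)]:",
or, spelling it out with no new syntax at all:

    def foobar(self, arg):
        pass
    foobar = attributes(author="GvR", deprecated=True)(foobar)
    assert foobar.author == "GvR" and foobar.deprecated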

If the importance of the metadata use case is sufficient to justify
new syntax, then it is regardless of the existence of decorators. If
it isn't, then I can't see how the existence of decorators could make
it more important to have syntax for setting attributes.

> For (2) I am thinking aloud here:
>
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass

To be perfectly honest, I think this is a lousy example. I can't see
any real reason I'd ever want to set an "author" attribute on a
function (or any other attribution-style data), other than for
documentation. And in that case, I'd probably prefer to define a
project standard for the layout of the docstring. That said, I don't
mind the syntax.

And as for the "deprecated" case, that's one I'd definitely use a
decorator for! I'd want the function wrapped to raise a "Deprecated"
warning. Just having an attribute set would do nothing at runtime. (In
another post, you mentioned a metaclass. But a decorator would work
for bare (non-method) functions, as well as methods, where a metaclass
could not).

If your only point is that decorators are not an appropriate mechanism
for setting function attributes, then I agree with this. But unless
you're proposing to somehow cripple decorators so that my attributes
decorator above is *impossible*, then that statement is simply a style
issue.

Let's keep the discussion on PEP 318 focused on decorators. I'm happy
to have it stated (in the PEP, if you like) that a decorator like
attributes is an abuse and is strongly discouraged. But I don't think
that the existence of an abuse is justification for rejection of the
feature. Heck, if that was true, then half of the features already in
Python would need removing...

My view is that the idiom of "wrapping" is a very general, and highly
useful, programming paradigm. Providing direct language support for
that idiom will make a certain class of programs easier to write, and
more understandable to read. Any general mechanism brings the risk of
abuse, but in my experience, Python programmers have always shown good
taste and restraint in the use of the language - I'd assume that the
same would be true of this feature.

Paul.
-- 
This signature intentionally left blank


From Peter.Otten at t-online.de  Fri Mar 26 15:28:39 2004
From: Peter.Otten at t-online.de (Peter Otten)
Date: Fri Mar 26 20:06:41 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <200403262128.39232.__peter__@web.de>

Yet another variant that deals with "passive" metadata: make the first 
dictionary special just like the first string.

>>> def foobar():
...     { author = "van Rossum",
...       deprecated = True }
...
>>> foobar.deprecated
True

Even better, stash it all away in another attribute:

>>> foobar.__meta__
{"author": "van Rossum", "deprecated": "not yet"}

This would work for both functions and classes. Fitting in nicely: the 
alternative spelling for all dictionaries:

{alpha="value", beta="another value"}

Peter Otten


From Peter.Otten at t-online.de  Fri Mar 26 17:05:49 2004
From: Peter.Otten at t-online.de (Peter Otten)
Date: Fri Mar 26 20:06:48 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <200403262305.49622.__peter__@web.de>

Yet another variant that deals with "passive" metadata - make the first 
dictionary special just like the first string.

>>> def foobar():
...     { author = "van Rossum",
...       deprecated = True }
...
>>> foobar.deprecated
True

Even better, stash it all away in another attribute:

>>> foobar.__meta__
{"author": "van Rossum", "deprecated": "not yet"}

This would work for functions, classes and modules alike. Fitting in nicely: 
the alternative spelling for all dictionaries.

{alpha="value", beta="another value"}

Peter Otten



From andrew-pythondev at puzzling.org  Fri Mar 26 20:42:18 2004
From: andrew-pythondev at puzzling.org (Andrew Bennetts)
Date: Fri Mar 26 20:42:29 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.34156.466513.150290@montanaro.dyndns.org>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561F62@opus.amorhq.net>
	<16484.34156.466513.150290@montanaro.dyndns.org>
Message-ID: <20040327014218.GC22848@frobozz>

On Fri, Mar 26, 2004 at 01:33:00PM -0600, Skip Montanaro wrote:
> 
>     Robert> However, the idea of putting such attributes at the same level
>     Robert> as the rest of the function block is ugly to me--newbies are
>     Robert> going to go mad the first few times, trying to figure out *when*
>     Robert> such statements are evaluated. 
> 
> An excellent point.  It seems to me that @statements would have to be
> restricted to occur only at the beginning of a function (immediately after
> the def line or the docstring).  If not, you'll probably get people (newbies
> at least) trying to write stuff like this:
> 
>     def foo(a):
>         if a > 5:
>             @attr: "bob"
>         elif a > 0:
>             @attr: "skip"
>         else:
>             @attr: "guido"
> 
> The other forms (via a decorator, using explicit attribute assignments after
> the definition) make it explicit that these attribute assignments don't
> occur in the context of function execution.

I agree -- anything that looks like code, but is run at definition time
(rather than function call time), should occur before the colon in the def
statement.  To me, that colon is the boundary between declaration and code.

I realise that technically docstrings already break that -- but string
literals are no-ops in code, so it's not a big break.  But attribute
assignments that allow evaluation of arbitrary expressions aren't no-ops, so
I think Skip has a good point here.

I think I'd rather stick with just "def func(args...) [decorators]:" for now
-- it already makes setting function attributes much easier than is possible
now, and will hopefully give us some actual experience in what people want
to do, rather than just speculation.  Adding "@foo: bar" feels like a very
big (and controversial!) addition for very minimal benefit over what PEP 318
already allows.

-Andrew.


From barry at python.org  Sat Mar 27 00:27:33 2004
From: barry at python.org (Barry Warsaw)
Date: Sat Mar 27 00:27:37 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261510.i2QFAZ522251@guido.python.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
Message-ID: <1080365252.29455.23.camel@geddy.wooz.org>

On Fri, 2004-03-26 at 10:10, Guido van Rossum wrote:

> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.
> 
> For (2) I am thinking aloud here:
> 
>    def foobar(self, arg):
>        @author: "Guido van Rossum"
>        @deprecated
>        pass

The way I /want/ to spell this is something like:

def foobar(self, arg):
    foobar.author = 'Guido van Rossum'
    foobar.deprecated = True

I know all the practical problems with this suggestion, but to me,
that's the most natural spelling.  I suppose something like this might
also not be horrible:

def foobar [
    .author = 'Guido van Rossum'
    .deprecated = True
    classmethod
    ] (self, arg)

Naw, these probably don't mix well.  Hmm,

def foobar {
    author = 'Guido van Rossum'
    deprecated = True
    } [classmethod, spoogemethod, insanity
    ] (self, arg):
    # Now what the heck does this thing do?

Blech.

OTOH, I'm all for trying to get a more natural way of spelling function
attributes, but decorators can also do the trick in a nasty way:

def foobar [
    lambda f: f.author = 'Guido van Rossum',
    lambda f: f.deprecated = True,
    classmethod] (self, arg):
    # Now what?

It's way past my bedtime <wink> so ignore my babbling.
-Barry



From aleaxit at yahoo.com  Sat Mar 27 03:09:21 2004
From: aleaxit at yahoo.com (Alex Martelli)
Date: Sat Mar 27 03:09:36 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <1080365252.29455.23.camel@geddy.wooz.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<1080365252.29455.23.camel@geddy.wooz.org>
Message-ID: <0D7007C8-7FC6-11D8-AD6E-000A95EFAE9E@yahoo.com>


On 2004 Mar 27, at 06:27, Barry Warsaw wrote:
    ...
> attributes, but decorators can also do the trick in a nasty way:
>
> def foobar [
>     lambda f: f.author = 'Guido van Rossum',
>     lambda f: f.deprecated = True,
>     classmethod] (self, arg):
>     # Now what?

Not necessarily all that nasty:

def foobar [ with_attributes(
         author="Guido van Rossum",
         deprecated=True),
     classmethod] (cls, args):
         pass

with a built-in 'with_attributes' equivalent to:

def with_attributes(f, **kwds):
     for k, v in kwds.iteritems():
         setattr(f, k, v)
     return f


Alex


From pf_moore at yahoo.co.uk  Sat Mar 27 06:03:22 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Sat Mar 27 06:03:13 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com> <c425fh$5ke$1@sea.gmane.org>
Message-ID: <1xnesgat.fsf@yahoo.co.uk>

"Mike Rovner" <mike@nospam.com> writes:

> Given that attributes are part of function object dictionary,
> and defined at function definition time like docstring,
> why not:
>
> def func(atr):
>   {author="Guido", deprecated=True}
>   '''doc'''
>   pass
>
> Presumably {attributes} will be like "doc": only allowed immediately after
> def (interchangeably with "doc") and do no harm in other places.

I quite like this (although it ought to be a real dictionary literal,
ie {'author': 'Guido', 'deprecated': True}, which doesn't read quite
as well).

But I still think this is a separate issue from PEP 318.

Paul
-- 
This signature intentionally left blank


From mollitor at earthlink.net  Sat Mar 27 08:17:01 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Sat Mar 27 08:15:48 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>
Message-ID: <087188E2-7FF1-11D8-9701-000393100E1A@earthlink.net>


On Friday, March 26, 2004, at 04:32 PM, Robert Mollitor wrote:
> (Also thinking out loud)  How about
>
> 	def foobar (self, arg):
> 		:doc """This is my doc string."""
> 		:author "Guido van Rossum"
> 		:deprecated
> 		pass
>
> Syntactically, I believe this could work, though you would only want 
> it for
> funcdef and classdef and probably not in single-line suites.  So 
> (maybe):
>
> 	def_suite: simple_stmt | NEWLINE INDENT decoration* stmt+ DEDENT
> 	decoration: ':' test NEWLINE
>
> The hand waving justification is that while a ':' at the end of a line 
> or in the middle
> means "here comes the good stuff", at the beginning of the line it 
> could mean
> "but first here is an aside".

Oops, that should have been

	decoration: ':' NAME test NEWLINE

or along the same lines:

	decoration: ':' NAME ':' test NEWLINE

or

	decoration: ':' NAME '=' test NEWLINE

Also, these would only be for functions and not classes.  It's only the
"transformations" that people want or need for classes.

Another idea (leaving exact grammar and exact choice of keywords out of 
it for now):

	def foobar [classmethod] (self, arg) meta:
		:doc """This is my doc string."""
		:author "Guido van Rossum"
		:deprecated
	def:
		pass

or

	def foobar [classmethod] (self, arg) has:
		:doc """This is my doc string."""
		:author "Guido van Rossum"
		:deprecated
	is:
		pass

Here is my take on some recent questions.

Q: Why can't we just accept PEP 318 and restrict it to 
non-function-attribute-setting transformations?

A: There is no enforceable way to restrict what the transformations do. 
  People will probably write transformations
that simply add attributes to functions, because (1) there will still 
be a desire to get such settings "into" the function
and (2) the old way of "f.author = 'me'" after the function will not 
work in general when a transformation happens
earlier.  The second issue in particular is why this PEP needs to be 
coordinated with a second PEP.

Q: So what if people write transformations that set function attributes?

A: Well, now the issues of size, ugliness, and burying important 
information come into play.  As things currently
are there is also the issue of order.  Are we prepared to say that the 
'classmethod' must come after all the other
decorations like 'author', 'release', 'html_version_of_doc'?  Now we 
probably will need to deal with the order
issue for transformations anyway, but the solution might make the 
specification of a new transformation non-trivial.
This is ok if the set of transformations is small, but otherwise might 
mean that the same largish boilerplate is
replicated throughout people's codebases.

Q: Why can't we just have a dictionary literal in place of the 
docstring at the beginning of the function body?

A: Technically, I think we could and that was my first thought, too.  
However, I see one big problem with this.
Even if we try to educate people (non-advanced python users) that that 
first bit of code will not actually be placed
in the function's code, I still think people will get confused about 
why function arguments or, say, super() can't be
used in that dictionary's definition.  These attribute settings need to 
be syntactically distinct and they may as well
be as pretty as possible.

Q: Won't these function attributes just move the code even further from 
the function name?

A: Perhaps, but it could also provide a useful and not outrageously 
ugly idiom for lessening it.  Consider

	class C:

		docstring = """This docstring is really long, maybe even 40 lines 
long."""

		def method1 (self, arg):
			:doc docstring
			pass

		docstring = """This is a different string set to the same scratch 
class variable"""

		def method2 (self, arg):
			:doc docstring
			pass


Robert Mollitor


From bh at intevation.de  Sat Mar 27 12:39:25 2004
From: bh at intevation.de (Bernhard Herzog)
Date: Sat Mar 27 12:39:36 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
	(Jim J. Jewett's message of "Fri, 26 Mar 2004 16:17:05 -0500")
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
Message-ID: <s9zy8pmqjea.fsf@salmakis.intevation.de>

"Jewett, Jim J" <jim.jewett@eds.com> writes:

> Guido van Rossum:
>
>> ... for decorators spelled with a single word, like classmethod,
>> ... as early as reasonable in the function definition
>
>>    def foobar(cls, blooh, blah) [classmethod]:
>
>> hides a more important fact for understanding it (classmethod) behind
>> some less important facts (the argument list).  I would much rather
>> see this:
>
>>    def foobar [classmethod] (cls, blooh, blah):
>
> One alternative placed it earlier still, in a second header clause.
>
>     using:
>         classmethod
>     def foobar(cls, blooh, blah):
>         pass

Has somebody suggested the following yet?

    def [classmethod] foo(cls, blooh, bla):
        pass

or for a longer list of decorators

    def [classmethod, 
         attributes(deprecated=True)
        ] foo(cls, blooh, bla):
        pass


It might be more readable if a newline were allowed between the closing
] and the function name:

    def [classmethod, attributes(deprecated=True)]
        foo(cls, blooh, bla):
        pass


Putting the decorators between the "def" and the function name has some
advantages over the proposals I've seen so far:

    - crucial information about the function and the meaning of the
      parameters is given very early (this was an argument made for
      putting the decorators before the argument list)

    - The name and the arguments are together, so it still looks similar
      to the function invocation (an argument made for putting the
      decorators after the argument list).

If the decorators come before the function name, the decorators should
perhaps be applied in reverse order so that 

  def [classmethod, deprecated] foo(cls, bar):
      pass

is similar to

  def foo(cls, bar):
      pass
  foo = classmethod(deprecated(foo))



  Bernhard


-- 
Intevation GmbH                                 http://intevation.de/
Skencil                                http://sketch.sourceforge.net/
Thuban                                  http://thuban.intevation.org/

From skip at pobox.com  Sat Mar 27 13:11:40 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sat Mar 27 13:11:46 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <0D7007C8-7FC6-11D8-AD6E-000A95EFAE9E@yahoo.com>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<1080365252.29455.23.camel@geddy.wooz.org>
	<0D7007C8-7FC6-11D8-AD6E-000A95EFAE9E@yahoo.com>
Message-ID: <16485.50140.527111.263016@montanaro.dyndns.org>


    Barry> attributes, but decorators can also do the trick in a nasty way:
    Barry> 
    Barry> def foobar [
    Barry>     lambda f: f.author = 'Guido van Rossum',
    Barry>     lambda f: f.deprecated = True,
    Barry>     classmethod] (self, arg):
    Barry>     # Now what?

    Alex> Not necessarily all that nasty:

    Alex> def foobar [ with_attributes(
    Alex>          author="Guido van Rossum",
    Alex>          deprecated=True),
    Alex>      classmethod] (cls, args):
    Alex>          pass

Both cases demonstrate why I don't like the decorator-before-args option.
Where are the args?  Like I said in an earlier post most of the time I look
at a function def wanting a quick reminder how to call it.  If the argument
list doesn't at least start on the same line as the def I think that will be
tougher.

    Alex> with a built-in 'with_attributes' equivalent to:

    Alex> def with_attributes(f, **kwds):
    Alex>      for k, v in kwds.iteritems():
    Alex>          setattr(f, k, v)
    Alex>      return f

Not quite.  It has to be written as a function which returns another
function, e.g.:

    def attrs(**kwds):
        def decorate(f):
            f.__dict__.update(kwds)
            return f
        return decorate
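
For what it's worth, here is a rough sketch (hypothetical usage, not from
Skip's message) of how such an attrs() factory composes with classmethod under
the current call-it-after-the-def spelling; the bracketed syntaxes being
discussed would just perform the same calls at definition time:

    def attrs(**kwds):
        def decorate(f):
            f.__dict__.update(kwds)
            return f
        return decorate

    class C(object):
        def foobar(cls, arg):
            pass
        # today's spelling of what [attrs(...), classmethod] would do
        foobar = attrs(author="Guido van Rossum", deprecated=True)(foobar)
        foobar = classmethod(foobar)

    print C.foobar.author      # the bound method delegates to the function
    print C.foobar.deprecated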

Skip

From skip at pobox.com  Sat Mar 27 13:26:41 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sat Mar 27 13:26:46 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <s9zy8pmqjea.fsf@salmakis.intevation.de>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
	<s9zy8pmqjea.fsf@salmakis.intevation.de>
Message-ID: <16485.51041.235900.502085@montanaro.dyndns.org>


    Bernhard> Has somebody suggested the following yet?

    Bernhard>     def [classmethod] foo(cls, blooh, bla):
    Bernhard>         pass

Yeah, I believe it's come up.

    Bernhard> Putting the decorators between the "def" and the function name
    Bernhard> has some advantages over the proposals I've seen so far:

    Bernhard> - crucial information about the function and the meaning of
    Bernhard>   the parameters is given very early (this was an argument
    Bernhard>   made for putting the decorators before the argument list)

I would propose a counterargument.  Placing it between the argument list and
the colon puts it "close enough" to the "def".  I did a quick grep through
the Python distribution just now and found 22198 function or method
definitions.  Of those, 20282 had the name and the entire argument list on
the same line as the def keyword.  That means in all but a small number of
cases decorators would start on the same line as the def keyword.
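
(A sketch of one way to reproduce that kind of count -- an assumption about
the methodology, not necessarily the command Skip actually used:)

    import os, re

    def_start = re.compile(r'^\s*def\s+\w+\s*\(')   # a def statement begins

    total = same_line = 0
    for dirpath, dirnames, filenames in os.walk('.'):
        for name in filenames:
            if name.endswith('.py'):
                for line in open(os.path.join(dirpath, name)):
                    if def_start.match(line):
                        total += 1
                        # argument list closes on the same line as the def
                        if line.rstrip().endswith('):'):
                            same_line += 1

    print total, same_line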

    Bernhard> - The name and the arguments are together, so it still looks
    Bernhard>   similar to the function invocation (an argument made for
    Bernhard>   putting the decorators after the argument list).

Yes, but possibly several lines later, making it more difficult to find (you
couldn't reliably grep for "^ *def foo" to locate a function definition) and
forcing tools like the Emacs tags facility to be rewritten to handle the
case where "def" and the function name don't appear on the same line.

Skip

From alexis.layton at post.harvard.edu  Sat Mar 27 13:05:53 2004
From: alexis.layton at post.harvard.edu (Alexis Layton)
Date: Sat Mar 27 13:30:46 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <63558AEA-8019-11D8-ABE2-000A957D895E@post.harvard.edu>

Hi folks.

If there is going to be a distinction between "transforming" and "annotative"
decorators, then I think this syntax makes the most sense:

def [classmethod] my_wonderful_method(arg1, arg2):
	:author: "Eric Idle"
	:skit: "The Python Decorator Skit"
	:xyzzy: getNextXyzzy()
	:doc: """doc"""
	pass

Whether these annotations are put into the function dict, or 
function.__meta__
is an implementation detail.  I would like to suggest that either a 
traditional
doc string or the annotation block appear after a def, but not both.

Annotation names should probably be limited to identifier syntax.

I understand Guido has stated that he doesn't like to separate the def 
from the funcname
because it could be hard to find, especially with long decorator 
chains.  But if only
transforming decorators are used here, it's my impression that they 
will tend to be short.
Secondly, with transforming decorators, the meaning of the function 
depends on the
transformation; I find "def [classmethod]" reads quite well.
----
Alexis Layton
2130 Massachusetts Ave.
Apt. 6E
Cambridge, MA  02140-1917
alexis.layton@post.harvard.edu


From marktrussell at btopenworld.com  Sat Mar 27 13:47:09 2004
From: marktrussell at btopenworld.com (Mark Russell)
Date: Sat Mar 27 13:47:11 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16485.51041.235900.502085@montanaro.dyndns.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
	<s9zy8pmqjea.fsf@salmakis.intevation.de>
	<16485.51041.235900.502085@montanaro.dyndns.org>
Message-ID: <1080413228.2626.13.camel@localhost>

On Sat, 2004-03-27 at 18:26, Skip Montanaro wrote:
> I would propose a counterargument.  Placing it between the argument list and
> the colon puts it "close enough" to the "def". 

Absolutely.  Another point: the most common use of a function signature
is reading it to see how to call the function.  For this, classmethod is
not that important -- it doesn't affect how you call the function, only
how it is defined.  The only effect on the call is "can I call this with
just a class rather than an instance" - someone wanting an answer to
that question is quite capable of reading to the end of a def line.  

Given that classmethod and staticmethod are probably about as
significant as function modifiers get, putting them right next to the
def is giving them way too much prominence (and I use classmethod quite
a lot).

Mark Russell


From aleaxit at yahoo.com  Sat Mar 27 14:47:44 2004
From: aleaxit at yahoo.com (Alex Martelli)
Date: Sat Mar 27 14:47:56 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16485.50140.527111.263016@montanaro.dyndns.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<1080365252.29455.23.camel@geddy.wooz.org>
	<0D7007C8-7FC6-11D8-AD6E-000A95EFAE9E@yahoo.com>
	<16485.50140.527111.263016@montanaro.dyndns.org>
Message-ID: <9DABDA1C-8027-11D8-AD6E-000A95EFAE9E@yahoo.com>


On 2004 Mar 27, at 19:11, Skip Montanaro wrote:

>     Barry> attributes, but decorators can also do the trick in a nasty 
> way:
>     Barry>
>     Barry> def foobar [
>     Barry>     lambda f: f.author = 'Guido van Rossum',
>     Barry>     lambda f: f.deprecated = True,
>     Barry>     classmethod] (self, arg):
>     Barry>     # Now what?
>
>     Alex> Not necessarily all that nasty:
>
>     Alex> def foobar [ with_attributes(
>     Alex>          author="Guido van Rossum",
>     Alex>          deprecated=True),
>     Alex>      classmethod] (cls, args):
>     Alex>          pass
>
> Both cases demonstrate why I don't like the decorator-before-args 
> option.
> Where are the args?  Like I said in an earlier post most of the time I 
> look
> at a function def wanting a quick reminder how to call it.  If the 
> argument
> list doesn't at least start on the same line as the def I think that 
> will be
> tougher.

As long as the decorators don't come between the def and the function 
name (that WOULD be far too early and impede looking for 'def foo' to 
easily locate the definition) I'll be OK, but, sure, I do agree that 
right before the colon is the best place for the decorators.  I was 
just using the same hypothetical syntax as Barry was using in his own 
snippet.

>     Alex> with a built-in 'with_attributes' equivalent to:
>
>     Alex> def with_attributes(f, **kwds):
>     Alex>      for k, v in kwds.iteritems():
>     Alex>          setattr(f, k, v)
>     Alex>      return f
>
> Not quite.  It has to be written as a function which returns another
> function, e.g.:
>
>     def attrs(**kwds):
>         def decorate(f):
>             f.__dict__.update(kwds)
>             return f
>         return decorate

Yes, you're right, this is what the semantics must be (assuming that 
"setattr(f, n, v)" does absolutely nothing more than "f.__dict__[n] = v", 
a hypothesis I wasn't ready to make -- I'm not sure it's currently 
guaranteed by the language reference manual -- but if I'm wrong and it IS 
so guaranteed and I just missed the spot that guarantees it, the bulk 
update is of course a speedier approach than the loop I had).
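
A quick check of that hypothesis (a hedged editorial sketch: for plain
function objects the two spellings are observably equivalent, whatever the
manual does or does not promise):

    def f():
        pass

    setattr(f, "author", "Guido van Rossum")
    f.__dict__["deprecated"] = True
    print f.author, f.deprecated   # both attributes end up in f.__dict__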


Alex


From barry at python.org  Sat Mar 27 15:05:59 2004
From: barry at python.org (Barry Warsaw)
Date: Sat Mar 27 15:06:06 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <1xnesgat.fsf@yahoo.co.uk>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<16484.26937.904019.643044@montanaro.dyndns.org>
	<406473B9.6040508@zope.com> <c425fh$5ke$1@sea.gmane.org>
	<1xnesgat.fsf@yahoo.co.uk>
Message-ID: <1080417959.5262.80.camel@anthem.wooz.org>

On Sat, 2004-03-27 at 06:03, Paul Moore wrote:
> "Mike Rovner" <mike@nospam.com> writes:
> 
> > Given that attributes are part of function object dictionary,
> > and defined at function definition time like docstring,
> > why not:
> >
> > def func(atr):
> >   {author="Guido", deprecated=True}
> >   '''doc'''
> >   pass
> >
> > Presumably {attributes} will be like "doc": only allowed immediately after
> > def (interchangeably with "doc") and do no harm in other places.
> 
> I quite like this (although it ought to be a real dictionary literal,
> ie {'author': 'Guido', 'deprecated': True}, which doesn't read quite
> as well).

Not advocating this syntax one way or the other, but just pointing out
that if it is allowed, we have to allow either the docstring or the attr
dict in either order.

> But I still think this is a separate issue from PEP 318.

Me too.
-Barry



From barry at python.org  Sat Mar 27 15:10:07 2004
From: barry at python.org (Barry Warsaw)
Date: Sat Mar 27 15:10:15 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <s9zy8pmqjea.fsf@salmakis.intevation.de>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
	<s9zy8pmqjea.fsf@salmakis.intevation.de>
Message-ID: <1080418207.5262.85.camel@anthem.wooz.org>

On Sat, 2004-03-27 at 12:39, Bernhard Herzog wrote:

> Has somebody suggested the following yet?
> 
>     def [classmethod] foo(cls, blooh, bla):
>         pass

Yes, but I'm not sure who <wink>.

Subversive instantly rejected idea of the day: syntactically allow the
square brackets in any of these three locations:

def [decorator] method [decorator] (args) [decorator]

You'd probably want to disallow more than one [decorator] per definition
(but maybe not).  Each of these locations is supported by a different
valid use case, IMO, so why not allow any of them?

code-consistency-be-damned-ly y'rs,
-Barry



From bob at redivi.com  Sat Mar 27 15:19:07 2004
From: bob at redivi.com (Bob Ippolito)
Date: Sat Mar 27 15:15:40 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <1080418207.5262.85.camel@anthem.wooz.org>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D413@USAHM010.amer.corp.eds.com>
	<s9zy8pmqjea.fsf@salmakis.intevation.de>
	<1080418207.5262.85.camel@anthem.wooz.org>
Message-ID: <FFF775E4-802B-11D8-AF39-000A95686CD8@redivi.com>


On Mar 27, 2004, at 3:10 PM, Barry Warsaw wrote:

> On Sat, 2004-03-27 at 12:39, Bernhard Herzog wrote:
>
>> Has somebody suggested the following yet?
>>
>>     def [classmethod] foo(cls, blooh, bla):
>>         pass
>
> Yes, but I'm not sure who <wink>.
>
> Subversive instantly rejected idea of the day: syntactically allow the
> square brackets in any of these three locations:
>
> def [decorator] method [decorator] (args) [decorator]
>
> You'd probably want to disallow more than one [decorator] per 
> definition
> (but maybe not).  Each of these locations is supported by a different
> valid use case, IMO, so why not allow any of them?

Or, of course, the module or class variable:

__metadecorator__ = [decorator]

-bob


From sross at connect.carleton.ca  Sat Mar 27 15:10:06 2004
From: sross at connect.carleton.ca (Sean Ross)
Date: Sat Mar 27 15:34:05 2004
Subject: [Python-Dev] PEP318 metaclass approach
Message-ID: <5662952.1080418206609.JavaMail.sross@connect.carleton.ca>

Hi.
I've posted this code to python-list, but I thought I might post it 
here as well. This is a metaclass approach to dealing with the issues 
PEP318 is meant to address. It's a quick hack that allows you to put 
the method decoration and annotation information before the method 
definition. I just wanted to see if this could be done without changing 
the language syntax. Looks like it's at least a possibility, though this 
only works with methods, and there are likely other issues I'm not aware 
of. Anyway, here's the code:



import sys

def delayed_decorate(method, *decorators, **attributes):
    "Decorate method during class instantation"
    method.__dict__.update(attributes)
    for decorator in decorators:
        method = decorator(method)
    return method

def decorate(funcname, *decorators, **attributes):
    "Mark method for delayed decoration"
    clsdict = sys._getframe(1).f_locals
    clsdict.setdefault("decorated", {})[funcname] = \
        (decorators, attributes)


class MetaDecorate(type):
    "Enables delayed decoration and annotation of methods"
    def __new__(klass, name, bases, _dict):
        if "decorated" in _dict:
            for m, (d, a) in _dict["decorated"].iteritems():
                _dict[m] = delayed_decorate(_dict[m], *d, **a)
            del _dict["decorated"]
        return type.__new__(klass, name, bases, _dict)
    
class Decorated(object):
    __metaclass__ = MetaDecorate

class C(Decorated):
    a = "A"

    decorate("f", classmethod, author="Sean Ross", version="0.9.2")
    def f(klass):
        return klass.a
    
    decorate("g", staticmethod)
    def g():
        return "G"
    
print "C.f.author =", C.f.author
print "C.f.version =", C.f.version
print "C.f() =>", C.f()
print "C.g() =>", C.g()

From paul at prescod.net  Sat Mar 27 18:00:27 2004
From: paul at prescod.net (Paul Prescod)
Date: Sat Mar 27 18:04:29 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <32F83457-7F71-11D8-AF1F-000393C96926@opnet.com>
References: <0C4A06B1-7F6D-11D8-80D6-00039314A688@earthlink.net>
	<32F83457-7F71-11D8-AF1F-000393C96926@opnet.com>
Message-ID: <4066078B.4090201@prescod.net>

Russell Finn wrote:
> Mike Rovner wrote:
> 
> 
> I came up with nearly the same idea after reading Guido's original post, 
> except that I recommend using the existing dictionary literal syntax:
> 
> def func (args):
>     { author: "Guido", deprecated: True}
>     '''doc'''
>     pass
> 
> Perhaps this was just a typo in Mike's post.

What should happen in this case?

mod_author = "Guido"
author = 5

def func(self, author, mod_author):
	{ author: mod_author}

func("Paul", "Bill")

Does the func get:

	func.author = "Guido"

	or

	func.5 = "Guido"

	or

	func.Paul = "Bill"

	or

	...

A better solution is:

def func(self, author, mod_author){
	author: mod_author
	}:

Although this particular example is confusing because of the reused 
argument names, it is still reasonably clear that the decorator is 
evaluated at _definition time_ not _function run time_.

  Paul Prescod


From paul at prescod.net  Sat Mar 27 18:15:00 2004
From: paul at prescod.net (Paul Prescod)
Date: Sat Mar 27 18:19:31 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <1080365252.29455.23.camel@geddy.wooz.org>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<1080365252.29455.23.camel@geddy.wooz.org>
Message-ID: <40660AF4.9090702@prescod.net>

Barry Warsaw wrote:

>...
> I know all the practical problems with this suggestion, but to me,
> that's the most natural spelling.  I suppose something like this might
> also not be horrible:
> 
> def foobar [
>     .author = 'Guido van Rossum'
>     .deprecated = True
>     classmethod
>     ] (self, arg)
> 
> Naw, these probably don't mix well.  Hmm,
> 
> def foobar {
>     author = 'Guido van Rossum'
>     deprecated = True
>     } [classmethod, spoogemethod, insanity
>     ] (self, arg):
>     # Now what the heck does this thing do?

I think that Guido's point is that some things belong before the def 
because they really change the behaviour of the function and others 
belong after because they are "just" annotations:

def foobar [classmethod, spoogemethod] (self, arg) {
		author = "Guido van Rossum"
		zope_public = "True"
	}:
	pass

  Paul Prescod



From paul at prescod.net  Sat Mar 27 18:30:43 2004
From: paul at prescod.net (Paul Prescod)
Date: Sat Mar 27 18:33:42 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <7jx7maov.fsf@yahoo.co.uk>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<7jx7maov.fsf@yahoo.co.uk>
Message-ID: <40660EA3.3070601@prescod.net>

Paul Moore wrote:

> Guido van Rossum <guido@python.org> writes:
> 
> 
>>We came to an interesting conclusion (which Robert had anticipated):
>>there are two quite different groups of use cases.
>>
>>One group of use cases modifies the way the method works, its
>>semantics.
> 
> [...]
> 
>>The other group of use cases merely attaches extra bits of metadata to
>>the method, without changing its usage.
> 
> 
> I don't see these as two distinct use cases for the decorator
> mechanism. 

You've phrased your mail as if you were disagreeing, but it sounds to me 
like you are agreeing.

You see some use cases as being appropriate for the decorator syntax and 
some for another function attribute syntax.

Guido's proposal explicitly defined two unrelated syntaxes for two 
unrelated features dealing with these two unrelated use cases. Rather 
than arguing with him, you should be supporting him. The people you 
should be arguing with are people who want to use one syntax for both, 
because this will lead to a design which is suboptimal for what you call 
"true decorators".

 >...
> Let's keep the discussion on PEP 318 focused on decorators. I'm happy
> to have it stated (in the PEP, if you like) that a decorator like
> attributes is an abuse and is strongly discouraged. But I don't think
> that the existence of an abuse is justification for rejection of the
> feature.

Who is talking about rejecting any feature?

Anyhow, I do not believe that the two features are as unrelated as you 
think.

First: we do not want to introduce a new feature that will ENCOURAGE 
abuse. Better to introduce two features and head that off at the pass.

Second: I see use cases like this:

def foo [spark_callback] (self):
	@spark_rule = "DOC := HEAD BODY FOOT"
	...

def foo [publish_to_web](self):
	@url = "/cgi-bin/directory/directory"

Otherwise you get into a lot of this stuff:

def foo[publish_to_web("http://directory/directory")](self):
	...


  Paul Prescod



From mollitor at earthlink.net  Sat Mar 27 21:03:03 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Sat Mar 27 21:01:50 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <40660EA3.3070601@prescod.net>
Message-ID: <0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>


On Saturday, March 27, 2004, at 06:30 PM, Paul Prescod wrote:
> Anyhow, I do not believe that the two features are as unrelated as you 
> think.
>
> First: we do not want to introduce a new feature that will ENCOURAGE 
> abuse. Better to introduce two features and head that off at the pass.

I agree with this.


> Second: I see use cases like this:
>
> def foo [spark_callback] (self):
> 	@spark_rule = "DOC := HEAD BODY FOOT"
> 	...
>
> def foo [publish_to_web](self):
> 	@url = "/cgi-bin/directory/directory"

Excellent example.  Annotations would probably be set before 
transformers are invoked.


> Otherwise you get into a lot of this stuff:
>
> def foo[publish_to_web("http://directory/directory")](self):

It would be nice if transformer decorations were never allowed 
"arguments".  It would keep that list as short
and as tidy as possible.


Robert Mollitor


From tismer at stackless.com  Sat Mar 27 23:13:39 2004
From: tismer at stackless.com (Christian Tismer)
Date: Sat Mar 27 23:13:20 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>
References: <3FDE6D35.3090100@tismer.com> <3FE2C5B5.8080208@tismer.com>
	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>
Message-ID: <406650F3.2090808@stackless.com>

Guido van Rossum wrote:

>>since I didn't get *any* reply to this request,
>>either the request was bad or there is really
>>nobody using f_tstate in a way that makes it
>>urgent to keep.
>>I will wait a few hours and then make the change
>>to Stackless, and I'd like to propose to do the
>>same to the Python core.
> 
> 
> I saved the message, but haven't had the time yet to think things
> through.
> 
> I *did* notice at least one case where using f_tstate might actually
> be a mistake: theoretically it's possible that two or more threads
> alternate calling next() on a generator (if they wrap it in a critical
> section); AFAICT the f_tstate is never updated.

I've been running Stackless Python without f_tstate for more than
three months now, in various applications.
May I check in a patch to evict f_tstate?

ciao - chris
-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From tim.one at comcast.net  Sat Mar 27 23:26:03 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sat Mar 27 23:26:01 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <406650F3.2090808@stackless.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDGJNAB.tim.one@comcast.net>

[Christian Tismer]
> I've been running Stackless Python without f_tstate for more than
> three months now, in various applications.
> May I check in a patch to evict f_tstate?

Yup!  Please do.

From python at rcn.com  Sun Mar 28 00:33:20 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sun Mar 28 00:35:46 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4060E835.50805@interlink.com.au>
Message-ID: <003401c41486$2e6ec100$2fb4958d@oemcomputer>

> Raymond Hettinger wrote:
> > The PEP for Py2.4 got updated and the dates were pushed back to
> > September.  Is there any reason that we couldn't get a first alpha out
> > in early June or late May?

[Anthony]
> I _really_ want to give ast-compiler and generator expressions time to
> bake. genexps in particular have the potential to have all sorts of
> nasty corner cases.
> 
> Once we release 2.4.0, changing genexps to fix any previously-
> unforeseen uglies is going to be much, much harder.

[Tim Peters]
> The ast-branch compiler is likely to have its share of subtle glitches.
> They may be more serious, since it looks like "the plan" for genexps now is
> to say that genexps work -- whatever they happen to do <wink>.
> 
> Raymond, the 2.4 PEP was updated after a 2.4 planning meeting during the
> PyCon sprints on Sunday.  There was plenty of debate there.


The discussion took place before we learned that our most active
developer won't be distracted by pesky board duties ;-)  So I would like
to re-open the discussion on the timing for Py2.4.

My thoughts on the above are:

1)  "Baking" is not happening.  There are only a tiny handful of us
actively developing and using Py2.4.  Move the release date back five
months and that's five more months that no one is looking at the
product.  The best way to get people to exercise the code base is to have
an alpha release.

As evidence, look at the features I've added over the last nine months.
I get feedback while they are being developed and a few days afterwards.
After that, there is *zero* feedback (okay, Guido once asked a question
about set comparisons, but that was it).

2)  As it stands now, the current CVS is the most stable it has ever
been prior to an alpha release.  It is already in excellent shape and is
chock full of features and optimizations that people can be putting to
work right now.

3) For folks who haven't been actively using Py2.4, a little FUD is
natural.  However, passage of time will not make that go away.  The only
valid reason I can see for a much later release date is if someone has a
specific plan to do a major piece of development or review during the
summer months.  Right now, I know of no one who has made that
commitment.  Worse, if you move the date back far enough, then folks are
less likely to set aside time now to even look at the current CVS, much
less contribute to it.

4) If some feature set turns out to be worrisome, I would rather delay
it for Py2.5 than hold up Py2.4 indefinitely.  The AST people know best
whether their work is fit for commitment.  If it's not ready, then let
it cook on its own time.  Basically, it's ready when it's ready.


5) Genexps won't be committed until thoroughly tested and the code
thoroughly reviewed.  All of the posted examples will be tried and
matched against expectations.  I will take every script I've ever
written and substitute genexps for list comps and then run the test
suites.  IOW, from the very first, I expect genexps to be more solid
than most of the Py2.2 features ever were.  Also, unlike new-style class
gizmos, most everybody gets the idea of genexps -- that means they will
be more thoroughly understood and tried out by more people once an alpha
goes out.


6) Since Py2.2 went out, we've had a much better discipline of writing
thorough unittest suites and good documentation (that is why the new
Py2.3 modules had so few bug reports; except for logging, most of the
bug reports are for things predating Py2.3).  Walter has been
systematically increasing coverage so we are on much more solid ground
than ever.



Does anyone have a good reason that the first alpha cannot go out in
May?

We don't have to feature freeze (i.e. Decimal can go in when it's
ready). If an area of concern surfaces, we can always go through
additional alpha/beta cycles.


Raymond


From python at rcn.com  Sun Mar 28 01:20:05 2004
From: python at rcn.com (Raymond Hettinger)
Date: Sun Mar 28 01:22:34 2004
Subject: [Python-Dev] Some changes to logging
In-Reply-To: <008301c40dca$67e78ec0$652b6992@alpha>
Message-ID: <003a01c4148c$b60b7e40$2fb4958d@oemcomputer>

[Vinay Sajip]
> I've had feedback from numerous sources that the logging package is harder
> to configure than it needs to be, for the common use case of a simple script
> which needs to log to a file. I propose to change the convenience function
> basicConfig(), which is currently the one-shot convenience function for
> simple scripts to use.

There is no rush to jump straight to this solution.  I recommend kicking
the idea around for a while on comp.lang.python where there have been
several discussions.  Perhaps send notes to those who've shown an
interest in the module; otherwise, they may not be looking when the
discussion starts.

Also, the logging documentation needs an example section (early in the
docs) modeled after the one for unittest, which is comparable because the
unittest docs are equally voluminous and option intensive, and yet both
modules are actually easy to use once you see what to do.  The goal for the
unittest example page was to cover enough knowledge to handle 90% of
most people's needs and to get them started.  A single page of examples
was all it took to flatten a vertical learning curve.

If you make an example page, posting it to a wiki or to comp.lang.python
is likely to result in feedback that will improve it more than just
directly updating CVS and waiting forever <wink> for the Py2.4 release.



> def basicConfig(**kwargs):
>     """
>     Do basic configuration for the logging system.
> 
>     This function does nothing if the root logger already has handlers
>     configured. It is a convenience method intended for use by simple scripts
>     to do one-shot configuration of the logging package.
> 
>     The default behaviour is to create a StreamHandler which writes to
>     sys.stderr, set a formatter using the BASIC_FORMAT format string, and
>     add the handler to the root logger.
> 
>     A number of optional keyword arguments may be specified, which can alter
>     the default behaviour.
> 
>     filename  Specifies that a FileHandler be created, using the specified
>               filename, rather than a StreamHandler.
>     filemode  Specifies the mode to open the file, if filename is specified
>               (if filemode is unspecified, it defaults to "a").
>     format    Use the specified format string for the handler.
>     level     Set the root logger level to the specified level.
>     stream    Use the specified stream to initialize the StreamHandler. Note
>               that this argument is incompatible with 'filename' - if both
>               are present, 'stream' is ignored.
> 
>     Note that you could specify a stream created using open(filename, mode)
>     rather than passing the filename and mode in. However, it should be
>     remembered that StreamHandler does not close its stream (since it may be
>     using sys.stdout or sys.stderr), whereas FileHandler closes its stream
>     when the handler is closed.
>     """

This proposed function is much more complicated than what was requested.
While it does compress multiple lines into a single call, I think it
does not achieve simplification. 

I suggest leaving out some options and/or splitting some of those that
remain into separate functions.  Also, consider changing the name to
something active instead of configuration-related.  The idea is making
something like the following possible:

>>> startfilelog(sys.stderr)
>>> log('danger will robinson; radiation levels not safe for humans')

>>> startstreamlog()
>>> log('mission accomplished; returning to base')

I can see the utility of having an optional format argument, but it
immediately raises the question of how to specify the format.  Perhaps
the docstring should say what the default is and give a couple of
alternatives:

   format defaults to:
         "%(levelname)s:%(name)s:%(message)s"
   other format options include:  
         "%(lineno)d"
         "%(asctime)s"
	   "%(thread)d"

Note, this style of docstring tells you most of what you need to know.
The previous style requires that you've read about Formatter objects,
remember all the codes, and know what the BASIC_FORMAT default is.  Of
course, all of us know this by heart, but there may be one or two users
who don't ;-)

For similar reasons, consider removing anything from the docstring that
implies a detailed understanding of the whole logging class structure
(i.e. the references to FileHandler, StreamHandler, root logger, etc).
The idea is that you shouldn't need cooking lessons before being able to
order a pizza. 



Raymond



From guido at python.org  Sun Mar 28 05:05:55 2004
From: guido at python.org (Guido van Rossum)
Date: Sun Mar 28 05:06:08 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: Your message of "Sun, 28 Mar 2004 06:13:39 +0200."
	<406650F3.2090808@stackless.com> 
References: <3FDE6D35.3090100@tismer.com> <3FE2C5B5.8080208@tismer.com>
	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net> 
	<406650F3.2090808@stackless.com> 
Message-ID: <200403281005.i2SA5td26850@guido.python.org>

[Christian]
> >>since I didn't get *any* reply to this request,
> >>either the request was bad or there is really
> >>nobody using f_tstate in a way that makes it
> >>urgent to keep.
> >>I will wait a few hours and then make the change
> >>to Stackless, and I'd like to propose to do the
> >>same to the Python core.

[Guido]
> > I saved the message, but haven't had the time yet to think things
> > through.
> > 
> > I *did* notice at least one case where using f_tstate might actually
> > be a mistake: theoretically it's possible that two or more threads
> > alternate calling next() on a generator (if they wrap it in a critical
> > section); AFAICT the f_tstate is never updated.

[Christian]
> I've been running Stackless Python without f_tstate for more than
> three months now, in various applications.
> May I check in a patch to evict f_tstate?

Sure!  Let stackless lead the way. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Mar 28 05:12:06 2004
From: guido at python.org (Guido van Rossum)
Date: Sun Mar 28 05:12:18 2004
Subject: [Python-Dev] Some changes to logging
In-Reply-To: Your message of "Sun, 28 Mar 2004 01:20:05 EST."
	<003a01c4148c$b60b7e40$2fb4958d@oemcomputer> 
References: <003a01c4148c$b60b7e40$2fb4958d@oemcomputer> 
Message-ID: <200403281012.i2SAC6L26868@guido.python.org>

> This proposed function is much more complicated than what was requested.
> While it does compress multiple lines into a single call, I think it
> does not achieve simplification. 
> 
> I suggest leaving out some options and/or splitting some of those that
> remain into separate functions.  Also, consider changing to name to
> something active instead of configuration related.  The idea is making
> something like the following possible:

As a rather intense user of the logging module, I have to disagree
with Raymond here.  The basicConfig() signature that Vinay proposes
covers pretty much everything I would like to be able to do except
setting an explicit handler, which it clearly shouldn't handle.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From aleaxit at yahoo.com  Sun Mar 28 06:06:12 2004
From: aleaxit at yahoo.com (Alex Martelli)
Date: Sun Mar 28 06:06:28 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
References: <0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
Message-ID: <ECAC9BE0-80A7-11D8-AD6E-000A95EFAE9E@yahoo.com>


On 2004 Mar 28, at 04:03, Robert Mollitor wrote:
    ...
>> def foo[publish_to_web("http://directory/directory")](self):
>
> It would be nice if transformer decorations were never allowed 
> "arguments".  It would keep that list as short
> and as tidy as possible.
>
-1.  Forcing what should naturally be arguments to decorators into 
attributes of the function object would be extremely awkward and risky.

def foo [bar, baz] (a, b , c):
     @mystery = 23

this is totally unreadable, with no hints whatsoever of whether 
foo.mystery affects bar, baz, neither, or BOTH -- the latter 
possibility is an accident waiting to happen, i.e. a programmer knowing 
that bar uses foo.mystery and not having studied baz enough to gauge 
that IT, too, will cue on foo.mystery if it's defined.  If one wants to 
keep the list short and tidy (slightly less of an issue if the 
decorators went right before the ':', my preferred syntax), one approach 
might be something like:

mybar = bar(mystery=23)
mybaz = baz(mystery=45)

def foo [mybar, mybaz] (a, b, c):
    ...

in preference to:

def foo [bar(mystery=23), baz(mystery=45)] (a, b, c):
    ...

Not ideal, but IMHO still beats by a mile the alternative of having bar 
and baz cue off the function's attributes, and quite possibly tripping 
on each other's feet.

I think that, while often the stylistically preferable choice will be 
to assign identifiers to the results of "metadecorators which produce 
decorators when called with suitable keyword arguments" (rather than 
using the metadecorator-call expression right in the decorators list), 
forcing the issue by prohibiting expressions in the decorators list and 
mandating only identifiers there is still an inferior design choice.  
Python already has a slight issue with having to "compute things 
beforehand and assign a name to the result" -- rather than leaving the 
option of an inline expression -- in such cases as function objects 
(given lambda's limits); in that case, there are enough problems with 
removing the limitation to make it understandable.  But extending the 
limitation to this case, purely for forcing the stylistic hand of 
Python programmers, would appear rather arbitrary and ill-advised to 
me.  The fact that it would encourage the problematic usage of relying 
on function attributes in lieu of decorator arguments seems to me to 
underline this ill-advisedness strongly.


Alex


From pf_moore at yahoo.co.uk  Sun Mar 28 06:51:59 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Sun Mar 28 06:51:48 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <200403261510.i2QFAZ522251@guido.python.org>
	<7jx7maov.fsf@yahoo.co.uk> <40660EA3.3070601@prescod.net>
Message-ID: <vfkp2nq8.fsf@yahoo.co.uk>

Paul Prescod <paul@prescod.net> writes:

> Paul Moore wrote:
>
>> Guido van Rossum <guido@python.org> writes:
>>
>>>We came to an interesting conclusion (which Robert had anticipated):
>>>there are two quite different groups of use cases.
>>>
>>>One group of use cases modifies the way the method works, its
>>>semantics.
>> [...]
>>
>>>The other group of use cases merely attaches extra bits of metadata to
>>>the method, without changing its usage.
>> I don't see these as two distinct use cases for the decorator
>> mechanism.
>
> You've phrased your mail as if you were disagreeing, but it sounds to
> me like you are agreeing.

Hmm, maybe. I was disagreeing with the idea that the two issues need
to be treated together. I'm OK with the idea that using a decorator to
set function attributes is an abuse. But that didn't seem to me to
justify complicating the existing PEP with issues of new syntax for
setting function attributes.

> Guido's proposal explicitly defined two unrelated syntaxes for two
> unrelated features dealing with these two unrelated use cases. Rather
> than arguing with him, you should be supporting him. The people you
> should be arguing with are people who want to use one syntax for both,
> because this will lead to a design which is suboptimal for what you
> call "true decorators".
>
>  >...
>> Let's keep the discussion on PEP 318 focused on decorators. I'm happy
>> to have it stated (in the PEP, if you like) that a decorator like
>> attributes is an abuse and is strongly discouraged. But I don't think
>> that the existence of an abuse is justification for rejection of the
>> feature.
>
> Who is talking about rejecting any feature?

You have a good point here. But Guido is on record as feeling vaguely
uncomfortable about decorators. I read his proposal as implying that
the existing decorator proposal needed to be somehow restricted, so
that it couldn't be used for the unrelated case of attribute setting.

I may have read too much into Guido's proposal.

> Anyhow, I do not believe that the two features are as unrelated as you
> think.
>
> First: we do not want to introduce a new feature that will ENCOURAGE
> abuse. Better to introduce two features and head that off at the pass.

*That* is where the "rejection" implication comes in. We've just
started a whole new thread on syntax for setting function attributes.
If that issue gets too complex to resolve before 2.4, would you
support holding off on decorators until it is resolved? I wouldn't.

> Second: I see use cases like this:
>
> def foo [spark_callback] (self):
> 	@spark_rule = "DOC := HEAD BODY FOOT"
> 	...
>
> def foo [publish_to_web](self):
> 	@url = "/cgi-bin/directory/directory"
>
> Otherwise you get into a lot of this stuff:
>
> def foo[publish_to_web("http://directory/directory")](self):
> 	...

That is a very good point - in effect, most parametrised decorators
can be converted into a combination of a non-parametrised decorator
and one or more function attributes. (And with a suitable attribute
syntax, the result is more readable).

I'd fully support this.

But, I repeat: if attribute setting syntax gets bogged down in endless
discussion, I'd rather see decorators in *now*, and it should be made
clear to people who write parametrised decorators that they should
change to an attribute-based approach once the syntax becomes
available.

After all, the syntax and semantics of decorators won't change, so why
wait?

Actually, there are 3 options:

1. Available as soon as decorators are:

    def foo[publish("http://directory/directory")](self):
        ...

2. Available as soon as decorators are, but still has significant
   information at the end of the function body

    def foo [publish](self):
        ...
    foo.url = "http://directory/directory"

3. Needs new attribute syntax.

    def foo [publish](self):
        @url = "http://directory/directory"
        ...

My preference is for (2), with (3) being the long-term solution when
new syntax is available. I agree that (1) is ugly and difficult to
read, but I'm not sure that will stop some people :-)
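
A minimal sketch (hypothetical 'publish' decorator, not from the PEP) of why
option (2) hinges on the decorator reading the attribute lazily rather than
at decoration time:

    _registry = []

    def publish(f):
        _registry.append(f)        # don't touch f.url yet; it may not exist
        return f

    def publish_all():
        for f in _registry:
            url = getattr(f, "url", None)    # read the attribute lazily
            print "publishing %s at %r" % (f.__name__, url)

    def foo(self):
        pass
    foo = publish(foo)                       # today's spelling of [publish]
    foo.url = "http://directory/directory"   # set after the decorator ran

    publish_all()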

Can the PEP be updated to show option (2) in use, and recommend it
over option (1)?  Then, the possibility of implementing option (3)
could be noted in the "Open Issues" section.

Paul.
-- 
This signature intentionally left blank


From pf_moore at yahoo.co.uk  Sun Mar 28 06:54:40 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Sun Mar 28 07:01:12 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
References: <40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
Message-ID: <r7vd2nlr.fsf@yahoo.co.uk>

Robert Mollitor <mollitor@earthlink.net> writes:

> It would be nice if transformer decorations were never allowed
> "arguments".  It would keep that list as short
> and as tidy as possible.

That's the sort of restriction I imagined that Guido was tending
towards. While it's justifiable in this context, I would prefer to
leave the option of using arguments available, in case someone comes
up with a use where function attributes are inappropriate.

Paul.
-- 
This signature intentionally left blank


From pedronis at bluewin.ch  Sun Mar 28 07:36:43 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Sun Mar 28 07:31:47 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <vfkp2nq8.fsf@yahoo.co.uk>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<7jx7maov.fsf@yahoo.co.uk> <40660EA3.3070601@prescod.net>
Message-ID: <5.2.1.1.0.20040328143314.02c10208@pop.bluewin.ch>

At 12:51 28.03.2004 +0100, Paul Moore wrote:

>1. Available as soon as decorators are:
>
>     def foo[publish("http://directory/directory")](self):
>         ...

the in-between one seems really the most confusing placement, and I don't 
know of many languages that put modifiers in such a syntactic position.

>2. Available as soon as decorators are, but still has significant
>    information at the end of the function body
>
>     def foo [publish](self):
>         ...
>     foo.url = "http://directory/directory"

in general, depending on when the decorator needs the value, this may or may 
not work.


From skip at pobox.com  Sun Mar 28 08:40:48 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 28 08:40:53 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <40660EA3.3070601@prescod.net>
References: <200403261510.i2QFAZ522251@guido.python.org>
	<7jx7maov.fsf@yahoo.co.uk> <40660EA3.3070601@prescod.net>
Message-ID: <16486.54752.565545.542406@montanaro.dyndns.org>


    Paul> Guido's proposal explicitly defined two unrelated syntaxes for two
    Paul> unrelated features dealing with these two unrelated use
    Paul> cases. Rather than arguing with him, you should be supporting
    Paul> him. The people you should be arguing with are people who want to
    Paul> use one syntax for both, because this will lead to a design which
    Paul> is suboptimal for what you call "true decorators".

Given that what can go inside the square brackets will be pretty general (no
matter what you think should go there), annotations can be applied to
functions and classes with the proposed syntax.  If a general-purpose
decorator syntax is added to the language before something like Guido's '@'
expressions (which I find not at all Pythonic), my guess is that it will make
it easy enough for people to annotate functions and that it will become the
preferred mechanism.  Left alone long enough, it will supplant the need for '@'
expressions altogether.

My argument against '@' expressions is simply that they will be confusing.
They will appear to be part of the function body, and unlike doc strings
will almost certainly involve evaluation of general expressions, at least in
some cases.  I think that will make it confusing (for newbies, at least)
that those expressions are actually evaluated at function definition time
and not at function execution time.

    >> Let's keep the discussion on PEP 318 focused on decorators. I'm happy
    >> to have it stated (in the PEP, if you like) that a decorator like
    >> attributes is an abuse and is strongly discouraged. But I don't think
    >> that the existence of an abuse is justification for rejection of the
    >> feature.

    Paul> Who is talking about rejecting any feature?

I would be happy to see the '@' expression proposal disappear. ;-)

    Paul> Anyhow, I do not believe that the two features are as unrelated as
    Paul> you think.

    Paul> First: we do not want to introduce a new feature that will
    Paul> ENCOURAGE abuse. Better to introduce two features and head that
    Paul> off at the pass.

That requires another PEP.  PEP 318 isn't about a syntax for function
attributes.  It's about more general function, method and class decorators.

    Paul> Second: I see use cases like this:

    Paul> def foo [spark_callback] (self):
    Paul>       @spark_rule = "DOC := HEAD BODY FOOT"
    Paul>       ...

    Paul> def foo [publish_to_web](self):
    Paul>       @url = "/cgi-bin/directory/directory"

    Paul> Otherwise you get into a lot of this stuff:

    Paul> def foo[publish_to_web("http://directory/directory")](self):
    Paul>       ...

Your example only scratches the surface and doesn't obviously show
decorators used to set function attributes.  Besides, if you mean that
publish_to_web be called with the value of foo.url set, it's not clear to me
that foo.url will be available when publish_to_web(foo) is called.  You use
simple '@' notation examples with string literal values and a decorator
example which is not directly related to those examples.  I think

    def foo(self) [attrs(spark_rule="DOC := HEAD BODY FOOT",
                         url="/cgi-bin/directory/directory"),
                   publish_to_web]:

looks perfectly reasonable.  It places the attribute definitions near the
top of the function definition without it becoming visually part of the
function body, and there's less uncertainty about when attrs() is called in
relation to publish_to_web().

Skip

From pf_moore at yahoo.co.uk  Sun Mar 28 09:37:52 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Sun Mar 28 09:37:38 2004
Subject: [Python-Dev] Re: Timing for Py2.4
References: <4060E835.50805@interlink.com.au>
	<003401c41486$2e6ec100$2fb4958d@oemcomputer>
Message-ID: <fzbt2g1r.fsf@yahoo.co.uk>

"Raymond Hettinger" <python@rcn.com> writes:

> 1)  "Baking" is not happening.  There are only a tiny handful of us
> actively developing and using Py2.4.  Move the release date back five
> months and that's five more months that no one is looking at the
> product.  The best way to get people to exercise the code base is to have
> an alpha release.

On Windows, the killer issue is the availability of pywin32. I don't
believe you'll get much take-up of Python 2.4 by Windows users without
a pywin32 binary release. Is there any chance of prevailing upon Mark
to produce a 2.4-compatible binary?

I'll see how I get on building other modules using mingw, and put the
results on a web page somewhere.

[Later...]
Not much luck so far. cx_Oracle won't build with mingw (MSVC-specific
library headers). Nor will ctypes (MSVC-specific code). Pysqlite will.
I'm not even going to *try* wxPython, as I'd need to build wxWindows
with mingw - it's apparently possible, but looks like a lot of work
(and I don't use wxPython much). I'm scared of trying PIL - I found it
hard enough with MSVC thanks to all the graphics library dependencies.

I'd advocate an early alpha for another reason - Windows binary
builders will need as much time as possible to work through any issues
with the new requirement for MSVC7.

Paul.
-- 
This signature intentionally left blank


From pje at telecommunity.com  Sun Mar 28 09:45:14 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun Mar 28 09:39:17 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <r7vd2nlr.fsf@yahoo.co.uk>
References: <40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
Message-ID: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>

At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
>Robert Mollitor <mollitor@earthlink.net> writes:
>
> > It would be nice if transformer decorations were never allowed
> > "arguments".  It would keep that list as short
> > and as tidy as possible.
>
>That's the sort of restriction I imagined that Guido was tending
>towards. While it's justifiable in this context, I would prefer to
>leave the option of using arguments available, in case someone comes
>up with a use where function attributes are inappropriate.

It's inappropriate to use attributes of a function for attributes that 
logically belong to the decorator.  For 
example  'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs 
to the synchronizing decoration.  Declaring it in a separate location makes 
the whole thing harder to read/understand.
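
(For concreteness, a sketch of the kind of decorator factory being
discussed -- 'synchronized' is hypothetical, not an existing library
function:

    def synchronized(lockattr):
        def decorate(f):
            def new_f(self, *args, **kwds):
                lock = getattr(self, lockattr)
                lock.acquire()
                try:
                    return f(self, *args, **kwds)
                finally:
                    lock.release()
            return new_f
        return decorate

Here 'lockattr' never touches the function object at all; it simply lives
in the decorator's closure.)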




From bob at redivi.com  Sun Mar 28 10:15:55 2004
From: bob at redivi.com (Bob Ippolito)
Date: Sun Mar 28 10:11:57 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
References: <40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
	<5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
Message-ID: <CF8B772C-80CA-11D8-AF39-000A95686CD8@redivi.com>

On Mar 28, 2004, at 9:45 AM, Phillip J. Eby wrote:

> At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
>> Robert Mollitor <mollitor@earthlink.net> writes:
>>
>> > It would be nice if transformer decorations were never allowed
>> > "arguments".  It would keep that list as short
>> > and as tidy as possible.
>>
>> That's the sort of restriction I imagined that Guido was tending
>> towards. While it's justifiable in this context, I would prefer to
>> leave the option of using arguments available, in case someone comes
>> up with a use where function attributes are inappropriate.
>
> It's inappropriate to use attributes of a function for attributes that 
> logically belong to the decorator.  For example  
> 'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs to 
> the synchronizing decoration.  Declaring it in a separate location 
> makes the whole thing harder to read/understand.

Not to mention the fact that you'll have to start prefixing your 
function attributes so that you don't clash between decorators.. 
because of the flat namespace.

-bob
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2357 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20040328/80551618/smime-0001.bin
From paul at prescod.net  Sun Mar 28 10:19:32 2004
From: paul at prescod.net (Paul Prescod)
Date: Sun Mar 28 10:24:52 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
References: <40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
	<5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
Message-ID: <4066ED04.6090305@prescod.net>

Phillip J. Eby wrote:

> At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
> 
>> Robert Mollitor <mollitor@earthlink.net> writes:
>>
>> > It would be nice if transformer decorations were never allowed
>> > "arguments".  It would keep that list as short
>> > and as tidy as possible.
>>
>> That's the sort of restriction I imagined that Guido was tending
>> towards. While it's justifiable in this context, I would prefer to
>> leave the option of using arguments available, in case someone comes
>> up with a use where function attributes are inappropriate.
> 
> 
> It's inappropriate to use attributes of a function for attributes that 
> logically belong to the decorator.  For example  
> 'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs to the 
> synchronizing decoration.  Declaring it in a separate location makes the 
> whole thing harder to read/understand.

I came up with the idea of decorators and function attributes working 
together, but I chose at that time NOT to propose that decorators be 
restricted to bare words. (i.e. I disagree with Robert)

In some cases, the attribute really is an input to the decorator and 
other times it is an input to the framework. The most obvious example of 
the former is if a given decorator is allowed multiple times with 
various arguments:

def foo[bar(1,"a"), bar(5, "d")](self, x):
	pass

(Of course that's getting pretty nasty to parse visually but maybe it is 
worth it for some use cases)

But in the more common case, it is often the case _today_ that 
frameworks allow programmers to associate information with functions and 
then "register" that function (or the class it is in) completely 
separately. I don't believe that this separation causes big problems.

It isn't that you are artificially passing a decorator argument as a 
function attribute. Rather, the decorator is designed to work on a 
function attribute the same way that Python itself is designed to work 
on magic attributes like __doc__ or __name__ or __slots__.
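
(A minimal sketch of that style in today's spelling; 'publish_to_web' here
is a made-up illustration, not a real framework hook:

    def publish_to_web(f):
        # reads metadata already attached to the function, much as other
        # code reads __doc__ or __name__
        print "registering", f.url, "->", f.__doc__
        return f

    def foo(self):
        "An example page."
    foo.url = "/cgi-bin/directory/directory"
    foo = publish_to_web(foo)
)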

  Paul Prescod



From pedronis at bluewin.ch  Sun Mar 28 10:30:09 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Sun Mar 28 10:25:46 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <CF8B772C-80CA-11D8-AF39-000A95686CD8@redivi.com>
References: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
	<40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
	<5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
Message-ID: <5.2.1.1.0.20040328172552.03a654d0@pop.bluewin.ch>

At 10:15 28.03.2004 -0500, Bob Ippolito wrote:
>On Mar 28, 2004, at 9:45 AM, Phillip J. Eby wrote:
>
>>At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
>>>Robert Mollitor <mollitor@earthlink.net> writes:
>>>
>>> > It would be nice if transformer decorations were never allowed
>>> > "arguments".  It would keep that list as short
>>> > and as tidy as possible.
>>>
>>>That's the sort of restriction I imagined that Guido was tending
>>>towards. While it's justifiable in this context, I would prefer to
>>>leave the option of using arguments available, in case someone comes
>>>up with a use where function attributes are inappropriate.
>>
>>It's inappropriate to use attributes of a function for attributes that 
>>logically belong to the decorator.  For example
>>'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs to the 
>>synchronizing decoration.  Declaring it in a separate location makes the 
>>whole thing harder to read/understand.
>
>Not to mention the fact that you'll have to start prefixing your function 
>attributes so that you don't clash between decorators.. because of the 
>flat namespace.

yes, in fact C# and Java use first class objects, namely in their case 
classes/interfaces to solve the naming problem.

A comparable approach in Python would be:

author = object() # may be imported from somewhere else

def foo(arg):
   @author: "fooauthor" # === foo.__dict__[author] = "fooauthor"

so @ syntax would be "@expr : expr" and not "@name : expr".











From nbastin at opnet.com  Sun Mar 28 10:38:51 2004
From: nbastin at opnet.com (Nick Bastin)
Date: Sun Mar 28 10:38:57 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <200403281005.i2SA5td26850@guido.python.org>
References: <3FDE6D35.3090100@tismer.com> <3FE2C5B5.8080208@tismer.com>
	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>
	<406650F3.2090808@stackless.com>
	<200403281005.i2SA5td26850@guido.python.org>
Message-ID: <032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com>


On Mar 28, 2004, at 5:05 AM, Guido van Rossum wrote:

>>> I saved the message, but haven't had the time yet to think things
>>> through.
>>>
>>> I *did* notice at least one case where using f_tstate might actually
>>> be a mistake: theoretically it's possible that two or more threads
>>> alternate calling next() on a generator (if they wrap it in a 
>>> critical
>>> section); AFAICT the f_tstate is never updated.
>
> [Christian]
>> I've been running Stackless Python without f_tstate for more than
>> three months now, in various applications.
>> May I check in a patch to evict f_tstate?
>
> Sure!  Let stackless lead the way. :-)

This may screw up the work I'm doing to get the profiler to work 
transparently with threads.  Since I can't promise that the profiler 
will be in the same thread as the code being profiled, I can't 
guarantee that PyThreadState_GET() will give the correct thread state, 
so I grab the thread state from the frame object.  Of course, this work 
is also in the super-early stages of development, so I may go some 
other direction in the future when I find out that this doesn't work 
correctly...just pointing out a potential user (victim).

--
Nick


From tim.one at comcast.net  Sun Mar 28 11:09:41 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sun Mar 28 11:09:38 2004
Subject: [Python-Dev] tick_counter?
Message-ID: <LNBBLJKPBEHFEDALKOLCAEEPJNAB.tim.one@comcast.net>

Anyone know what the purpose of PyThreadState.tick_counter might be?
AFAICT, it's initialized to 0, incremented by the eval loop now & again, and
otherwise never referenced.


From perky at i18n.org  Sun Mar 28 11:24:41 2004
From: perky at i18n.org (Hye-Shik Chang)
Date: Sun Mar 28 11:24:47 2004
Subject: [Python-Dev] tick_counter?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEEPJNAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCAEEPJNAB.tim.one@comcast.net>
Message-ID: <20040328162441.GA67759@i18n.org>

On Sun, Mar 28, 2004 at 11:09:41AM -0500, Tim Peters wrote:
> Anyone know what the purpose of PyThreadState.tick_counter might be?
> AFAICT, it's initialized to 0, incremented by the eval loop now & again, and
> otherwise never referenced.
> 

According to SF #617311 written by Armin:
> 
> tstate->tick_counter is incremented whenever the check_interval
> ticker reaches zero.
> 
> The purpose is to give a useful measure of the number of interpreted
> bytecode instructions in a given thread. This extremely lightweight
> statistic collector can be of interest to profilers (like psyco.jit()).
> 
> We can safely guess that a single integer increment every 100
> interpreted bytecode instructions will go entirely unnoticed in
> any performance measure. [This is true for pystone.py.]
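
(For reference, that "every 100" is just the default check interval, which
is the part visible from Python; tick_counter itself is a field on the
C-level PyThreadState:

    import sys
    print sys.getcheckinterval()   # 100 by default; sys.setcheckinterval() tunes it
)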


Hye-Shik

From dmorton at bitfurnace.com  Sun Mar 28 12:09:42 2004
From: dmorton at bitfurnace.com (Damien Morton)
Date: Sun Mar 28 12:13:17 2004
Subject: [Python-Dev] method decorators
Message-ID: <406706D6.1040309@bitfurnace.com>

Has anyone proposed the current c-sharp syntax for method decorators?
That is, the decorators appear before the method declaration?  It seems to
me eminently more readable than trying to squeeze the decorators into
the main part of the method declaration, especially so if the decorator
declaration gets long.

[decorator]
def method(args):
   body

[foo, bar, baz, fubar, blah, blah, blah]
def method(args):
   body

With decorators and attributes and a docstring, you'd have something like
this:

[foo, bar, baz, fubar, blah, blah, blah]
def method(args):
   { author:"guido", version:2.1, purpose:"illustrative" }
   """ docstring """
   body


From martin at v.loewis.de  Sun Mar 28 12:20:15 2004
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun Mar 28 12:20:39 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <003401c41486$2e6ec100$2fb4958d@oemcomputer>
References: <003401c41486$2e6ec100$2fb4958d@oemcomputer>
Message-ID: <4067094F.3000504@v.loewis.de>

> 4) If some feature set turns out to be worrisome, I would rather delay
> it for Py2.5 than hold up Py2.4 indefinitely.  The AST people know best
> whether their work is fit for commitment.  If it's not ready, then let
> it cook on its own time.  Basically, it's ready when it's ready.

While I wholeheartedly agree with the principles you elaborate and the
conclusions drawn from them, you should be aware that, at PyCon, we
have set firm deadlines for a number of potential changes. The AST
branch either gets merged by May 1, or it does not get merged for
Python 2.4. Its authors are very much interested in completing it and
feel that it may never get completed unless a concerted effort is made
in that direction now.

> Does anyone have a good reason that the first alpha cannot go out in
> May?

I can accept the position of contributors that they really want to see
their work released, especially if they are willing to abide by
deadlines they have agreed to.

I must accept (and will actively support) a schedule that the release
manager has set. While I don't quite remember it being actually
mentioned, I somehow got the impression that Anthony's personal
schedule (as well as the upcoming release of 2.3.4) has contributed to
initiating a 2.4 release a few months later than May.

Regards,
Martin


From tjreedy at udel.edu  Sun Mar 28 14:22:16 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun Mar 28 14:22:16 2004
Subject: [Python-Dev] Re: method decorators
References: <406706D6.1040309@bitfurnace.com>
Message-ID: <c478l0$gid$1@sea.gmane.org>


"Damien Morton" <dmorton@bitfurnace.com> wrote in message
news:406706D6.1040309@bitfurnace.com...
> Has anyone proposed the current c-sharp syntax for method decorators?
> That is, the decorators appears before the method delcaration? Seems to
> me eminently more readable than trying to squeeze the decorators into
> the main part of the method declaration, especially so if the decorator
> declaration gets long.
>
> [decorator]
> def method(args):
>    body

I agree that this might look nicer than some of the current proposals.
However, it is legal syntax today, with a different meaning, even if silly:

>>> def f(): pass
...
>>> [f]
[<function f at 0x00868088>]

the list is then decref'ed to 0 and gc'ed whenever.  So you need something
that is currently not legal.

Terry J. Reedy





From paul at prescod.net  Sun Mar 28 14:43:21 2004
From: paul at prescod.net (Paul Prescod)
Date: Sun Mar 28 14:46:35 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <CF8B772C-80CA-11D8-AF39-000A95686CD8@redivi.com>
References: <40660EA3.3070601@prescod.net>
	<0BD99B00-805C-11D8-9C37-000393100E1A@earthlink.net>
	<5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
	<CF8B772C-80CA-11D8-AF39-000A95686CD8@redivi.com>
Message-ID: <40672AD9.80307@prescod.net>

Bob Ippolito wrote:

>...
> 
> Not to mention the fact that you'll have to start prefixing your 
> function attributes so that you don't clash between decorators.. because 
> of the flat namespace.

This is already the case today. __doc__ is prefixed with "__". Python 
COM and PyXPCOM both use a single _. It is just the nature of function 
attributes and has nothing to do with whether they are used by 
decorators or frameworks invoked in some other way.

  Paul Prescod



From mollitor at earthlink.net  Sun Mar 28 15:00:22 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Sun Mar 28 14:59:12 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
Message-ID: <8BB3B2C5-80F2-11D8-924E-000393100E1A@earthlink.net>


On Sunday, March 28, 2004, at 09:45 AM, Phillip J. Eby wrote:

> At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
>> Robert Mollitor <mollitor@earthlink.net> writes:
>>
>> > It would be nice if transformer decorations were never allowed
>> > "arguments".  It would keep that list as short
>> > and as tidy as possible.
>>
>> That's the sort of restriction I imagined that Guido was tending
>> towards. While it's justifiable in this context, I would prefer to
>> leave the option of using arguments available, in case someone comes
>> up with a use where function attributes are inappropriate.
>
> It's inappropriate to use attributes of a function for attributes that 
> logically belong to the decorator.  For example  
> 'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs to 
> the synchronizing decoration.  Declaring it in a separate location 
> makes the whole thing harder to read/understand.

The following is an example in PEP 318:

	def accepts(*types):
		def check_accepts(f):
			assert len(types) == f.func_code.co_argcount
			def new_f(*args, **kwds):
				for (a, t) in zip(args, types):
					assert isinstance (a, t), \
						"arg %r does not match %s" % (a, t)
				return f(*args, **kwds)
			return new_f
		return check_accepts

	def returns(rtype):
		def check_returns(f):
			def new_f(*args, **kwds):
				result = f(*args, **kwds)
				assert isinstance(result, rtype), \
					"return value %r does not match %s" % (result,  rtype)
				return result
			return new_f
		return check_returns

That is, two functions that return a function that returns a function.
Why?  Because

	def func(arg1, arg2) [accepts(int, (int, float)), returns((int, float))]:
		pass

expands roughly to "returns((int, float))(accepts(int, (int, float))(func))".
Whether or not this is the best implementation, it is reasonable if you view
the parameters as logically belonging to the decorator instead of logically
belonging to the function.  With transformer plus annotations, this could be
recast as

	def func [check_types] (arg1, arg2):
		:accepts (int, (int, float))
		:returns (int, float)
		pass

	def check_types(f):
		if hasattr(f, 'accepts'):
			assert len(f.accepts) == f.func_code.co_argcount
		def new_f(*args, **kwds):
			if hasattr(f, 'accepts'):
				for (a, t) in zip(args, f.accepts):
					assert isinstance (a, t), \
						"arg %r does not match %s" % (a, t)
			result = f(*args, **kwds)
			if hasattr(f,'returns'):
				assert isinstance(result, f.returns), \
					"return value %r does not match %s" % (result,  f.returns)
			return result
		return new_f
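
(In today's spelling, the same wiring could be approximated as -- a sketch,
assuming the check_types above:

	def func(arg1, arg2):
		pass
	func.accepts = (int, (int, float))
	func.returns = (int, float)
	func = check_types(func)
)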

As an added bonus, the function attributes are available for other
inspecting operations such as generating documentation, say.

My point here is we may want to aim for making the information kept on the
function object itself as rich as possible and make the "decorations" do
the work of pulling information from whatever the function "publishes".

Even if you have a case like the 'synchronized(lockattr="baz")' one you
mention, where perhaps you might want to say that nobody outside of this
particular transformer implementation would ever want to know which
attribute the function is synchronized on, there is a trivial workaround
if we restrict the transformer list to identifiers:

		sync = synchronized(lockattr="baz")

		def func [sync] (arg1, arg2):
			pass

However, I think that in general people will decide to be "generous" and
choose to publish those parameters as true function attributes on 'func',
so this workaround should not be necessary.

It is true that there is a potential namespace issue with function attribute
names, but this is a general multiple inheritance issue.  In fact, it might
not be a bad idea to view transformer decorations as "mix-ins" (though with
non-trivial differences).  Despite the fact that most of the examples in
PEP 318 are functions, the only existing transformers, classmethod and
staticmethod, are NOT functions.  They are types.  In fact, it may turn out
that in order to ensure commutability we may require that all transformers
be types/classes that follow a specific pattern.

Now, if the transformers are classes, then they are not functions that
return a function or (in the parameterized case) functions that return a
function that returns a function.

A non-parameterized transformer would be something like

	class check_types:
		def __init__ (self, f):
			self.f = f
		def __call__(self, < some syntax involving asterisks>):
			do something with self.f

A parameterized transformer could  be something like

	class synchronized:
		def __init__(self, lockattr):
			self.lockattr = lockattr
		def __call__(self, <again with the asterisks>):
			def new_f(f):
				do something with f
			return new_f

But there is perhaps one crucial difference: the non-parameterized one
returns a non-function object instance that may have other methods besides
__call__ (to make the transformer play nice commutability-wise, say),
whereas the parameterized one is returning a true function object (which
would be treated by an outer transformer as an unwrapped function,
probably).  To make them analogous, you would need something like

	class synchronized:
		def __init__(self, lockattr):
			self.lockattr = lockattr
		def __call__(self, <...>):
			class synchronized_wrapper:
				def __init__(self, f):
					self.f = f
				def __call__(self, <...>):
					do something with self.f

So "type(f)" (given "def f [synchonized(...)] (...): ...") would not be 
'synchronized' but 'synchronized_wrapper'.


robt


From tjreedy at udel.edu  Sun Mar 28 15:49:08 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun Mar 28 15:49:07 2004
Subject: [Python-Dev] Re: method decorators (PEP 318): elevate ':'?
References: <200403261510.i2QFAZ522251@guido.python.org><7jx7maov.fsf@yahoo.co.uk>
	<40660EA3.3070601@prescod.net>
	<16486.54752.565545.542406@montanaro.dyndns.org>
Message-ID: <c47dnt$rlg$1@sea.gmane.org>


"Skip Montanaro" <skip@pobox.com> wrote in message
news:16486.54752.565545.542406@montanaro.dyndns.org...
> My argument against '@' expressions is simply that they will be confusing.
> They will appear to be part of the function body, and unlike doc strings
> will almost certainly involve evaluation of general expressions, at least
> in some cases.  I think that will make it confusing (for newbies, at least)
> that those expressions are actually evaluated at function definition time
> and not at function execution time.

I think this and similar proposals for def-time code in the body would be
very confusing for beginners and at least a little for most everybody
(including me).  Indeed, I don't especially like definition-time doc
strings in what is otherwise the run-time body.  (This is not a problem for
modules and classes since their definition time *is* their run time.)

So I would instead propose that ':' be elevated from visual enhancement to
essential separator of definition-time text/code from run-time code.

If this were done, then attribute assignment could be done with either a
bare dict literal or perhaps better, as with classes, normal assignment
statements.  Post-processing calls could also be normal calls, which would
eliminate the implicit l2r vs. r2l issue.  Multiple decorator calls would be
either explicitly nested or explicitly sequential, on separate lines or
separated by ';', just as now.  For visual emphasis, and to help the
compiler and other tools, I also propose '::' as a marker for the presence
of the optional definition-time post-processing suite.  Example:

def meth(self, b, d)::
  'a method with attribute and postprocessing'
  # retain abbreviation for __doc__ = 'something'
  author = 'me'          # attached to original meth
  meth = classmethod(meth)
  boo = 'yes'            # gets attached to new wrapper meth, not original function
  :  # dedent?
  pass

PEP 318 and the various proposals therein do two things: 1) move the
decoration code from after the body to before the body and 2) change it
from normal, explicit, executable code to some new form of implicit
declaration syntax , maybe with embedded expressions, maybe not.  I suggest
consideration, at least as a baseline, of a minimal change that would allow
the relocation of decorator code as is.

Starting from this, it would be possible, though not necessary, to define a
bare identifier such as 'classmethod', in this context, as a defaulted call
and reassignment, as proposed.  But I am not sure the keystroke saving is
worth the inconsistency with the usual rules and the certain stimulation of
requests for similar tricks elsewhere in the language.  A plus for the
abbreviation would be minimal decoration like this:

def f(self):: classmethod:

Perhaps the abbreviation could be defined valid only if it follows on the
same line.


Doc strings: the current

def f():
  'doc'
  pass

would remain, perhaps indefinitely, and be defined as an abbreviation for

def f()::
  'doc'  # which abbreviates __doc__ = 'doc'
  :
  pass

Terry J. Reedy




From skip at pobox.com  Sun Mar 28 16:38:26 2004
From: skip at pobox.com (Skip Montanaro)
Date: Sun Mar 28 16:38:55 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <8BB3B2C5-80F2-11D8-924E-000393100E1A@earthlink.net>
References: <5.1.0.14.0.20040328094036.02bb04c0@mail.telecommunity.com>
	<8BB3B2C5-80F2-11D8-924E-000393100E1A@earthlink.net>
Message-ID: <16487.17874.682992.946082@montanaro.dyndns.org>


    Robert> My point here is we may want to aim for making the information
    Robert> kept on the function object itself as rich as possible and make
    Robert> the "decorations" do the work of pulling information from
    Robert> whatever the function "publishes".

You can do that with just the decorator syntax:

    def attrs(**kwds):
        def set_attributes(f):
            for k in kwds:
                setattr(f, k, kwds[k])
            return f
        return set_attributes

    def func(arg1, arg2) [attrs(accepts=(int, (int,float)),
                                returns=((int,float))),
                          check_types]:
        pass
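
(For reference, a rough equivalent in today's spelling -- a sketch that
assumes left-to-right application, so attrs() runs before a check_types
decorator along the lines Robert sketched:

    def func(arg1, arg2):
        pass
    func = attrs(accepts=(int, (int, float)), returns=(int, float))(func)
    func = check_types(func)
)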

    Robert> ... there is a trivial workaround if we restrict the transformer
    Robert> list to identifiers:

    Robert> sync = synchronized(lockattr="baz")
    Robert> def func [sync] (arg1, arg2):
    Robert>     pass

I think restricting decorators to only be identifiers would be shortsighted.
I can understand having to create workarounds for unforeseen situations, but
it's clear at this point in the discussion that decorator functions might
need to accept parameters.  Why not let them?

Skip

From mollitor at earthlink.net  Sun Mar 28 18:27:19 2004
From: mollitor at earthlink.net (Robert Mollitor)
Date: Sun Mar 28 18:26:34 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <16487.17874.682992.946082@montanaro.dyndns.org>
Message-ID: <74F19896-810F-11D8-916E-000393100E1A@earthlink.net>


On Sunday, March 28, 2004, at 04:38 PM, Skip Montanaro wrote:

>
>     Robert> My point here is we may want to aim for making the 
> information
>     Robert> kept on the function object itself as rich as possible and 
> make
>     Robert> the "decorations" do the work of pulling information from
>     Robert> whatever the function "publishes".
>
> You can do that with just the decorator syntax:
>
>     def attrs(**kwds):
>         def set_attributes(f):
>             for k in kwds:
>                 setattr(f, k, kwds[k])
>             return f
>         return set_attributes
>
>     def func(arg1, arg2) [attrs(accepts=(int, (int,float)),
>                                 returns=((int,float))),
>                           check_types]:
>         pass

Upon which object are the attributes set in each of the following cases?

	def cm(cls, arg1, arg2) [attrs(accepts=(int, (int, float)),
		returns=((int, float))), check_types, classmethod]:
		pass

	def cm(cls, arg1, arg2) [classmethod, attrs(accepts=(int, (int, float)),
		returns=((int, float))), check_types]:
		pass

classmethod(cm).foo = "abc" is currently disallowed, and if it wasn't, 
we would need to somehow redirect things
so 'foo' is placed in the original 'cm' function object's __dict__, no?

Even ignoring classmethod/staticmethod, what is the docstring of 'func' 
after you do

	def printout(f):
		def new_f(*args, **kwds):
			print "Running f"
			return f(*args, **kwds)
		return new_f

	def func(arg1, arg2) [printout]:
		"""My function."""
		print "Inside f"

That is, what would "help(func)" generate?
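
(Presumably whatever new_f's docstring is -- i.e. nothing useful -- unless
the decorator copies it over by hand; a sketch of that conventional fix,
assuming the printout above:

	def printout(f):
		def new_f(*args, **kwds):
			print "Running f"
			return f(*args, **kwds)
		new_f.__doc__ = f.__doc__   # so help(func) shows the original text
		return new_f
)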

			
>     Robert> ... there is a trivial workaround if we restrict the 
> transformer
>     Robert> list to identifiers:
>
>     Robert> sync = synchronized(lockattr="baz")
>     Robert> def func [sync] (arg1, arg2):
>     Robert>     pass
>
> I think restricting decorators to only be identifiers would be 
> shortsighted.
> I can understand having to create workarounds for unforseen 
> situations, but
> it's clear at this point in the discussion that decorator functions 
> might
> need to accept parameters.  Why not let them?

It is easier to expand a public grammar than it is to shrink one.


Robert Mollitor


From tdelaney at avaya.com  Sun Mar 28 18:37:28 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar 28 18:37:35 2004
Subject: [Python-Dev] PEP 318 - posting draft
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE01535B97@au3010avexu1.global.avaya.com>

> From: Guido van Rossum
> 
> OTOH, for long argument lists, the "preferred" form has the reverse
> problem: the decorators are separated from the function name by the
> long argument list.
> 
> The question then becomes, what is more likely: long argument lists or
> long lists of decorators?  I *know* that (at least in certain code
> bases :) long argument lists are common.  Will long lists of
> decorators ever become popular?  I think we'll have to look for
> languages that already have decorators (especially C#) for what the
> future might give.  I'll be talking to folks at PyCon about this.

I've come back to this, because I've finally solidified one reason why I think the form:

    def func (args) [decorators]:

is my preference. I've seen this expressed obliquely by other people, but not explicitly, so I want to add it as an explicit argument now.

The first part of the definition:

    def func (args)

is *required* for every function definition. However, the [decorators] part is *optional*. I think this distinction is very important - we can guarantee that if we use the format:

    def func (args) [decorators]:

then every function signature looks the same - name; arguments; optional decorators. OTOH, if we go with:

   def func [decorators] (args):

then function signatures look different in different places.

A side issue is that due to the conventions used in command lines and the python documentation, the first form actually *reads* like "def name arguments optional decorators" - c.f. range([start,] stop[, step]) - which may or may not be a good thing. Would it contribute to people thinking the [] weren't part of the decorator list, but merely a documentation thing?

Tim Delaney

From anthony at interlink.com.au  Sun Mar 28 20:00:30 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Sun Mar 28 19:59:51 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4067094F.3000504@v.loewis.de>
References: <003401c41486$2e6ec100$2fb4958d@oemcomputer>
	<4067094F.3000504@v.loewis.de>
Message-ID: <4067752E.7000903@interlink.com.au>

Martin v. Löwis wrote:
>> 4) If some feature set turns out to be worrisome, I would rather delay
>> it for Py2.5 than hold up Py2.4 indefinitely.  The AST people know best
>> whether their work is fit for commitment.  If it's not ready, then let
>> it cook on its own time.  Basically, it's ready when it's ready.
> 
> 
> While I wholeheartedly agree with the principles you elaborate and the
> conclusions drawn from them, you should be aware that, at PyCon, we
> have set firm deadlines for a number of potential changes. The AST
> branch either gets merged by May 1, or it does not get merged for
> Python 2.4. Its authors are very much interested in completing it and
> feel that it may never get completed unless a concerted effort is made
> in that direction now.
> 
>> Does anyone have a good reason that the first alpha cannot go out in
>> May?
> 
> 
> I can accept the position of contributors that they really want to see
> their work released, especially if they are willing to abide by
> deadlines they have agreed to.

I don't recall _any_ deadline being mentioned before this for 2.4. 
Certainly the only indications I've had were that we'd be aiming for a 
November release - 18ish months after 2.3. 2.3 was released on 29th
July 2003.

> I must accept (and will actively support) a schedule that the release
> manager has set. While I don't quite remember it been actually
> mentioned, I  somehow got the impression that Anthony's personal
> schedule (as well as the upcoming release of 2.3.4) have contributed to
> initiating a 2.4 release a few months later than May.

As Martin said - I plan a 2.3.4 release for May. There's a couple of
crashing bugs that Zope3 found back at the start of the year that have
been fixed, and I've not seen any new ones crop up since then, so I'm
happy that I'm not going to be cutting a release-of-the-month because
Zope3 found Yet Another Weird Weakref Interaction (YAWWI).

But, at the end of the day, if anyone else _really_ _really_ wants to
do the release earlier, and is willing and able to do it, I'm more than
happy to let them do it.

Anthony

-- 
Anthony Baxter     <anthony@interlink.com.au>
It's never too late to have a happy childhood.

From martin at v.loewis.de  Sun Mar 28 20:27:43 2004
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun Mar 28 20:27:56 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4067752E.7000903@interlink.com.au>
References: <003401c41486$2e6ec100$2fb4958d@oemcomputer>
	<4067094F.3000504@v.loewis.de> <4067752E.7000903@interlink.com.au>
Message-ID: <40677B8F.3040405@v.loewis.de>

Anthony Baxter wrote:
> I don't recall _any_ deadline being mentioned before this for 2.4. 
> Certainly the only indications I've had were that we'd be aiming for a 
> November release - 18ish months after 2.3. 2.3 was released on 29th
> July 2003.

PEP 320 mentions an alpha release in July. I also recall that Jeremy
agreed to either complete the AST branch by May, or not have it be part
of 2.4. The PEP mentions that generator expressions are meant to be
completed in the first week of May.

Regards,
Martin


From greg at cosc.canterbury.ac.nz  Sun Mar 28 20:34:29 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 20:34:59 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.36417.192543.754336@montanaro.dyndns.org>
Message-ID: <200403290134.i2T1YTf8027245@cosc353.cosc.canterbury.ac.nz>

Skip Montanaro <skip@pobox.com>:

> Had function attributes been available at that time the
> special nature of "if the first object in the module/function/class is a
> string literal, make it the docstring" wouldn't have been (as)
> necessary.

Maybe we could do with some more special cases concerning
literals of other kinds?

  def foo():
    {'author': "Guido", 'deprecated': 1}
    ...

or

  def foo():
    class __attributes__:
      author = "Guido"
      deprecated = 1
    ...

Everything inside these would have to be constant (if you
want them computed at run time, you have to use the old
fashioned method).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tim.one at comcast.net  Sun Mar 28 21:41:18 2004
From: tim.one at comcast.net (Tim Peters)
Date: Sun Mar 28 21:41:25 2004
Subject: [Python-Dev] tick_counter?
In-Reply-To: <20040328162441.GA67759@i18n.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHDJNAB.tim.one@comcast.net>

[Tim]
>> Anyone know what the purpose of PyThreadState.tick_counter might be?
>> AFAICT, it's initialized to 0, incremented by the eval loop now &
>> again, and otherwise never referenced.

[Hye-Shik Chang]
>> According to SF #617311 written by Armin:
>>
>> tstate->tick_counter is incremented whenever the check_interval
>> ticker reaches zero.
>>
>> The purpose is to give a useful measure of the number of interpreted
>> bytecode instructions in a given thread. This extremely lightweight
>> statistic collector can be of interest to profilers (like
>> psyco.jit()).
>> ...

Thanks!  That rings a vague bell, and I checked in a code comment so this
doesn't continue to look like a mistake.

I know the Python source has a habit of not commenting the purpose of struct
members, but that's not something to emulate, and comments about something
this obscure definitely belong in the code instead of the checkin comment.


From greg at cosc.canterbury.ac.nz  Sun Mar 28 22:07:43 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 22:08:42 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261837.i2QIbwl22829@guido.python.org>
Message-ID: <200403290307.i2T37hs4027396@cosc353.cosc.canterbury.ac.nz>

Guido:

> If we were going to use this mostly for decorators spelled with a
> single work, like classmethod, I would favor a syntax where the
> decorator(s) are put as early as reasonable in the function definition

Not only single-word decorators, but an extremely small number of them
applied to a given function.

> After seeing all the examples, I still worry that this:
> 
>   def foobar(cls, blooh, blah) [classmethod]:
> 
> hides a more important fact for understanding it (classmethod) behind
> some less important facts (the argument list).

Classmethod and staticmethod seem to be somewhat special in this
regard. In most of the other use cases put forward, the decorators
aren't so important for knowing how to use the function. In other
words, they're more a part of the function's implementation than its
interface.

Maybe there's a case for providing an even more special syntax for
classmethod and staticmethod, and not trying to shoehorn them into a
general decorator syntax?

If we were designing Python 3.0 here, I would probably suggest the
following syntax for classmethods:

  def foo(class cls, x, y):
    ...

and wouldn't bother with staticmethods at all. How often do you
actually need a staticmethod in particular and *not* a classmethod?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 28 22:07:24 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 22:08:51 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403261837.i2QIbwl22829@guido.python.org>
Message-ID: <200403290307.i2T37OJY027390@cosc353.cosc.canterbury.ac.nz>

Guido:

> (1) Put decorators of the first category early in the def statement,
>     for example here:
> 
>     def foobar [classmethod] (cls, foo, bar):
>         ...
> 
> (2) Invent some other notation for setting function attributes as part
>     of the function *body*, before the doc string even.

I wouldn't mind (2) if it's done nicely enough, but I don't like
(1). I'm still strongly against putting anything between the function
name and the arglist.

> I find the mark-up in your example about the worst possible mark-up;
> in practice, these things can get quite voluminous

Something about the idea of functions having voluminous amounts of
metadata disturbs me, especially for small functions. I don't want to
have to wade through a dozen lines of metadata to get to a few lines
of actual code that tells me what the function does.

Docstrings can be quite large, but they don't seem to lead to this
problem, probably because they look quite different from code. The
proposed metadata syntaxes are going to be quite hard to distinguish
from code at first glance.

If using large amounts of metadata becomes fashionable, I think it
will be important for editors to be able to recognise it easily and
display it in a different colour.

> (For those worried that the function attribute sets appear to belong
> to the body, I point to the precedent of the docstring.  IMO the start
> of the function body is a perfectly fine place for metadata about a
> function.)

But docstrings are constants, so there are no evaluation-time or
scope issues.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 28 22:07:51 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 22:10:34 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <16484.34156.466513.150290@montanaro.dyndns.org>
Message-ID: <200403290307.i2T37phl027402@cosc353.cosc.canterbury.ac.nz>

Skip Montanaro <skip@pobox.com>:

> If not, you'll probably get people (newbies
> at least) trying to write stuff like this:
> 
>     def foo(a):
>         if a > 5:
>             @attr: "bob"
>         elif a > 0:
>             @attr: "skip"
>         else:
>             @attr: "guido"
> 
> The other forms (via a decorator, using explicit attribute assignments after
> the definition) make it explicit that these attribute assignments don't
> occur in the context of function execution.

Hmmm... What about

  def foo(a):
    attributes:
      author = "Guido"
      deprecated = 1
    body:
      b = sqrt(a)
      return 3 * b / 2

Would that provide sufficient clues that the attributes are
*not* part of the body?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Sun Mar 28 22:07:56 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 22:10:46 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <7E7593F2-7F5F-11D8-80D6-00039314A688@earthlink.net>
Message-ID: <200403290307.i2T37ubx027408@cosc353.cosc.canterbury.ac.nz>

Robert Mollitor <mollitor@earthlink.net>:

> Not only would we want "synchronized(classmethod(f))" to work, but we
> would probably want "classmethod(synchronized(f))" to work
> identically.  This is not easy unless either the implementation of
> each was aware of the other (or at least the possibility of the
> other), or all transformers must be written to a tight specification.

For that to work, all transformers would have to be prepared to
receive a descriptor instead of a callable, extract the underlying
callable, transform that, and then wrap the result back up in a
descriptor of the same type.

While that could be done for specific types of descriptor, it couldn't
be done in general because there's no standard for callable-wrapping
descriptors concerning how to extract the wrapped callable.

Also, it would still only be possible to have at most *one*
descriptor-creating transformer in the chain. That's probably
unavoidable, since in general different kinds of descriptors are going
to be fundamentally incompatible with each other.  This probably isn't
a disadvantage, since it wouldn't make sense to combine such
descriptors anyway (e.g. a method can't be both a classmethod and a
staticmethod).

It seems we have two quite different kinds of transformer here: (1)
those that take a callable and return another callable; (2) those that
take a callable and return a descriptor.

The differences between them are substantial. Type 1 transformers are
easily chained, and it seems people will often want to do so. It also
seems that for the most part it will be easy to design them so that
transformers with orthogonal effects can be chained in any order.

On the other hand, it seems that it will usually make sense only to
have *one* transformer of type 2 in any given definition, and it
*must* be applied last.

Particularly considering the latter restriction, I'm wondering whether
these should be treated differently in the syntax. Maybe something
like

  def [type2trans] foo(args) [type1trans, type1trans...]:
    ...

or

  def foo [type2trans] (args) [type1trans, type1trans...]:
    ...

This might also help satisfy Guido's concerns about the positioning
of 'classmethod' and 'staticmethod', since, being type 2 transformers,
they would go up front.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From dave at nullcube.com  Sun Mar 28 22:16:52 2004
From: dave at nullcube.com (Dave Harrison)
Date: Sun Mar 28 22:15:03 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290307.i2T37hs4027396@cosc353.cosc.canterbury.ac.nz>
References: <200403261837.i2QIbwl22829@guido.python.org>
	<200403290307.i2T37hs4027396@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040329031652.GB28444@dave@alana.ucc.usyd.edu.au>

> If we were designing Python 3.0 here, I would probably suggest the
> following syntax for classmethods:
> 
>   def foo(class cls, x, y):
>     ...
> 
> and wouldn't bother with staticmethods at all. How often do you
> actually need a staticmethod in particular and *not* a classmethod?

It's quite useful to be able to develop essentially stateless,
static-method utility libraries that don't need to be class based.  Why
create an object if you simply don't need one?

If this was an option, there would have to be a way to use it without
the class element, or at least with there being a keyword to use which
allowed the class element to be ignored.

-- 
Dave Harrison
Nullcube
dave@nullcube.com
http://www.nullcube.com

From greg at cosc.canterbury.ac.nz  Sun Mar 28 22:15:23 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 22:16:36 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <20040326143536.2f068c47.casey@zope.com>
Message-ID: <200403290315.i2T3FND1027439@cosc353.cosc.canterbury.ac.nz>

Casey Duncan <casey@zope.com>:

> For some reason I thought there was a philosophical objection to 'with'
> in Python. Must have been an urban myth I guess.

I think it's considered to have quite a low priority,
since it wouldn't buy you much. Binding a nice short
temporary name to the with-ed object is almost as
concise and arguably clearer.

By the way, I happen to think there are better uses
to which the word 'with' could be put, e.g. the
suggestion of making

  with lock(foo):
    do_something()

mean something like

  _x = lock(foo)
  _x.__enter__()
  try:
    do_something()
  finally:
    _x.__exit__()
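
(A minimal sketch of a wrapper fitting that hypothetical protocol -- the
__enter__/__exit__ names are only the suggestion above, nothing that exists
today, and 'obj' is assumed to have acquire()/release():

  class lock:
    def __init__(self, obj):
      self.obj = obj
    def __enter__(self):
      self.obj.acquire()
    def __exit__(self):
      self.obj.release()
)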

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tdelaney at avaya.com  Sun Mar 28 22:33:53 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Sun Mar 28 22:34:00 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE01535CA9@au3010avexu1.global.avaya.com>

> From: Greg Ewing
> 
> Particularly considering the latter restriction, I'm wondering whether
> these should be treated differently in the syntax. Maybe something
> like
> 
>   def [type2trans] foo(args) [type1trans, type1trans...]:
>     ...
> 
> or
> 
>   def foo [type2trans] (args) [type1trans, type1trans...]:
>     ...
> 
> This might also help satisfy Guido's concerns about the positioning
> of 'classmethod' and 'staticmethod', since, being type 2 transformers,
> they would go up front.

Hmm - this seems like it has possibilities. Since we could only have one decorator that returned a descriptor, we could dispense with the [] on those, leading to:

    def descriptor_trans func_name (func_args) [callable_trans, callable_trans, ...]:
        ...

e.g.

    def classmethod foo (cls, arg1, arg2) [synchronized(lock)]:
        pass

which I think satisfies most of the (IMO) most important requirements that various people have put forward, namely:

1. `classmethod` and `staticmethod` are very important and should be very obvious (Guido seems absolutely adamant on this);

2. The name of the method should be near `def`;

3. There should be nothing between the name of the function and the arguments.

Tim Delaney

From greg at cosc.canterbury.ac.nz  Sun Mar 28 23:33:15 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Sun Mar 28 23:33:20 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040329031652.GB28444%dave@alana.ucc.usyd.edu.au>
Message-ID: <200403290433.i2T4XF6S027543@cosc353.cosc.canterbury.ac.nz>

Dave Harrison <dave@nullcube.com>:

> > and wouldn't bother with staticmethods at all. How often do you
> > actually need a staticmethod in particular and *not* a classmethod?
> 
> It's quite useful to be able to develop, essentially stateless, static
> method utility libraries that dont need to be class based.  Why create

I'm not sure I understand. Why not make them module-level functions?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From dave at nullcube.com  Sun Mar 28 23:49:48 2004
From: dave at nullcube.com (Dave Harrison)
Date: Sun Mar 28 23:48:00 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290433.i2T4XF6S027543@cosc353.cosc.canterbury.ac.nz>
References: <20040329031652.GB28444%dave@alana.ucc.usyd.edu.au>
	<200403290433.i2T4XF6S027543@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040329044948.GB31923@dave@alana.ucc.usyd.edu.au>

> > > and wouldn't bother with staticmethods at all. How often do you
> > > actually need a staticmethod in particular and *not* a classmethod?
> > 
> > It's quite useful to be able to develop, essentially stateless, static
> > method utility libraries that dont need to be class based.  Why create
> 
> I'm not sure I understand. Why not make them module-level functions?

Let me give an example.  Let's say I want to do a generic transformation
of a dict, like reversing the key and value positions

a = {'a':1, 'b':2}

and I want to invert it to 

b = {1:'a', 2:'b'}

using my function invert()
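
(Just for concreteness, such a helper might look something like this -- a
sketch, assuming no duplicate values in the dict:

    def invert(d):
        return dict([(v, k) for k, v in d.iteritems()])
)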

That's generic enough that I could use it in heaps of places, but I don't
want to have to either put it in an object, or cut and paste the code
across, so as you say, I have a utility module, but it doesn't need to
be a class (I understood from your email that all functions would
require a class item with your suggested syntax - as below).

> def foo(class cls, x, y)

The example isn't fantastic *shrug*, but there are times when you have
something you need to do in a bunch of places, it's not in the current
libraries, and you don't need it to be in a class.

It might be that I've missed the point entirely, and that you were only
referring to methods within a class.  In which case I'll be quiet ;-)
In fact in re-reading your mail, that might be the case entirely
*chuckle*

-- 
Dave Harrison
Nullcube
dave@nullcube.com
http://www.nullcube.com

From jcarlson at uci.edu  Sun Mar 28 23:56:03 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sun Mar 28 23:59:47 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290433.i2T4XF6S027543@cosc353.cosc.canterbury.ac.nz>
References: <20040329031652.GB28444%dave@alana.ucc.usyd.edu.au>
	<200403290433.i2T4XF6S027543@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040328204529.F89C.JCARLSON@uci.edu>

> > > and wouldn't bother with staticmethods at all. How often do you
> > > actually need a staticmethod in particular and *not* a classmethod?
> > 
> > It's quite useful to be able to develop, essentially stateless, static
> > method utility libraries that dont need to be class based.  Why create
> 
> I'm not sure I understand. Why not make them module-level functions?

Namespaces, my friend, namespaces (I don't know if other people use this,
but I have on occasion).

#example.py
class functions_a:
    def foo(inp1, inp2) [staticmethod]:
        #do something
    #more functions that do something

class functions_b:
    def foo(inp1, inp2) [staticmethod]:
        #do something slightly different
    #more functions that do something different

#end example.py

We can use the classes as mini namespaces, and only need to import and
distribute a single module.  With class decorators (which seem to be in
favor right now), we can go even a step farther with the following
(which is another reason why class decorators are a good idea)...

def make_all_static(cls):
    # wrap every plain function found in the class dict as a staticmethod
    from types import FunctionType
    for i, j in cls.__dict__.items():      # items(), not iteritems(): we mutate
        if isinstance(j, FunctionType):
            setattr(cls, i, staticmethod(j))
    return cls

class functions_c [make_all_static]:
    def foo(inp1, inp2):
        #do something a bit more different
    #more functions that do something a bit more different


Once again, you can use classes as namespaces in a single module.
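
(Usage would then just be attribute access on the class namespaces -- a
sketch, assuming an example.py along the lines above:

    import example
    example.functions_a.foo(1, 2)   # first variant
    example.functions_b.foo(1, 2)   # second variant
)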

 - Josiah


From greg at cosc.canterbury.ac.nz  Mon Mar 29 01:22:40 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 01:22:53 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040329044948.GB31923%dave@alana.ucc.usyd.edu.au>
Message-ID: <200403290622.i2T6Me7E027688@cosc353.cosc.canterbury.ac.nz>

Dave Harrison <dave@nullcube.com>:

> It might be that I've missed the point entirely, and that you were only
> referring to methods within a class.

Yes, I was only talking about staticmethods vs. classmethods.
I wasn't suggesting that functions outside of classes be
eliminated from the language!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 29 01:26:13 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 01:26:25 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040328204529.F89C.JCARLSON@uci.edu>
Message-ID: <200403290626.i2T6QDIY027700@cosc353.cosc.canterbury.ac.nz>

Josiah Carlson <jcarlson@uci.edu>:

> > I'm not sure I understand. Why not make them module-level functions?
> 
> Namespaces my friend, namespaces (I don't know if other people use this,
> but I have on occasion).

My point was, that in any likely use I can think of for
staticmethods, it wouldn't do any *harm* to use a classmethod
instead.

> def make_all_static(cls):
>     for i,j in cls.__dict__.iteritems():
>         if isinstance(j, instancemethod)
>         cls.__dict__[i] = staticmethod(j)

You'd still be able to do that, just as you'd still be able
to use the old method of creating a staticmethod. There
just wouldn't be any special syntax just for staticmethods
analogous to the one I suggested for classmethods.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From anthony at interlink.com.au  Mon Mar 29 01:41:53 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Mon Mar 29 02:10:50 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <40677B8F.3040405@v.loewis.de>
References: <003401c41486$2e6ec100$2fb4958d@oemcomputer>	<4067094F.3000504@v.loewis.de>
	<4067752E.7000903@interlink.com.au> <40677B8F.3040405@v.loewis.de>
Message-ID: <4067C531.2000902@interlink.com.au>

Martin v. Löwis wrote:
> Anthony Baxter wrote:
> 
>> I don't recall _any_ deadline being mentioned before this for 2.4. 
>> Certainly the only indications I've had were that we'd be aiming for a 
>> November release - 18ish months after 2.3. 2.3 was released on 29th
>> July 2003.
> 
> 
> PEP 320 mentions an alpha release in July. I also recall that Jeremy
> agree to either complete the AST branch by May, or not have it be part
> of 2.4. The PEP mentions that generator expressions are meant to be
> completed in the first week of May.

Sorry, I was unclear - I meant no release date _before_ the discussion
at PyCon.

Anthony


-- 
Anthony Baxter     <anthony@interlink.com.au>
It's never too late to have a happy childhood.

From jcarlson at uci.edu  Mon Mar 29 02:24:11 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Mon Mar 29 02:27:51 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290626.i2T6QDIY027700@cosc353.cosc.canterbury.ac.nz>
References: <20040328204529.F89C.JCARLSON@uci.edu>
	<200403290626.i2T6QDIY027700@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040328224716.F89F.JCARLSON@uci.edu>

> You'd still be able to do that, just as you'd still be able
> to use the old method of creating a staticmethod. There
> just wouldn't be any special syntax just for staticmethods
> analogous to the one I suggested for classmethods.

If you are talking about the split descriptor/callable returning
decorators, I'm not convinced that having decorators split based on
which (one) produce descriptors and which (ones) produce callables is
necessarily a good idea.  Certainly it is explicit, but so is having it
be the /last/ decorator (if we use left-to-right application or first if
we use right-to-left application) when we have a single decorator list.

> > > I'm not sure I understand. Why not make them module-level functions?
> > 
> > Namespaces my friend, namespaces (I don't know if other people use this,
> > but I have on occasion).
> 
> My point was, that in any likely use I can think of for
> staticmethods, it wouldn't do any *harm* to use a classmethod
> instead.

Perhaps not, except in the case when you expect your implementation to
document itself (I do most of the time).  The use of a classmethod
implies something, just as the use of a staticmethod or instancemethod
implies other things.  Using one for another may be confusing to anyone
reading code later.


> > def make_all_static(cls):
> >     for i,j in cls.__dict__.iteritems():
> >         if isinstance(j, instancemethod):
> >             cls.__dict__[i] = staticmethod(j)

Oops, I forgot the colon and indentation, it is fixed now.
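
For what it's worth, here is a variant that runs today without any new
syntax (a rough sketch only): at class-creation time the entries in the
class __dict__ are plain function objects rather than instancemethods,
and a new-style class __dict__ isn't directly writable, so it checks for
FunctionType and assigns with setattr:

    import types

    def make_all_static(cls):
        # class __dict__ entries are plain functions at this point
        for name, value in cls.__dict__.items():
            if isinstance(value, types.FunctionType):
                setattr(cls, name, staticmethod(value))
        return cls

    class functions_c(object):
        def foo(inp1, inp2):
            return inp1 + inp2    # do something a bit more different

    functions_c = make_all_static(functions_c)   # applied by hand for now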

 - Josiah


From mike.spam.filter at day8.com.au  Mon Mar 29 02:11:09 2004
From: mike.spam.filter at day8.com.au (Mike Thompson)
Date: Mon Mar 29 06:11:18 2004
Subject: [Python-Dev] Yet Another Decorator Syntax Suggestion  (YADSS)
Message-ID: <c48i6g$c1k$1@sea.gmane.org>


I wonder if, as a lurker, I might make YADSS ... I'm not confident 
enough to argue my case too strongly, but thought I'd toss it into the 
ring as part of the creative process.

Background
----------

I believe that cognitively speaking, when a function is first being 
understood by a reader, decorators need to be seen BEFORE the other 
details of the function (even its name).  Understanding that a function 
is a 'classmethod' provides crucial context for understanding the
function's name, argument list, documentation and body. So, I believe 
'classmethod' should be seen first in the function definition.

I believe this requirement has motivated some to support syntax where 
the decorator list is promoted to be 'early' in the 'def':
     def  [classmethod] func(args)
OR  def func [classmethod] func(args)
etc.

That would be fine except that there is another 'competing' use case: 
once a function is known and understood, a programmer often needs to 
quickly look up its argument list.

This use case has tended to support syntax which places the function's 
decorators 'late' in the 'def', out of the way, after the arguments:

     def func(args) [ decorators ]:
OR  def func(args) as decorators:

I believe the 'syntactic tension' between these two use cases(1. first 
understanding, 2. later argument look up) will be impossible to resolve 
with any 'inline' syntax solution.  (By 'inline' I mean syntax which 
tries to cram decorators into the 'def' line as is).

Finally, both the 'early' and 'late' decorator syntaxes have another 
problem: they don't scale nicely.  The argument and decorator lists 
could be long; far too long for one line.  That means the 'def' line has 
to be split, and while solutions have been found for this in the 'late' 
syntax form, I can't believe that anyone is really *that* happy with the 
look of them.  'Early' syntax with a long decorator list looks 
particularly ugly, IMO.

YADSS
-----

SO.  For all these reasons, I propose another approach.  I'd argue that a 
multi-line, more structured 'def' solution is required.

I propose that the 'as' keyword be used together with whitespace 
indentation before the 'def' to *optionally* decorate the def, something 
like this:

     as:
         classmethod
     def func(arg):
         pass

This solution scales up reasonably to multiple decorators:

     as:
         runOnExit
         staticmethod
         syncronisedWith(attr='lock')
         attributedBy(
             author='MT',
             release='5.4'
         )
         returns(int)
         takes((int, int))
     def func(args):
         pass

Can be used with classes as well ...

     as:
         providerOfInterface(IBar)
         syncronsiedWith(attr='_lock')
         singleton
     class K:
         pass

For simple functions, perhaps the entire 'function header' can be folded
onto two lines:

     as: classmethod
     def func(args):
         pass

OR, perhaps even one line (possible with parser?):

     as: classmethod def func(arg):
         pass

Not too sure about this last suggestion.

Pros
----

     - use of white space indentation and ':' consistent with
       other blocks in Python.
     - simple cases present simply, but it scales well to complex cases
     - places decorator upfront, but leaves function arg list easy
       to scan
     - does not require new keyword

Cons
----

     - 'as' might not be the perfect word.  Then again what would be?
     - some won't like that 'def' is not the first word in a definition


I hope this is useful food for thought amongst those who know more 
about (and do more for) Python than I do.

<later>

Damn. I just re-read PEP-318 and found it included a syntax similar to 
that which I propose. Something from Quixote which uses 'using' where I 
have used 'as'.

I'm going to post this anyway because:
      1.  This approach is not getting much air-play and I
          believe it should; and
      2.  Because the use of 'as' I propose does not require a new
          keyword, which is a significant bonus over making a new
          keyword 'using'.

--
Mike


From skip at pobox.com  Mon Mar 29 07:22:51 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar 29 07:23:07 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290307.i2T37OJY027390@cosc353.cosc.canterbury.ac.nz>
References: <200403261837.i2QIbwl22829@guido.python.org>
	<200403290307.i2T37OJY027390@cosc353.cosc.canterbury.ac.nz>
Message-ID: <16488.5403.971879.325760@montanaro.dyndns.org>


    >> I find the mark-up in your example about the worst possible mark-up;
    >> in practice, these things can get quite voluminous

    Greg> Something about the idea of functions having voluminous amounts of
    Greg> metadata disturbs me, especially for small functions. I don't want
    Greg> to have to wade through a dozen lines of metadata to get to a few
    Greg> lines of actual code that tells me what the function does.

Indeed.  If you're tempted to add voluminous amounts of metadata to a
function perhaps you should write a class instead, especially if you're
tempted to use *any* of that data at runtime.

Skip

From skip at pobox.com  Mon Mar 29 07:25:52 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar 29 07:26:07 2004
Subject: [Python-Dev] new pep?
Message-ID: <16488.5584.38175.674515@montanaro.dyndns.org>


All this discussion about function attribute syntax has been staged loosely
in the context of PEP 318 but it doesn't belong there in my opinion.  These
ideas probably need a PEP so they don't get lost.

Skip


From skip at pobox.com  Mon Mar 29 07:28:48 2004
From: skip at pobox.com (Skip Montanaro)
Date: Mon Mar 29 07:28:55 2004
Subject: [Python-Dev] PEP 318 and syntax coloring
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE01535CA9@au3010avexu1.global.avaya.com>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE01535CA9@au3010avexu1.global.avaya.com>
Message-ID: <16488.5760.150774.736186@montanaro.dyndns.org>


    Greg> def [type2trans] foo(args) [type1trans, type1trans...]:
    Greg> ...
    Greg> 
    Greg> or
    Greg> 
    Greg> def foo [type2trans] (args) [type1trans, type1trans...]:

    Tim> Hmm - this seems like it has possibilities. Since we could only
    Tim> have one decorator that returned a descriptor, we could dispense
    Tim> with the [] on those ...

Has anyone considered the side effects any of these proposals would have
on auxiliary tools like syntax-directed editors, other code colorizers, etc.?

Skip

From barry at python.org  Mon Mar 29 09:05:17 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar 29 09:05:30 2004
Subject: [Python-Dev] PEP 318 and syntax coloring
In-Reply-To: <16488.5760.150774.736186@montanaro.dyndns.org>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE01535CA9@au3010avexu1.global.avaya.com>
	<16488.5760.150774.736186@montanaro.dyndns.org>
Message-ID: <1080569117.19578.65.camel@anthem.wooz.org>

On Mon, 2004-03-29 at 07:28, Skip Montanaro wrote:

> Has anyone considered the side effect of any of these proposals would have
> on auxiliary tools like syntax-directed editors, other code colorizers, etc?

I have <wink>.  Using square brackets makes things easier, and in fact
current cvs for python-mode.el supports the style Guido was favoring at
pycon.

-Barry



From pje at telecommunity.com  Mon Mar 29 09:23:52 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar 29 09:17:54 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <74F19896-810F-11D8-916E-000393100E1A@earthlink.net>
References: <16487.17874.682992.946082@montanaro.dyndns.org>
Message-ID: <5.1.0.14.0.20040329091953.03c15950@mail.telecommunity.com>

At 06:27 PM 3/28/04 -0500, Robert Mollitor wrote:
>
>>     Robert> ... there is a trivial workaround if we restrict the transformer
>>     Robert> list to identifiers:
>>
>>     Robert> sync = synchronized(lockattr="baz")
>>     Robert> def func [sync] (arg1, arg2):
>>     Robert>     pass
>>
>>I think restricting decorators to only be identifiers would be shortsighted.
>>I can understand having to create workarounds for unforseen situations, but
>>it's clear at this point in the discussion that decorator functions might
>>need to accept parameters.  Why not let them?
>
>It is easier to expand a public grammar than it is to shrink one.

And it's better to cripple a syntax extension in order to justify making a 
second syntax extension that's a crufty workaround for the crippling?  That 
doesn't make any sense to me.


From pje at telecommunity.com  Mon Mar 29 09:28:41 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar 29 09:22:43 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <20040329031652.GB28444@dave@alana.ucc.usyd.edu.au>
References: <200403290307.i2T37hs4027396@cosc353.cosc.canterbury.ac.nz>
	<200403261837.i2QIbwl22829@guido.python.org>
	<200403290307.i2T37hs4027396@cosc353.cosc.canterbury.ac.nz>
Message-ID: <5.1.0.14.0.20040329092719.02ec8a40@mail.telecommunity.com>

At 01:16 PM 3/29/04 +1000, Dave Harrison wrote:
> > If we were designing Python 3.0 here, I would probably suggest the
> > following syntax for classmethods:
> >
> >   def foo(class cls, x, y):
> >     ...
> >
> > and wouldn't bother with staticmethods at all. How often do you
> > actually need a staticmethod in particular and *not* a classmethod?
>
>It's quite useful to be able to develop, essentially stateless, static
>method utility libraries that dont need to be class based.  Why create
>an object if you simply don't need one ?

Just define a function, then.  All staticmethod does is let you put a 
function in a class and have it remain a function instead of a method.


>If this was an option, there would have to be a way to use it without
>the class element,

There is: just define a function.  No staticmethod required.



From jim.jewett at eds.com  Mon Mar 29 10:18:40 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Mon Mar 29 10:19:07 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D416@USAHM010.amer.corp.eds.com>

Paul Moore 

> If the importance of the metadata use case is sufficient to justify
> new syntax, then it is regardless of the existence of decorators. If
> it isn't, then I can't see how the existence of decorators could make
> it more important to have syntax for setting attributes.

When scanning a library, there is an important difference between 
"information about X" and "this changes how you call X".

Information -- even something as important as deprecated -- should
not accidentally hide changes to the signature.

If we make wrappers easy, they will be used.  If we make annotations
and transformations use the same syntax, then transformations will be 
lost in the crowd.  

If we encourage different syntax right from the start ("this just adds
something to the object; that might change the object's type entirely"),
then people will not get into so many bad habits.

-jJ


From guido at python.org  Mon Mar 29 10:23:29 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 29 10:23:37 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: Your message of "Sun, 28 Mar 2004 10:38:51 EST."
	<032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com> 
References: <3FDE6D35.3090100@tismer.com> <3FE2C5B5.8080208@tismer.com>
	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>
	<406650F3.2090808@stackless.com>
	<200403281005.i2SA5td26850@guido.python.org> 
	<032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com> 
Message-ID: <200403291523.i2TFNTC05810@guido.python.org>

> This may screw up the work I'm doing to get the profiler to work 
> transparently with threads.  Since I can't promise that the profiler 
> will be in the same thread as the code being profiled, I can't 
> guarantee that PyThreadState_GET() will give the correct thread state, 
> so I grab the thread state from the frame object.  Of course, this work 
> is also in the super-early stages of development, so I may go some 
> other direction in the future when I find out that this doesn't work 
> correctly...just pointing out a potential user (victim).

Since the profiler is being invoked from the thread being profiled,
how could it end up not being in the same thread?

(If I am missing something, I must be missing a *lot* about your
design, so you might want to say quite a bit more on how it works.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From shane at zope.com  Mon Mar 29 10:27:47 2004
From: shane at zope.com (Shane Hathaway)
Date: Mon Mar 29 10:27:57 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <200403290315.i2T3FND1027439@cosc353.cosc.canterbury.ac.nz>
References: <20040326143536.2f068c47.casey@zope.com>
	<200403290315.i2T3FND1027439@cosc353.cosc.canterbury.ac.nz>
Message-ID: <40684073.1060707@zope.com>

Greg Ewing wrote:
> By the way, I happen to think there are better uses
> to which the word 'with' could be put, e.g. the
> suggestion of making
> 
>   with lock(foo):
>     do_something()
> 
> mean something like
> 
>   _x = lock(foo)
>   _x.__enter__()
>   try:
>     do_something()
>   finally:
>     _x.__exit__()

That would be nice, but here is what I would like it to mean:

   def _f():
     do_something()
   lock(foo)(_f)

That would allow the object of the "with" statement to execute the code 
block once, many times, or not at all.
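
For example, lock() could be an ordinary class whose instances take the
block as a callable (a toy sketch -- the per-object lock registry is
made up here):

    import threading

    _locks = {}    # toy registry: one lock per object id

    class lock:
        # lock(foo)(block) runs block() while holding foo's lock
        def __init__(self, obj):
            self._lock = _locks.setdefault(id(obj), threading.RLock())
        def __call__(self, block):
            self._lock.acquire()
            try:
                return block()
            finally:
                self._lock.release()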

Shane

From jack at performancedrivers.com  Mon Mar 29 10:38:03 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Mon Mar 29 10:38:12 2004
Subject: [Python-Dev] Yet Another Decorator Syntax Suggestion  (YADSS)
In-Reply-To: <c48i6g$c1k$1@sea.gmane.org>
References: <c48i6g$c1k$1@sea.gmane.org>
Message-ID: <20040329153803.GA31354@performancedrivers.com>

YASTBG (Yet Another Syntax That Breaks Grep)

On Mon, Mar 29, 2004 at 05:11:09PM +1000, Mike Thompson wrote:
> I believe that cognitively speaking, when a function is first being 
> understood by a reader, decorators need to be seen BEFORE the other 
> details of the function (even its name).  Understanding that a function 
> is a 'classmethod', provides crucial context for understanding the 
> function's name, argument list, documentation and body. So, I believe 
> 'classmethod' should be seen first in the function definition.
I don't think they are the most important; we've lived with them a mile
away this long and people seem to get by *wink*.

> I believe this requirement has motivated some to support syntax where 
> the decorator list is promoted to be 'early' in the 'def':
>     def  [classmethod] func(args)
> OR  def func [classmethod] func(args)
> etc.
grep 'func(' *.py   # finds most calls to func, and its definition
grep 'def func' *.py  # which file has the definition and what is it?

The more a proposal breaks grep, the less I like it.  Most decorator
lists will be single decorators (if there is one at all).  Your suggestion
(below) will always 'break' grep.  IDEs would alleviate the problem,
but I've never used an IDE and I have no desire to start.

> 
>     as:
>         classmethod
>     def func(arg):
>         pass
un-grepable! we'll never see the decorators.

def func(arg) [classmethod]: pass

grepable!

> This solution scales up reasonably to multiple decorators:
>     as:
>         runOnExit
>         staticmethod
>         syncronisedWith(attr='lock')
>         attributedBy(
>             author='MT',
>             release='5.4'
>         )
>         returns(int)
>         takes((int, int))
>     def func(args):
>         pass
un-grepable!  regular funcs and decorated funcs grep the same.

def func(args) [runOnExit,
                ..
grepable! the unterminated square brackets let us know we aren't
seeing the full definition.

This is one of the reasons I dislike the decorator list between the
'def' and the function name.  I could grep for 'def.*?func(' but that
is an annoyance from the command line, and a major pain in most editors.

Even the current way is made grepable by popular convention:
if the first argument is 'cls' instead of 'self' you expect it to be
a classmethod.  If the first argument is neither 'cls' nor 'self' it
is a plain function or staticmethod (and you don't care which).
Other decorators than these two are currently un-grepable.

-jackdied

From nbastin at opnet.com  Mon Mar 29 10:52:38 2004
From: nbastin at opnet.com (Nick Bastin)
Date: Mon Mar 29 10:52:33 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <200403291523.i2TFNTC05810@guido.python.org>
References: <3FDE6D35.3090100@tismer.com> <3FE2C5B5.8080208@tismer.com>
	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>
	<406650F3.2090808@stackless.com>
	<200403281005.i2SA5td26850@guido.python.org>
	<032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com>
	<200403291523.i2TFNTC05810@guido.python.org>
Message-ID: <1ADEDC0F-8199-11D8-9EC0-000393CBDF94@opnet.com>


On Mar 29, 2004, at 10:23 AM, Guido van Rossum wrote:

> Since the profiler is being invoked from the thread being profiled,
> how could it end up not being in the same thread?
>
> (If I am missing something, I must be missing a *lot* about your
> design, so you might want to say quite a bit more on how it works.)

...previous reply to this email deleted after some testing...

Nevermind, the specific situation that I was concerned about actually 
can't occur.  I wasn't sure that I could make a guarantee that the 
profiler would be invoked from the same thread as the code being 
profiled in all cases of thread destruction, but that appears to be an 
unwarranted concern.

--
Nick


From fumanchu at amor.org  Mon Mar 29 12:42:05 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Mon Mar 29 12:44:03 2004
Subject: [Python-Dev] Yet Another Decorator Syntax Suggestion  (YADSS)
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F7C@opus.amorhq.net>

Jack Diederich wrote:
> The more a proposal breaks grep, the less I like it.  Most decorator
> lists will be single decorators (if there is one at all).  
> Your suggestion
> (below) will always 'break' grep.  IDEs would alleviate the problem,
> but I've never used an IDE and I have no desire to start.
> 
> > 
> >     as:
> >         classmethod
> >     def func(arg):
> >         pass
> un-grepable! we'll never see the decorators.
> 
> def func(arg) [classmethod]: pass
> 
> grepable!
> 
> > This solution scales up reasonably to multiple decorators:
> >     as:
> >         runOnExit
> >         staticmethod
> >         syncronisedWith(attr='lock')
> >         attributedBy(
> >             author='MT',
> >             release='5.4'
> >         )
> >         returns(int)
> >         takes((int, int))
> >     def func(args):
> >         pass
> un-grepable!  regular funcs and decorated funcs grep the same.
> 
> def func(args) [runOnExit,
>                 ..
> grepable! the unterminated square brackets let us know we aren't
> seeing the full definition.
> 
> This is one of the reasons I dislike the decorator list between the
> 'def' and the function name.  I could grep for 'def.*?func(' but that
> is an annoyance from the command line, and a major pain in 
> most editors.
> 
> Even the current way is made grepable by popular convention,
> if the first argument is 'cls' instead of 'self' you expect it to be
> a classmethod.  If the first argument is neither 'cls' or 'self' it
> is a plain function or staticmethod (and you don't care which).
> Other decorators than these two are currently un-grepable.

Then Mike's proposal, to put decorators before the def, doesn't help or
hinder your current mechanism. You won't have to rewrite your greps at
all. Or are you proposing 'easier greps' as a high-priority requirement
for decorators? ;)


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From fumanchu at amor.org  Mon Mar 29 12:45:24 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Mon Mar 29 12:47:21 2004
Subject: [Python-Dev] Yet Another Decorator Syntax Suggestion  (YADSS)
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F7D@opus.amorhq.net>

Mike Thompson wrote:
> I propose that the 'as' keyword be used together with whitespace 
> indentation before the 'def' to *optionally* decorate the 
> def, something like this:
> 
>      as:
>          classmethod
>      def func(arg):
>          pass
> 
> Damn. I just re-read PEP-318 and found it included a syntax 
> similar to 
> that which I propose. Something from Quixote which uses 
> 'using' where I have used 'as'.

I've been +1 on this solution (or a variant of it ;) for some time now.
From a readability standpoint, I think it wins hands-down. Until someone
demonstrates it's technically impossible... <wink> it gets my vote. I'd
want something other than 'as', however. Crazy thought: how about
'decorator'?


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From jack at performancedrivers.com  Mon Mar 29 13:09:41 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Mon Mar 29 13:09:46 2004
Subject: [Python-Dev] Yet Another Decorator Syntax Suggestion  (YADSS)
In-Reply-To: <DE1CF2B4FEC4A342BF62B6B2B334601E561F7C@opus.amorhq.net>
References: <DE1CF2B4FEC4A342BF62B6B2B334601E561F7C@opus.amorhq.net>
Message-ID: <20040329180941.GC31354@performancedrivers.com>

On Mon, Mar 29, 2004 at 09:42:05AM -0800, Robert Brewer wrote:
> Jack Diederich wrote:
> > The more a proposal breaks grep, the less I like it.  Most decorator
> > lists will be single decorators (if there is one at all).  
> > Your suggestion
> > (below) will always 'break' grep.  IDEs would alleviate the problem,
> > but I've never used an IDE and I have no desire to start.
> > 
> > > 
> > >     as:
> > >         classmethod
> > >     def func(arg):
> > >         pass
> > un-grepable! we'll never see the decorators.
> > 
> > def func(arg) [classmethod]: pass
> > 
> > grepable!
> > 
> > > This solution scales up reasonably to multiple decorators:
> > >     as:
> > >         runOnExit
> > >         staticmethod
> > >         syncronisedWith(attr='lock')
> > >         attributedBy(
> > >             author='MT',
> > >             release='5.4'
> > >         )
> > >         returns(int)
> > >         takes((int, int))
> > >     def func(args):
> > >         pass
> > un-grepable!  regular funcs and decorated funcs grep the same.
> > 
> > def func(args) [runOnExit,
> >                 ..
> > grepable! the unterminated square brackets let us know we aren't
> > seeing the full definition.
> > 
> > This is one of the reasons I dislike the decorator list between the
> > 'def' and the function name.  I could grep for 'def.*?func(' but that
> > is an annoyance from the command line, and a major pain in 
> > most editors.
> > 
> > Even the current way is made grepable by popular convention,
> > if the first argument is 'cls' instead of 'self' you expect it to be
> > a classmethod.  If the first argument is neither 'cls' or 'self' it
> > is a plain function or staticmethod (and you don't care which).
> > Other decorators than these two are currently un-grepable.
> 
> Then Mike's proposal, to put decorators before the def, doesn't help or
> hinder your current mechanism. You won't have to rewrite your greps at
> all. Or are you proposing 'easier greps' as a high-priority requirement
> for decorators? ;)
> 
Correct, it won't be any worse for classmethod and staticmethod decorators.
Grepping for decorators isn't currently a big deal; there aren't many to
grep for.  So as long as we're trying to put decorators where they will
be noticed, we could, umm, put decorators where they will be noticed.
I was also taking shots at the 'def [classmethod] func(*args)' form, which
makes parsing difficult by way of trying to make parsing intuitive.  Or
something.

-jackdied

From tismer at stackless.com  Mon Mar 29 13:15:32 2004
From: tismer at stackless.com (Christian Tismer)
Date: Mon Mar 29 13:15:13 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <1ADEDC0F-8199-11D8-9EC0-000393CBDF94@opnet.com>
References: <3FDE6D35.3090100@tismer.com>
	<3FE2C5B5.8080208@tismer.com>	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>	<406650F3.2090808@stackless.com>	<200403281005.i2SA5td26850@guido.python.org>	<032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com>	<200403291523.i2TFNTC05810@guido.python.org>
	<1ADEDC0F-8199-11D8-9EC0-000393CBDF94@opnet.com>
Message-ID: <406867C4.8010706@stackless.com>

Nick Bastin wrote:

> 
> On Mar 29, 2004, at 10:23 AM, Guido van Rossum wrote:
> 
>> Since the profiler is being invoked from the thread being profiled,
>> how could it end up not being in the same thread?
>>
>> (If I am missing something, I must be missing a *lot* about your
>> design, so you might want to say quite a bit more on how it works.)
> 
> 
> ...previous reply to this email deleted after some testing...
> 
> Nevermind, the specific situation that I was concerned about actually 
> can't occur.  I wasn't sure that I could make a guarantee that the 
> profiler would be invoked from the same thread as the code being 
> profiled in all cases of thread destruction, but that appears to be an 
> unwarranted concern.

You are the first person in months to claim a possible
use of f_tstate. Please make sure that you really don't
need it.
It needs to be changed anyway: either it must be updated for
generators on every run, or it should be dropped.

cheers - chris
-- 
Christian Tismer             :^)   <mailto:tismer@stackless.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  mobile +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


From nbastin at opnet.com  Mon Mar 29 13:36:21 2004
From: nbastin at opnet.com (Nick Bastin)
Date: Mon Mar 29 13:36:21 2004
Subject: [Python-Dev] Last chance!
In-Reply-To: <406867C4.8010706@stackless.com>
References: <3FDE6D35.3090100@tismer.com>
	<3FE2C5B5.8080208@tismer.com>	<200312191531.hBJFVgw27430@c-24-5-183-134.client.comcast.net>	<406650F3.2090808@stackless.com>	<200403281005.i2SA5td26850@guido.python.org>	<032B33E6-80CE-11D8-B559-000393CBDF94@opnet.com>	<200403291523.i2TFNTC05810@guido.python.org>
	<1ADEDC0F-8199-11D8-9EC0-000393CBDF94@opnet.com>
	<406867C4.8010706@stackless.com>
Message-ID: <F9B01F8A-81AF-11D8-9EC0-000393CBDF94@opnet.com>


On Mar 29, 2004, at 1:15 PM, Christian Tismer wrote:

> Nick Bastin wrote:
>
>> On Mar 29, 2004, at 10:23 AM, Guido van Rossum wrote:
>>> Since the profiler is being invoked from the thread being profiled,
>>> how could it end up not being in the same thread?
>>>
>>> (If I am missing something, I must be missing a *lot* about your
>>> design, so you might want to say quite a bit more on how it works.)
>> ...previous reply to this email deleted after some testing...
>> Nevermind, the specific situation that I was concerned about actually 
>> can't occur.  I wasn't sure that I could make a guarantee that the 
>> profiler would be invoked from the same thread as the code being 
>> profiled in all cases of thread destruction, but that appears to be 
>> an unwarranted concern.
>
> You are the first person since months who claimed a possible
> use of f_tstate. Please make sure that you really don't
> need it.

I'll do some more testing, but it could take me a couple of days.  I do 
remember Martin Löwis mentioning to me that he thought it was possible 
to temporarily get two threadstate objects for the same thread, maybe 
when calling into a C function, and if that were the case, that could 
potentially cause problems for the profiler.  Somebody else might have 
some more knowledgeable input there.  One of the problems with this 
discussion occurring now is that while I'm intimately familiar with the 
profiler and existing tracing functionality, I'm not all that familiar 
with Python threads, so I need to learn a few more things before I can 
really jump into this work, and I'm concerned that something I don't 
know about is going to cause me to need the threadstate object in the 
frame.

My current plan is to pass a Python interface to the threadstate object 
to the trace function, so it can build up its own table of stats data 
per thread (or in the debugger's case, a thread-specific context).  I 
had a concern about the threadstate not being relevant because of the 
profiler being called in a different thread than the traced code, which 
I now think is unlikely to occur in a way that I can't catch.  However, 
if Martin is right and there can effectively be multiple threadstate 
objects for the current thread, that may present a problem for the 
profiler, since it's indexing its stats data based on the previous 
threadstate object.  My thought was that if the global threadstate 
object changes, hopefully at least the surrounding frame tstate member 
that I pass to the profiler would still be intact, which would be a 
reason for keeping the threadstate reference in the frame, although 
perhaps my assumption is false.

--
Nick


From shane.holloway at ieee.org  Mon Mar 29 15:20:12 2004
From: shane.holloway at ieee.org (Shane Holloway (IEEE))
Date: Mon Mar 29 15:20:42 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <200403290134.i2T1YTf8027245@cosc353.cosc.canterbury.ac.nz>
References: <200403290134.i2T1YTf8027245@cosc353.cosc.canterbury.ac.nz>
Message-ID: <406884FC.20401@ieee.org>

I think one important thing to keep in mind is that many decorated 
methods end up being very similar, and hence you would want the syntax 
to allow for compressing them.  For instance, one would want to be able 
to write the following code block:

     class Example(object):
         def foo(self, *args) [
		synchronized(lock),
		attributes(author='SWH', protocol=IMyProtocol),
		myArgumentWireMarshaller,
		]:
             pass # Code goes here
         def bar(self, *args) [
		synchronized(lock),
		attributes(author='SWH', protocol=IMyProtocol),
		myArgumentWireMarshaller,
		]:
             pass # Code goes here
         def baz(self, *args) [
		synchronized(lock),
		attributes(author='SWH', protocol=IMyProtocol),
		myArgumentWireMarshaller,
		]:
             pass # Code goes here

more like:

     class Example(object):
	IMyProtocolMethod = [
		synchronized(lock),
		attributes(author='SWH', protocol=IMyProtocol),
		myArgumentWireMarshaller,
		]

         def foo(self, *args) [methodreturns(float)] + IMyProtocolMethod:
             pass # Code goes here
         def bar(self, *args) [methodreturns(str)] + IMyProtocolMethod:
             pass # Code goes here
         def baz(self, *args) [methodreturns(int)] + IMyProtocolMethod:
             pass # Code goes here

I think that as decorators get used more and more, we will want some way 
to shorthand a long list of common decorators.  Note that I'm not sold 
on the exact syntax, just the need for the shorthand in a maintainable 
system.  After all, copy and paste gets very sticky over time.

Thanks,
-Shane

From python at rcn.com  Mon Mar 29 15:28:25 2004
From: python at rcn.com (Raymond Hettinger)
Date: Mon Mar 29 15:28:54 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4067752E.7000903@interlink.com.au>
Message-ID: <016801c415cc$63cff020$e43bc797@oemcomputer>

[Martin v. L?wis]
> > I must accept (and will actively support) a schedule that the release
> > manager has set. While I don't quite remember it being actually
> > mentioned, I somehow got the impression that Anthony's personal
> > schedule (as well as the upcoming release of 2.3.4) have contributed to
> > initiating a 2.4 release a few months later than May.

[Anthony Baxter]
> As Martin said - I plan a 2.3.4 release for May. There's a couple of
> crashing bugs that Zope3 found back at the start of the year that have
> been fixed, and I've not seen any new ones crop up since then, so I'm
> happy that I'm not going to be cutting a release-of-the-month because
> Zope3 found Yet Another Weird Weakref Interaction (YAWWI).
> 
> But, at the end of the day, if anyone else _really_ _really_ wants to
> do the release earlier, and is willing and able to do it, I'm more than
> happy to let them do it.

If I volunteer to do the release, would there be any objections to
targeting late May or early June for the first alpha?


Raymond Hettinger


From shane.holloway at ieee.org  Mon Mar 29 15:47:13 2004
From: shane.holloway at ieee.org (Shane Holloway (IEEE))
Date: Mon Mar 29 15:47:46 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <406884FC.20401@ieee.org>
References: <200403290134.i2T1YTf8027245@cosc353.cosc.canterbury.ac.nz>
	<406884FC.20401@ieee.org>
Message-ID: <40688B51.4040708@ieee.org>

grr... spacing didn't quite come out right.  Blasted tabs.  Let's try 
this again:

     class Example(object):
         def foo(self, *args) [
                 synchronized(lock),
                 attributes(author='SWH', protocol=IMyProtocol),
                 myArgumentWireMarshaller,
                 ]:
             pass # Code goes here
         def bar(self, *args) [
                 synchronized(lock),
                 attributes(author='SWH', protocol=IMyProtocol),
                 myArgumentWireMarshaller,
                 ]:
             pass # Code goes here
         def baz(self, *args) [
                 synchronized(lock),
                 attributes(author='SWH', protocol=IMyProtocol),
                 myArgumentWireMarshaller,
                 ]:
             pass # Code goes here

more like:

     class Example(object):
         IMyProtocolMethod = [
             synchronized(lock),
             attributes(author='SWH', protocol=IMyProtocol),
             myArgumentWireMarshaller,
             ]

         def foo(self, *args) [methodreturns(float)] + IMyProtocolMethod:
             pass # Code goes here
         def bar(self, *args) [methodreturns(str)] + IMyProtocolMethod:
             pass # Code goes here
         def baz(self, *args) [methodreturns(int)] + IMyProtocolMethod:
             pass # Code goes here



From jf.pieronne at laposte.net  Mon Mar 29 15:56:29 2004
From: jf.pieronne at laposte.net (Jean-François Piéronne)
Date: Mon Mar 29 15:56:36 2004
Subject: [Python-Dev] OpenVMS file system and UNIVERSAL_NEWLINES support
In-Reply-To: <20040223183552.GA24056@panix.com>
References: <403A351B.6060007@laposte.net> <20040223183552.GA24056@panix.com>
Message-ID: <40688D7D.3030203@laposte.net>


> On Mon, Feb 23, 2004, Jean-François Piéronne wrote:
> 
>>Building from time to time Python 2.4 from CVS snapshot, I have just 
>>noticed that all the conditional compilation against 
>>WITH_UNIVERSAL_NEWLINES has been removed.
>>
>>This is a major problem on OpenVMS.
> 
> 
> Please file a bug report on SF so that this doesn't get lost, then post
> the bug number here.


I have submitted a fix for this problem: bug 903339.

Cheers,

Jean-François


From jack at performancedrivers.com  Mon Mar 29 16:40:27 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Mon Mar 29 16:40:43 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <40688B51.4040708@ieee.org>
References: <200403290134.i2T1YTf8027245@cosc353.cosc.canterbury.ac.nz>
	<406884FC.20401@ieee.org> <40688B51.4040708@ieee.org>
Message-ID: <20040329214027.GE31354@performancedrivers.com>

On Mon, Mar 29, 2004 at 01:47:13PM -0700, Shane Holloway (IEEE) wrote:
> grr... spacing didn't quite come out right.  Blasted tabs.  Let's try 
> this again:
> 
>     class Example(object):
>         def foo(self, *args) [
>                 synchronized(lock),
>                 attributes(author='SWH', protocol=IMyProtocol),
>                 myArgumentWireMarshaller,
>                 ]:
>             pass # Code goes here
>         def bar(self, *args) [
>                 synchronized(lock),
>                 attributes(author='SWH', protocol=IMyProtocol),
>                 myArgumentWireMarshaller,
>                 ]:
>             pass # Code goes here
>         def baz(self, *args) [
>                 synchronized(lock),
>                 attributes(author='SWH', protocol=IMyProtocol),
>                 myArgumentWireMarshaller,
>                 ]:
>             pass # Code goes here
> 
> more like:
> 
>     class Example(object):
>         IMyProtocolMethod = [
>             synchronized(lock),
>             attributes(author='SWH', protocol=IMyProtocol),
>             myArgumentWireMarshaller,
>             ]
> 
>         def foo(self, *args) [methodreturns(float)] + IMyProtocolMethod:
>             pass # Code goes here
>         def bar(self, *args) [methodreturns(str)] + IMyProtocolMethod:
>             pass # Code goes here
>         def baz(self, *args) [methodreturns(int)] + IMyProtocolMethod:
>             pass # Code goes here
> 
write IMyProtocolMethod as a decorator instead of a list of decorators:

def IMyProtocol(func):
  for (decor) in [synchronized(lock),
                  attributes(author='SWH', protocol=IMyProtocol),
                  myArgumentWireMarshaller,
                 ]:
    func = decor(func)
  return func

class Example(object):
  def foo(self, *args) [methodreturns(float), IMyProtocol]: pass
  def bar(self, *args) [methodreturns(str), IMyProtocol]: pass
  def baz(self, *args) [methodreturns(int), IMyProtocol]: pass

if you do that a lot, write a helper function

def make_decorator_chain(*decors):
  def anon_decorator(func):
    for (decor) in decors:
      func = decor(func)
    return func
  return anon_decorator

IMyProtocol = make_decorator_chain(synchronized(lock),
                                   attributes(author='SWH', protocol=IMyProtocol),
                                   myArgumentWireMarshaller)

-jackdied

From anthony at interlink.com.au  Mon Mar 29 18:00:12 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Mon Mar 29 17:59:23 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <016801c415cc$63cff020$e43bc797@oemcomputer>
References: <016801c415cc$63cff020$e43bc797@oemcomputer>
Message-ID: <4068AA7C.5030403@interlink.com.au>

Raymond Hettinger wrote:

> If I volunteer to do the release, would there be any objections to
> targeting late May or early June for the first alpha?

It really depends on how many of the big-ticket items are going to be
in the release. Remember: a 2.4 that's broken is far, far, far worse
than a 2.4 that's 6-8 weeks later.

Python's release cycle has historically been cautious and measured.
I really don't see why we need to rush this one.

Anthony
-- 
Anthony Baxter     <anthony@interlink.com.au>
It's never too late to have a happy childhood.

From barry at python.org  Mon Mar 29 18:07:27 2004
From: barry at python.org (Barry Warsaw)
Date: Mon Mar 29 18:07:35 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4068AA7C.5030403@interlink.com.au>
References: <016801c415cc$63cff020$e43bc797@oemcomputer>
	<4068AA7C.5030403@interlink.com.au>
Message-ID: <1080601646.15598.7.camel@anthem.wooz.org>

On Mon, 2004-03-29 at 18:00, Anthony Baxter wrote:
> Raymond Hettinger wrote:
> 
> > If I volunteer to do the release, would there be any objections to
> > targeting late May or early June for the first alpha?
> 
> It really depends on how many of the big-ticket items are going to be
> in the release. Remember: a 2.4 that's broken is far, far, far worse
> than a 2.4 that's 6-8 weeks later.
> 
> Python's release cycle has historically been cautious and measured.
> I really don't see why we need to rush this one.

Raymond,

Why don't you cut your teeth on a few micro releases, starting with
Python 2.3.4?  Then if you still want to do it <wink> you can be the
release manager for Python 2.5.

-Barry



From greg at cosc.canterbury.ac.nz  Mon Mar 29 18:42:24 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 18:44:40 2004
Subject: [Python-Dev] PEP 318 and syntax coloring
In-Reply-To: <16488.5760.150774.736186@montanaro.dyndns.org>
Message-ID: <200403292342.i2TNgOMd028899@cosc353.cosc.canterbury.ac.nz>

Skip Montanaro <skip@pobox.com>:

> Has anyone considered the side effect of any of these proposals would have
> on auxiliary tools like syntax-directed editors, other code colorizers, etc?

Yes, and those considerations point towards it being a bad idea to put
anything either between the 'def' and the function name, or between
the function name and the arg list. Doing either of those is likely to
break any existing tools that think they know what a function
definition looks like.

Teaching those tools about the new syntax could also be quite
difficult, since e.g. it may no longer be possible to express what is
being looked for using a regular expression.

So I think I'm against my own suggestion of putting the descriptor
decorator before the arg list. But I still think it may be worth
distinguishing it syntactically somehow, since it does something very
different from the others, and the user will have to know that.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From pje at telecommunity.com  Mon Mar 29 18:49:29 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon Mar 29 18:51:07 2004
Subject: [Python-Dev] PEP 318 and syntax coloring
In-Reply-To: <200403292342.i2TNgOMd028899@cosc353.cosc.canterbury.ac.nz>
References: <16488.5760.150774.736186@montanaro.dyndns.org>
Message-ID: <5.1.1.6.0.20040329184823.02410ad0@telecommunity.com>

At 11:42 AM 3/30/04 +1200, Greg Ewing wrote:
>So I think I'm against my own suggestion of putting the descriptor
>decorator before the arg list. But I still think it may be worth
>distinguishing it syntactically somehow, since it does something very
>different from the others, and the user will have to know that.

They'll know -- it always has to be the *last* decorator, or the code won't 
work properly.  :)
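
A quick way to see why, at least in today's CPython: once staticmethod has
wrapped a function, the result is a descriptor that isn't itself callable,
so any later decorator that tries to call or wrap it as a plain function
will fall over:

    def f(x):
        return x * 2

    sm = staticmethod(f)
    # sm(3) raises TypeError here ('staticmethod' object is not callable);
    # only attribute access through a class hands the plain function back.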


From greg at cosc.canterbury.ac.nz  Mon Mar 29 18:51:45 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 18:52:05 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <40684073.1060707@zope.com>
Message-ID: <200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>

Shane Hathaway <shane@zope.com>:

> That would be nice, but here is what I would like it to mean:
> 
>    def _f():
>      do_something()
>    lock(foo)(_f)

But what about

  x = 17
  with lock(foo):
    x = 42
  print x

?
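
To spell out the problem: that rewrite turns the block into a nested
function, and an assignment inside a nested function rebinds a local
there -- there is no way to rebind the x in the surrounding scope.  A
quick sketch of what actually happens:

    x = 17
    def _f():
        x = 42     # rebinds a local x inside _f only
    _f()           # standing in for lock(foo)(_f)
    print x        # still prints 17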

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From guido at python.org  Mon Mar 29 18:54:21 2004
From: guido at python.org (Guido van Rossum)
Date: Mon Mar 29 18:54:31 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: Your message of "Mon, 29 Mar 2004 18:00:12 EST."
	<4068AA7C.5030403@interlink.com.au> 
References: <016801c415cc$63cff020$e43bc797@oemcomputer>  
	<4068AA7C.5030403@interlink.com.au> 
Message-ID: <200403292354.i2TNsLT07059@guido.python.org>

> Raymond Hettinger wrote:
> > If I volunteer to do the release, would there be any objections to
> > targeting late May or early June for the first alpha?

Anthony Baxter replied:
> It really depends on how many of the big-ticket items are going to be
> in the release. Remember: a 2.4 that's broken is far, far, far worse
> than a 2.4 that's 6-8 weeks later.
> 
> Python's release cycle has historically been cautious and measured.
> I really don't see why we need to rush this one.

Indeed.

In the past, the real test for any version has been the final release
-- the alpha and beta releases get very little exercise except by some
diehards.  They are still necessary because *some* important folks
take them seriously, but don't expect that a flawless alpha + beta
means a perfect final release.

I seem to recall that betas and release candidates are fairly good for
finding portability problems, but issues with new functionality
usually don't come to light until months after the final release,
when people have started writing real code using them.  That's why
there will be a 2.4.1 release...

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Mon Mar 29 18:57:45 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 18:58:44 2004
Subject: [Python-Dev] method decorators (PEP 318)
In-Reply-To: <406884FC.20401@ieee.org>
Message-ID: <200403292357.i2TNvjcu028951@cosc353.cosc.canterbury.ac.nz>

"Shane Holloway (IEEE)" <shane.holloway@ieee.org>:

>   def foo(self, *args) [methodreturns(float)] + IMyProtocolMethod:
>       pass # Code goes here

Under the current proposal, that wouldn't be allowed.
You'd need a helper function of some sort that turned
a list of decorators into a single decorator:

  def foo(self, *args) [methodreturns(float),
                        decoratorlist(IMyProtocolMethod)]:

or, using the extension I proposed a while back,

  def foo(self, *args) [methodreturns(float), *IMyProtocolMethod]:
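
Such a decoratorlist helper is tiny; a sketch (assuming decorators in
the list are applied first-to-last, which is one of the orderings under
discussion):

    def decoratorlist(decorators):
        def apply_all(func):
            for d in decorators:
                func = d(func)
            return func
        return apply_all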

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Mon Mar 29 19:06:11 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon Mar 29 19:06:19 2004
Subject: [Python-Dev] PEP 318 and syntax coloring
In-Reply-To: <5.1.1.6.0.20040329184823.02410ad0@telecommunity.com>
Message-ID: <200403300006.i2U06BbY028970@cosc353.cosc.canterbury.ac.nz>

"Phillip J. Eby" <pje@telecommunity.com>:

> They'll know -- it always has to be the *last* decorator, or the code
> won't work properly.  :)

If Guido is right in thinking that classmethod and staticmethod
deserve to be given high prominence, they ought to be the *first*
decorator. So maybe evaluation should go in the other direction?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From python at rcn.com  Mon Mar 29 21:18:58 2004
From: python at rcn.com (Raymond Hettinger)
Date: Mon Mar 29 21:19:28 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <1080601646.15598.7.camel@anthem.wooz.org>
Message-ID: <020501c415fd$5c5378e0$e43bc797@oemcomputer>

[Anthony Baxter]
> Remember: a 2.4 that's broken is far, far, far worse
> than a 2.4 that's 6-8 weeks later.

Py2.4 is not broken; I use it every day for everything! It is much more
stable than any previous pre-alpha.  I'm sensing more FUD than fact.

The alpha release is not the same as the final release.  So, can we
compromise and agree to get out a late May alpha but leave the final
release date as a range (allowing for your "baking" delay if it turns
out that there is some value in letting the bits sit idle for a few
months)?



[Anthony Baxter]
> > Python's release cycle has historically been cautious and measured.

Py2.2 didn't get debugged or fully documented for ages.  Py2.3 didn't
even have a feature freeze until the second beta.  This time we will.
We're already fairly conservative with two alphas and two betas.  

Besides, inaction!=caution.  Without an alpha, only a handful of us
exercise the code.  All the time buys you is bitrot and irrelevance.



[Barry]
> Why don't you cut your teeth on a few micro releases, starting with
> Python 2.3.4?  Then if you still want to do it <wink> you can be the
> release manager for Python 2.5.

The world is safer with me doing an alpha than with 2.3.4 which has to
be perfect.  Also, I have no desire to be RM, but that appears to be the
only way to avoid months of unnecessary delay.



[Guido]
> In the past, the real test for any version has been the final release
> -- the alpha and beta releases get very little exercise except by some
> diehards.  They are still necessary because *some* important folks
> take them seriously, but don't expect that a flawless alpha + beta
> means a perfect final release.

Genexps are reasonably exciting and I think a little cheerleading on the
newsgroup may help get the betas exercised.  Also, the AST is different from
other new features in that just running existing apps will exercise the
compiler.



> issues with new functionality
> usually don't come to the light until months after the final release,
> when people have started writing real code using them.  That's why
> there will be a 2.4.1 release...

The implication is that sitting on the release for extra months doesn't
buy us anything except old age.



Raymond


From mwh at python.net  Mon Mar 29 09:21:01 2004
From: mwh at python.net (Michael Hudson)
Date: Mon Mar 29 22:03:50 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <405B11E4.1040505@gradient.cis.upenn.edu> (Edward Loper's
	message of "Fri, 19 Mar 2004 10:29:40 -0500")
References: <E1B4Iho-00040J-3X@mail.python.org>
	<405B11E4.1040505@gradient.cis.upenn.edu>
Message-ID: <2mekrblooi.fsf@starship.python.net>

Edward Loper <edloper@gradient.cis.upenn.edu> writes:

> Michael Hudson wrote:
>>  >>> float('nan')
>>  nan
>>  >>> _ == _
>>  False
>
> This means that 'nan' is no longer a well-behaved dictionary key:
>
>      >>> x = {float('nan'):0}
>      >>> x[float('nan')] = 1
>      >>> print x
>      {nan: 0, nan: 1}
>
> Even worse, we get different behavior if we use the "same copy" of nan:
>
>      >>> nan = float('nan')
>      >>> x = {nan:0}
>      >>> x[nan] = 1
>      >>> print x
>      {nan: 1}

Gah.  Still, I think this is more a theoretical objection than
anything else?

BTW, with 2.3 on OS X:

>>> {1e3000/1e3000:1}[0]
1
>>> {0:2}[1e3000/1e3000]
2

So your 'no longer' isn't really valid :-)

> If we *really* want nan==nan to be false, then it seems like we have
> to say that nan is unhashable.  I'm also disturbed by the fact that
> cmp() has something different to say about their equality:
>
>      >>> cmp(float('nan'), float('nan'))
>      0

Well, yah.  cmp() assumes a total ordering.  If there just isn't one,
what can we do?

I have at no point claimed that I have given Python 2.4 a coherent
IEEE 754 floating point story.  All I have tried to do is /reduce/ the
level of surprise knocking around, and I think I've succeeded.  If
someone (not me!) has the time and energy to do a 'proper job' (and
I'd expect working out what that means to be the hard part), then you
have my support and pre-emptive thanks.

Cheers,
mwh

-- 
  MARVIN:  Oh dear, I think you'll find reality's on the blink again.
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12

From shane at zope.com  Mon Mar 29 22:38:35 2004
From: shane at zope.com (Shane Hathaway)
Date: Mon Mar 29 22:39:45 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
References: <40684073.1060707@zope.com>
	<200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
Message-ID: <4068EBBB.7000805@zope.com>

On 03/29/04 18:51, Greg Ewing wrote:
> Shane Hathaway <shane@zope.com>:
> 
> 
>>That would be nice, but here is what I would like it to mean:
>>
>>   def _f():
>>     do_something()
>>   lock(foo)(_f)
> 
> 
> But what about
> 
>   x = 17
>   with lock(foo):
>     x = 42
>   print x
> 
> ?

Hmm, yep, that won't work.  A pity.

Shane

From anthony at interlink.com.au  Mon Mar 29 22:49:13 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Mon Mar 29 22:48:26 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <020501c415fd$5c5378e0$e43bc797@oemcomputer>
References: <020501c415fd$5c5378e0$e43bc797@oemcomputer>
Message-ID: <4068EE39.3030302@interlink.com.au>

Raymond Hettinger wrote:
> [Anthony Baxter]
> 
>>Remember: a 2.4 that's broken is far, far, far worse
>>
>>>than a 2.4 that's 6-8 weeks later.
> 
> 
> Py2.4 is not broken, I use it everyday for everything! It is much more
> stable than any previous pre-alpha.  I'm sensing more FUD than fact.
> 
> The alpha release is not the same as final release.  So, can we
> compromise and agree to get out a late May alpha but leave the final
> release date as a range (allowing for your "baking" delay if it turns
> out that there is some value in letting the bits sit idle for a few
> months)?

I'm not saying 2.4 is broken. I _am_ saying that there's a pile of large
and potentially disruptive changes that are _not_ _even_ _checked_ _in_
_yet_. At least one of these (decorators) still hasn't had a Guido
pronouncement.

> Py2.2 didn't get debugged or fully documented for ages.  Py2.3 didn't
> even have a feature freeze until the second beta.  This time we will.
> We're already fairly conservative with two alphas and two betas.  

The goal is to have as few alphas and betas as possible.
If we can kick out an alpha1 that works on a whole pile of systems,
and we don't need an alpha2, this is a good thing. A python release
is a non-trivial amount of work. I point (again) to the wide variety
of systems in the HP testdrive farm, and the SF compile farm, as well
as the various systems I have around here. I try very hard to get
releases tested and built on as many of these as possible, _before_
I go and cut the release. Saying "oh, just release an alpha1 anyway,
if it is completely screwed, we can just release an alpha2 two days
later" underestimates the amount of work needed to cut one of these
things.

> The world is safer with me doing an alpha than with 2.3.4 which has to
> be perfect.  Also, I have no desire to be RM, but that appears to be the
> only way to avoid months of unnecessary delay.

"Unnecessary" would seem to be in the eye of the beholder.


-- 
Anthony Baxter     <anthony@interlink.com.au>
It's never too late to have a happy childhood.

From python at rcn.com  Mon Mar 29 23:58:31 2004
From: python at rcn.com (Raymond Hettinger)
Date: Mon Mar 29 23:58:40 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <4068EE39.3030302@interlink.com.au>
Message-ID: <002701c41613$a5eed920$722ec797@oemcomputer>

> Saying "oh, just release an alpha1 anyway,
> if it is completely screwed, we can just release an alpha2 two days
> later" underestimates the amount of work needed to cut one of these
> things.

Don't misquote.  I said alphas are part of the development process that
allows a wider range of people to exercise the build with the potential
benefit of finding an issue that would have otherwise gone undetected.
Issuing an alpha is not an irresponsible act.



> "Unnecessary" would seem to be in the eye of the beholder.

Unnecessary means that currently no one has committed to any work effort
after May 1.  If the past few months are any indication, those bits will
just sit there.


Raymond


From tim.one at comcast.net  Tue Mar 30 00:00:20 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 00:00:32 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <2mekrblooi.fsf@starship.python.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPJJNAB.tim.one@comcast.net>

[Edward Loper]
>> This means that 'nan' is no longer a well-behaved dictionary key:
>>
>>      >>> x = {float('nan'):0}
>>      >>> x[float('nan')] = 1
>>      >>> print x
>>      {nan: 0, nan: 1}

Whatever happened there is a platform accident.  For example, not even the
float('nan') part is portable.  On Python 2.3 on Windows, your example
already works that way (here just changed to spell NaN in a way that works
on Windows):

>>> inf = 1e3000
>>> x = {}
>>> x[inf - inf] = 0
>>> x[inf - inf] = 1
>>> print x
{-1.#IND: 0, -1.#IND: 1}
>>>

>> Even worse, we get different behavior if we use the "same copy" of
>> nan:
>>
>>      >>> nan = float('nan')
>>      >>> x = {nan:0}
>>      >>> x[nan] = 1
>>      >>> print x
>>      {nan: 1}

dict lookup does assume that "key1 is key2" implies "key1 == key2", so at
least that one will work the same way across platforms.
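
Spelled out (assuming a box where NaN != NaN holds, which is the 2.4
behavior under discussion):

>>> nan1 = 1e3000 - 1e3000     # two distinct NaN objects
>>> nan2 = 1e3000 - 1e3000
>>> x = {nan1: 0}
>>> x[nan1] = 1     # same object: identity counts as "equal", one entry
>>> x[nan2] = 2     # different object: == fails, so a second entry
>>> len(x)
2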

[Michael Hudson]
> Gah.  Still, I think this is more a theoretical objection than
> anything else?

I do.

> BTW, with 2.3 on OS X:
>
> >>> {1e3000/1e3000:1}[0]
> 1
> >>> {0:2}[1e3000/1e3000]
> 2
>
> So your 'no longer' isn't really valid :-)

>> If we *really* want nan==nan to be false, then it seems like we have
>> to say that nan is unhashable.

Python doesn't have access to a portable way (at the C level) to determine
whether something *is* a NaN, and the hash value of a NaN follows from
whatever accidental bits the platform C modf(NaN, &intpart) returns.
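
(At the Python level the usual dodge -- an IEEE-754 platform assumption,
not a language guarantee -- is the self-inequality test:

    def isnan(x):
        # NaN is the only float that compares unequal to itself on
        # IEEE-754 hardware; on anything else, all bets are off.
        return x != x

but that leans on exactly the same platform accidents.)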

>> I'm also disturbed by the fact that cmp() has something different to
>> say about their equality:
>>
>>      >>> cmp(float('nan'), float('nan'))
>>      0

> Well, yah.  cmp() assumes a total ordering.  If there just isn't one,
> what can we do?

If we could recognize it was a NaN, my preference would be to raise an
exception.  I was told that GNU sort arranges to make NaNs look "less than"
all non-NaN floats, which is a way to make a total ordering -- kinda.  Not
all NaNs contain the same bits, and depending on other platform accidents
"the other bits" in a NaN may or may not have significance to the platform C
and its libraries.
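
If you buy the x != x trick for spotting a NaN, a sketch of that
GNU-style ordering from Python (2.4's list.sort(key=...) makes it
painless) would be:

    def nans_first(x):
        # Total order a la GNU sort: every NaN sorts before every
        # ordinary float.  Relies on NaN comparing unequal to itself.
        if x != x:
            return (0, 0.0)
        return (1, x)

    values = [2.5, 1e3000 - 1e3000, -1.0]
    values.sort(key=nans_first)    # the NaN ends up first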

> I have at no point claimed that I have given Python 2.4 a coherent
> ieee 754 floating point story.

Ya, but I keep telling people you claim that <wink>.

> All I have tried to do is /reduce/ the level of surprise knocking
> around, and I think I've succeeded.

Making

    x relop y

return the same thing as the platform C does reduce the surprise -- at least
for those who aren't surprised by their platform C's behavior.

> If someone (not me!) has the time and energy to do a 'proper job' (and
> I'd expect working out what that means to be the hard part), then you
> have my support and pre-emptive thanks.

I don't believe it will happen until all the major C compilers support the
(still optional!) 754 gimmicks in C99 -- or Python is rewritten in some
other language that supports them.  Before then, we'd need a small army of
people expert in the intersection of their platform C and the 754 standard,
to produce piles of #ifdef'ed platform-specific implementation code.  That's
doable, but unlikely.

tomorrow-is-usually-like-yesterday-ly y'rs  - tim


From mwh at python.net  Mon Mar 29 08:27:28 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 00:17:30 2004
Subject: [Python-Dev] python-dev Summary for 2004-03-01 through
	2004-03-15 [rough draft]
In-Reply-To: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU> (Brett
	Cannon's message of "Sat, 20 Mar 2004 07:49:21 -0800 (PST)")
References: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>
Message-ID: <2mptavlr5r.fsf@starship.python.net>

Brett Cannon <bac@OCF.Berkeley.EDU> writes:

> Ever since I first had to type Martin v. L|o_diaeresis|wis' name, I have
> had issues with Unicode in the summary.  When I realized there was a
> problem I thought it was Vim changing my Unicode in some way since I would
> notice problems when I reopened the file in TextEdit, OS X's included text
> editor that I have always used for writing the summaries (and no, I am not
> about to use Vim to do this nor Emacs; spoiled by real-time spelling and
> it is just the way I do it).

I take it you're not /all that/ interested in learning how to do real
time spelling in emacs? <wink>

Cheers,
mwh

-- 
  Darned confusing, unless you have that magic ingredient coffee, of
  which I can pay you Tuesday for a couple pounds of extra-special
  grind today.                           -- John Mitchell, 11 Jan 1999

From kbk at shore.net  Tue Mar 30 01:13:04 2004
From: kbk at shore.net (Kurt B. Kaiser)
Date: Tue Mar 30 01:13:37 2004
Subject: [Python-Dev] Weekly Python Bug/Patch Summary
Message-ID: <200403300613.i2U6D4Wm031855@hydra.localdomain>


Patch / Bug Summary
___________________

Patches :  243 open ( -7) /  2362 closed (+22) /  2605 total (+15)
Bugs    :  736 open (-10) /  3960 closed (+33) /  4696 total (+23)
RFE     :  130 open ( +2) /   124 closed ( +1) /   254 total ( +3)

New / Reopened Patches
______________________

simple   (2004-03-17)
       http://python.org/sf/918462  reopened by  rhettinger

Patch to enable profiling of C functions called from python  (2004-03-21)
CLOSED http://python.org/sf/920509  opened by  Nick Bastin 

Patch for Vinay Sajip's Python-logging package  (2004-03-22)
       http://python.org/sf/921318  opened by  Mark E. Davidson 

Reduce number of open calls on startup  (2004-03-23)
       http://python.org/sf/921466  opened by  Martin v. Löwis 

Rewrite of site.py  (2004-03-23)
       http://python.org/sf/921793  opened by  Brett Cannon 

Fixed a typo/thinko spelling "parameter" as "paramter"  (2004-03-23)
CLOSED http://python.org/sf/921927  opened by  Sean Reifschneider 

PEP 292 reference implementation  (2004-03-23)
       http://python.org/sf/922115  opened by  Barry A. Warsaw 

unidiomatic str.replace  (2004-03-23)
CLOSED http://python.org/sf/922117  opened by  Andrew Gaul 

Patch to 742342 Crash on recursive reload  (2004-03-24)
       http://python.org/sf/922167  opened by  Brian Leair 

Patch for substantial startup time reduction  (2004-03-24)
CLOSED http://python.org/sf/922881  opened by  Nick Bastin 

Support for interned strings in marshal  (2004-03-25)
       http://python.org/sf/923098  opened by  Martin v. Löwis 

'os' patch to speed up import (and python startup)  (2004-03-25)
CLOSED http://python.org/sf/923226  opened by  Nick Bastin 

'os' patch to speed up import (and python startup)  (2004-03-25)
CLOSED http://python.org/sf/923236  opened by  Nick Bastin 

long <-> byte-string conversion  (2004-03-25)
       http://python.org/sf/923643  opened by  Trevor Perrin 

Updates to the Misc/RPM spec file.  (2004-03-27)
       http://python.org/sf/924497  opened by  Sean Reifschneider 

work around to compile \r\n file  (2004-03-28)
       http://python.org/sf/924771  opened by  George Yoshida 

Patches Closed
______________

fix bug 625698, speed up some comparisons  (2003-02-25)
       http://python.org/sf/693221  closed by  tim_one

CGIHTTPServer execfile should save cwd   (2002-01-25)
       http://python.org/sf/508730  closed by  tim_one

Improvement of cgi.parse_qsl function  (2002-01-25)
       http://python.org/sf/508665  closed by  bcannon

CGIHTTPServer fix  (2003-08-28)
       http://python.org/sf/796772  closed by  gvanrossum

Patch to enable profiling of C functions called from python  (2004-03-21)
       http://python.org/sf/920509  closed by  mondragon

Updates to the .spec file.  (2003-12-08)
       http://python.org/sf/855999  closed by  loewis

Tix hlist missing entry_configure and entry_cget methods  (2003-12-03)
       http://python.org/sf/853488  closed by  loewis

math.sqrt EDOM handling for FreeBSD, OpenBSD  (2004-01-06)
       http://python.org/sf/871657  closed by  perky

Modify Setup.py to Detect Tcl/Tk on BSD  (2003-11-28)
       http://python.org/sf/850977  closed by  akuchling

support CVS version of tcl/tk ("8.5")  (2004-02-27)
       http://python.org/sf/905863  closed by  akuchling

XHTML support for webchecker.py  (2004-03-17)
       http://python.org/sf/918212  closed by  akuchling

Examples for urllib2  (2002-04-18)
       http://python.org/sf/545480  closed by  tim_one

skips.txt for regrtest.py  (2002-12-24)
       http://python.org/sf/658316  closed by  bcannon

move test() in SimpleDialog.py  (2004-03-07)
       http://python.org/sf/911176  closed by  loewis

Patch submission for #876533 (potential leak in ensure_froml  (2004-03-20)
       http://python.org/sf/920211  closed by  loewis

unidiomatic str.replace  (2004-03-23)
       http://python.org/sf/922117  closed by  bcannon

Patch for substantial startup time reduction  (2004-03-24)
       http://python.org/sf/922881  closed by  mondragon

Fixed a typo/thinko spelling   (2004-03-23)
       http://python.org/sf/921927  closed by  rhettinger

Improve "veryhigh.tex" API docs  (2003-09-01)
       http://python.org/sf/798638  closed by  fdrake

build of html docs broken (liboptparse.tex)  (2003-05-04)
       http://python.org/sf/732174  closed by  fdrake

'os' patch to speed up import (and python startup)  (2004-03-25)
       http://python.org/sf/923226  closed by  mondragon

'os' patch to speed up import (and python startup)  (2004-03-25)
       http://python.org/sf/923236  closed by  mondragon

Marshal clean-up  (2004-01-08)
       http://python.org/sf/873224  closed by  arigo

New / Reopened Bugs
___________________

2.3a2 site.py non-existing dirs  (2003-02-25)
CLOSED http://python.org/sf/693255  reopened by  jvr

http libraries throw errors internally  (2004-03-21)
       http://python.org/sf/920573  opened by  Bram Cohen 

locale module is segfaulting on locale.ERA  (2004-03-21)
CLOSED http://python.org/sf/920575  opened by  Matthias Klose 

Problem With email.MIMEText Package  (2003-05-12)
       http://python.org/sf/736407  reopened by  judasiscariot

embedding in multi-threaded & multi sub-interpreter environ  (2004-03-22)
       http://python.org/sf/921077  opened by  Atul 

HTMLParser ParseError in start tag  (2004-03-23)
       http://python.org/sf/921657  opened by  Bernd Zimmermann 

socket_htons does not work under AIX 64-bit  (2004-03-23)
       http://python.org/sf/921868  opened by  John Marshall 

socket_htons does not work under AIX 64-bit  (2004-03-23)
CLOSED http://python.org/sf/921898  opened by  John Marshall 

package manager page outdated link  (2004-03-24)
       http://python.org/sf/922690  opened by  Russell Owen 

Installer misplaces tcl library  (2004-03-25)
CLOSED http://python.org/sf/922914  opened by  WS Wong 

tempfile.mkstemp() documentation discrepancy  (2004-03-25)
CLOSED http://python.org/sf/922922  opened by  Michael S. Fischer 

file built-in object is after file() function  (2004-03-25)
       http://python.org/sf/922964  opened by  Francesco Ricciardi 

AIX POLLNVAL definition causes problems  (2004-03-25)
       http://python.org/sf/923315  opened by  John Marshall 

Incorrect __name__ assignment  (2004-03-26)
       http://python.org/sf/923576  opened by  El cepi 

SAX2 'property_encoding' feature not supported  (2004-03-26)
       http://python.org/sf/923697  opened by  Joseph Walton 

make fails using -std option  (2004-03-26)
       http://python.org/sf/924008  opened by  Bob Benites 

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
CLOSED http://python.org/sf/924218  opened by  June Kim 

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
CLOSED http://python.org/sf/924242  opened by  June Kim 

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
CLOSED http://python.org/sf/924242  reopened by  tim_one

IPV6 not correctly ifdef'd in socketmodule.c  (2004-03-26)
       http://python.org/sf/924294  opened by  David Meleedy 

A leak case with cmd.py & readline & exception  (2004-03-27)
       http://python.org/sf/924301  opened by  Sverker Nilsson 

Build issues (lib path) on linux2-x86_64  (2004-03-26)
       http://python.org/sf/924333  opened by  Christopher Petrilli 

unicodeobjct: bad argument to internal function  (2004-03-27)
       http://python.org/sf/924361  opened by  Matthias Klose 

test_unicode_file fails on Win98SE  (2004-03-27)
       http://python.org/sf/924703  opened by  Tim Peters 

email.Header.Header() produces wrong headers with utf8 enc.  (2004-03-28)
       http://python.org/sf/924806  opened by  Andreas Jung 

_Subfile.readline( ) in mailbox.py ignoring self.stop  (2004-03-28)
       http://python.org/sf/925107  opened by  Sye van der Veen 

buffer problem in pyexpat.c(xmlparse_GetInputContext)  (2004-03-29)
       http://python.org/sf/925152  opened by  Tobias Sargeant 

os.py uses #' - undocumented magic?  (2004-03-29)
       http://python.org/sf/925500  opened by  Jim Jewett 

dir(mod) OK or use vars(mod).keys()?  (2004-03-29)
       http://python.org/sf/925537  opened by  Jim Jewett 

help does not help with imported objects  (2004-03-29)
       http://python.org/sf/925628  opened by  Jim Jewett 

Bugs Closed
___________

Errors with recursive objects  (2002-10-19)
       http://python.org/sf/625698  closed by  tim_one

os.rename() silently overwrites files  (2004-03-19)
       http://python.org/sf/919605  closed by  pje

site.py should ignore trailing CRs in .pth files  (2003-03-08)
       http://python.org/sf/700055  closed by  bcannon

2.3a2 site.py non-existing dirs  (2003-02-25)
       http://python.org/sf/693255  closed by  bcannon

CGIHTTPServer and urls  (2003-07-25)
       http://python.org/sf/777848  closed by  gvanrossum

cast from pointer to integer of different size  (2003-07-30)
       http://python.org/sf/780407  closed by  gvanrossum

datetime.datetime initialization needs more strict checking  (2003-11-21)
       http://python.org/sf/847019  closed by  tim_one

super passes bad arguments to __get__ when used w/class  (2003-05-25)
       http://python.org/sf/743267  closed by  pje

profile.run makes assumption regarding namespace  (2003-04-07)
       http://python.org/sf/716587  closed by  mondragon

Pathological case segmentation fault in issubclass  (2003-12-10)
       http://python.org/sf/858016  closed by  bcannon

locale specific problem in test_strftime.py  (2004-01-24)
       http://python.org/sf/883604  closed by  bcannon

Solaris Forte 7 &8 bug in test_long  (2003-08-15)
       http://python.org/sf/789294  closed by  bcannon

"string".split behaviour for empty strings  (2003-09-24)
       http://python.org/sf/811604  closed by  mondragon

warnings.py does not define _test()  (2004-03-16)
       http://python.org/sf/917108  closed by  tim_one

locale module is segfaulting on locale.ERA  (2004-03-22)
       http://python.org/sf/920575  closed by  perky

Allow traceback analysis from C/C++...  (2001-12-27)
       http://python.org/sf/497067  closed by  mondragon

need to skip some build tests for sunos5  (2003-01-03)
       http://python.org/sf/661981  closed by  mondragon

need to skip some build tests for sunos5  (2003-01-03)
       http://python.org/sf/661981  closed by  bcannon

asyncore.py poll* missing urgent data  (2004-01-29)
       http://python.org/sf/887279  closed by  akuchling

optparse: usage issues  (2004-02-19)
       http://python.org/sf/900071  closed by  akuchling

posix needs to generate unicode filenames where necessary  (2003-07-17)
       http://python.org/sf/773356  closed by  mondragon

test_poll fails in 2.3.2 on MacOSX(Panther)  (2003-11-28)
       http://python.org/sf/850981  closed by  mondragon

Cannot step in debugger if line # doesn't change  (2003-07-03)
       http://python.org/sf/765624  closed by  mondragon

bug in idna-encoding-module  (2004-03-03)
       http://python.org/sf/909230  closed by  loewis

Python for BeOS/PPC Build Prob.  (2001-05-01)
       http://python.org/sf/420416  closed by  mondragon

Segmentation fault running XSV  (2004-02-09)
       http://python.org/sf/893610  closed by  jbcombes

potential leak in ensure_fromlist (import.c)  (2004-01-14)
       http://python.org/sf/876533  closed by  loewis

socket_htons does not work under AIX 64-bit  (2004-03-23)
       http://python.org/sf/921898  closed by  john_marshall

python accepts illegal "import mod.sub as name" syntax  (2003-03-19)
       http://python.org/sf/706253  closed by  bcannon

urllib.basejoin() mishandles ''  (2002-11-16)
       http://python.org/sf/639311  closed by  bcannon

Installer misplaces tcl library  (2004-03-25)
       http://python.org/sf/922914  closed by  wswong

tempfile.mkstemp() documentation discrepancy  (2004-03-25)
       http://python.org/sf/922922  closed by  otterley

platform module needs update  (2004-01-08)
       http://python.org/sf/873005  closed by  lemburg

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
       http://python.org/sf/924218  closed by  gigamorph

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
       http://python.org/sf/924242  closed by  gigamorph

socket._fileobject._getclosed() returns wrong value  (2004-03-26)
       http://python.org/sf/924242  closed by  tim_one

Can't create Carbon.File.FSSpec for non-existent file  (2004-03-19)
       http://python.org/sf/919776  closed by  mwh

rexec.r_eval() does not work like eval()  (2004-03-03)
       http://python.org/sf/908936  closed by  loewis

IDLE: "Save As..." keybind (Ctrl+Shift+S) does not work  (2003-07-21)
       http://python.org/sf/775353  closed by  kbk

New / Reopened RFE
__________________

readline not implemented for UTF-16  (2004-03-21)
       http://python.org/sf/920680  opened by  Bob Ippolito 

RFE Closed
__________

socket.sslerror is not a socket.error  (2002-03-17)
       http://python.org/sf/531145  closed by  bcannon


From shane.holloway at ieee.org  Tue Mar 30 01:16:04 2004
From: shane.holloway at ieee.org (Shane Holloway (IEEE))
Date: Tue Mar 30 01:16:15 2004
Subject: [Python-Dev] method decorators (PEP 318)
Message-ID: <406910A4.9030500@ieee.org>

Greg Ewing wrote:
> "Shane Holloway (IEEE)" <shane.holloway@ieee.org>:
> 
> 
>>  def foo(self, *args) [methodreturns(float)] + IMyProtocolMethod:
>>      pass # Code goes here
> 
> 
> Under the current proposal, that wouldn't be allowed.
> You'd need a helper function of some sort that turned
> a list of decorators into a single decorator:
> 
>   def foo(self, *args) [methodreturns(float),
>                         decoratorlist(IMyProtocolMethod)]:
> 
> or, using the extension I proposed a while back,
> 
>   def foo(self, *args) [methodreturns(float), *IMyProtocolMethod]:
> 

I think that reads the best, but then I'd start to want it with normal
lists, too.  ;)

And I did think of the decoratorlist work-around, but I hadn't thought
that you could just as easily do the following:

     class Example(object):
         IMyProtocolMethod = decoratorlist(
             synchronized(lock),
             attributes(author='SWH', protocol=IMyProtocol),
             myArgumentWireMarshaller)

         def foo(self, *args) [methodreturns(float), IMyProtocolMethod]:
             pass # Code goes here
         def bar(self, *args) [methodreturns(str), IMyProtocolMethod]:
             pass # Code goes here
         def baz(self, *args) [methodreturns(int), IMyProtocolMethod]:
             pass # Code goes here

Which makes me just as happy again.  :)  I hope you all had a good time
at PyCon.  One of these years I look forward to meeting all of you.

Cheers,
-Shane Holloway


p.s. sorry for the repeat Greg ;)
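
p.p.s. For concreteness, I'm assuming decoratorlist is something small
like this, applying its decorators left to right (the order is just a
guess):

    def decoratorlist(*decorators):
        # bundle several decorators into a single callable
        def decorate(func):
            for d in decorators:
                func = d(func)
            return func
        return decorate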

From mwh at python.net  Tue Mar 30 05:49:42 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 05:49:46 2004
Subject: [Python-Dev] Rich comparisons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEPJJNAB.tim.one@comcast.net> (Tim
	Peters's message of "Tue, 30 Mar 2004 00:00:20 -0500")
References: <LNBBLJKPBEHFEDALKOLCIEPJJNAB.tim.one@comcast.net>
Message-ID: <2moeqek3sp.fsf@starship.python.net>

"Tim Peters" <tim.one@comcast.net> writes:

[>>> is Ed Loper, >> is me]

>>> I'm also disturbed by the fact that cmp() has something different to
>>> say about their equality:
>>>
>>>      >>> cmp(float('nan'), float('nan'))
>>>      0
>
>> Well, yah.  cmp() assumes a total ordering.  If there just isn't one,
>> what can we do?
>
> If we could recognize it was a NaN, my preference would be to raise an
> exception.

Well, we could sorta do this, by doing something like

if a > b: return 1
if a < b: return -1
if a == b: return 0
raise ValueError, "can't compare NaNs"

which again goes back to depending on the C compiler.

> I was told that GNU sort arranges to make NaNs look "less than" all
> non-NaN floats, which is a way to make a total ordering -- kinda.

Well, it's an answer.  Not sure it's a good one.

>> I have at no point claimed that I have given Python 2.4 a coherent
>> ieee 754 floating point story.
>
> Ya, but I keep telling people you claim that <wink>.

Oh!  So it's your fault <wink>.

>> If someone (not me!) has the time and energy to do a 'proper job' (and
>> I'd expect working out what that means to be the hard part), then you
>> have my support and pre-emptive thanks.
>
> I don't believe it will happen until all the major C compilers support the
> (still optional!) 754 gimmicks in C99 -- or Python is rewritten in some
> other language that supports them.  Before then, we'd need a small army of
> people expert in the intersection of their platform C and the 754 standard,

And Python's implementation!

> to produce piles of #ifdef'ed platform-specific implementation code.
> That's doable, but unlikely.

T'would be easier in assembly (as would checking for overflow in int
multiplication, but that's another story).

I could do 'it' in PPC assembly for MacOS X if I knew what 'it' was,
but I'm not a hardcore numerics geek, I was just offended by
float('nan') == 1 returning True.

Cheers,
mwh

-- 
  Haha! You had a *really* weak argument! <wink>
                                      -- Moshe Zadka, comp.lang.python

From florian.proff.schulze at gmx.net  Tue Mar 30 05:13:06 2004
From: florian.proff.schulze at gmx.net (Florian Schulze)
Date: Tue Mar 30 06:00:29 2004
Subject: [Python-Dev] Re: Timing for Py2.4
References: <4068EE39.3030302@interlink.com.au>
	<002701c41613$a5eed920$722ec797@oemcomputer>
Message-ID: <opr5n6n4fxttxc4i@news.gmane.org>

On Mon, 29 Mar 2004 23:58:31 -0500, Raymond Hettinger <python@rcn.com> 
wrote:

>> Saying "oh, just release an alpha1 anyway,
>> if it is completely screwed, we can just release an alpha2 two days
>> later" underestimates the amount of work needed to cut one of these
>> things.
>
> Don't misquote.  I said alphas are part of the development process that
> allows a wider range of people to exercise the build with the potential
> benefit of finding an issue that would have otherwise gone undetected.
> Issuing an alpha is not an irresponsible act.

I currently can't compile Python 2.4 myself (don't have MSVC 7.1), so I 
can't test out Python 2.4.  What about just releasing a development
snapshot, if that is a better name than alpha?  I think 2.4 should get a
release soon, so people who want to test it out get the chance.

The guy who wants to try Python 2.4 but isn't able to,
Florian Schulze


From Paul.Moore at atosorigin.com  Tue Mar 30 06:08:18 2004
From: Paul.Moore at atosorigin.com (Moore, Paul)
Date: Tue Mar 30 06:08:17 2004
Subject: [Python-Dev] Re: Timing for Py2.4
Message-ID: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>

From: Florian Schulze
> I currently can't compile Python 2.4 myself (don't have MSVC 7.1), so I 
> can't test out Python 2.4. What about just releasing a development 
> snapshot if that is a better name than alpha. I think 2.4 should get a 
> release soon, so people who want to test it out, get the chance.

Martin von Loewis released an MSI installer of 2.4 some time back, which I
installed. But without binaries of pywin32, ctypes and cx_Oracle, I can't
realistically use it for anything other than "playing". (wxPython and PIL
would be nice, but not essential).

I've no idea how to get over this hump - it's a problem I hit right through
the alpha/beta cycle, all the way to the release stage.

> The guy who wants to try Python 2.4 but isn't able to,

Me too.

Paul.

From python-dev at zesty.ca  Tue Mar 30 06:17:08 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:16:46 2004
Subject: [Python-Dev] PEP 318: Singleton decorator
Message-ID: <Pine.LNX.4.58.0403300124560.18028@server1.LFW.org>

Hi folks.

Earlier messages suggested a nice singleton decorator, which is shown
in the draft PEP:

    def singleton(cls):
        return cls()

    class MyClass [singleton]:
        ...

This has been mentioned as an argument against requiring or recommending
that decorators accept and return callables.

But i don't think this is a good way to write a singleton, because then
the user of the class has to get instances by saying "MyClass" instead
of "MyClass()".  Better would be:

    def singleton(cls):
        instances = {}
        def getinstance():
            if cls not in instances:
                instances[cls] = cls()
            return instances[cls]
        return getinstance

Then, code that gets an instance makes more sense.  MyClass() still
instantiates MyClass -- it just always returns the same instance.
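
Spelled without the proposed syntax, that is roughly:

    class MyClass:
        pass
    MyClass = singleton(MyClass)   # what "class MyClass [singleton]:" does

    a = MyClass()
    b = MyClass()
    assert a is b                  # the same instance every time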

I respectfully suggest that the PEP use the latter implementation in
its singleton example instead.


-- ?!ng

From python-dev at zesty.ca  Tue Mar 30 06:17:23 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:17:03 2004
Subject: [Python-Dev] PEP 318: Preserve function signatures
Message-ID: <Pine.LNX.4.58.0403300311360.18028@server1.LFW.org>

There has been some discussion about whether decorators are required
to be callable, accept callables, or return callables.

I would like to ask that the PEP recommend (perhaps strongly, but not
require) that decorators preserve function argument signatures whenever
it is reasonable to do so.  (That is, given a three-argument function
returning a string, a typical decorator should produce another three-
argument function returning a string.)

I think this is a good idea because the original function signature is
the most obvious documentation of its arguments.  If i see:

    def func(foo, bar, *arg) [decorator]:
        ...

i am inclined to expect that func will accept two arguments named foo
and bar and optionally some more arguments, with their meanings having
something to do with their names and the descriptions in the docstring.

This need not be enforced by any interpreter mechanism.  I just think
it should be the encouraged practice.


-- ?!ng

From python-dev at zesty.ca  Tue Mar 30 06:17:26 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:17:13 2004
Subject: [Python-Dev] PEP 318: Security use case
Message-ID: <Pine.LNX.4.58.0403300133000.18028@server1.LFW.org>

Here is an additional simple use case you could consider.

    def private(cls):
        def instantiate(*args, **kw):
            return cls(*args, **kw)
        return instantiate

    class DontTouch [private]:
        ...

Inner scopes are one of the best places to hide things in Python;
they are very difficult to get at.  (I can't seem to find any
special attributes that access the values inside them, and even
if there is a way, it would be easy to imagine a restricted
execution mode that wouldn't expose them.)


-- ?!ng

From python-dev at zesty.ca  Tue Mar 30 06:17:29 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:17:20 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
Message-ID: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>

Three different positions for decorators have been suggested:

    (a) def [decorator] func(arg, arg):

    (b) def func [decorator] (arg, arg):

    (c) def func(arg, arg) [decorator]:

There are several strong arguments to choose (c).

    1.  The decorator part is optional.  Optional things usually come
        at the end and shouldn't interrupt non-optional things.

    2.  The decorator part can contain arbitrary expressions.  This
        makes it impossible for most syntax colorizers to find the
        function name in (a).  Finding the defining occurences of
        names is the most important thing a code navigator can do.

    3.  In (b) the function name is separated from the argument list,
        making the definition look very different from a function call.

    4.  When you're reading the body of the function, you will refer
        to the arguments frequently and the decorators not at all.
        So the arguments should come first, in the usual position.

    5.  The decorators act on the function after the entire function
        definition has been evaluated.  It makes sense to arrange the
        function as a visual unit so you can see what is being created
        and manipulated.


To see argument 5, compare these illustrations:

    (a)               ---------- then do this
                     |
        .-----.      v      .-----------------.
        | def | [decorator] | func(arg, arg): |
        |     '-------------'                 |  <-- first do this
        |     print arg + arg                 |
        '-------------------------------------'


    (b)                    ----- then do this
                          |
        .----------.      v      .-------------.
        | def func | [decorator] | (arg, arg): |
        |          '-------------'             |  <-- first do this
        |     print arg + arg                  |
        '--------------------------------------'


    (c)     first do this       then do this
                  |                  |
                  v                  |
        .---------------------.      v
        | def func(arg, arg)  | [decorator]:
        |     print arg + arg |
        '---------------------'

I claim (c) is much easier to visually understand.


-- ?!ng

From python-dev at zesty.ca  Tue Mar 30 06:17:31 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:17:26 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
Message-ID: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>

Someone suggested the following syntax for setting function attributes:

    def func(arg, arg):
        .author = 'Guido van Rossum'
        .version = 42

Guido replied that he wanted to reserve ".name = value" for use in
"with" blocks.

These two uses are not in conflict at all -- in fact, they are
perfectly compatible and i find the syntactic consistency elegant.

I think the above syntax for function metadata is by far the cleanest
and simplest method i have seen, and its meaning is the most obvious
of all the proposed syntaxes.  After "def func...: .version = 42" it
makes sense that func.version should equal 42.

I am also happy with the idea of "with":

    with some[long].expression:
        .attrib = .something + something

It all fits together beautifully.

    def func(arg, arg) [optimized, classmethod]:
        .author = 'Sir Galahad'
        .version = 42
        """Description."""
        code
        more code


-- ?!ng

From garth at garthy.com  Tue Mar 30 06:21:56 2004
From: garth at garthy.com (Gareth)
Date: Tue Mar 30 06:28:58 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <opr5n6n4fxttxc4i@news.gmane.org>
References: <4068EE39.3030302@interlink.com.au>	<002701c41613$a5eed920$722ec797@oemcomputer>
	<opr5n6n4fxttxc4i@news.gmane.org>
Message-ID: <40695854.50004@garthy.com>

Florian Schulze wrote:

> On Mon, 29 Mar 2004 23:58:31 -0500, Raymond Hettinger <python@rcn.com> 
> wrote:
>
>>> Saying "oh, just release an alpha1 anyway,
>>> if it is completely screwed, we can just release an alpha2 two days
>>> later" underestimates the amount of work needed to cut one of these
>>> things.
>>
>>
>> Don't misquote.  I said alphas are part of the development process that
>> allows a wider range of people to exercise the build with the potential
>> benefit of finding an issue that would have otherwise gone undetected.
>> Issuing an alpha is not an irresponsible act.
>
>
> I currently can't compile Python 2.4 myself (don't have MSVC 7.1), so 
> I can't test out Python 2.4. What about just releasing a development 
> snapshot if that is a better name than alpha. I think 2.4 should get a 
> release soon, so people who want to test it out, get the chance.
>
> The guy who wants to try Python 2.4 but isn't able to,
> Florian Schulze
>
If you want to compile Python 2.4 with the freely downloadable tools
available from Microsoft, you can use a script I wrote.

Download and install the Microsoft .NET SDK
and the Platform SDK.

Grab the scripts from here:

http://mail.python.org/pipermail/python-dev/2004-February/042595.html

This enables anyone to download Python 2.4 and compile it with freely
available tools.

Garth

From mwh at python.net  Tue Mar 30 06:30:43 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 06:30:50 2004
Subject: [Python-Dev] PEP 318: Preserve function signatures
In-Reply-To: <Pine.LNX.4.58.0403300311360.18028@server1.LFW.org> (Ka-Ping
	Yee's message of "Tue, 30 Mar 2004 05:17:23 -0600 (CST)")
References: <Pine.LNX.4.58.0403300311360.18028@server1.LFW.org>
Message-ID: <2mk712k1wc.fsf@starship.python.net>

Ka-Ping Yee <python-dev@zesty.ca> writes:

> There has been some discussion about whether decorators are required
> to be callable, accept callables, or return callables.
>
> I would like to ask that the PEP recommend (perhaps strongly, but not
> require) that decorators preserve function argument signatures whenever
> it is reasonable to do so.  (That is, given a three-argument function
> returning a string, a typical decorator should produce another three-
> argument function returning a string.)

Careful, here.

>>> callable(classmethod(lambda cls:1))
False

I agree with the spirit of your suggestion, though.

[...]
> This need not be enforced by any interpreter mechanism.  I just think
> it should be the encouraged practice.

And (very much) with this.

Cheers,
mwh

-- 
  Programming languages should be designed not by piling feature on
  top of feature, but by removing the weaknesses and restrictions
  that make the additional features appear necessary.
               -- Revised(5) Report on the Algorithmic Language Scheme

From python-dev at zesty.ca  Tue Mar 30 06:33:45 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 06:33:25 2004
Subject: [Python-Dev] PEP 318: Order of operations
In-Reply-To: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
Message-ID: <Pine.LNX.4.58.0403300524070.18028@server1.LFW.org>

Referring back to this example:

> It all fits together beautifully.
>
>     def func(arg, arg) [optimized, classmethod]:
>         .author = 'Sir Galahad'
>         .version = 42
>         """Description."""
>         code
>         more code

These syntax decisions work well together because they make it
visually obvious what is going to happen in what order.

        first do this       then this   then this
              |                 |           |
              v                 |           |
    .--------------------.      v           v
    | def func(arg, arg) | [optimized, classmethod]:
    |                    '---------.
    |     .author = 'Sir Galahad'  |
    |     .version = 42            |
    |     """Description."""       |
    |     code                     |
    |     more code                |
    '------------------------------'

First, the function is defined with the attributes set in it;
then the decorators are applied from left to right.  That's good
because it lets us set attributes that pass information to
decorators, as in Paul's examples:

    def foo(self) [spark]:
        .spark_rule = 'DOC := HEAD BODY FOOT'

    def foo(self) [publish_to_web]:
        .url = '/cgi-bin/directory/directory'


-- ?!ng

From mwh at python.net  Tue Mar 30 06:35:53 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 06:35:58 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
In-Reply-To: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org> (Ka-Ping
	Yee's message of "Tue, 30 Mar 2004 05:17:31 -0600 (CST)")
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
Message-ID: <2mfzbqk1nq.fsf@starship.python.net>

Ka-Ping Yee <python-dev@zesty.ca> writes:

> Someone suggested the following syntax for setting function attributes:
>
>     def func(arg, arg):
>         .author = 'Guido van Rossum'
>         .version = 42

[...]

I don't really see what this has to do with PEP 318, apart from the
fact that it impacts the syntax and compilation of function
definitions.

PEP 318 enables this:

    def func(arg1, arg2) [attrs(author='Guido van Rossum',
                                version=42)]:
        ...

which is less pretty, but also less effort.
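
Here attrs is just assumed to be the obvious little helper:

    def attrs(**kwds):
        # copy the keyword arguments onto the decorated function
        def decorate(f):
            for name, value in kwds.items():
                setattr(f, name, value)
            return f
        return decorate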

I also thought the whole 'special syntax for function attributes'
discussion was very dead indeed.

Cheers,
mwh

-- 
  You have run into the classic Dmachine problem: your machine has
  become occupied by a malevolent spirit.  Replacing hardware or
  software will not fix this - you need an exorcist. 
                                       -- Tim Bradshaw, comp.lang.lisp

From andrew-pythondev at puzzling.org  Tue Mar 30 06:39:39 2004
From: andrew-pythondev at puzzling.org (Andrew Bennetts)
Date: Tue Mar 30 06:39:45 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
In-Reply-To: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
Message-ID: <20040330113939.GF7833@frobozz>

On Tue, Mar 30, 2004 at 05:17:31AM -0600, Ka-Ping Yee wrote:
> Someone suggested the following syntax for setting function attributes:
> 
>     def func(arg, arg):
>         .author = 'Guido van Rossum'
>         .version = 42
> 
> Guido replied that he wanted to reserve ".name = value" for use in
> "with" blocks.
> 
> These two uses are not in conflict at all -- in fact, they are
> perfectly compatible and i find the syntactic consistency elegant.

Although this would not (I hope) be a common use-case, what would this code
mean:

    with foo:
        def func(arg):
            .attrib = value
            pass
            
?

I'm not entirely sure it's safe to say they don't conflict, although I don't
see this case as a serious problem.

-Andrew


From jeremy at alum.mit.edu  Tue Mar 30 09:06:24 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 30 09:08:12 2004
Subject: [Python-Dev] PEP 318: Security use case
In-Reply-To: <Pine.LNX.4.58.0403300133000.18028@server1.LFW.org>
References: <Pine.LNX.4.58.0403300133000.18028@server1.LFW.org>
Message-ID: <1080655583.12643.28.camel@localhost.localdomain>

On Tue, 2004-03-30 at 06:17, Ka-Ping Yee wrote:
> Inner scopes are one of the best places to hide things in Python;
> they are very difficult to get at.  (I can't seem to find any
> special attributes that access the values inside them, and even
> if there is a way, it would be easy to imagine a restricted
> execution mode that wouldn't expose them.)

It's by design that there is no meta way to get at bindings for free
variables.  I don't think I said anything about that in the PEP, but I was
thinking of JAR's thesis (http://mumble.net/~jar/pubs/secureos/).

Jeremy



From pje at telecommunity.com  Tue Mar 30 09:50:52 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 30 09:44:58 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <4068EBBB.7000805@zope.com>
References: <200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
	<40684073.1060707@zope.com>
	<200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
Message-ID: <5.1.0.14.0.20040330094935.02dfa5f0@mail.telecommunity.com>

At 10:38 PM 3/29/04 -0500, Shane Hathaway wrote:
>On 03/29/04 18:51, Greg Ewing wrote:
>>Shane Hathaway <shane@zope.com>:
>>
>>>That would be nice, but here is what I would like it to mean:
>>>
>>>   def _f():
>>>     do_something()
>>>   lock(foo)(_f)
>>
>>But what about
>>   x = 17
>>   with lock(foo):
>>     x = 42
>>   print x
>>?
>
>Hmm, yep, that won't work.  A pity.

It would work if the code in the block was treated as a nested scope but 
with write access to the immediately surrounding scope.  But that's quite a 
bit of compiler change, I'd imagine.  Anyway, this is one of the very few 
places where I'd actually be in favor of allowing rebinding variables from 
an enclosing scope.


From edloper at gradient.cis.upenn.edu  Tue Mar 30 09:51:40 2004
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Tue Mar 30 09:50:41 2004
Subject: [Python-Dev] Re: Rich comparisons
In-Reply-To: <E1B8A7B-0003oR-C7@mail.python.org>
References: <E1B8A7B-0003oR-C7@mail.python.org>
Message-ID: <4069897C.3070407@gradient.cis.upenn.edu>

> Edward Loper <edloper@gradient.cis.upenn.edu> writes:
> 
>>This means that 'nan' is no longer a well-behaved dictionary key:
>>
>>     >>> x = {float('nan'):0}
>>     >>> x[float('nan')] = 1
>>     >>> print x
>>     {nan: 0, nan: 1}
>>
>>Even worse, we get different behavior if we use the "same copy" of nan:
>>
>>     >>> nan = float('nan')
>>     >>> x = {nan:0}
>>     >>> x[nan] = 1
>>     >>> print x
>>     {nan: 1}
> 
> Gah.  Still, I think this is more a theoretical objection than
> anything else? [...]
> I have at no point claimed that I have given Python 2.4 a coherent
> ieee 754 floating point story.  All I have tried to do is /reduce/ the
> level of surprise knocking around, and I think I've succeeded.  If
> someone (not me!) has the time and energy to do a 'proper job' (and
> I'd expect working out what that means to be the hard part), then you
> have my support and pre-emptive thanks.

Tim Delaney's suggestion [1] seemed pretty reasonable.  In particular:

   - Ensure that NaN is a singleton (like True and False).  I.e., the
     float constructor checks if the float is NaN, and if so returns a
     singleton.

Advantages:
   - We have an easy way to test if a variable is nan: "x is NaN"
   - nan will work "properly" as a dictionary key

The important question is whether this would slow down other operations. 
  As for cmp(), my understanding was that it should just return -1 if 
two unordered objects are not equal.
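
As a Python-level illustration only (the real change would have to go
in the C float constructor, and this leans on the x != x test behaving):

    _nan = None

    def make_float(value):
        # Intern the first NaN seen, so every NaN "is" the same object.
        # Just a sketch of the proposal, not how float() works today.
        global _nan
        x = float(value)
        if x != x:
            if _nan is None:
                _nan = x
            return _nan
        return x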

-Edward

[1] http://aspn.activestate.com/ASPN/Mail/Message/python-dev/2030764


From pje at telecommunity.com  Tue Mar 30 09:58:06 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 30 09:52:18 2004
Subject: [Python-Dev] PEP 318: Singleton decorator
In-Reply-To: <Pine.LNX.4.58.0403300124560.18028@server1.LFW.org>
Message-ID: <5.1.0.14.0.20040330095542.030435e0@mail.telecommunity.com>

At 05:17 AM 3/30/04 -0600, Ka-Ping Yee wrote:
>Hi folks.
>
>Earlier messages suggested a nice singleton decorator, which is shown
>in the draft PEP:
>
>     def singleton(cls):
>         return cls()
>
>     class MyClass [singleton]:
>         ...
>
>This has been mentioned as an argument against requiring or recommending
>that decorators accept and return callables.
>
>But i don't think this is a good way to write a singleton, because then
>the user of the class has to get instances by saying "MyClass" instead
>of "MyClass()".

That's a stylistic decision.  IMO, it's more pythonic to *not* call a 
constructor.  One does not, after all, call modules in order to "construct" 
them, and modules are a prime example of singletons in Python.

What benefit is there to forcing people to perform a call operation that 
doesn't *do* anything?


>I respectfully suggest that the PEP use the latter implementation in
>its singleton example instead.

I respectfully disagree.  :)


From pedronis at bluewin.ch  Tue Mar 30 10:01:40 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Tue Mar 30 09:57:03 2004
Subject: [Python-Dev] Re: method decorators (PEP 318)
In-Reply-To: <5.1.0.14.0.20040330094935.02dfa5f0@mail.telecommunity.com>
References: <4068EBBB.7000805@zope.com>
	<200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
	<40684073.1060707@zope.com>
	<200403292351.i2TNpjOL028923@cosc353.cosc.canterbury.ac.nz>
Message-ID: <5.2.1.1.0.20040330165348.032ffd08@pop.bluewin.ch>

At 09:50 30.03.2004 -0500, Phillip J. Eby wrote:
>At 10:38 PM 3/29/04 -0500, Shane Hathaway wrote:
>>On 03/29/04 18:51, Greg Ewing wrote:
>>>Shane Hathaway <shane@zope.com>:
>>>
>>>>That would be nice, but here is what I would like it to mean:
>>>>
>>>>   def _f():
>>>>     do_something()
>>>>   lock(foo)(_f)
>>>
>>>But what about
>>>   x = 17
>>>   with lock(foo):
>>>     x = 42
>>>   print x
>>>?
>>
>>Hmm, yep, that won't work.  A pity.
>
>It would work if the code in the block was treated as a nested scope but 
>with write access to the immediately surrounding scope.  But that's quite 
>a bit of compiler change, I'd imagine.  Anyway, this is one of the very 
>few places where I'd actually be in favor of allowing rebinding variables 
>from an enclosing scope.

Umph, that's totally OT wrt 318, and we had this whole discussion at this 
informal nice-to-have level at the beginning of last year. No point 
rehashing that at that level of (non-)depth.

If someone want this or something like this, then please write a PEP and 
preferably an implementation.

Just to recall from the top of my head, open issues are:
- what to do about break/continue, they become kind of variable-geometry
- if what is wanted is a real imitation of a control flow statement suite, 
one problem is that 'yield'
cannot be supported in there without removing the "simple" from simple 
generators. But if this thing just look like a ctrl-flow block but is not 
really like one, then isn't the syntax confusing? practicality beats purity 
or not?
- what 'return' should mean there, non-local return like in Smalltalk
- syntax for passing parameters (?), how to have the block possibly give 
back values, do we want that? new keyword, last expression ...

regards.

From mwh at python.net  Tue Mar 30 09:58:36 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 09:58:40 2004
Subject: [Python-Dev] Re: Rich comparisons
In-Reply-To: <4069897C.3070407@gradient.cis.upenn.edu> (Edward Loper's
	message of "Tue, 30 Mar 2004 09:51:40 -0500")
References: <E1B8A7B-0003oR-C7@mail.python.org>
	<4069897C.3070407@gradient.cis.upenn.edu>
Message-ID: <2mbrmejs9v.fsf@starship.python.net>

Edward Loper <edloper@gradient.cis.upenn.edu> writes:

>> Edward Loper <edloper@gradient.cis.upenn.edu> writes:
>> 
>>>This means that 'nan' is no longer a well-behaved dictionary key:
>>>
>>>     >>> x = {float('nan'):0}
>>>     >>> x[float('nan')] = 1
>>>     >>> print x
>>>     {nan: 0, nan: 1}
>>>
>>>Even worse, we get different behavior if we use the "same copy" of nan:
>>>
>>>     >>> nan = float('nan')
>>>     >>> x = {nan:0}
>>>     >>> x[nan] = 1
>>>     >>> print x
>>>     {nan: 1}
>> Gah.  Still, I think this is more a theoretical objection than
>> anything else? [...]
>> I have at no point claimed that I have given Python 2.4 a coherent
>> ieee 754 floating point story.  All I have tried to do is /reduce/ the
>> level of surprise knocking around, and I think I've succeeded.  If
>> someone (not me!) has the time and energy to do a 'proper job' (and
>> I'd expect working out what that means to be the hard part), then you
>> have my support and pre-emptive thanks.
>
> Tim Delaney's suggestion [1] seemed pretty reasonable.  In particular:
>
>    - Ensure that NaN is a singleton (like True and False).  I.e., the
>      float constructor checks if the float is NaN, and if so returns a
>      singleton.

And we do that... how, exactly?

> Advantages:
>    - We have an easy way to test if a variable is nan: "x is NaN"

There's (potentially, at least) more than one NaN (QNaN & SNaN, f'ex).

>    - nan will work "properly" as a dictionary key

BFD.

> The important question is whether this would slow down other
> operations. As for cmp(), my understanding was that it should just
> return -1 if two unordered objects are not equal.

Then I believe your understanding is flawed.

>>> cmp(2,1)
1

Without wanting to sound patronizing or rude -- but risking it anyway
-- you're not giving the impression of understanding these issues any
more than I do, and I *know* I don't understand them well enough to
get them straight.

Cheers,
mwh

-- 
  > say-hi-to-the-flying-pink-elephants-for-me-ly y'rs,
  No way, the flying pink elephants are carrying MACHINE GUNS!
  Aiiee!!  Time for a kinder, gentler hallucinogen...
                               -- Barry Warsaw & Greg Ward, python-dev

From edloper at gradient.cis.upenn.edu  Tue Mar 30 10:09:24 2004
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Tue Mar 30 10:07:48 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 91
In-Reply-To: <E1B8HVQ-0000ox-6T@mail.python.org>
References: <E1B8HVQ-0000ox-6T@mail.python.org>
Message-ID: <40698DA4.3050900@gradient.cis.upenn.edu>

> Hi folks.
> 
> Earlier messages suggested a nice singleton decorator, which is shown
> in the draft PEP: [...]
> But i don't think this is a good way to write a singleton, because then
> the user of the class has to get instances by saying "MyClass" instead
> of "MyClass()".  

Depending on how you look at it, that might be a good thing.  In 
particular, it "documents" the fact that you're using a singleton, which 
might make your code easier to read/understand.  (You expect very 
different behavior from a singleton than from a normal class).

On a related note, now that Python has class methods, is there much 
point in a "singleton" pattern?  In particular, why not just make a 
class that only defines class methods, and uses the class namespace to 
store variables (instead of an instance namespace)?
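
E.g., roughly (using the classmethod() call, since that's all that works
today):

    class Counter(object):
        # all state lives in the class namespace; nothing is instantiated
        _count = 0

        def bump(cls):
            cls._count += 1
            return cls._count
        bump = classmethod(bump)

    print Counter.bump(), Counter.bump()    # 1 2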

-Edward


From anthony at interlink.com.au  Tue Mar 30 10:54:19 2004
From: anthony at interlink.com.au (Anthony Baxter)
Date: Tue Mar 30 10:53:28 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: <002701c41613$a5eed920$722ec797@oemcomputer>
References: <002701c41613$a5eed920$722ec797@oemcomputer>
Message-ID: <4069982B.9090504@interlink.com.au>

I have no interest in having my motives or my intentions
maligned any further. I volunteered as release manager
purely from a desire to help the Python community, my only
goal is to keep Python's release quality at a high level.

Due to abusive private email, I have no interest in
continuing this thread. Cutting releases is a non-trivial
amount of (quite boring, and high concentration) work.
Demands that I spend time on it when it suits someone
else are both insulting and unrealistic. Being accused
of bad faith because I have a different belief in the
correct time to do a release is beyond insulting.

If anyone is absolutely wanting to do the release sooner
than the timeframe we discussed at PyCon, they're welcome
to it. But don't take this on unless you're going to do
it right, and all of it. This includes all the cross-platform
testing, and chasing up people to make sure it gets tested on
the more obscure platforms. I'm still planning to get a 2.3.4
out in May, and if the 2.4 release happens in the timeframe
that I can deal with it, I'm happy to do it.

Anthony

From jim.jewett at eds.com  Tue Mar 30 11:34:38 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 30 11:35:07 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D41F@USAHM010.amer.corp.eds.com>

Ka-Ping Yee:

> Three different positions for decorators have been suggested:

>    (a) def [decorator] func(arg, arg):
>    (b) def func [decorator] (arg, arg):
>    (c) def func(arg, arg) [decorator]:

You forgot before and after

	using:
	    classmethod
	def func(arg1, arg2):
	    pass

	def func(arg1, arg2):
	    .classmethod
	    pass

There is also a possibility of two separate places, 
depending on whether the original object is 

(1)  returned, possibly with extra attributes, or
(2)  replaced with a new entry point (which might or 
might not forward to the original, change the signature 
entirely, do something completely different).

	using:
	    classmethod
	def func(arg1, arg2):
	    .annotation
	    pass

I would personally prefer to see both types before the def, 
but separated from each other.  I do understand that this 
gets bulky; it scales up better than it scales down.

	note:
	    .name1 = value
	    .name2 = val2
	using:
	    classmethod
	def func(arg1, arg2):
	    pass


>There are several strong arguments to choose (c).

>    1.  The decorator part is optional.  Optional things usually come
>        at the end and shouldn't interrupt non-optional things.

Agreed, but taking it out of the (current) funcdef header clause 
entirely also meets this goal.  The question is whether it would 
still bind tightly *enough* to the funcdef.  

>    4.  When you're reading the body of the function, you will refer
>        to the arguments frequently and the decorators not at all.
>        So the arguments should come first, in the usual position.

classmethod does make a difference, because it changes the signature.
One of Guido's questions is whether there will ever be a long chain 
of *transforming* decorators.

We have seen examples of long chains of decorators, but I don't think 
anyone has a concrete use case with more than two *transforming* 
decorators.  If chains really stay short, then

	def classmethod func(arg1, arg2):
	    pass

looks OK.  (Except that new users will assume classmethod is a keyword,
and try to look it up -- which does not work for user-created decorators.)

>    5.  The decorators act on the function after the entire function
>        definition has been evaluated.  It makes sense to arrange the
>        function as a visual unit so you can see what is being created
>        and manipulated.

You can already put them after the full definition; the reason for this
syntax is to move them up so that people can keep them in mind while
reading the definition.

-jJ

From guido at python.org  Tue Mar 30 11:40:47 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 11:41:49 2004
Subject: [Python-Dev] Timing for Py2.4
In-Reply-To: Your message of "Tue, 30 Mar 2004 10:54:19 EST."
	<4069982B.9090504@interlink.com.au> 
References: <002701c41613$a5eed920$722ec797@oemcomputer>  
	<4069982B.9090504@interlink.com.au> 
Message-ID: <200403301640.i2UGelk09222@guido.python.org>

> I have no interest in having my motives or my intentions
> maligned any further. I volunteered as release manager
> purely from a desire to help the Python community, my only
> goal is to keep Python's release quality at a high level.
> 
> Due to abusive private email, I have no interest in
> continuing this thread. Cutting releases is a non-trivial
> amount of (quite boring, and high concentration) work.
> Demands that I spend time on it when it suits someone
> else are both insulting and unrealistic. Being accused
> of bad faith because I have a different belief in the
> correct time to do a release is beyond insulting.
> 
> If anyone is absolutely wanting to do the release sooner
> than the timeframe we discussed at PyCon, they're welcome
> to it. But don't take this on unless you're going to do
> it right, and all of it. This includes all the cross-platform
> testing, and chasing up people to make sure it gets tested on
> the more obscure platforms. I'm still planning to get a 2.3.4
> out in May, and if the 2.4 release happens in the timeframe
> that I can deal with it, I'm happy to do it.

Ouch.  Clearly Anthony and Raymond aren't getting along.  I've asked
both off-line for more details and hope to resolve this ASAP.  Both of
them have made great contributions to Python, and this community has
room for both of them.  While tempers may flare occasionally, we
should be able to quickly revert to our normal polite tone of
conversation, where facts count rather than personalities.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From edloper at gradient.cis.upenn.edu  Tue Mar 30 12:25:01 2004
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Tue Mar 30 12:23:26 2004
Subject: [Python-Dev] Re: Rich comparisons
In-Reply-To: <E1B8MG6-0003KQ-VX@mail.python.org>
References: <E1B8MG6-0003KQ-VX@mail.python.org>
Message-ID: <4069AD6D.9080905@gradient.cis.upenn.edu>

>>Tim Delaney's suggestion [1] seemed pretty reasonable.  In particular:
>>
>>   - Ensure that NaN is a singleton (like True and False).  I.e., the
>>     float constructor checks if the float is NaN, and if so returns a
>>     singleton.
> 
> And we do that... how, exactly?
 >
> [...]
 >
> Without wanting to sound patronizing or rude -- but risking it anyway
> -- you're not giving the impression of understanding these issues any
> more than I do, and I *know* I don't understand them well enough to
> get them straight.

You're right -- I apparently don't understand the issues well enough. 
And (worse) I didn't understand how far my understanding went.  I was 
basing my proposal on Tim Delaney's suggestion and on a couple quick 
experiments on my 2 machines.  Given that things aren't as simple as I 
had hoped they were, I guess I'll leave it up to people who know more to 
figure it out.

But I'm not particularly happy about the fact that you have to be an 
expert to understand the intricacies of basic arithmetic with floats 
(not that I have a way to fix it).

-Edward


From aahz at pythoncraft.com  Tue Mar 30 12:29:16 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar 30 12:29:31 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <4069AD6D.9080905@gradient.cis.upenn.edu>
References: <E1B8MG6-0003KQ-VX@mail.python.org>
	<4069AD6D.9080905@gradient.cis.upenn.edu>
Message-ID: <20040330172916.GA4501@panix.com>

On Tue, Mar 30, 2004, Edward Loper wrote:
>
> But I'm not particularly happy about the fact that you have to be an 
> expert to understand the intricacies of basic arithmetic with floats 
> (not that I have a way to fix it).

>>> 1.1
1.1000000000000001

Just a fact of life.  :-/
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From FBatista at uniFON.com.ar  Tue Mar 30 12:35:19 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Tue Mar 30 12:37:15 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D03383804@escpl.tcp.com.ar>

#- > But I'm not particularly happy about the fact that you 
#- have to be an 
#- > expert to understand the intricacies of basic arithmetic 
#- with floats 
#- > (not that I have a way to fix it).
#- 
#- >>> 1.1
#- 1.1000000000000001
#- 
#- Just a fact of life.  :-/

You can always use Decimal!

Well, mmmh, not always -- only when it's ready ;)

.	Facundo


From python-dev at zesty.ca  Tue Mar 30 12:50:46 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 12:50:43 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040330172916.GA4501@panix.com>
References: <E1B8MG6-0003KQ-VX@mail.python.org>
	<4069AD6D.9080905@gradient.cis.upenn.edu>
	<20040330172916.GA4501@panix.com>
Message-ID: <Pine.LNX.4.58.0403301139070.18028@server1.LFW.org>

On Tue, Mar 30, 2004, Edward Loper wrote:
>
> But I'm not particularly happy about the fact that you have to be an
> expert to understand the intricacies of basic arithmetic with floats
> (not that I have a way to fix it).

On Tue, 30 Mar 2004, Aahz wrote:
>
> >>> 1.1
> 1.1000000000000001
>
> Just a fact of life.  :-/

I regret that this "feature" was ever introduced or "fixed" or what have
you.  Things were much better when repr(1.1) was "1.1" a few versions ago.

This inconsistency is strange and surprising to every Python learner and
I still believe there is no good reason for it.  The motivation, as i
remember it, was to make repr(x) produce a cross-platform representation
of x.  But no one uses repr() expecting bit-for-bit identity across
platforms.  repr() can't even represent most objects; if you want to
transfer things between platforms, you would use pickle.

If typing in "1.1" produces x, then "1.1" is a perfectly accurate
representation of x on the current platform.  And that is sufficient.

Showing "1.1000000000000001" is a clear case of confusing lots of people
in exchange for an obscure benefit to very few.  If i could vote for
just one thing to roll back about Python, this would be it.


-- ?!ng

From jcarlson at uci.edu  Tue Mar 30 12:50:47 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 12:54:57 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D41F@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D41F@USAHM010.amer.corp.eds.com>
Message-ID: <20040330094026.E5F8.JCARLSON@uci.edu>

>     using:
>         classmethod
>     def func(arg1, arg2):
>         pass

The one major problem that i can see with this kind of syntax is that it
is a /special case/ when standard Python block structure does not apply.

At least when using 'with' (which I'm not particularly fond of), block
structure applies, and makes a moderate amount of sense in Python.  The
above 'using' keyword and organization lacks any sort of Pythonic flavor
(IMO).

If we were to stick with standard Python block structure, the 'using'
syntax should really be...

using:
    classmethod
    def funct(arg1, arg2):
        pass

I think this syntax makes more conceptual sense, sticks with standard
Python block structure, etc.  However, I think that what Ka-Ping has
listed as case 'c' makes measureably more sense than any nested or
non-nested block structure for decorators.

 - Josiah


From theller at python.net  Tue Mar 30 13:14:51 2004
From: theller at python.net (Thomas Heller)
Date: Tue Mar 30 13:15:01 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	(Paul Moore's message of "Tue, 30 Mar 2004 12:08:18 +0100")
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
Message-ID: <d66ucick.fsf@python.net>

"Moore, Paul" <Paul.Moore@atosorigin.com> writes:

> From: Florian Schulze
>> I currently can't compile Python 2.4 myself (don't have MSVC 7.1), so I 
>> can't test out Python 2.4. What about just releasing a development 
>> snapshot if that is a better name than alpha. I think 2.4 should get a 
>> release soon, so people who want to test it out, get the chance.
>
> Martin von Loewis released a MSI installer of 2.4 some time back, which I
> installed. But without binaries of pywin32, ctypes and cx_Oracle, I can't
> realistically use it for anything other than "playing". (wxPython and PIL
> would be nice, but not essential).

I'll try to add a 2.4 binary to the next ctypes release, although I'm
not completely sure that bdist_wininst works with 2.4. IIRC, I had some
problems using bdist_wininst with the now current zlib 1.2.1.
I could do the same for pywin32 (although I don't have the more exotic
SDK's installed, and I don't build autoduck docs).

But I cannot help with the other extensions you mention.

Thomas


From ark-mlist at att.net  Tue Mar 30 13:16:27 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Tue Mar 30 13:16:36 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <Pine.LNX.4.58.0403301139070.18028@server1.LFW.org>
Message-ID: <00bc01c41683$1faced50$6402a8c0@arkdesktop>

> If typing in "1.1" produces x, then "1.1" is a perfectly accurate
> representation of x on the current platform.  And that is sufficient.
> 
> Showing "1.1000000000000001" is a clear case of confusing lots of people
> in exchange for an obscure benefit to very few.  If i could vote for
> just one thing to roll back about Python, this would be it.

I wish that Python would use the same conversion rules as Scheme:

	string->float yields the closest rounded approximation to the
	infinite-precision number represented by the string

	float->string yields the string with the fewest significant
	digits that, when converted as above, yields exactly the same
	floating-point value

These rules guarantee that 1.1 will always print as 1.1, and also that
printing any floating-point value and reading it back in again will give
exactly the same results.  They do, however, have three disadvantages:

	1) They are a pain to implement correctly.

	2) There are some pathological cases that take a long time to
convert.

	3) Results may be different from the local C implementation.

	(1) can be ameliorated on many platforms by using David Gay's
implementation (www.netlib.org/fp, which is distributed for free under such
liberal terms that I find it hard to believe that it wouldn't be compatible
with Python).  I don't know what to do about (2) or (3).
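
For the float->string half, here's a brute-force sketch (illustrative
only -- nothing like as clever or as fast as Gay's code, and it assumes
the platform's string->float conversion is correctly rounded):

    def shortest_roundtrip(x):
        # Try 1..17 significant digits; return the first decimal string
        # that converts back to exactly the same double.
        for ndigits in range(1, 18):
            s = '%.*g' % (ndigits, x)
            if float(s) == x:
                return s
        return '%.17g' % x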


From tim.one at comcast.net  Tue Mar 30 13:32:06 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 13:32:28 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <Pine.LNX.4.58.0403301139070.18028@server1.LFW.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDDJOAB.tim.one@comcast.net>

[Aahz]
>> >>> 1.1
>> 1.1000000000000001
>>
>> Just a fact of life.  :-/

[Ping]
> I regret that this "feature" was ever introduced or "fixed" or what
> have you.  Things were much better when repr(1.1) was "1.1" a few
> versions ago.

Naturally, I disagree.  The immediate motivation at the time was that
marshal uses repr(float) to store floats in code objects, so people who use
floats seriously found that results differed between running a module
directly and importing the same module via a .pyc/.pyo file.  That's flatly
intolerable for serious work.

That could have been repaired by changing the marshal format, at some cost
in compatibility headaches.  But since we made the change anyway, it had a
wonderful consequence:  fp newbies gripe about an example very much like the
above right away, and we have a tutorial appendix now that gives them
crucial education about the issues involved early in their Python career.
Before, they were bit by a large variety of subtler fp surprises much later
in their Python life, harder to explain, each requiring a different detailed
explanation.  Since I'm the guy who traditionally tried to help newbies with
stuff like that over the last decade, my testimony that life is 10x better
after the change shouldn't be dismissed lightly.

A display hook was added to sys so that people who give a rip (not naming
Ping specifically <wink>) could write and share code to format interactive
responses following whatever rules they can tolerate.  It's still a surprise
to me that virtually nobody seems to have cared enough to bother.
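
For instance, a minimal hook along these lines (untested sketch) would
do it -- str()-style rounding for floats, default behavior for
everything else:

    import sys, __builtin__

    def _short_floats(value):
        # Mimic the default display hook, but round floats the way
        # str() does instead of showing repr()'s 17 digits.
        if value is None:
            return
        __builtin__._ = value
        if isinstance(value, float):
            print str(value)
        else:
            print repr(value)

    sys.displayhook = _short_floats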


From python-dev at zesty.ca  Tue Mar 30 14:04:38 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 14:04:19 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDDJOAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCCEDDJOAB.tim.one@comcast.net>
Message-ID: <Pine.LNX.4.58.0403301257110.18028@server1.LFW.org>

On Tue, 30 Mar 2004, Tim Peters wrote:
> [Aahz]
> >> >>> 1.1
> >> 1.1000000000000001
> >>
> >> Just a fact of life.  :-/
>
> [Ping]
> > I regret that this "feature" was ever introduced or "fixed" or what
> > have you.  Things were much better when repr(1.1) was "1.1" a few
> > versions ago.
>
> Naturally, I disagree.  The immediate motivation at the time was that
> marshal uses repr(float) to store floats in code objects, so people who use
> floats seriously found that results differed between running a module
> directly and importing the same module via a .pyc/.pyo file.  That's flatly
> intolerable for serious work.

That doesn't make sense to me.  If the .py file says "1.1" and the
.pyc file says "1.1", you're going to get the same results.

In fact, you've just given a stronger reason for keeping "1.1".
Currently, compiling a .py file containing "1.1" produces a .pyc file
containing "1.1000000000000001".  .pyc files are supposed to be
platform-independent.  If these files are then run on a platform with
different floating-point precision, the .py and the .pyc will produce
different results.

> But since we made the change anyway, it had a
> wonderful consequence:  fp newbies gripe about an example very much like the
> above right away, and we have a tutorial appendix now that gives them
> crucial education about the issues involved early in their Python career.

This is terrible, not wonderful.  The purpose of floating-point is to
provide an abstraction that does the expected thing in most cases.
To throw the IEEE book at beginners only distracts them from the main
challenge of learning a new programming language.

> A display hook was added to sys so that people who give a rip (not naming
> Ping specifically <wink>) could write and share code to format interactive
> responses following whatever rules they can tolerate.  It's still a surprise
> to me that virtually nobody seems to have cared enough to bother.

That's because custom display isn't the issue here.  It's the *default*
behaviour that's causing all the trouble.

Out of the box, Python should show that numbers evaluate to themselves.


-- ?!ng

From mwh at python.net  Mon Mar 29 09:52:53 2004
From: mwh at python.net (Michael Hudson)
Date: Tue Mar 30 14:09:35 2004
Subject: [Python-Dev] 
	Re: [Python-checkins] python/dist/src/Python compile.c, 2.299, 2.300
In-Reply-To: <E1B54cN-00042h-BD@sc8-pr-cvs1.sourceforge.net>
	(rhettinger@users.sourceforge.net's
	message of "Sun, 21 Mar 2004 07:12:03 -0800")
References: <E1B54cN-00042h-BD@sc8-pr-cvs1.sourceforge.net>
Message-ID: <2mad1zln7e.fsf@starship.python.net>

rhettinger@users.sourceforge.net writes:

> Update of /cvsroot/python/python/dist/src/Python
> In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv15203/Python
>
> Modified Files:
> 	compile.c 
> Log Message:
> Improve byte coding for multiple assignments.
> Gives 30% speedup on "a,b=1,2" and 25% on "a,b,c=1,2,3".

Would it hurt terribly much to keep Lib/compiler up to date with these
changes?

Cheers,
mwh
(the sig is randomly chosen, really :-)

-- 
  Premature optimization is the root of all evil.
       -- Donald E. Knuth, Structured Programming with goto Statements

From jeremy at alum.mit.edu  Tue Mar 30 14:18:26 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 30 14:21:47 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
Message-ID: <1080674306.12643.76.camel@localhost.localdomain>

On Tue, 2004-03-30 at 06:17, Ka-Ping Yee wrote:
> Three different positions for decorators have been suggested:
> 
>     (a) def [decorator] func(arg, arg):
> 
>     (b) def func [decorator] (arg, arg):
> 
>     (c) def func(arg, arg) [decorator]:

Another possibility that has been suggested is

[decorator] 
def func(arg, arg):

This has some of the same readability benefits as (c).

Jeremy



From jcarlson at uci.edu  Tue Mar 30 14:16:53 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 14:21:58 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <Pine.LNX.4.58.0403301257110.18028@server1.LFW.org>
References: <LNBBLJKPBEHFEDALKOLCCEDDJOAB.tim.one@comcast.net>
	<Pine.LNX.4.58.0403301257110.18028@server1.LFW.org>
Message-ID: <20040330111413.E5FB.JCARLSON@uci.edu>

> That doesn't make sense to me.  If the .py file says "1.1" and the
> .pyc file says "1.1", you're going to get the same results.
> 
> In fact, you've just given a stronger reason for keeping "1.1".
> Currently, compiling a .py file containing "1.1" produces a .pyc file
> containing "1.1000000000000001".  .pyc files are supposed to be
> platform-independent.  If these files are then run on a platform with
> different floating-point precision, the .py and the .pyc will produce
> different results.

I believe (please correct me if I'm wrong) that Python floats, on all
platforms, are IEEE 754 doubles.  That is, Python uses the 8-byte FP,
not the (arguably worthless) 4-byte FP.  Cross-platform precision is not
an issue.
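
A quick (and admittedly partial) check on the boxes I have handy -- it
only verifies the width, not 754-ness:

    >>> import struct
    >>> struct.calcsize('d')     # size of the C double behind a Python float
    8
    >>> len(struct.pack('>d', 1.1))
    8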

 - Josiah


From FBatista at uniFON.com.ar  Tue Mar 30 14:34:54 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Tue Mar 30 14:36:51 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D03383807@escpl.tcp.com.ar>

Ka-Ping Yee wrote:

#- Out of the box, Python should show that numbers evaluate to 
#- themselves.

The number evaluates to itself:

>>> 1.1
1.1000000000000001
>>> 1.1 == 1.1000000000000001
True

I think that the issue here is that, for floats, Python uses C, C uses the
FPU, the FPU is implemented using binary numbers and a finite amount of
memory.

So, you'll *never* get an exact 1.1.  Python just doesn't lie to you.

And again, if you want a Decimal floating point (not a binary one), you'll
have to wait for me to finish it.  The upside is that you can help
me! ;)

.	Facundo





From pf_moore at yahoo.co.uk  Tue Mar 30 14:38:55 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Tue Mar 30 14:38:59 2004
Subject: [Python-Dev] Re: Timing for Py2.4
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	<d66ucick.fsf@python.net>
Message-ID: <u106nn00.fsf@yahoo.co.uk>

Thomas Heller <theller@python.net> writes:

> I'll try to add a 2.4 binary to the next ctypes release

That would be good.

> although I'm not completely sure that bdist_wininst works with
> 2.4.

Ouch. One other thought I had, based on some comments from Gareth, was
that getting distutils to work with the free MS compilers would be
good. I'm not sure what would be involved, or if I could help (I'm not
familiar with the distutils code, though...)

> IIRC, I had some problems using bdist_wininst with the now current
> zlib 1.2.1. I could do the same for pywin32 (although I don't have
> the more exotic SDK's installed, and I don't build autoduck docs).

At one stage Mark Hammond was producing snapshot preview builds of
pywin32 for Python 2.3. Maybe he would be willing to do the same for
2.4? Getting pywin32 binaries for 2.4 would make it much more viable
for Windows users to try out 2.4.

> But I cannot help with the other extensions you mention.

No problem. It's just a fact of alpha-testing life...

Paul.
-- 
This signature intentionally left blank


From jcarlson at uci.edu  Tue Mar 30 14:34:36 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 14:39:09 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <1080674306.12643.76.camel@localhost.localdomain>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
Message-ID: <20040330113320.E5FE.JCARLSON@uci.edu>

> Another possibility that has been suggested is
> 
> [decorator] 
> def func(arg, arg):
> 
> This has some of the same readability benefits as (c).

Except that what you show is currently valid syntax.  Making a currently
valid syntax mean something new is, I believe, generally frowned upon.

 - Josiah


From tim.one at comcast.net  Tue Mar 30 14:40:19 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 14:40:31 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <Pine.LNX.4.58.0403301257110.18028@server1.LFW.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>

[Ping]
[Tim]
>> The immediate motivation at the time was that marshal uses repr(float)
>> to store floats in code objects, so people who use floats seriously
>> found that results differed between running a module directly and
>> importing the same module via a .pyc/.pyo file.  That's flatly
>> intolerable for serious work.

[Ping]
> That doesn't make sense to me.  If the .py file says "1.1" and the
> .pyc file says "1.1", you're going to get the same results.

repr(float) used to round to 12 significant digits (same as str() does
now -- repr(float) and str(float) used to be identical).  So the problem was
real, and so was the fix.
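
For contrast, on a 754 box today the two differ:

    >>> str(1.1), repr(1.1)
    ('1.1', '1.1000000000000001')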

> In fact, you've just given a stronger reason for keeping "1.1".
> Currently, compiling a .py file containing "1.1" produces a .pyc file
> containing "1.1000000000000001".  .pyc files are supposed to be
> platform-independent.  If these files are then run on a platform with
> different floating-point precision, the .py and the .pyc will produce
> different results.

But you can't get away from that via any decimal rounding rule.  One of the
*objections* the 754 committee had to the Scheme rule is that moving rounded
shortest-possible decimal output to a platform with greater precision could
cause the latter platform to read in an unnecessarily poor  approximation to
the actual number written on the source platform.  It's simply a fact that
decimal 1.1000000000000001 is a closer approximation to the number stored in
an IEEE double (given input "1.1" perfectly rounded to IEEE double format)
than decimal 1.1, and that has consequences too when moving to a wider
precision.

You have in mind *typing* "1.1" literally, so that storing "1.1" would give
a better approximation to decimal 1.1 on that box with wider precision, but
repr() doesn't know whether its input was typed by hand or computed.  Most
floats in real life are computed.

So if we were to change the marshal format, it would make much more sense to
reuse pickle's binary format for floats (which represents floats exactly, at
least those that don't exceed the precision or dynamic range of a 754
double).  The binary format also *is* portable.  Relying on decimal strings
(of any form) isn't really, so long as Python relies on the platform C to do
string<->float conversion.  Slinging shortest-possible output requires
perfect rounding on input, which is stronger than the 754 standard requires.
Slinging decimal strings rounded to 17 digits is less demanding, and is
portable across all boxes whose C string->float meets the 754 standard.
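
The binary route, sketched with struct rather than pickle's internals
(illustration only, on a 754 box):

    import struct
    x = 1.1
    s = struct.pack('>d', x)                # 8 bytes, big-endian 754 double
    assert struct.unpack('>d', s)[0] == x   # exact round-trip, no decimal step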

>> But since we made the change anyway, it had a wonderful consequence:
>> ...

> This is terrible, not wonderful. ...

We've been through all this before, so I'm heartened to see that we'll still
never agree <wink>.


From tim.one at comcast.net  Tue Mar 30 14:40:20 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 14:40:39 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040330111413.E5FB.JCARLSON@uci.edu>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDJJOAB.tim.one@comcast.net>

[Josiah Carlson]
> I believe (please correct me if I'm wrong), that Python floats, on all
> platforms, are IEEE 754 doubles.

I don't know.  Python used to run on some Crays that had their own fp
format, with 5 bits less precision than an IEEE double but much greater
dynamic range.  I don't know whether VAX D double format is still in use
either (which has 3 bits more precision than IEEE double).

> That is, Python uses the 8-byte FP, not the (arguably worthless) 4-byte
> FP.

I believe all Python platforms use *some* flavor of 8-byte float.

> Cross-platform precision is not an issue.

If it is, nobody has griped about it (to my knowledge).


From dan at sidhe.org  Tue Mar 30 14:44:47 2004
From: dan at sidhe.org (Dan Sugalski)
Date: Tue Mar 30 14:46:11 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDJJOAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCCEDJJOAB.tim.one@comcast.net>
Message-ID: <a0601021cbc8f7e84ee90@[10.0.1.2]>

At 2:40 PM -0500 3/30/04, Tim Peters wrote:
>[Josiah Carlson]
>  > That is, Python uses the 8-byte FP, not the (arguably worthless) 4-byte
>>  FP.
>
>I believe all Python platforms use *some* flavor of 8-byte float.

Don't be too sure. I've seen the VMS version getting thumped 
lately--someone may well be using X floats there. (which are 16 byte 
floats)
-- 
                                         Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk

From jeremy at alum.mit.edu  Tue Mar 30 14:47:02 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 30 14:48:48 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>
Message-ID: <1080676021.12643.79.camel@localhost.localdomain>

On Tue, 2004-03-30 at 14:40, Tim Peters wrote:
> We've been through all this before, so I'm heartened to see that we'll still
> never agree <wink>.

Perhaps we need to write a very short PEP that explains why things are
the way they are.  I enjoy the occasional break from discussions about
decorator syntax as much as the next guy, but I'd rather discuss new
controversies like generator expression binding rules than rehash the
same old discussions.

Jeremy



From FBatista at uniFON.com.ar  Tue Mar 30 14:50:32 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Tue Mar 30 14:52:28 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D03383809@escpl.tcp.com.ar>

Jeremy Hylton wrote:

#- Perhaps we need to write a very short PEP that explains why 
#- things are
#- the way they are.  I enjoy the occasional break from 

In the intro of PEP 327 I give a brief explanation of these issues, but
oriented to "why I need a Decimal data type". 

Maybe it's a good starting point.

.	Facundo

From allison at sumeru.stanford.EDU  Tue Mar 30 14:54:16 2004
From: allison at sumeru.stanford.EDU (Dennis Allison)
Date: Tue Mar 30 14:54:29 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <a0601021cbc8f7e84ee90@[10.0.1.2]>
Message-ID: <Pine.LNX.4.10.10403301148380.18116-100000@sumeru.stanford.EDU>

Did I miss the issue here?  

Floating point representations are a problem because for some decimal
representations converting the decimal form to binary and then back to
decimal does not (necessarily) return the same value.  There's a large
literature on this problem and known solutions.  (See, for example Guy
Steele's paper on printing floating point.)

On Tue, 30 Mar 2004, Dan Sugalski wrote:

> At 2:40 PM -0500 3/30/04, Tim Peters wrote:
> >[Josiah Carlson]
> >  > That is, Python uses the 8-byte FP, not the (arguably worthless) 4-byte
> >>  FP.
> >
> >I believe all Python platforms use *some* flavor of 8-byte float.
> 
> Don't be too sure. I've seen the VMS version getting thumped 
> lately--someone may well be using X floats there. (which are 16 byte 
> floats)
> -- 
>                                          Dan
> 
> --------------------------------------"it's like this"-------------------
> Dan Sugalski                          even samurai
> dan@sidhe.org                         have teddy bears and even
>                                        teddy bears get drunk
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/allison%40sumeru.stanford.edu
> 


From theller at python.net  Tue Mar 30 15:02:48 2004
From: theller at python.net (Thomas Heller)
Date: Tue Mar 30 15:02:59 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <u106nn00.fsf@yahoo.co.uk> (Paul Moore's message of "Tue, 30
	Mar 2004 20:38:55 +0100")
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	<d66ucick.fsf@python.net> <u106nn00.fsf@yahoo.co.uk>
Message-ID: <vfkmays7.fsf@python.net>

Paul Moore <pf_moore@yahoo.co.uk> writes:

> Thomas Heller <theller@python.net> writes:
>
>> I'll try to add a 2.4 binary to the next ctypes release
>
> That would be good.
>
>> although I'm not completely sure that bdist_wininst works with
>> 2.4.
>
> Ouch. One other thought I had, based on some comments from Gareth, was
> that getting distutils to work with the free MS compilers would be
> good. I'm not sure what would be involved, or if I could help (I'm not
> familiar with the distutils code, though...)

Do you mean building the wininst.exe template with other compilers?
Shouldn't be needed, because the binaries are also in CVS.

>> IIRC, I had some problems using bdist_wininst with the now current
>> zlib 1.2.1. I could do the same for pywin32 (although I don't have
>> the more exotic SDK's installed, and I don't build autoduck docs).
>
> At one stage Mark Hammond was producing snapshot preview builds of
> pywin32 for Python 2.3. Maybe he would be willing to do the same for
> 2.4? Getting pywin32 binaries for 2.4 would make it much more viable
> for Windows users to try out 2.4.

This should be much easier than before now that pywin32 is built with
distutils.

Thomas


From ark-mlist at att.net  Tue Mar 30 15:12:29 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Tue Mar 30 15:12:38 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>
Message-ID: <00fe01c41693$57404590$6402a8c0@arkdesktop>

> But you can't get away from that via any decimal rounding rule.  One of
> the *objections* the 754 committee had to the Scheme rule is that moving
> rounded shortest-possible decimal output to a platform with greater
> precision could cause the latter platform to read in an unnecessarily poor
> approximation to the actual number written on the source platform.  It's
> simply a fact that decimal 1.1000000000000001 is a closer approximation to
> the number stored in an IEEE double (given input "1.1" perfectly rounded
> to IEEE double format) than decimal 1.1, and that has consequences too
> when moving to a wider precision.

But if you're moving to a wider precision, surely there is an even better
decimal approximation to the IEEE-rounded "1.1" than 1.1000000000000001
(with even more digits), so isn't the preceding paragraph a justification
for using that approximation instead?




From jim.jewett at eds.com  Tue Mar 30 15:26:39 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 30 15:27:21 2004
Subject: [Python-Dev] repr(1.1)
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D422@USAHM010.amer.corp.eds.com>

Ka-Ping Yee:

> In fact, you've just given a stronger reason for keeping "1.1".
> Currently, compiling a .py file containing "1.1" produces a .pyc file
> containing "1.1000000000000001".  .pyc files are supposed to be
> platform-independent.  If these files are then run on a platform with
> different floating-point precision, the .py and the .pyc will produce
> different results.

In a previous job, every system upgrade meant a C compiler upgrade.  We 
would recompile everything and rerun a week of production data as a 
regression test.  We would get different results.  Then I had to find 
each difference to let the customer decide whether it was large enough 
to really matter.  (It never was.)

I would have been very grateful if I could have flipped a switch to say
"Do the math like the old version, even if it was buggy.  Do it just this
once, so that I can show the customer that any changes are intentional!"

Running a .pyc created on system1 should produce the same results you 
got on system1, even if system2 could do a better job.  Printing a 
dozen zeros followed by a 1 tells system2 just how precise the
calculations should be.

Yes, Decimal would be better for this, but historically it wasn't there.

For that matter, Decimal might be a better default format for 1.1, if
a language were starting fresh.  It still wouldn't be perfect, though.
How many digits should 1.1/3 print?

-jJ

From jim.jewett at eds.com  Tue Mar 30 15:38:39 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 30 15:39:05 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D423@USAHM010.amer.corp.eds.com>

>>     using:
>>         classmethod
>>     def func(arg1, arg2):
>>         pass

> The one major problem that i can see with this kind of 
> syntax is that it is a /special case/ when standard Python
> block structure does not apply.

It warns you that the expressions in the "using:" block will
be evaluated later -- after the "def:" block.

> If we were to stick with standard Python block structure, 
> the 'using' syntax should really be...

> using:
>     classmethod
>     def funct(arg1, arg2):
>        pass

Typically the suite (things indented after the colon) are all
of the same type.

	using classmethod:
	    def funct(arg1, arg2):
	        pass

was unacceptable.

	using classmethod:
	    .version = 2134
	    .author = "anonymous"
	def funct(arg1, arg2):
	    pass

gets the block structure right, but it moves true transformations 
(such as classmethod) too far away.  (Well, unless there is only 
ever one decorator per function, but then this is overkill.)

-jJ


From tim.one at comcast.net  Tue Mar 30 16:08:59 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 16:09:12 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <00fe01c41693$57404590$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEGJOAB.tim.one@comcast.net>

>> But you can't get away from that via any decimal rounding rule.  One
>> of the *objections* the 754 committee had to the Scheme rule is that
>> moving rounded shortest-possible decimal output to a platform with
>> greater precision could cause the latter platform to read in an
>> unnecessarily poor approximation to the actual number written on the
>> source platform.  It's simply a fact that decimal 1.1000000000000001
>> is a closer approximation to the number stored in an IEEE double
>> (given input "1.1" perfectly rounded to IEEE double format) than
>> decimal 1.1, and that has consequences too when moving to a wider
>> precision.

[Andrew Koenig]
> But if you're moving to a wider precision, surely there is an even
> better decimal approximation to the IEEE-rounded "1.1" than
> 1.1000000000000001 (with even more digits), so isn't the preceding
> paragraph a justification for using that approximation instead?

Like Ping, you're picturing typing in "1.1" by hand, so that you *know*
decimal 1.1 on-the-nose is the number you "really want".  But repr() can't
know that -- it's a general principle of 754 semantics for each operation to
take the bits it's fed at face value, because the implementation can't guess
intent, and it's likely to create more problems than it solves if it tries
to "improve" the bits it actually sees.  So far as reproducing observed
results as closely as possible goes, the wider machine will in fact do
better if it sees "1.1000000000000001" instead of "1.1", because the former
is in fact a closer approximation to the number the narrower machine
actually *uses*.

Suppose you had a binary float format with 3 bits of precision, and the
result of a computation on that box is .001 binary = 1/8 = 0.125 decimal.
The "shortest-possible reproducing decimal representation" on that box is
0.1.  Is it more accurate to move that result to a wider machine via the
string "0.1" or via the string "0.125"?  The former is off by 25%, but the
latter is exactly right.  repr() on the former machine has no way to guess
whether the 1/8 it's fed is the result of the user typing in "0.1" or the
result of dividing 1.0 by 8.0.  By taking the bits at face value, and
striving to communicate that as faithfully as possible, it's explainable,
predictable, and indeed as faithful as possible.  "Looks pretty too" isn't a
requirement for serious floating-point work.


From jim.jewett at eds.com  Tue Mar 30 16:11:31 2004
From: jim.jewett at eds.com (Jewett, Jim J)
Date: Tue Mar 30 16:12:19 2004
Subject: [Python-Dev] Rich comparisons shortcut
Message-ID: <B8CDFB11BB44D411B8E600508BDF076C1E96D424@USAHM010.amer.corp.eds.com>

Version 2.215 (Mar 21, 2004) of object.c assumes that
identity implies equality.

I won't rehash the discussion about custom objects and NaN.
I do think the decision should be reasonably consistent.
The current CVS version shortcuts for == and !=, but not 
for the other operators.

Normally, 

	x == y implies 
		x <= y
		x >= y
		not x < y
		not x > y 
(in addition to)
		not x != y

I think any object that intentionally violates these rules 
is likely to also violate the (is implies ==) assumption.
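
A contrived example of the sort of object I mean (purely illustrative):

    class Unordered(object):
        # Every comparison fails, even against itself.
        def __eq__(self, other): return False
        def __le__(self, other): return False
        def __ge__(self, other): return False

    u = Unordered()
    # Prints "True False" today; shortcutting == on identity would
    # silently turn the second result into True.
    print u is u, u == u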

-jJ

From jcarlson at uci.edu  Tue Mar 30 16:11:17 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 16:15:04 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D423@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D423@USAHM010.amer.corp.eds.com>
Message-ID: <20040330125601.E604.JCARLSON@uci.edu>

> > The one major problem that i can see with this kind of 
> > syntax is that it is a /special case/ when standard Python
> > block structure does not apply.
> 
> It warns you that the expressions in the "using:" block will
> be evaluated later -- after the "def:" block.

It still violates the standard Python block rules.
"Special cases aren't special enough to break the rules."


> Typically the suite (things indented after the colon) are all
> of the same type.
> 
> 	using classmethod:
> 	    def funct(arg1, arg2):
> 	        pass

Funny, but this isn't the case with Python anywhere else; I can have
functions, class and variable definitions inside any other suite.

> 	using classmethod:
> 	    .version = 2134
> 	    .author = "anonymous"
> 	def funct(arg1, arg2):
> 	    pass
> 
> gets the block structure right, but it moves true transformations 
> (such as classmethod) too far away.  (Well, unless there is only 
> ever one decorator per function, but then this is overkill.)

No, it doesn't get the block structure right.  If I were making a quick
pass with my eyes through the source, I'd notice that funct was a
function definition, but I wouldn't even notice the 'using' keyword or
various metadata that was inserted.  The wonderful thing about Python is
that indentation is scope.  By making this 'using' syntax not based on
standard Python indentation syntax and scoping rules, you (and others
suggesting this) are breaking one of the things that makes Python so
damn easy to learn.

Neither I nor anyone else who happens to need to read Python source
with decorators should have to reverse grep the source to discover
what is done to this definition.

I don't know if anyone else has said it precisely like this (Ka-Ping has
said it nicer), but the best thing about the decorator syntax 'c' (as
referred to by Ka-Ping) is that decorators get the hell out of your way
when you don't want to see them, but when you want to, are in the
perfect location: the function definition.  The other variants ('a', 'b',
various 'using' or other syntaxes) lack these two desirable features.

The one other syntax that does seem to visually work is the use of a
dictionary literal for metadata, which would suggest the use of a list
literal for decorators.  Both have the drawback of being currently valid
syntax, and changing the meaning of currently valid syntax is generally
frowned upon.

As stated previously by other people, keeping various author and version
information per function definition is overkill, and I doubt would be a
real use case.

 - Josiah


From guido at python.org  Tue Mar 30 16:21:18 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 16:21:29 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Tue, 30 Mar 2004 14:18:26 EST."
	<1080674306.12643.76.camel@localhost.localdomain> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>  
	<1080674306.12643.76.camel@localhost.localdomain> 
Message-ID: <200403302121.i2ULLIM09840@guido.python.org>

> Another possibility that has been suggested is
> 
> [decorator] 
> def func(arg, arg):

And one that I currently favor.  I'm out of bandwidth to participate
on a msg-by-msg basis, but perhaps folks can see if they can come to
terms with this solution?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Mar 30 16:22:37 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 16:22:44 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Tue, 30 Mar 2004 11:34:36 PST."
	<20040330113320.E5FE.JCARLSON@uci.edu> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain> 
	<20040330113320.E5FE.JCARLSON@uci.edu> 
Message-ID: <200403302122.i2ULMcY09855@guido.python.org>

> Except that what you show is currently valid syntax.  Making a currently
> valid syntax mean something new is, I believe, generally frowned upon.

Not in this case.  I've thought this through and don't think I see any
practical issues with this syntax.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pf_moore at yahoo.co.uk  Tue Mar 30 16:28:54 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Tue Mar 30 16:28:55 2004
Subject: [Python-Dev] Re: Timing for Py2.4
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	<d66ucick.fsf@python.net>
	<u106nn00.fsf@yahoo.co.uk> <vfkmays7.fsf@python.net>
Message-ID: <ptaunhwp.fsf@yahoo.co.uk>

Thomas Heller <theller@python.net> writes:

>> Ouch. One other thought I had, based on some comments from Gareth, was
>> that getting distutils to work with the free MS compilers would be
>> good. I'm not sure what would be involved, or if I could help (I'm not
>> familiar with the distutils code, though...)
>
> Do you mean building the wininst.exe template with other compilers?
> Shouldn't be needed, because the binaries are also in CVS.

Nope. I'm talking about the "build_ext" command. I have MSVC6
installed, and the .NET framework MSVC7 compilers. If I have the free
compilers in my PATH (I don't expect distutils to auto-detect the free
compilers from the registry, as it does for MSVC6), and do python24
setup.py build, I get the error:

>C:\Apps\Python24\python.exe setup.py build
running build
running build_py
creating build
creating build\lib.win32-2.4
copying alarm.py -> build\lib.win32-2.4
running build_ext
error: Python was built with version 7.1 of Visual Studio, and
extensions need to be built with the same version of the compiler,
but it isn't installed.

So even though I have the free VC7 compiler, and it's in my PATH, I
still can't build a simple extension with it. Even renaming the MSVC6
registry key, to get it out of the way, didn't help.

Also, distutils adds optimisation flags to the command line, which
don't work with the free compilers. Etc, etc.

I suspect it's possible, but non-trivial, to get it working. I'm not
sure I understand distutils well enough to risk attempting to get it
right :-(

Paul.
-- 
This signature intentionally left blank


From martin at v.loewis.de  Tue Mar 30 16:33:38 2004
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue Mar 30 16:33:49 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <ptaunhwp.fsf@yahoo.co.uk>
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>	<d66ucick.fsf@python.net>	<u106nn00.fsf@yahoo.co.uk>
	<vfkmays7.fsf@python.net> <ptaunhwp.fsf@yahoo.co.uk>
Message-ID: <4069E7B2.2080701@v.loewis.de>

Paul Moore wrote:
> error: Python was built with version 7.1 of Visual Studio, and
> extensions need to be built with the same version of the compiler,
> but it isn't installed.
> 
> So even though I have the free VC7 compiler, and it's in my PATH, I
> still can't build a simple extension with it.

Of course it won't. You need VC7.1, which comes as part of
Visual Studio .NET 2003.

> I suspect it's possible, but non-trivial, to get it working. I'm not
> sure I understand distutils well enough to risk attempting to get it
> right :-(

I doubt it could work, in the general case. VC7 uses msvcr7.dll, whereas
VC7.1 uses msvcr71.dll. Mixing different versions of the C runtime is
not supported by Microsoft.

Regards,
Martin


From jeremy at alum.mit.edu  Tue Mar 30 16:35:11 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 30 16:36:58 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <ptaunhwp.fsf@yahoo.co.uk>
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	<d66ucick.fsf@python.net> <u106nn00.fsf@yahoo.co.uk>
	<vfkmays7.fsf@python.net>  <ptaunhwp.fsf@yahoo.co.uk>
Message-ID: <1080682510.12643.104.camel@localhost.localdomain>

On Tue, 2004-03-30 at 16:28, Paul Moore wrote:
> Nope. I'm talking about the "build_ext" command. I have MSVC6
> installed, and the .NET framework MSVC7 compilers. If I have the free
> compilers in my PATH (I don't expect distutils to auto-detect the free
> compilers from the registry, as it does for MSVC6), and do python24
> setup.py build, I get the error:
> 
> >C:\Apps\Python24\python.exe setup.py build
> running build
> running build_py
> creating build
> creating build\lib.win32-2.4
> copying alarm.py -> build\lib.win32-2.4
> running build_ext
> error: Python was built with version 7.1 of Visual Studio, and
> extensions need to be built with the same version of the compiler,
> but it isn't installed.
> 
> So even though I have the free VC7 compiler, and it's in my PATH, I
> still can't build a simple extension with it. Even renaming the MSVC6
> registry key, to get it out of the way, didn't help.

You should be able to get it to work if you build Python with the free
VC7 compiler.  That's what the error message is trying to tell you:
"extensions need to be built with the same version of the compiler."

> Also, distutils adds optimisation flags to the command line, which
> don't work with the free compilers. Etc, etc.

I have a cheap copy of VC7 on my laptop -- the $100 one that doesn't
have an optimizer.  It issues a warning for the optimization flags, but
it still compiles.

Jeremy



From tdelaney at avaya.com  Tue Mar 30 16:41:58 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Tue Mar 30 16:42:08 2004
Subject: [Python-Dev] Re: Rich comparisons
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE01536127@au3010avexu1.global.avaya.com>

> From: Edward Loper
> 
> Tim Delaney's suggestion [1] seemed pretty reasonable.  In particular:
> 
>    - Ensure that NaN is a singleton (like True and False).  I.e., the
>      float constructor checks if the float is NaN, and if so returns a
>      singleton.

Did I say that? I know I thought about the possibility of NaN being a singleton subclass of float, but I don't remember posting it ...

It would be nice, but I don't think it's overly feasible.

> Advantages:
>    - We have an easy way to test if a variable is nan: "x is NaN"
>    - nan will work "properly" as a dictionary key

This second statement is not correct. Since NaN would always compare non-equal with itself, it most definitely would not work as a dictionary key. My thoughts were that by having NaN as a separate class, it could raise if __hash__ were called i.e. explicitly preventing it from being used as a dictionary key.
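
Roughly (sketch only, the class name is invented):

    class _NaN(float):
        # A float subclass whose instances refuse to be hashed, so a NaN
        # can't silently end up as a dictionary key.
        def __hash__(self):
            raise TypeError('NaN is not hashable')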

> The important question is whether this would slow down other 
> operations. 
>   As for cmp(), my understanding was that it should just return -1 if 
> two unordered objects are not equal.

Nah - negative for <, 0 for ==, positive for >. And an exception if it's non-comparable (it used to be that __cmp__ wasn't allowed to raise an exception, but this restriction was relaxed).
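
E.g.:

    >>> cmp(1, 2), cmp(2, 2), cmp(3, 2)
    (-1, 0, 1)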

Tim Delaney

From tdelaney at avaya.com  Tue Mar 30 17:00:50 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Tue Mar 30 17:00:57 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE0153612C@au3010avexu1.global.avaya.com>

> From: Guido van Rossum
> 
> > Another possibility that has been suggested is
> > 
> > [decorator] 
> > def func(arg, arg):
> 
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

OTOH, if there is any code currently like that (which I highly doubt) then it's quite possible that it contains side effects e.g.

    [synchronized(lock)]
    def func (arg):

would currently call synchronized before the function definition - if this syntax were changed to be the decorator syntax, that would change its meaning.

As I said though, I think it's highly unlikely to exist in practice.

Tim Delaney

From shane at zope.com  Tue Mar 30 17:05:30 2004
From: shane at zope.com (Shane Hathaway)
Date: Tue Mar 30 17:05:55 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEEGJOAB.tim.one@comcast.net>
References: <00fe01c41693$57404590$6402a8c0@arkdesktop>
	<LNBBLJKPBEHFEDALKOLCMEEGJOAB.tim.one@comcast.net>
Message-ID: <4069EF2A.5050606@zope.com>

Tim Peters wrote:
> Suppose you had a binary float format with 3 bits of precision, and the
> result of a computation on that box is .001 binary = 1/8 = 0.125 decimal.
> The "shortest-possible reproducing decimal representation" on that box is
> 0.1.  Is it more accurate to move that result to a wider machine via the
> string "0.1" or via the string "0.125"?  The former is off by 25%, but the
> latter is exactly right.  repr() on the former machine has no way to guess
> whether the 1/8 it's fed is the result of the user typing in "0.1" or the
> result of dividing 1.0 by 8.0.  By taking the bits at face value, and
> striving to communicate that as faithfully as possible, it's explainable,
> predictable, and indeed as faithful as possible.  "Looks pretty too" isn't a
> requirement for serious floating-point work.

It seems like most people who write '1.1' don't really want to dive into 
serious floating-point work.  I wonder if literals like 1.1 should 
generate decimal objects as described by PEP 327, rather than floats. 
Operations on decimals that have no finite decimal representation (like 
1.1 / 3) should use floating-point arithmetic internally, but they 
should store in decimal rather than floating-point format.

Shane

From guido at python.org  Tue Mar 30 17:18:14 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 17:18:23 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
Message-ID: <200403302218.i2UMIEV10058@guido.python.org>

Is anyone championing PEP 328?  This is planned for inclusion in 2.4,
but not much movement has happened.  If it's stuck on the decision
whether to use multiple dots (one per level up) or a single dot (to
indicate searching upwards until found), I'm willing to pronounce that
it should be multiple dots.

At least "from . import X" and "from .. import X" are completely clear
and more levels up are not likely to occur in practice...

--Guido van Rossum (home page: http://www.python.org/~guido/)


From pje at telecommunity.com  Tue Mar 30 17:19:44 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 30 17:21:38 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org>
References: <Your message of "Tue, 30 Mar 2004 14:18:26 EST."
	<1080674306.12643.76.camel@localhost.localdomain>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
Message-ID: <5.1.1.6.0.20040330170423.02aaee20@telecommunity.com>

At 01:21 PM 3/30/04 -0800, Guido van Rossum wrote:
> > Another possibility that has been suggested is
> >
> > [decorator]
> > def func(arg, arg):
>
>And one that I currently favor.  I'm out of bandwidth to participate
>on a msg-by-msg basis, but perhaps folks can see if they can come to
>terms with this solution?
>
>--Guido van Rossum (home page: http://www.python.org/~guido/)

I showed this to a Python programmer at my office.  He immediately asked if 
this:

if foo:
    [decorator1]
else:
    [decorator2]
def func(arg,arg):
    ...

was valid.  He then further commented that it was strange to have a Python 
construct split across two lines, and inquired whether these variants:

[decorator] def func(arg,arg):
     ...

[decorator1, decorator2
] def func(arg,arg):

[decorator] \
def func(arg,arg):
     ...

would be considered legal as well.

I also showed him the implemented syntax of 'def func(args) [decorators]:', 
and he thought it seemed natural and Pythonic, and he suggested 'as' as 
another possible alternative.  We also discussed the issue of evaluation 
order, and wondered if perhaps the decorator order in your latest syntax 
should be in reverse of the order used for decorators-at-the-end.  His last 
comment was that he thought it seemed very Perlish to begin a Python 
statement with special symbols, that then modify the behavior of code on a 
subsequent line.

Some of his questions also got me to wondering how the grammar would even 
handle the initial proposal, since it seems it would have to be phrased as 
"lists can have an optional 'NEWLINE function-definition' clause after them".

Anyway, I think I could manage to live with this syntax, although I really 
don't like it either, for many of the same reasons.  I don't think that 
putting decorators at the end of the definition impedes visibility unless 
there are a really large number of arguments or a large amount of decorator 
data.  In both cases, you're going to need to be reading more carefully 
anyway.  For the common cases, it's all going to be on one line anyway.

Oh...  one more thing...  the new proposal brings back up (to a limited 
extent) the "what to look up" question, in a different form.  More 
precisely, someone who has learned basic Python syntax may wonder why 
somebody is creating a list with 'classmethod' in it, but unless they 
already know about the syntax rule, they have no way to even *guess* that 
it has something to do with the function definition that follows.  By 
contrast, the decorators-last syntax is visibly part of the 'def' statement 
and gives much more clue as to its purpose.


From andrew-pythondev at puzzling.org  Tue Mar 30 17:27:19 2004
From: andrew-pythondev at puzzling.org (Andrew Bennetts)
Date: Tue Mar 30 17:27:27 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <20040330222718.GJ7833@frobozz>

On Tue, Mar 30, 2004 at 01:21:18PM -0800, Guido van Rossum wrote:
> > Another possibility that has been suggested is
> > 
> > [decorator] 
> > def func(arg, arg):
> 
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

The most obvious issues I see with this are that:
  - grepping for "def func" won't show any sign that it's decorated, and
  - it looks like normal syntax, so it looks like expressions should
    work, e.g.

        [decorator1] + [decorator2]
        def func(arg, arg):

    or even:

        get_std_decorators()   # Returns a list
        def func(arg, arg):
        
    I'm pretty sure this isn't the intention, though.  I can imagine some
    posts to comp.lang.python asking how to do the above...
  
Neither of these are fatal flaws, but I think I still prefer:

    def func(args) [decorator]:

-Andrew.


From tim.one at comcast.net  Tue Mar 30 17:38:23 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 17:38:34 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <003e01c416a2$b23cd260$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEFEJOAB.tim.one@comcast.net>

[Andrew Koenig]
>>> But if you're moving to a wider precision, surely there is an even
>>> better decimal approximation to the IEEE-rounded "1.1" than
>>> 1.1000000000000001 (with even more digits), so isn't the preceding
>>> paragraph a justification for using that approximation instead?

[Tim]
>> Like Ping, you're picturing typing in "1.1" by hand, so that you
>> *know* decimal 1.1 on-the-nose is the number you "really want".

[Andrew]
> No, I don't think so.  I said ``the IEEE-rounded "1.1"'', by which I
> mean the IEEE floating-point number that is closest to
> (infinite-precision) 1.1.

Oops -- got it.

> Let's call that number X.  Now, of course X is a rational number, and
> one that can be exactly represented on any machine with at least as
> many bits in its floating-point representation as the machine that
> computed X.
>
> On the original machine, converting 1.1 to floating-point yields
> exactly X, as does converting 1.1000000000000001.
>
> You claim that on a machine with more precision than the original
> machine, converting 1.1000000000000001 to floating-point will yield a
> value closer to X than converting 1.1 to floating-point will yield.
>
> I agree with you.  However, I claim that there is probably another
> decimal number, with even more digits, that when converted to
> floating-point on that machine will yield even a closer approximation
> to X, so isn't your line of reasoning an argument for using that
> decimal number instead?

It is, but it wasn't practical.  754 requires that float->string done to 17
significant digits, then back to float again, will reproduce the original
float exactly.  It doesn't require perfect rounding (there are different
accuracy requirements over different parts of the domain -- it's
complicated), and it doesn't require that a conforming float->string
operation be able to produce more than 17 meaningful digits.  For example,
on Windows under 2.3.3:

>>> print "%.50f" % 1.1
1.10000000000000010000000000000000000000000000000000
>>>

It's fine by the 754 std that all digits beyond the 17th are 0.  It would
also be fine if all digits beyond the 17th were 1, 8, or chosen at random.

So long as Python relies on the platform C, it can't assume more than that
is available.  Well, it can't even assume that much, relying on C89, but
almost all Python fp behavior is inherited from C, and as a "quality of
implementation" issue I believed vendors would, over time, at least try to
pay lip service to 754.  That prediction was a good one, actually.
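
Concretely, on a conforming box the guarantee amounts to no more than this:

    x = 1.1
    assert float("%.17g" % x) == x   # 17 significant digits round-trip exactly
    # Fewer digits may happen to round-trip for a particular x, but 754
    # doesn't promise it, and digits beyond the 17th needn't be meaningful.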

> Here's another way to look at it.  Suppose I want to convert 2**-30 to
> decimal.  On a 64-bit machine, I can represent that value to 17
> significant digits as 9.31322574615478516e-10.  However, I can also
> represent it exactly as 9.3132257461547851625e-10.

On Windows (among others), not unless you write your own float->string
routines to get those "extra" digits.

>>> print "%.50g" % (2**-30)
9.3132257461547852e-010
>>>

BTW, it's actually easy to write perfect-rounding float<->string routines in
Python.  The drawback is (lack of) speed.

> If you are arguing that I can get a better approximation on a machine
> with more precision if I write the first of these representations,
> doesn't that argument suggest that the second of these representations
> is better still?

Yes.  The difference is that no standard requires that C be able to produce
the latter, and you only suggest David Gay's code because you haven't tried
to maintain it <wink -- but it is a cross-platform mess>.

> Remember that every binary floating-point number has an exact decimal
> representation (though the reverse, of course, is not true).

Yup.
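
For the record, here's a slow but exact way to see that in Python -- a
sketch that assumes a normal, positive IEEE double and leans on the
identity m / 2**k == m * 5**k / 10**k:

    import math

    def exact_decimal(x):
        # Write x as m * 2**e with m a 53-bit integer (IEEE double assumed).
        m, e = math.frexp(x)        # x == m * 2**e, with 0.5 <= m < 1
        m = long(m * 2**53)
        e -= 53
        if e >= 0:
            return str(m * 2**e)
        k = -e                      # now x == m * 5**k / 10**k, exactly
        digits = str(m * 5**k).zfill(k + 1)
        return (digits[:-k] + '.' + digits[-k:]).rstrip('0')

    # exact_decimal(0.1) ->
    # '0.1000000000000000055511151231257827021181583404541015625'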


From ark-mlist at att.net  Tue Mar 30 17:48:34 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Tue Mar 30 17:48:36 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEFEJOAB.tim.one@comcast.net>
Message-ID: <006201c416a9$21f86eb0$6402a8c0@arkdesktop>

> It is, but it wasn't practical.  754 requires that float->string done to
> 17 significant digits, then back to float again, will reproduce the
> original float exactly.

But that rules out those hypothetical machines with greater precision on
which you are basing your argument.

> It doesn't require perfect rounding (there are different
> accuracy requirements over different parts of the domain -- it's
> complicated), and it doesn't require that a conforming float->string
> operation be able to produce more than 17 meaningful digits.

I thought that 754 requires input and output to be no more than 0.47 LSB
away from exact.  Surely the spirit of 754 would require more than 17
significant digits on machines with more than 56-bit fractions.

> So long as Python relies on the platform C, it can't assume more than that
> is available.  Well, it can't even assume that much, relying on C89, but
> almost all Python fp behavior is inherited from C, and as a "quality of
> implementation" issue I believed vendors would, over time, at least try to
> pay lip service to 754.  That prediction was a good one, actually.

Understood.  What I meant when I started this thread was that I think things
would be better in some ways if Python did not rely on the underlying C
library for its floating-point conversions--especially in light of the fact
that not all C libraries meet the 754 requirements for conversions.

> > Here's another way to look at it.  Suppose I want to convert 2**-30 to
> > decimal.  On a 64-bit machine, I can represent that value to 17
> > significant digits as 9.31322574615478516e-10.  However, I can also
> > represent it exactly as 9.3132257461547851625e-10.

> On Windows (among others), not unless you write your own float->string
> routines to get those "extra" digits.

No -- the representation is exact regardless of whether a particular
implementation is capable of producing it :-)

> BTW, it's actually easy to write perfect-rounding float<->string routines
> in Python.  The drawback is (lack of) speed.

Agreed.  I did it in C many moons ago.

> > If you are arguing that I can get a better approximation on a machine
> > with more precision if I write the first of these representations,
> > doesn't that argument suggest that the second of these representations
> > is better still?


> Yes.  The difference is that no standard requires that C be able to
> produce the latter, and you only suggest David Gay's code because you
> haven't tried to maintain it <wink -- but it is a cross-platform mess>.

Naah - I also suggested it because I like the Scheme style of conversions,
and because I happen to know David Gay personally.  I have no opinion about
how easy his code is to maintain.

I completely agree that if you're going to rely on the underlying C
implementation for floating-point conversions, there's little point in
trying to do anything really good--C implementations are just too variable.


From aahz at pythoncraft.com  Tue Mar 30 17:53:24 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar 30 17:53:27 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: <200403302218.i2UMIEV10058@guido.python.org>
References: <200403302218.i2UMIEV10058@guido.python.org>
Message-ID: <20040330225324.GA1947@panix.com>

On Tue, Mar 30, 2004, Guido van Rossum wrote:
>
> Is anyone championing PEP 328?  This is planned for inclusion in 2.4,
> but not much movement has happened.  If it's stuck on the decision
> whether to use multiple dots (one per level up) or a single dot (to
> indicate searching upwards until found), I'm willing to pronounce that
> it should be multiple dots.
> 
> At least "from . import X" and "from .. import X" are completely clear
> and more levels up are not likely to occur in practice...

I'm working on producing an edit, but if you just want to Pronounce,
that's fine with me.  There hasn't been a lot of emotion attached, and
nobody has gotten seriously annoyed with the multiple dot idea.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From guido at python.org  Tue Mar 30 17:58:41 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 17:58:54 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 08:27:19 +1000."
	<20040330222718.GJ7833@frobozz> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org> 
	<20040330222718.GJ7833@frobozz> 
Message-ID: <200403302258.i2UMwfp10162@guido.python.org>

> The most obvious issues I see with this are that:
>   - grepping for "def func" won't show any sign that it's decorated, and 

Seems rather minor -- there's only so much you can cram on a single
line.

>   - it looks like normal syntax, so it looks like expressions should
>     work, e.g.
> 
>         [decorator1] + [decorator2]
>         def func(arg, arg):
> 
>     or even:
> 
>         get_std_decorators()   # Returns a list
>         def func(arg, arg):
>         
>     I'm pretty sure this isn't the intention, though.  I can imagine
>     some posts to comp.lang.python asking how to do the above...

The same syntactic restrictions apply to docstrings (you can't have an
expression of type string in the docstring position, it *has* to be a
literal).  Nobody's confused by that one.

> Neither of these are fatal flaws, but I think I still prefer:
> 
>     def func(args) [decorator]:

Which hides 'classmethod' behind the more voluminous stuff, and that's
my main gripe.

My main reasoning is as follows:

1) If we do decorators at all, decorators should be allowed to be
   arbitrary expressions.

2) Since we allow arbitrary expressions, decorators provide a much
   better way to set function attributes than the current way.

3) This will be attractive (better than putting special mark-up in
   docstrings), so there will be lots of voluminous decorators.

4) Then the "important" decorators like classmethod will be hidden at
   the end of the list of decorators.

The [...] prefix proposal addresses this by putting the end of the
decorator list closest to the def keyword.  This doesn't look so bad:

    [funcattr(spark="<spark syntax here>",
              deprecated=True,
              overrides=True),
     classmethod]
    def foo(cls, arg1, arg2):
        pass

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Mar 30 17:59:21 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 17:59:29 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: Your message of "Tue, 30 Mar 2004 17:53:24 EST."
	<20040330225324.GA1947@panix.com> 
References: <200403302218.i2UMIEV10058@guido.python.org>  
	<20040330225324.GA1947@panix.com> 
Message-ID: <200403302259.i2UMxLi10184@guido.python.org>

> > Is anyone championing PEP 328?  This is planned for inclusion in 2.4,
> > but not much movement has happened.  If it's stuck on the decision
> > whether to use multiple dots (one per level up) or a single dot (to
> > indicate searching upwards until found), I'm willing to pronounce that
> > it should be multiple dots.
> > 
> > At least "from . import X" and "from .. import X" are completely clear
> > and more levels up are not likely to occur in practice...
> 
> I'm working on producing an edit, but if you just want to Pronounce,
> that's fine with me.  There hasn't been a lot of emotion attached, and
> nobody has gotten seriously annoyed with the multiple dot idea.

Consider it Pronounced.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Mar 30 18:17:55 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 30 18:19:32 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302258.i2UMwfp10162@guido.python.org>
References: <Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
Message-ID: <5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>

At 02:58 PM 3/30/04 -0800, Guido van Rossum wrote:
>3) This will be attractive (better than putting special mark-up in
>    docstrings), so there will be lots of voluminous decorators.
>
>4) Then the "important" decorators like classmethod will be hidden at
>    the end of the list of decorators.

Hm.  So if we reversed the order so that the outermost decorators (such as 
classmethod) come first in the list, would that sway you to relent in favor 
of decorators-after-arguments?  I don't like the reversed order, but I 
think I'd be a lot more comfortable with explaining that relatively minor 
semantic oddity to other developers than I would be with trying to explain 
the major syntactic oddity (relative to the rest of the Python language) of 
decorators-before-def.


From tim.one at comcast.net  Tue Mar 30 18:24:03 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 18:24:16 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <006201c416a9$21f86eb0$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFIJOAB.tim.one@comcast.net>

[Tim]
>> It is, but it wasn't practical.  754 requires that float->string
>> done to 17 significant digits, then back to float again, will
>> reproduce the original float exactly.

[Andrew]
> But that rules out those hypothetical machines with greater precision
> on which you are basing your argument.

Sorry, couldn't follow that one.

>> It doesn't require perfect rounding (there are different
>> accuracy requirements over different parts of the domain -- it's
>> complicated), and it doesn't require that a conforming float->string
>> operation be able to produce more than 17 meaningful digits.

> I thought that 754 requires input and output to be no more than 0.47
> LSB away from exact.

No, and no standard can ask for better than 0.5 ULP error (when the true
result is halfway between two representable quantities, 0.5 ULP is the
smallest possible error "even in theory").  It requires perfect rounding for
"suitably small" inputs; outside that range, and excepting
underflow/overflow:

- for nearest/even rounding, it requires no more than 0.47 ULP error
  *beyond* that allowed for perfect nearest/even conversion (which
  has a max error of 0.5 ULP on its own)

- for directed rounding, no more than 1.47 ULP error (and with the
  correct sign)

> Surely the spirit of 754 would require more than 17 significant
> digits on machines with more than 56-bit fractions.

Yes, and the derivation of "17" for IEEE double format isn't hard.  Of
course the 754 standard doesn't say anything about non-754 architectures;
there are generalizations in the related 854 standard.
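
(For the curious, the usual arithmetic behind it: an IEEE double carries a
53-bit significand, and the round-trip digit count is ceil(53 * log10(2)) + 1
= 16 + 1 = 17.)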

...

> Understood.  What I meant when I started this thread was that I think
> things would be better in some ways if Python did not rely on the
> underlying C library for its floating-point conversions--especially
> in light of the fact that not all C libraries meet the 754
> requirements for conversions.

No argument there.  In fact, it would be better in some ways if Python
didn't rely on the platform C libraries for anything.

> ...
> Naah - I also suggested it because I like the Scheme style of
> conversions, and because I happen to know David Gay personally.  I
> have no opinion about how easy his code is to maintain.

It's not "David Gay" at issue, it's that this code is trying to do an
extremely delicate and exacting task in a language that offers no native
support.  So here's a snippet of the #ifdef maze at the top:

"""
#else /* ifndef IEEE_Arith */
#undef Check_FLT_ROUNDS
#undef Honor_FLT_ROUNDS
#undef SET_INEXACT
#undef  Sudden_Underflow
#define Sudden_Underflow
#ifdef IBM
#undef Flt_Rounds
#define Flt_Rounds 0
#define Exp_shift  24
#define Exp_shift1 24
#define Exp_msk1   0x1000000
#define Exp_msk11  0x1000000
#define Exp_mask  0x7f000000
#define P 14
#define Bias 65
#define Exp_1  0x41000000
#define Exp_11 0x41000000
#define Ebits 8	/* exponent has 7 bits, but 8 is the right value in b2d */
#define Frac_mask  0xffffff
#define Frac_mask1 0xffffff
#define Bletch 4
#define Ten_pmax 22
#define Bndry_mask  0xefffff
#define Bndry_mask1 0xffffff
#define LSB 1
#define Sign_bit 0x80000000
#define Log2P 4
#define Tiny0 0x100000
#define Tiny1 0
#define Quick_max 14
#define Int_max 15
#else /* VAX */
#undef Flt_Rounds
#define Flt_Rounds 1
#define Exp_shift  23
#define Exp_shift1 7
#define Exp_msk1    0x80
#define Exp_msk11   0x800000
#define Exp_mask  0x7f80
#define P 56
#define Bias 129
#define Exp_1  0x40800000
#define Exp_11 0x4080
#define Ebits 8
#define Frac_mask  0x7fffff
#define Frac_mask1 0xffff007f
#define Ten_pmax 24
#define Bletch 2
#define Bndry_mask  0xffff007f
#define Bndry_mask1 0xffff007f
#define LSB 0x10000
#define Sign_bit 0x8000
#define Log2P 1
#define Tiny0 0x80
#define Tiny1 0
#define Quick_max 15
#define Int_max 15
#endif /* IBM, VAX */
#endif /* IEEE_Arith */

#ifndef IEEE_Arith
#define ROUND_BIASED
#endif

#ifdef RND_PRODQUOT
#define rounded_product(a,b) a = rnd_prod(a, b)
#define rounded_quotient(a,b) a = rnd_quot(a, b)
#ifdef KR_headers
extern double rnd_prod(), rnd_quot();
#else
extern double rnd_prod(double, double), rnd_quot(double, double);
#endif
#else
#define rounded_product(a,b) a *= b
#define rounded_quotient(a,b) a /= b
#endif

#define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1))
#define Big1 0xffffffff

#ifndef Pack_32
#define Pack_32
#endif

#ifdef KR_headers
#define FFFFFFFF ((((unsigned long)0xffff)<<16)|(unsigned long)0xffff)
#else
#define FFFFFFFF 0xffffffffUL
#endif

#ifdef NO_LONG_LONG
#undef ULLong
#ifdef Just_16
#undef Pack_32
/* When Pack_32 is not defined, we store 16 bits per 32-bit Long.
 * This makes some inner loops simpler and sometimes saves work
 * during multiplications, but it often seems to make things slightly
 * slower.  Hence the default is now to store 32 bits per Long.
 */
#endif
#else	/* long long available */
#ifndef Llong
#define Llong long long
#endif
#ifndef ULLong
#define ULLong unsigned Llong
"""

and a snippet of some #ifdef'ed guts (assuming that it's obvious why Bletch
is #define'd to 2 on some platforms but 4 on others <wink>):

"""
#ifdef Pack_32
	if (k < Ebits) {
		d0 = Exp_1 | y >> Ebits - k;
		w = xa > xa0 ? *--xa : 0;
		d1 = y << (32-Ebits) + k | w >> Ebits - k;
		goto ret_d;
		}
	z = xa > xa0 ? *--xa : 0;
	if (k -= Ebits) {
		d0 = Exp_1 | y << k | z >> 32 - k;
		y = xa > xa0 ? *--xa : 0;
		d1 = z << k | y >> 32 - k;
		}
	else {
		d0 = Exp_1 | y;
		d1 = z;
		}
#else
	if (k < Ebits + 16) {
		z = xa > xa0 ? *--xa : 0;
		d0 = Exp_1 | y << k - Ebits | z >> Ebits + 16 - k;
		w = xa > xa0 ? *--xa : 0;
		y = xa > xa0 ? *--xa : 0;
		d1 = z << k + 16 - Ebits | w << k - Ebits | y >> 16 + Ebits - k;
		goto ret_d;
		}
	z = xa > xa0 ? *--xa : 0;
	w = xa > xa0 ? *--xa : 0;
	k -= Ebits + 16;
	d0 = Exp_1 | y << k + 16 | z << k | w >> 16 - k;
	y = xa > xa0 ? *--xa : 0;
	d1 = w << k + 16 | y << k;
#endif
 ret_d:
#ifdef VAX
	word0(d) = d0 >> 16 | d0 << 16;
	word1(d) = d1 >> 16 | d1 << 16;
#else
#undef d0
#undef d1
#endif
"""

There are over 3,000 lines of code "like that" in dtoa.c alone.  "Obviously
correct" isn't obvious, and some days I think I'd rather track down a bug in
Unicode.

> I completely agree that if you're going to rely on the underlying C
> implementation for floating-point conversions, there's little point in
> trying to do anything really good--C implementations are just too
> variable.

Well, so far as marshal goes (storing floats in code objects), we could and
should stop trying to use decimal strings at all -- Python's 8-byte binary
pickle format for floats is portable and is exact for (finite) IEEE doubles.
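
Something along these lines would do it (a sketch -- modulo byte order and
error handling, it's essentially the same 8-byte big-endian IEEE form the
binary pickle code already uses):

    import struct

    def dump_float(x):
        return struct.pack('>d', x)       # 8 bytes, big-endian IEEE-754 double

    def load_float(data):
        return struct.unpack('>d', data)[0]

    # load_float(dump_float(x)) == x for any finite IEEE double x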


From pyth at devel.trillke.net  Tue Mar 30 18:33:40 2004
From: pyth at devel.trillke.net (Holger Krekel)
Date: Tue Mar 30 18:34:11 2004
Subject: [Python-Dev] repr(1.1)
In-Reply-To: <B8CDFB11BB44D411B8E600508BDF076C1E96D422@USAHM010.amer.corp.eds.com>
References: <B8CDFB11BB44D411B8E600508BDF076C1E96D422@USAHM010.amer.corp.eds.com>
Message-ID: <20040330233340.GA6361@solar.trillke>

Jewett, Jim J wrote:
> Ka-Ping Yee:
> 
> > In fact, you've just given a stronger reason for keeping "1.1".
> > Currently, compiling a .py file containing "1.1" produces a .pyc file
> > containing "1.1000000000000001".  .pyc files are supposed to be
> > platform-independent.  If these files are then run on a platform with
> > different floating-point precision, the .py and the .pyc will produce
> > different results.
> 
> In a previous job, every system upgrade meant a C compiler upgrade.  We 
> would recompile everything and rerun a week of production data as a 
> regression test.  We would get different results.  Then I had to find 
> each difference to let the customer decide whether it was large enough 
> to really matter.  (It never was.)
> 
> I would have been very grateful if I could have flipped a switch to say
> "Do the math like the old version, even if it was buggy.  Do it just this
> once, so that I can show the customer that any changes are intentional!"
> 
> Running a .pyc created on system1 should produce the same results you 
> got on system1, even if system2 could do a better job.  Printing a 
> dozen zeros followed by a 1 tells system2 just how precise the
> calculations should be.

This sounds more like a general versioning issue.  If you e.g. use Numeric, 
and face C-compiler and python upgrades (and what not), you can hardly expect 
python to simulate your old environment.  And .pyc files are not guaranteed to 
work across different versions of Python (which is likely the reason why 
system2 could do a better job, anyway). 

cheers,

    holger

From jcarlson at uci.edu  Tue Mar 30 18:35:35 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 18:39:09 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: <200403302259.i2UMxLi10184@guido.python.org>
References: <20040330225324.GA1947@panix.com>
	<200403302259.i2UMxLi10184@guido.python.org>
Message-ID: <20040330152856.E607.JCARLSON@uci.edu>

> > I'm working on producing an edit, but if you just want to Pronounce,
> > that's fine with me.  There hasn't been a lot of emotion attached, and
> > nobody has gotten seriously annoyed with the multiple dot idea.
> 
> Consider it Pronounced.

I guess going grocery shopping removed my voice in this area *wink*.

There was a thread on a VMS-style relative package option that used
negative integers for relative imports.  The thread with this line of
thought starts with Stephen Horne's post here:
http://groups.google.com/groups?q=g:thl3068059620d&dq=&hl=en&lr=&ie=UTF-8&c2coff=1&safe=off&selm=ra4q401p3ld92uthl6l34u8vfqq39567n0%404ax.com

I personally like that it is concise for deeply nested relative imports,
and doesn't require counting of dots.  We didn't really get any 'elder
developers' to comment on that syntax.


 - Josiah


From tim.one at comcast.net  Tue Mar 30 18:39:21 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 18:39:33 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <4069EF2A.5050606@zope.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFKJOAB.tim.one@comcast.net>

[Shane Hathaway]
> It seems like most people who write '1.1' don't really want to dive
> into serious floating-point work.  I wonder if literals like 1.1
> should generate decimal objects as described by PEP 327, rather than
> floats.  Operations on decimals that have no finite decimal
> representation (like 1.1 / 3) should use floating-point arithmetic
> internally, but they should store in decimal rather than floating-point
> format.

Well, one of the points of the Decimal module is that it gives results that
"look like" people get from pencil-and-paper math (or hand calculators).

So, e.g., I think the newbie traumatized by not getting back "0.1" after
typing 0.1 would get just as traumatized if moving to binary fp internally
caused 1.1 / 3.3 to look like

    0.33333333333333337

instead of

    0.33333333333333333

If they stick to Decimal throughout, they will get the latter result (and
they'll continue to get a string of 3's for as many digits as they care to
ask for).

Decimal doesn't suffer string<->float conversion errors, but beyond that
it's prone to all the same other sources of error as binary fp.  Decimal's
saving grace is that the user can boost working precision to well beyond the
digits they care about in the end.  Kahan always wrote that the best feature
of IEEE-754 to ease the lives of the fp-naive is the "double extended"
format, and HW support for that is built in to all Pentium chips.  Alas,
most compilers and languages give no access to it.
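
To make the "boost the working precision" point concrete (sketching with the
PEP 327 interface as currently planned; the exact spelling may still change):

    from decimal import Decimal, getcontext

    getcontext().prec = 28           # working precision, in significant digits
    Decimal("1.1") / Decimal("3.3")  # a string of 3's out to 28 digits

    getcontext().prec = 60           # crank it well past the digits you care about
    Decimal(1) / Decimal(3)          # now 60 threes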

The only thing Decimal will have against it in the end is runtime sloth.


From guido at python.org  Tue Mar 30 18:43:46 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 18:43:56 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> 
References: <Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz> 
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> 
Message-ID: <200403302343.i2UNhk810304@guido.python.org>

> Hm.  So if we reversed the order so that the outermost decorators (such as 
> classmethod) come first in the list, would that sway you to relent in favor 
> of decorators-after-arguments?

Not really, because they're still hidden behind the argument list.

> I don't like the reversed order, but I think I'd be a lot more
> comfortable with explaining that relatively minor semantic oddity to
> other developers than I would be with trying to explain the major
> syntactic oddity (relative to the rest of the Python language) of
> decorators-before-def.

OTOH to C# programmers you won't have to explain a thing, because
that's what C# already does.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Mar 30 18:50:29 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 18:50:36 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: Your message of "Tue, 30 Mar 2004 15:35:35 PST."
	<20040330152856.E607.JCARLSON@uci.edu> 
References: <20040330225324.GA1947@panix.com>
	<200403302259.i2UMxLi10184@guido.python.org> 
	<20040330152856.E607.JCARLSON@uci.edu> 
Message-ID: <200403302350.i2UNoT010378@guido.python.org>

> I guess going grocery shopping removed my voice in this area *wink*.

Actually it was long ago decided that the pronouncement would be
either one dot or multiple dots; other ideas have been removed from
consideration.

> There was a thread on a VMS-style relative package option that used
> negative integers for relative imports.  The thread with this line of
> thought starts with Stephen Horne's post here:
> http://groups.google.com/groups?q=g:thl3068059620d&dq=&hl=en&lr=&ie=UTF-8&c2coff=1&safe=off&selm=ra4q401p3ld92uthl6l34u8vfqq39567n0%404ax.com
> 
> I personally like that it is concise for deeply nested relative imports,
> and doesn't require counting of dots.  We didn't really get any 'elder
> developers' to comment on that syntax.

OK, since you asked: Yuck.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Mar 30 19:17:14 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Mar 30 19:18:53 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302343.i2UNhk810304@guido.python.org>
References: <Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
Message-ID: <5.1.1.6.0.20040330185626.024903c0@telecommunity.com>

At 03:43 PM 3/30/04 -0800, Guido van Rossum wrote:
> > Hm.  So if we reversed the order so that the outermost decorators (such as
> > classmethod) come first in the list, would that sway you to relent in 
> favor
> > of decorators-after-arguments?
>
>Not really, because they're still hidden behind the argument list.

Because:

1) it won't be on the same line if there are lots of arguments
2) nobody will read past the argument list
3) other
4) all of the above
5) none of the above

(Not trying to change your opinion; I just think the answer to this should 
go in the PEP.)


> > I don't like the reversed order, but I think I'd be a lot more
> > comfortable with explaining that relatively minor semantic oddity to
> > other developers than I would be with trying to explain the major
> > syntactic oddity (relative to the rest of the Python language) of
> > decorators-before-def.
>
>OTOH to C# programmers you won't have to explain a thing, because
>that's what C# already does.

Correct me if I'm wrong, but I don't believe C# attributes have anything 
like the same semantics as Python decorators; in fact I believe they may be 
more akin to Python function attributes!  So, even to a C# programmer, I'll 
have to explain Python's semantics.  Indeed, the C# syntax has things like 
this:

[ReturnValue: whatever(something)]

to specify what the attributes apply to, and they can be applied to 
parameters, the return value, the module as a whole, etc.  But I don't want 
to get too far off-topic.

By the way, you didn't mention whether it's okay to put the decorators on 
the same logical line, e.g.:

[classmethod] def foo(bar,baz):
     # body goes here

If the rationale here is that we're copying C#, I'd think that it should be 
permissible, even though it looks a bit ugly and tempts me to indent the 
body to align with the function name.


From ark-mlist at att.net  Tue Mar 30 19:38:06 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Tue Mar 30 19:38:09 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFIJOAB.tim.one@comcast.net>
Message-ID: <008c01c416b8$6f8139f0$6402a8c0@arkdesktop>

> > But that rules out those hypothetical machines with greater precision
> > on which you are basing your argument.
> 
> Sorry, couldn't follow that one.

You argued against applying the Scheme rules because that would make
marshalling less accurate when the unmarshalling is done on a machine with
longer floats.  But on such a machine, 17 digits won't be good enough
anyway.

> > I thought that 754 requires input and output to be no more than 0.47
> > LSB away from exact.

> No, and no standard can ask for better than 0.5 ULP error (when the true
> result is halfway between two representable quantities, 0.5 ULP is the
> smallest possible error "even in theory").  It requires perfect rounding
> for "suitably small" inputs; outside that range, and excepting
> underflow/overflow:
> 
> - for nearest/even rounding, it requires no more than 0.47 ULP error
>   *beyond* that allowed for perfect nearest/even conversion (which
>   has a max error of 0.5 ULP on its own)

That's what I meant.  Rather than 0.47 from exact, I meant 0.47 from the
best possible.

> > Surely the spirit of 754 would require more than 17 significant
> > digits on machines with more than 56-bit fractions.

> Yes, and the derivation of "17" for IEEE double format isn't hard.  Of
> course the 754 standard doesn't say anything about non-754 architectures;
> there are generalizations in the related 854 standard.

Yes.

> > Understood.  What I meant when I started this thread was that I think
> > things would be better in some ways if Python did not rely on the
> > underlying C library for its floating-point conversions--especially
> > in light of the fact that not all C libraries meet the 754
> > requirements for conversions.

> No argument there.  In fact, it would be better in some ways if Python
> didn't rely on the platform C libraries for anything.

Hey, I know some people who write C programs that don't rely on the platform
C libraries for anything :-)

> > Naah - I also suggested it because I like the Scheme style of
> > conversions, and because I happen to know David Gay personally.  I
> > have no opinion about how easy his code is to maintain.
> 
> It's not "David Gay" at issue, it's that this code is trying to do an
> extremely delicate and exacting task in a language that offers no native
> support.  So here's a snippet of the #ifdef maze at the top:

<snip>

> There are over 3,000 lines of code "like that" in dtoa.c alone.
> "Obviously correct" isn't obvious, and some days I think I'd rather track
> down a bug in Unicode.

Understood.

> > I completely agree that if you're going to rely on the underlying C
> > implementation for floating-point conversions, there's little point in
> > trying to do anything really good--C implementations are just too
> > variable.


> Well, so far as marshal goes (storing floats in code objects), we could
> and should stop trying to use decimal strings at all -- Python's 8-byte
> binary pickle format for floats is portable and is exact for (finite) IEEE
> doubles.

Gee, then you could go back to rounding to 12 digits and make ?!ng happy :-)


From guido at python.org  Tue Mar 30 19:38:34 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 19:38:42 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com> 
References: <Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> <Your
	message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> 
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com> 
Message-ID: <200403310038.i2V0cY910483@guido.python.org>

> > > Hm.  So if we reversed the order so that the outermost
> > > decorators (such as classmethod) come first in the list, would
> > > that sway you to relent in favor of decorators-after-arguments?
> >
> >Not really, because they're still hidden behind the argument list.
> 
> Because:
> 
> 1) it won't be on the same line if there are lots of arguments
> 2) nobody will read past the argument list
> 3) other
> 4) all of the above
> 5) none of the above
> 
> (Not trying to change your opinion; I just think the answer to this should 
> go in the PEP.)

1&2, mostly.

> > > I don't like the reversed order, but I think I'd be a lot more
> > > comfortable with explaining that relatively minor semantic oddity to
> > > other developers than I would be with trying to explain the major
> > > syntactic oddity (relative to the rest of the Python language) of
> > > decorators-before-def.
> >
> >OTOH to C# programmers you won't have to explain a thing, because
> >that's what C# already does.
> 
> Correct me if I'm wrong, but I don't believe C# attributes have
> anything like the same semantics as Python decorators; in fact I
> believe they may be more akin to Python function attributes!  So,
> even to a C# programmer, I'll have to explain Python's semantics.

Yes of course, but the basic idea that this specifies attributes
should be clear to them.  You always have to explain Python's
semantics -- even assignment is deeply different!

> Indeed, the C# syntax has things like this:
> 
> [ReturnValue: whatever(something)]
> 
> to specify what the attributes apply to, and they can be applied to
> parameters, the return value, the module as a whole, etc.  But I
> don't want to get too far off-topic.
> 
> By the way, you didn't mention whether it's okay to put the decorators on 
> the same logical line, e.g.:
> 
> [classmethod] def foo(bar,baz):
>      # body goes here
> 
> If the rationale here is that we're copying C#, I'd think that it
> should be permissible, even though it looks a bit ugly and tempts me
> to indent the body to align with the function name.

This is much harder to do with the current parser.  (My plan would be
to tie the list expression and the function definition together in the
code generating phase, just like doc strings.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Tue Mar 30 19:41:40 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue Mar 30 19:41:24 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCAEDJJOAB.tim.one@comcast.net>
Message-ID: <Pine.LNX.4.58.0403301609500.18028@server1.LFW.org>

On Tue, 30 Mar 2004, Tim Peters wrote:
> repr(float) used to round to 12 significant digits (same as str() does
> now -- repr(float) and str(float) used to be identical).  So the problem was
> real, and so was the fix.

All right.  Maybe we can make some progress.  I agree that round-to-12
was a real problem.  But i think we are talking about two different use
cases: compiling to disk and displaying on screen.

I think we can satisfy both desires.

If i understand you right, your primary aim is to make sure the marshalled
form of any floating-point number yields the closest possible binary
approximation to the machine value on the original platform, even when
that representation is used on a different platform.  (Is that correct?
Perhaps it's best if you clarify -- exactly what is the invariant you
want to maintain, and what changes [in platform or otherwise] do you want
the representation to withstand?)

That doesn't have to be part of repr()'s contract.  (In fact, i would
argue that already repr() makes no such promise.)  repr() is about
providing a representation for humans.

Can we agree on maximal precision for marshalling, and shortest-
accurate precision for repr, so we can both be happy?

(By shortest-accurate i mean "the shortest representation that converts
to the same machine number".  I believe this is exactly what Andrew
described as Scheme's method.  If you are very concerned about this being
a complex and/or slow operation, a fine compromise would be a "try-12"
algorithm: if %.12g is accurate then use it, and otherwise use %.17g.
This is simple, easy to implement, produces reasonable results in most
cases, and has a small bound on CPU cost.)

    def try_12(x):
        rep = '%.12g' % x
        if float(rep) == x: return rep
        return '%.17g' % x

    def shortest_accurate(x):
        for places in range(17):
            fmt = '%.' + str(places) + 'g'
            rep = fmt % x
            if float(rep) == x: return rep
        return '%.17g' % x


-- ?!ng

From tim.one at comcast.net  Tue Mar 30 20:22:46 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 20:22:57 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <008c01c416b8$6f8139f0$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGCJOAB.tim.one@comcast.net>

[Andrew Koenig]
> You argued against applying the Scheme rules because that would make
> marshalling less accurate when the unmarshalling is done on a machine
> with longer floats.

I said the 754 committee had that objection.  This was discussed on David
Hough's numeric-interest mailing list at the time Clinger and Steele/White
published their float<->string papers, and "phooey" was the consensus of the
754 folks on the mailing list at the time.  The current incarnation of that
committee appears to be in favor of perfect rounding all the time (so was
the older incarnation, but it wasn't believed to be practical then), but I
don't know what they think about shortest-possible (the older incarnation
disliked that one).

I personally don't think decimal strings are a sane way to transport binary
floats regardless of rounding gimmicks.

> But on such a machine, 17 digits won't be good enough anyway.

Doesn't change that 17 digits gets closer than shortest-possible:  the art
of binary fp is about reducing error, not generally about eliminating error.
Shortest-possible does go against the spirit of 754 in that respect.

>>> I thought that 754 requires input and output to be no more than 0.47
>>> LSB away from exact.

>> No
...
>> - for nearest/even rounding, it requires no more than 0.47 ULP error
>>   *beyond* that allowed for perfect nearest/even conversion (which
>>   has a max error of 0.5 ULP on its own)

> That's what I meant.  Rather than 0.47 from exact, I meant 0.47 from
> the best possible.

Well, you originally said that in response to my saying that the standard
doesn't require perfect rounding (and it doesn't), and that the standard has
different accuracy requirements for different inputs (and it does).  So now
I'm left wondering what your original "I thought that ..." was trying to get
across.

...

> Hey, I know some people who write C programs that don't rely on the
> platform C libraries for anything :-)

Python would love to grab their I/O implementation then <0.8 wink>.


From aahz at pythoncraft.com  Tue Mar 30 20:28:12 2004
From: aahz at pythoncraft.com (Aahz)
Date: Tue Mar 30 20:28:15 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <00bc01c41683$1faced50$6402a8c0@arkdesktop>
References: <Pine.LNX.4.58.0403301139070.18028@server1.LFW.org>
	<00bc01c41683$1faced50$6402a8c0@arkdesktop>
Message-ID: <20040331012812.GA1541@panix.com>

On Tue, Mar 30, 2004, Andrew Koenig wrote:
>
> I wish that Python would use the same conversion rules as Scheme:
> 
> 	string->float yields the closest rounded approximation to the
> 	infinite-precision number represented by the string
> 
> 	float->string yields the string with the fewest significant
> 	digits that, when converted as above, yields exactly the same
> 	floating-point value

I've read the whole thread, and I wanted to repeat a critical point for
emphasis:

    This doesn't help

No matter what you do to improve conversion issues, you're still dealing
with the underlying floating-point problems, and having watched the
changing discussions in c.l.py since we moved to the different conversion
system, it seems clear to me that we've improved the nature of the
discussion by forcing people to get bitten earlier.

Facundo's Decimal module is the only way to improve the current
situation.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From tim.one at comcast.net  Tue Mar 30 20:43:56 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 20:44:21 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <Pine.LNX.4.58.0403301609500.18028@server1.LFW.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEGDJOAB.tim.one@comcast.net>

[Ping]
> All right.  Maybe we can make some progress.

Probably not -- we have indeed been thru all of this before.

> I agree that round-to-12 was a real problem.  But i think we are
> talking about two different use cases: compiling to disk and
> displaying on screen.

Those are two of the use cases, yes.

> I think we can satisfy both desires.
>
> If i understand you right, your primary aim is to make sure the
> marshalled form of any floating-point number yields the closest
> possible binary approximation to the machine value on the original
> platform,

I want marshaling of fp numbers to give exact (not approximate) round-trip
equality on a single box, and across all boxes supporting the 754 standard
where C maps "double" to a 754 double.  I also want marshaling to preserve
as much accuracy as possible across boxes with different fp formats,
although that may not be practical.  Strings have nothing to do with that,
except for the historical accident that marshal happens to use decimal
strings.  Changing repr(float) to produce 17 digits went a very long way
toward achieving all that at the time, with minimal code changes.  The
consequences of that change I *really* like didn't become apparent for
years.

> even when that representation is used on a different platform.  (Is
> that correct? Perhaps it's best if you clarify -- exactly what is the
> invariant you want to maintain, and what changes [in platform or
> otherwise] do you want the representation to withstand?)

As above.  Beyond exact equality across suitable 754 boxes, we'd have to
agree on a parameterized model of fp, and explain "as much accuracy as
possible" in terms of that.  But you don't care, so I won't bother <wink>.

> That doesn't have to be part of repr()'s contract.  (In fact, i would
> argue that already repr() makes no such promise.)

It doesn't, but the docs do say:

    If at all possible, this [repr's result] should look like a valid
    Python expression that could be used to recreate an object with the
    same value (given an appropriate environment).

This is possible for repr(float), and is currently true for repr(float) (on
754-conforming boxes).

> repr() is about providing a representation for humans.

I think the docs are quite clear that this function belongs to str():

    .... the ``informal'' string representation of an object.  This
    differs from __repr__() in that it does not have to be a valid
    Python expression: a more convenient or concise representation
    may be used instead. The return value must be a string object.

> Can we agree on maximal precision for marshalling,

I don't want to use strings at all for marshalling.  So long as we do, 17 is
already correct for that purpose (< 17 doesn't preserve equality, > 17 can't
be relied on across 754-conforming C libraries).

> and shortest-accurate precision for repr, so we can both be happy?

As I said before (again and again and again <wink>), I'm the one who has
fielded most newbie questions about fp since Python's beginning, and I'm
very happy with the results of changing repr() to produce 17 digits.  They
get a little shock at the start now, but potentially save themselves from
catastrophe by being forced to grow some *necessary* caution about fp
results early.

So, no, we're not going to agree on this.  My answer for newbies who don't
know and don't care (and who are determined never to know or care) has
always been to move to a Decimal module.  That's less surprising than binary
fp in several ways, and 2.4 will have it.

> (By shortest-accurate i mean "the shortest representation that
> converts to the same machine number".  I believe this is exactly what
> Andrew described as Scheme's method.

Yes, except that Scheme also requires that this string be correctly rounded
to however many digits are produced.  A string s just satisfying

    eval(s) == some_float

needn't necessarily be correctly rounded to s's precision.

> If you are very concerned about this being a complex and/or slow
> operation, a fine compromise would be a "try-12" algorithm: if %.12g
> is accurate then use it, and otherwise use %.17g. This is simple,
> easy to implement, produces reasonable results in most cases, and
> has a small bound on CPU cost.)

Also unique to Python.

>     def try_12(x):
>         rep = '%.12g' % x
>         if float(rep) == x: return rep
>         return '%.17g' % x
>
>     def shortest_accurate(x):
>         for places in range(17):
>             fmt = '%.' + str(places) + 'g'
>             rep = fmt % x
>             if float(rep) == x: return rep
>         return '%.17g' % x

Our disagreement is more fundamental than that.  Then again, it always has
been <smile!>.


From greg at cosc.canterbury.ac.nz  Tue Mar 30 20:47:50 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar 30 20:56:08 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 91
In-Reply-To: <40698DA4.3050900@gradient.cis.upenn.edu>
Message-ID: <200403310147.i2V1lowl014524@cosc353.cosc.canterbury.ac.nz>

Edward Loper <edloper@gradient.cis.upenn.edu>:

> On a related note, now that Python has class methods, is there much 
> point in a "singleton" pattern?  In particular, why not just make a 
> class that only defines class methods, and uses the class namespace to 
> store variables (instead of an instance namespace)?

Classes do various magic things on attribute lookups that you might
not want for an object that isn't meant to be used as a class.

For a while I've been wondering whether Python should have
an "instance" statement that's analogous to "class" but creates
an instance instead, e.g.

  instance fred(Foo):
    blarg = 42
    def f():
      do_something()

would be equivalent to something like

  class _fred(Foo):
    def f():
      do_something()
  fred = _fred()
  fred.blarg = 42

People working on interactive fiction would love something
like this, I expect.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From ncoghlan at email.com  Tue Mar 30 21:02:17 2004
From: ncoghlan at email.com (Nick Coghlan)
Date: Tue Mar 30 21:02:25 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
In-Reply-To: <20040330113939.GF7833@frobozz>
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
	<20040330113939.GF7833@frobozz>
Message-ID: <1080698537.406a26a9bbf5f@mail.iinet.net.au>

Quoting Andrew Bennetts <andrew-pythondev@puzzling.org>:

> Although this would not (I hope) be a common use-case, what would this code
> mean:
> 
>     with foo:
>         def func(arg):
>             .attrib = value
>             pass
>             
> ?
> 
> I'm not entirely sure it's safe to say they don't conflict, although I don't
> see this case as a serious problem.

The proposal would basically be that the above becomes equivalent to:

  with foo:
    def func(arg):
      with func:
        .attrib = value

(except that the above wouldn't actually work, as 'func' isn't bound to 
anything in its own body)

Personally, I'd prefer to see function attributes made possible by allowing 
references to the name of the function from within the definition of the 
function. E.g.:

    def func(arg):
      func.attrib = value

Then a 'function attribute block' can easily be delimited by using a with block 
as I do above. However, I don't know if this idea has been considered and 
discarded in the past.

Cheers,
Nick
-- 
Nick Coghlan
Brisbane, Australia

From jeremy at alum.mit.edu  Tue Mar 30 21:03:47 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue Mar 30 21:05:36 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: <20040330225324.GA1947@panix.com>
References: <200403302218.i2UMIEV10058@guido.python.org>
	<20040330225324.GA1947@panix.com>
Message-ID: <1080698627.12643.125.camel@localhost.localdomain>

On Tue, 2004-03-30 at 17:53, Aahz wrote:
> > Is anyone championing PEP 328?  This is planned for inclusion in 2.4,
> > but not much movement has happened.  If it's stuck on the decision
> > whether to use multiple dots (one per level up) or a single dot (to
> > indicate searching upwards until found), I'm willing to pronounce that
> > it should be multiple dots.
> > 
> > At least "from . import X" and "from .. import X" are completely clear
> > and more levels up are not likely to occur in practice...
> 
> I'm working on producing an edit, but if you just want to Pronounce,
> that's fine with me.  There hasn't been a lot of emotion attached, and
> nobody has gotten seriously annoyed with the multiple dot idea.

I'd say count me seriously annoyed with the multiple dot idea, but I'm
trying not to be annoyed (or annoying) on python-dev today.  In general,
I'm not a fan of relative imports; I'd rather stick to absolute imports
all the time.

Given my general outlook, I think a variable number of dots is simply
weird and would be hard to read.  "Is that three or four dots?"  I'd
like to think the number of dots would be small, but I've spent a lot of
time working on a project with package hierarchies six levels deep.

Jeremy



From tjreedy at udel.edu  Tue Mar 30 21:52:00 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue Mar 30 21:52:00 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <c4dbo8$ddh$1@sea.gmane.org>


"Guido van Rossum" <guido@python.org> wrote in message
news:200403302121.i2ULLIM09840@guido.python.org...
> > Another possibility that has been suggested is
> >
> > [decorator]
> > def func(arg, arg):
>
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

Taking the prefix position as given for the moment, why overload list
literal syntax versus something currently illegal and meaningless?  Such as

[[decorator]]      # easy enough to type, or
<decorator>      # almost as easy, or
<<decorator>>

Is there near precedent in other languages that I don't know of?

Regardless of syntax, the way I would think of the process (ie, 'come to
terms with it') is this: a string literal immediately after a function
heading automagically sets the special __doc__ attribute.  A list literal
(if that is what is used) automagically sets a special __postprocess__ 
attribute which is read once after the body is compiled and deleted when no
longer needed.  As I understand the proposal, the actual storage would be
elsewhere in the interpreter, so the above would be 'as if'.  But, would it
be useful for later introspection to actually set and leave such an attribute
that records the decorator processing of the function?
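
For concreteness, my mental model of the proposal is the usual
expansion (a sketch of the 'as if' behavior, not necessarily what the
implementation would do):

  [decorator]
  def func(arg):
      pass

  # ... would behave as if one had written:

  def func(arg):
      pass
  func = decorator(func)

with __postprocess__ being the transient home of the [decorator] list
between the header and the end of the body.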

Terry J. Reedy




From tim.one at comcast.net  Tue Mar 30 22:02:49 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 22:03:04 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGJJOAB.tim.one@comcast.net>

[Ping]
>> Another possibility that has been suggested is
>>
>> [decorator]
>> def func(arg, arg):

[Guido]
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

+0.  I'm not convinced that Python actually needs decorators, but if I were
that would be a +1; I certainly like this spelling better than the other
ones.


From tim.one at comcast.net  Tue Mar 30 22:02:50 2004
From: tim.one at comcast.net (Tim Peters)
Date: Tue Mar 30 22:03:15 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040331012812.GA1541@panix.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEGJJOAB.tim.one@comcast.net>

[Aahz]
> I've read the whole thread, and I wanted to repeat a critical point
> for emphasis:
>
>     This doesn't help
>
> No matter what you do to improve conversion issues, you're still
> dealing with the underlying floating-point problems, and having
> watched the changing discussions in c.l.py since we moved to the
> different conversion system, it seems clear to me that we've improved
> the nature of the discussion by forcing people to get bitten earlier.

Then it's time to segue into the "how come str(container) applies repr() to
the containees?" debate, which usually follows this one, like chaos after
puppies <0.9 wink>.

> Facundo's Decimal module is the only way to improve the current
> situation.

The only short-term way to make a big difference, certainly.


From ishnigarrab at earthlink.net  Tue Mar 30 21:34:20 2004
From: ishnigarrab at earthlink.net (foo)
Date: Tue Mar 30 22:04:19 2004
Subject: [Python-Dev] more on PEP 318
Message-ID: <16664981.1080700461156.JavaMail.root@grover.psp.pas.earthlink.net>

I haven't posted on this list much (at all?) but the 318 fiasco has caught my attention because it's something that I've been missing in python for a while now (it's actually sparked my search for more dynamic languages like scheme, etc). So I just thought I'd throw in my two sense (sic).

First off, I'm +1 on

  def a(*args) as expr: pass

I'm very +1 (+2, +3?) on keywording (where 'as' can be 'mod' or whatever). Also +10 on expressionizing the decorator, where the decorator _evaluates_ to a sequence (or iterable) of callables.

Secondly, I don't see why the wrappers should be restricted to simple identifiers that must be callables. One of the things that I've always loved about python is its "we're all adults here" attitude, and enforcing some arbitrary requirement for the sake of style seems unnecessary to me. I understand that

  def a(*args) as [lambda fn: fn, module.class.clsmeth, give_me_a_wrapper(some_arg)]: pass

(which, incidentally, looks much better as

  def a(*args) as [lambda fn: fn,
                   module.class.clsmeth,
                   give_me_a_wrapper(some_arg)]:
      pass

thus pointing out that the fear of ugly expressions in decorators is a bit exaggerated)

can be confusing and cumbersome, but limiting it isn't going to prevent some caffeinated hacker from doing wacky things; it merely forces them to be more clever about it, e.g.:

---- helpers.py ----

  def argument_checked(*tests):
      def outter(fn):
          def inner(*args, **kwds):
              # I swear I don't come from a functional background! ;)
              for arg, test in zip(args, tests):
                  test(arg) # raises an exception if an arg fails its check
              return fn(*args, **kwds)
          return inner
      return outter

  def memoized(fn):
      # ... stuff ...

---- main.py ----

  import helpers

  # ... misc code ...

  argument_checked = helpers.argument_checked(int, iter)
  memoized = helpers.memoized

  # ... maybe pages of code ...

  def doSomething(must_be_int, must_be_iterable) as [argument_checked, memoized]:
      # ... something ...

which IMHO is
* ugly, because it assigns unnecessary names and clutters the file.
* a pain to have to do every time I want to use something that's not a pure identifier as a wrapper.
* in violation of "There should be one-- and preferably only one --obvious way to do it." because it leads one to believe that you can only use things which are predefined, out-of-the-box wrappers.
* not explicit about where the wrappers come from or how they were formed.

Another use case would be when a certain series of wrappers are going to be used over and over again, such as:

  def myFirstFunc(*args) as [argument_checked, memoized, protected, something_else, property]:
      pass

  def mySecondFunc(*args) as [argument_checked, memoized, protected, something_else, property]:
      pass

In such a case, typing out (or c-n-p-ing) that list of wrappers is tedious, not to mention a pattern to be refactored out:

  my_wrappers = [argument_checked, memoized, protected, something_else, property]

  def myFirstFunc(*args) as my_wrappers:
      pass

  def mySecondFunc(*args) as my_wrappers:
      pass

As for lambda's etc, I see nothing wrong with

  def myFunc(*args) as [lambda fn: something(fn),
                        lambda fn: something_else(fn),
                        lambda fn: more_stuff(fn)]:
      pass

so long as it isn't abused. And even then, I don't see why I should be specifically restricted from abusing it.


Anyways, there you go, take it or leave it.

Isaac Freeman

From greg at cosc.canterbury.ac.nz  Tue Mar 30 21:42:13 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue Mar 30 22:46:41 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDJJOAB.tim.one@comcast.net>
Message-ID: <200403310242.i2V2gD13014612@cosc353.cosc.canterbury.ac.nz>

Josiah Carlson:

> Python uses the 8-byte FP, not the (arguably worthless) 4-bit FP.
                                                          ^^^^^

Yes, most people have hardware a little more sophisticated
than an Intel 4004 these days. :-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From shane.holloway at ieee.org  Tue Mar 30 22:45:58 2004
From: shane.holloway at ieee.org (Shane Holloway (IEEE))
Date: Tue Mar 30 22:46:52 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <1080674306.12643.76.camel@localhost.localdomain>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
Message-ID: <406A3EF6.7080107@ieee.org>

Jeremy Hylton wrote:

 > Another possibility that has been suggested is
 >
 > [decorator]
 > def func(arg, arg):
 >

+1 This tastes the best to me of the flavors I have seen.  I also
agree that this syntax would be a little difficult to look up.

-Shane



From jcarlson at uci.edu  Tue Mar 30 22:49:53 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue Mar 30 22:54:18 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <200403310242.i2V2gD13014612@cosc353.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCCEDJJOAB.tim.one@comcast.net>
	<200403310242.i2V2gD13014612@cosc353.cosc.canterbury.ac.nz>
Message-ID: <20040330194921.E60D.JCARLSON@uci.edu>

> > Python uses the 8-byte FP, not the (arguably worthless) 4-bit FP.
>                                                           ^^^^^
> 
> Yes, most people have hardware a little more sophisticated
> than an Intel 4004 these days. :-)

Oops, I slipped.  I meant 4-byte.  My bad. ;)

 - Josiah


From shane at zope.com  Tue Mar 30 23:08:05 2004
From: shane at zope.com (Shane Hathaway)
Date: Tue Mar 30 23:08:34 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org>
References: Your message of "Tue, 30 Mar 2004 14:18:26 EST."
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <406A4425.4070500@zope.com>

On 03/30/04 16:21, Guido van Rossum wrote:
>>Another possibility that has been suggested is
>>
>>[decorator] 
>>def func(arg, arg):
> 
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

+1.  We had a short email exchange about this a while ago.  I'm glad 
it's back on the table.  It's elegant, and the decorations people are 
already using will become more apparent than they are today.

This is important to me because decorators need to be very visible.  One 
class I frequently use (SimpleVocabulary in Zope 3) drove me crazy at 
first until I understood the pattern the author had used.  The 
constructor takes two strange arguments, and for quite a while I 
couldn't figure out just what it wanted.  Finally, I noticed that the 
class has several classmethods, and they all call the constructor.  The 
author intended users to call the classmethods, not the constructor, but 
it was hard to notice any classmethods since the word "classmethod" was 
buried below the function body.  Using "cls" as a first argument didn't 
help, since I've practically trained my eyes to ignore the first argument.

Zope has experimented with several ways of decorating methods with 
security declarations.  Here are some of the variations attempted so far:


class Foo:
     __ac_permissions__ = (('bar', 'View management screens'),)
     def bar(self):
         pass
InitializeClass(Foo)  # Finds __ac_permissions__ and changes methods

class Foo:
     bar__roles__ = PermissionRole('View management screens')
     def bar(self):
         pass

class Foo:
     security = ClassSecurityInfo()
     security.declareProtected('View management screens', 'bar')
     def bar(self):
         pass
InitializeClass(Foo)  # Finds a ClassSecurityInfo and calls it


These are all bad enough that Zope 3 has chosen to make no such 
declarations at all in Python code, putting them in XML instead.  That 
may be the right choice for Zope 3, but surely other Python developers 
are running into similar needs on their own projects.  They shouldn't 
have to go through this pain.  They should be able to start with 
something clean like this:


class Foo:
     [protected('View management screens')]
     def bar(self):
         pass
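
Here 'protected' would just be a callable returning a method decorator;
a minimal sketch, using a made-up marker attribute rather than any real
Zope API:

def protected(permission):
    def decorate(method):
        # hypothetical marker attribute; a framework would collect these
        # at class-initialization time, much as InitializeClass does now
        method._required_permission = permission
        return method
    return decorate

and nothing about it would need to be buried below the method body.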


Shane

From shane at zope.com  Tue Mar 30 23:18:27 2004
From: shane at zope.com (Shane Hathaway)
Date: Tue Mar 30 23:19:02 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040330194921.E60D.JCARLSON@uci.edu>
References: <200403310242.i2V2gD13014612@cosc353.cosc.canterbury.ac.nz>
	<20040330194921.E60D.JCARLSON@uci.edu>
Message-ID: <406A4693.9090903@zope.com>

On 03/30/04 22:49, Josiah Carlson wrote:
>>>Python uses the 8-byte FP, not the (arguably worthless) 4-bit FP.
>>
>>                                                          ^^^^^
>>
>>Yes, most people have hardware a little more sophisticated
>>than an Intel 4004 these days. :-)
> 
> 
> Oops, I slipped.  I meant 4-byte.  My bad. ;)

ROFL!  I can just imagine:

   bit 0 - mantissa (lower bit)
   bit 1 - mantissa (upper bit)
   bit 2 - exponent
   bit 3 - sign

Now who's complaining about rounding errors?

Shane

From guido at python.org  Tue Mar 30 23:52:44 2004
From: guido at python.org (Guido van Rossum)
Date: Tue Mar 30 23:52:53 2004
Subject: [Python-Dev] PEP 328 -- relative and multi-line import
In-Reply-To: Your message of "Tue, 30 Mar 2004 21:03:47 EST."
	<1080698627.12643.125.camel@localhost.localdomain> 
References: <200403302218.i2UMIEV10058@guido.python.org>
	<20040330225324.GA1947@panix.com> 
	<1080698627.12643.125.camel@localhost.localdomain> 
Message-ID: <200403310452.i2V4qif10995@guido.python.org>

[Jeremy]
> I'd say count me seriously annoyed with the multiple dot idea, but
> I'm trying not to be annoyed (or annoying) on python-dev today.  In
> general, I'm not a fan of relative imports; I'd rather stick to
> absolute imports all the time.

So don't use them!

> Given my general outlook, I think a variable number of dots is
> simply weird and would be hard to read.  "Is that three or four
> dots?"  I'd like to think the number of dots would be small, but
> I've spent a lot of time working on a project with packages
> hierarchies six levels deep.

The alternative, with a single dot, wouldn't make things better in
that case: a single dot would cause a search up until a match is
found, leaving the reader with even more to guess (plus more
possibilities for mistakes, if the same name occurs at multiple
levels).  With multiple dots at least the author is likely to see how
silly it is to use relative import with more than two dots, and will
likely switch to absolute import.
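
To spell out the difference with a sketch (package names made up):
inside package.sub.module, the multiple-dot form reads

  from . import sibling     # package.sub.sibling
  from .. import other      # package.other

while under the single-dot form "from . import X" would search upward
through package.sub, then package, and so on until some X is found.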

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 31 00:06:00 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 00:06:08 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
In-Reply-To: Your message of "Wed, 31 Mar 2004 10:02:17 +0800."
	<1080698537.406a26a9bbf5f@mail.iinet.net.au> 
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
	<20040330113939.GF7833@frobozz> 
	<1080698537.406a26a9bbf5f@mail.iinet.net.au> 
Message-ID: <200403310506.i2V560H11076@guido.python.org>

> Personally, I'd prefer to see function attributes made possible by allowing 
> references to the name of the function from within the definition of the 
> function.Eg:
> 
>     def func(arg):
>       func.attrib = value

Since this keeps coming up: this gets a -1000 from me (sorry Barry :).

It is currently valid syntax with valid semantics, and strongly
suggests something it isn't: it looks as if the attribute assignment is
done at function call time rather than at function definition time.
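
That is, this already compiles and runs today, and the assignment is
re-executed on every call rather than once at definition time:

    def func(arg):
        func.attrib = value   # runs each time func() is called,
                              # via the global name 'func'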

That is the kind of language design that becomes a real liability in
the hands of someone who misinterprets it.

Ditto for variants like ".attrib = value" and even "@attrib = value"
(which I proposed under a heading "thinking aloud" meaning I wasn't so
sure of it).

--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz  Wed Mar 31 00:05:52 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 00:10:43 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <406A4693.9090903@zope.com>
Message-ID: <200403310505.i2V55q4c014832@cosc353.cosc.canterbury.ac.nz>

Shane Hathaway <shane@zope.com>:

> ROFL!  I can just imagine:
> 
>    bit 0 - mantissa (lower bit)
>    bit 1 - mantissa (upper bit)
>    bit 2 - exponent
>    bit 3 - sign

Is that exponent in excess-1 format? :-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From bsder at allcaps.org  Wed Mar 31 00:31:13 2004
From: bsder at allcaps.org (Andrew P. Lentvorski, Jr.)
Date: Wed Mar 31 00:28:29 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEFKJOAB.tim.one@comcast.net>
References: <LNBBLJKPBEHFEDALKOLCAEFKJOAB.tim.one@comcast.net>
Message-ID: <20040330203718.H68104@mail.allcaps.org>

On Tue, 30 Mar 2004, Tim Peters wrote:

> The only thing Decimal will have against it in the end is runtime sloth.

While the Decimal implementation is in Python, certainly.

However, I did some floating point calculation timings a while back, and
the Python FP system is slow due to the overhead of working out types,
unpacking the values, and repacking the value.  The actual native
calculation is a small portion of that time.

My question is: Is it possible that a C implementation of Decimal would be
almost as fast as native floating point in Python for reasonable digit
lengths and settings? (ie. use native FP as an approximation and then do
some tests to get the last digit right).

The intent here is to at least propose as a strawman that Python use a C
implementation of Decimal as its native floating point type.

This is similar to the long int/int unification.  Long ints are slow, but
things are okay as long as the numbers are within the native range.  The
hope would be that Decimal configurations which fit within the machine
format are reasonably fast, but things outside it slow down.

Please note that nowhere did I comment that creating such a C
implementation of Decimal would be easy or even possible. ;)

-a

From guido at python.org  Wed Mar 31 00:29:23 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 00:29:31 2004
Subject: [Python-Dev] genexps slow?
Message-ID: <200403310529.i2V5TNJ11171@guido.python.org>

Can anybody explain this?

[guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)' 'sum([x for x in r])'
100 loops, best of 3: 7.75 msec per loop
[guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)' 'sum(x for x in r)'
100 loops, best of 3: 8.23 msec per loop

(I believe this is with the penultimate version of the patch from SF.)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From aahz at pythoncraft.com  Wed Mar 31 00:47:50 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 31 00:47:58 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040330203718.H68104@mail.allcaps.org>
References: <LNBBLJKPBEHFEDALKOLCAEFKJOAB.tim.one@comcast.net>
	<20040330203718.H68104@mail.allcaps.org>
Message-ID: <20040331054750.GA11408@panix.com>

On Tue, Mar 30, 2004, Andrew P. Lentvorski, Jr. wrote:
>
> My question is: Is it possible that a C implementation of Decimal
> would be almost as fast as native floating point in Python for
> reasonable digit lengths and settings? (ie. use native FP as an
> approximation and then do some tests to get the last digit right).

Basic answer: yes, for people not doing serious number crunching

> This is similar to the long int/int unification.  Long ints are slow,
> but things are okay as long as the numbers are within the native
> range.  The hope would be that Decimal configurations which fit within
> the machine format are reasonably fast, but things outside it slow
> down.

Well, that won't happen.  The long/int issue at least has compatibility
at the binary level; binary/decimal conversions lead us right back to
the problems that Decimal is trying to fix.

> Please note that nowhere did I comment that creating such a C
> implementation of Decimal would be easy or even possible. ;)

Actually, the whole point of the Decimal class is that it's easy to
implement.  Once we agree on the API and semantics, converting to C
should be not much harder than trivial.

Although I ended up dropping the ball, that's the whole reason I got
involved with Decimal in the first place: the intention is that Decimal
written in C will release the GIL.  It will be an experiment in
computational threading.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From python at rcn.com  Wed Mar 31 00:48:53 2004
From: python at rcn.com (Raymond Hettinger)
Date: Wed Mar 31 00:49:17 2004
Subject: [Python-Dev] genexps slow?
In-Reply-To: <200403310529.i2V5TNJ11171@guido.python.org>
Message-ID: <004801c416e3$d9919300$722ec797@oemcomputer>

> Can anybody explain this?
> 
> [guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)' 'sum([x for x in r])'
> 100 loops, best of 3: 7.75 msec per loop
> [guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)' 'sum(x for x in r)'
> 100 loops, best of 3: 8.23 msec per loop

I optimized list comps so that they run much faster than they did back
when Alex first made the comparative timings.  On my machine, they run
twice as fast.

Comparing listcomps to genexps, there are several factors affecting the
relative timings:

* Genexps should come out ahead on cache utilization (the consumer
function always has the data element in cache because it was just
produced).  This effect increases with the number of iterations (large
lists cannot keep all their elements in cache at the same time).

* Genexps incur frame switching time that list comps do not.  This
effect is a constant for each iteration and will make genexps slower
than list comps for shorter lists.

* Genexps do not access malloc as often and do not move all the data as
often.   As lists grow, they periodically have to realloc and move data.
This effect is much more pronounced in real programs where the memory is
more fragmented and the data sizes are larger.

* Genexps take better advantage of re-usable containers (ones with a
free list).  For instance, if you time 'max((x,x) for x in r)' then
genexps should come out ahead because the same tuple is being reused on
every pass while list comp has to build 10000 tuples of size two.
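
A quick (untested) sketch of how to check that last point with the
timeit module:

  import timeit
  for stmt in ('max([(x,x) for x in r])', 'max((x,x) for x in r)'):
      t = timeit.Timer(stmt, setup='r = range(10000)')
      print stmt, min(t.repeat(3, 100))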


Raymond




From tim.one at comcast.net  Wed Mar 31 00:51:20 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 31 00:51:27 2004
Subject: [Python-Dev] genexps slow?
In-Reply-To: <200403310529.i2V5TNJ11171@guido.python.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEHFJOAB.tim.one@comcast.net>

[Guido]
> Can anybody explain this?
>
> [guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)'
                      'sum([x for x in r])'
> 100 loops, best of 3: 7.75 msec per loop
> [guido@guido linux]$ ./python ../Lib/timeit.py -s 'r=range(10000)'
                      'sum(x for x in r)'
> 100 loops, best of 3: 8.23 msec per loop
>
> (I believe this is with the penultimate version of the patch from SF.)

Does a sub-10% difference really need explanation?  Resuming a generator
function is a lot cheaper than calling a function, but there's still a
non-trivial amount of code to get in and out of eval_frame() each time,
which the listcomp version gets to skip.  Make r a lot bigger, and I expect
the genexp will get relatively faster (due to better cache behavior).
Timing plain 'sum(r)' would also be revealing (of something <wink>).


From greg at cosc.canterbury.ac.nz  Wed Mar 31 00:59:35 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 00:59:54 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040330203718.H68104@mail.allcaps.org>
Message-ID: <200403310559.i2V5xZDi014923@cosc353.cosc.canterbury.ac.nz>

"Andrew P. Lentvorski, Jr." <bsder@allcaps.org>:

> My question is: Is it possible that a C implementation of Decimal would be
> almost as fast as native floating point in Python for reasonable digit
> lengths and settings? (ie. use native FP as an approximation and then do
> some tests to get the last digit right).

That sounds like an extremely tricky thing to do, and it's
not immediately obvious that it's even possible.

But maybe it would still be "fast enough" doing it all
properly in decimal?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From bob at redivi.com  Wed Mar 31 01:16:03 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 31 01:11:57 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040331054750.GA11408@panix.com>
References: <LNBBLJKPBEHFEDALKOLCAEFKJOAB.tim.one@comcast.net>
	<20040330203718.H68104@mail.allcaps.org>
	<20040331054750.GA11408@panix.com>
Message-ID: <E33EF295-82DA-11D8-A68D-000A95686CD8@redivi.com>


On Mar 31, 2004, at 12:47 AM, Aahz wrote:

> On Tue, Mar 30, 2004, Andrew P. Lentvorski, Jr. wrote:
>> Please note that nowhere did I comment that creating such a C
>> implementation of Decimal would be easy or even possible. ;)
>
> Actually, the whole point of the Decimal class is that it's easy to
> implement.  Once we agree on the API and semantics, converting to C
> should be not much harder than trivial.
>
> Although I ended up dropping the ball, that's the whole reason I got
> involved with Decimal in the first place: the intention is that Decimal
> written in C will release the GIL.  It will be an experiment in
> computational threading.

That sounds like a phenomenally bad thing to advertise :)

-bob


From python at rcn.com  Wed Mar 31 01:19:58 2004
From: python at rcn.com (Raymond Hettinger)
Date: Wed Mar 31 01:20:24 2004
Subject: [Python-Dev] genexps slow?
In-Reply-To: <200403310529.i2V5TNJ11171@guido.python.org>
Message-ID: <004901c416e8$313076e0$722ec797@oemcomputer>

[Tim]
>  Make r a lot bigger, and I
> expect
> the genexp will get relatively faster (due to better cache behavior).

Also, try a timing with "sum(x for x in xrange(100000))".  This will
highlight the differences in behavior.  With xrange, this genexp will be
able to execute entirely in cache.


Raymond


From jcarlson at uci.edu  Wed Mar 31 01:38:35 2004
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed Mar 31 01:42:25 2004
Subject: [Python-Dev] genexps slow?
In-Reply-To: <004901c416e8$313076e0$722ec797@oemcomputer>
References: <200403310529.i2V5TNJ11171@guido.python.org>
	<004901c416e8$313076e0$722ec797@oemcomputer>
Message-ID: <20040330223208.E61C.JCARLSON@uci.edu>

> Also, try a timing with "sum(x for x in xrange(100000))".  This will
> highlight the differences in behavior.  With xrange, this genexp will be
> able to execute entirely in cache.

It'll also overflow into longs on 32-bit processors.  It may be better to stick with
max() or min() to reduce the effect of integer adds.  The one drawback
is that you /may/ need to use big ranges in order to note significant
differences in timings.
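
(Concretely, the sum spills over into longs almost immediately:

  >>> sum(xrange(100000))
  4999950000L

so part of what gets timed is long-int arithmetic.)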

 - Josiah


From jiwon at softwise.co.kr  Wed Mar 31 02:41:43 2004
From: jiwon at softwise.co.kr (jiwon)
Date: Wed Mar 31 02:43:35 2004
Subject: [Python-Dev] genexps slow?
References: <004901c416e8$313076e0$722ec797@oemcomputer>
Message-ID: <0cd001c416f3$b263b780$d70aa8c0@jiwon>

> [Tim]
> >  Make r a lot bigger, and I
> > expect
> > the genexp will get relatively faster (due to better cache behavior).

[Raymond ]
> Also, try a timing with "sum(x for x in xrange(100000))".  This will
> highlight the differences in behavior.  With xrange, this genexp will be
> able to execute entirely in cache.

I just tried timing it. For your information, here are the results. :)

[jiwon@holmes] ./python ./Lib/timeit.py -s 'r=range(10000000)' 'sum(x for x in r)'
10 loops, best of 3: 4.83 sec per loop
[jiwon@holmes] ./python ./Lib/timeit.py -s 'r=range(10000000)' 'sum([x for x in r])'
10 loops, best of 3: 6.49 sec per loop
[jiwon@holmes] ./python ./Lib/timeit.py  'sum(x for x in xrange(10000000))'
10 loops, best of 3: 5.14 sec per loop
[jiwon@holmes] ./python ./Lib/timeit.py  'sum([x for x in xrange(10000000)])'
10 loops, best of 3: 5.85 sec per loop

Tried with the lazy-binding version of genexpr, and the non-optimized
version of listcomp.


From pf_moore at yahoo.co.uk  Wed Mar 31 03:00:06 2004
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Wed Mar 31 03:00:10 2004
Subject: [Python-Dev] Re: Timing for Py2.4
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>
	<d66ucick.fsf@python.net>
	<u106nn00.fsf@yahoo.co.uk> <vfkmays7.fsf@python.net>
	<ptaunhwp.fsf@yahoo.co.uk> <4069E7B2.2080701@v.loewis.de>
Message-ID: <lllhv43d.fsf@yahoo.co.uk>

"Martin v. L?wis" <martin@v.loewis.de> writes:

> Of course it won't. You need VC7.1, which comes as part of
> Visual Studio .NET 2003.

[...]

> I doubt it could work, in the general case. VC7 uses msvcr7.dll, whereas
> VC7.1 uses msvcr71.dll. Mixing different versions of the C runtime is
> not supported by Microsoft.

Ah. I'd missed the ".1". So there is no free version of the compiler
used to build the official Python Windows distribution. In that case,
I agree - there's no way this could work (at least, other than by
unsupported mixing of runtimes, which may work by accident :-)).

Thanks for pointing this out,
Paul.
-- 
This signature intentionally left blank


From aleaxit at yahoo.com  Tue Mar 30 14:39:02 2004
From: aleaxit at yahoo.com (Alex Martelli)
Date: Wed Mar 31 03:02:54 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 91
In-Reply-To: <40698DA4.3050900@gradient.cis.upenn.edu>
References: <E1B8HVQ-0000ox-6T@mail.python.org>
	<40698DA4.3050900@gradient.cis.upenn.edu>
Message-ID: <E5C507C7-8281-11D8-AD6E-000A95EFAE9E@yahoo.com>


On 2004 Mar 30, at 17:09, Edward Loper wrote:
    ...
> On a related note, now that Python has class methods, is there much 
> point in a "singleton" pattern?  In particular, why not just make a 
> class that only defines class methods, and uses the class namespace to 
> store variables (instead of an instance namespace)?

I think it's generally better to use a module when what you want is a 
module's functionality, and that's what you seem to be describing here. 
  What using a class _instance_ buys you is special methods (and 
possibly descriptors), e.g. sometimes you might want __getattr__ (or a 
property) on "something of which there is a single instance" (avoiding 
the "singleton" word by paraphrase -- IMHO, "singleton" means "a set 
with a single element", NOT "the single element of that set", which is 
different -- anyway...).  Generally, something like:

class foo(object):
    ''' whatever you want in here '''
foo = foo()

deals even with such obscure desiderata.  Left out of all this (and a 
key force in the Gof4 writeup of "Singleton") is _inheritance_ -- the 
ability for the user to _subclass_ the singleton's class while still 
preserving the "there can be only one" constraint (best known as "the 
Highlander rule"...).  IMHO, even if Guido hates it, my Borg 
non-pattern is still the best ("least bad") way to provide such VERY 
obscure feechurs.  However, in the simpler approach above,

class _myspecialfoo(foo.__class__):
    ''' tweak a few things here '''
foo.__class__ = _myspecialfoo

can also work.  All which being said -- in almost 5 years of coding 
almost only in Python, I think I found myself needing anything 
"singletonish" (beyond a plain module), in production code, just once, 
and even that once went away once the code that used to rely on it got 
restructured, refactored and cleaned up.  This personal experience is 
certainly a good part of what makes me believe that the importance of 
everything singletonish is vastly exaggerated in popular perception -- 
almost _every_ other classic DP is more important and interesting, yet 
Singleton keeps coming up nevertheless...!
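
(For reference, the Borg idea boils down to sharing state rather than
enforcing a single identity -- stripped to its bare bones:

  class Borg(object):
      _shared_state = {}
      def __init__(self):
          self.__dict__ = self._shared_state

every instance is a distinct object, but they all see the same
attribute dictionary, and subclasses keep working.)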


Alex


From s.percivall at chello.se  Wed Mar 31 03:12:39 2004
From: s.percivall at chello.se (Simon Percivall)
Date: Wed Mar 31 03:12:43 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <c4dbo8$ddh$1@sea.gmane.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org>
Message-ID: <2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se>

On 2004-03-31, at 04.52, Terry Reedy wrote:

> Taking the prefix position as given for the moment, why overload list
> literal syntax versus something currently illegal and meaningless?  
> Such as
>
> <decorator>      # almost as easy, or
>

Yes. This looks better and will make it more clear that it's a special 
case. Otherwise the decorators will look too decoupled from the 
function.


From mwh at python.net  Wed Mar 31 05:36:32 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 31 05:36:36 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org> (Guido van
	Rossum's message of "Tue, 30 Mar 2004 13:21:18 -0800")
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <2m3c7pjob3.fsf@starship.python.net>

Guido van Rossum <guido@python.org> writes:

>> Another possibility that has been suggested is
>> 
>> [decorator] 
>> def func(arg, arg):
>
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

Seems like it would be painful to implement.

Obviously that's not a killer blow, but:

If the implementation is hard to explain, it's a bad idea.

Cheers,
mwh

-- 
  If design space weren't so vast, and the good solutions so small a
  portion of it, programming would be a lot easier.
                                            -- maney, comp.lang.python

From mwh at python.net  Wed Mar 31 05:45:25 2004
From: mwh at python.net (Michael Hudson)
Date: Wed Mar 31 05:45:39 2004
Subject: [Python-Dev] genexps slow?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEHFJOAB.tim.one@comcast.net> (Tim
	Peters's message of "Wed, 31 Mar 2004 00:51:20 -0500")
References: <LNBBLJKPBEHFEDALKOLCKEHFJOAB.tim.one@comcast.net>
Message-ID: <2my8phi9bu.fsf@starship.python.net>

"Tim Peters" <tim.one@comcast.net> writes:

>  Resuming a generator function is a lot cheaper than calling a
> function, but there's still a non-trivial amount of code to get in
> and out of eval_frame() each time, which the listcomp version gets
> to skip.

This is something that occurred to me a while ago: how many opcodes
does a typical invocation of eval_frame actually execute?  A little
script told me that the median length of functions in Lib/*.py was 38
instructions (or 52 bytes) IIRC, but obviously a dynamic count is far
more interesting.  If the number is fairly small (and honestly, I have
no idea), the set up and tear down code becomes much more significant
than one might think.
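
Roughly this sort of thing, if anyone wants to redo the count (a static
count over compiled code objects, so only approximate):

  import dis, glob

  def count_ops(code):
      # number of opcodes in one code object; opcodes with an
      # argument take two extra bytes
      n, i, bytecode = 0, 0, code.co_code
      while i < len(bytecode):
          if ord(bytecode[i]) >= dis.HAVE_ARGUMENT:
              i += 3
          else:
              i += 1
          n += 1
      return n

  lengths = []
  for name in glob.glob('Lib/*.py'):
      try:
          top = compile(open(name).read(), name, 'exec')
      except SyntaxError:
          continue
      todo = [top]
      while todo:
          code = todo.pop()
          todo.extend([c for c in code.co_consts if hasattr(c, 'co_code')])
          if code is not top:          # skip the module-level code object
              lengths.append(count_ops(code))
  lengths.sort()
  print 'median:', lengths[len(lengths) // 2]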

I didn't think much about the code to get *out* of eval_frame.

Cheers,
mwh

-- 
  I love the way Microsoft follows standards.  In much the same
  manner that fish follow migrating caribou.           -- Paul Tomblin
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html

From garth at garthy.com  Wed Mar 31 06:11:05 2004
From: garth at garthy.com (Gareth)
Date: Wed Mar 31 06:18:10 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <4069E7B2.2080701@v.loewis.de>
References: <16E1010E4581B049ABC51D4975CEDB88052CAFF4@UKDCX001.uk.int.atosorigin.com>	<d66ucick.fsf@python.net>	<u106nn00.fsf@yahoo.co.uk>	<vfkmays7.fsf@python.net>
	<ptaunhwp.fsf@yahoo.co.uk> <4069E7B2.2080701@v.loewis.de>
Message-ID: <406AA749.5080108@garthy.com>

Martin v. Löwis wrote:

> Paul Moore wrote:
>
>> error: Python was built with version 7.1 of Visual Studio, and
>> extensions need to be built with the same version of the compiler,
>> but it isn't installed.
>>
>> So even though I have the free VC7 compiler, and it's in my PATH, I
>> still can't build a simple extension with it.
>
>
> Of course it won't. You need VC7.1, which comes as part of
> Visual Studio .NET 2003.
>
The free compiler is based on VC7.1, so it will build extensions; it
just needs distutils to be modified to look for the free compiler.

It looks like it's just a case of looking in the right registry key.
I've never looked at distutils before, but I'll have a go.

Garth

From barry at python.org  Wed Mar 31 07:48:58 2004
From: barry at python.org (Barry Warsaw)
Date: Wed Mar 31 07:49:05 2004
Subject: [Python-Dev] PEP 318: Set attribs with .name = value
In-Reply-To: <200403310506.i2V560H11076@guido.python.org>
References: <Pine.LNX.4.58.0403300136500.18028@server1.LFW.org>
	<20040330113939.GF7833@frobozz>
	<1080698537.406a26a9bbf5f@mail.iinet.net.au>
	<200403310506.i2V560H11076@guido.python.org>
Message-ID: <1080737337.22341.73.camel@anthem.wooz.org>

On Wed, 2004-03-31 at 00:06, Guido van Rossum wrote:

> It is currently valid syntax with valid semantics, and strongly
> suggest something it isn't: it looks as if the attribute assignment is
> done at function call time rather than at function definition time.

That's a fair cop.  I retract. :)

-Barry



From pedronis at bluewin.ch  Wed Mar 31 08:10:47 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Wed Mar 31 08:05:53 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <c4dbo8$ddh$1@sea.gmane.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <5.2.1.1.0.20040331150445.02959ef0@pop.bluewin.ch>

At 21:52 30.03.2004 -0500, Terry Reedy wrote:

>[[decorator]]      # easy enough to type, or

this is already valid syntax today

><decorator>      # almost as easy, or
><<decorator>>
>
>Is there near precedent in other languages that I don't know of?

VB.NET uses <...> for .NET attributes instead of [...].

OTOH <...> is the usual syntax for parametrized types, e.g. C++
templates, Java and C# generics, and others.

It seems VB.NET will use a syntax with (...) for generics.


From FBatista at uniFON.com.ar  Wed Mar 31 08:11:48 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 08:13:46 2004
Subject: [Python-Dev] repr(1.1)
Message-ID: <A128D751272CD411BC9200508BC2194D0338380D@escpl.tcp.com.ar>

Jewett, Jim wrote:

#- For that matter, Decimal might be a better default format for 1.1, if
#- a language were starting fresh.  It still wouldn't be 
#- perfect, though.
#- How many digits should 1.1/3 print?

That depends on your context.

If the precision in your context is set to 9 (default), it'll print:

>>> import Decimal
>>> d1 = Decimal.Decimal(1)
>>> d3 = Decimal.Decimal(3)
>>> d1 / d3
Decimal( (0, (3, 3, 3, 3, 3, 3, 3, 3, 3), -9L) )
>>> str(d1/d3)
'0.333333333'

.	Facundo

From pedronis at bluewin.ch  Wed Mar 31 08:22:19 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Wed Mar 31 08:17:33 2004
Subject: [Python-Dev] PEP 318: Security use case
In-Reply-To: <1080655583.12643.28.camel@localhost.localdomain>
References: <Pine.LNX.4.58.0403300133000.18028@server1.LFW.org>
	<Pine.LNX.4.58.0403300133000.18028@server1.LFW.org>
Message-ID: <5.2.1.1.0.20040331151146.03362e20@pop.bluewin.ch>

At 09:06 30.03.2004 -0500, Jeremy Hylton wrote:
>On Tue, 2004-03-30 at 06:17, Ka-Ping Yee wrote:
> > Inner scopes are one of the best places to hide things in Python;
> > they are very difficult to get at.  (I can't seem to find any
> > special attributes that access the values inside them, and even
> > if there is a way, it would be easy to imagine a restricted
> > execution mode that wouldn't expose them.)
>
>It's by design that there is no meta way to get at bindings for free
>variables.  I don't think I said anything about at in the PEP, but I was
>thinking of JAR's thesis (http://mumble.net/~jar/pubs/secureos/).

the only way I know to get at them is something like this (someone once 
asked on comp.lang.python):

 >>> def mk_acc(x=None):
...   return lambda: x
...
 >>> acc = mk_acc()
 >>> import new
 >>> def deref(cell):
...   return new.function(acc.func_code,{},"#deref",(),(cell,))()
...
 >>> def mk_test(y='foo'):
...   return lambda : y
...
 >>> deref(mk_test().func_closure[0])
'foo'
 >>>

so yes: they are difficult to get at, and it's easy to imagine a restricted 
execution mode that wouldn't expose them, i.e. that wouldn't be a hard part 
of such a design. 


From FBatista at uniFON.com.ar  Wed Mar 31 08:16:48 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 08:18:57 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D0338380E@escpl.tcp.com.ar>

Andrew P. Lentvorski, Jr. wrote:

#- > The only thing Decimal will have against it in the end is 
#- runtime sloth.
#- 
#- While the Decimal implementation is in Python, certainly.
#- 
#- However, I did some floating point calculation timings a 
#- while back, and
#- the Python FP system is slow due to the overhead of working 
#- out types,
#- unpacking the values, and repacking the value.  The actual native
#- calculation is a small portion of that time.
#- 
#- My question is: Is it possible that a C implementation of 
#- Decimal would be
#- almost as fast as native floating point in Python for 
#- reasonable digit
#- lengths and settings? (ie. use native FP as an approximation 
#- and then do
#- some tests to get the last digit right).

While I'll do my best to keep Decimal fast, I'll first make it usable.  As
Aahz's signature used to say, it is easier to optimize correct code than to
correct optimized code.

Anyway, I still don't know how much slower Decimal is than built-in binary
floating point.

.	Facundo

From FBatista at uniFON.com.ar  Wed Mar 31 08:21:16 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 08:23:14 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D03383810@escpl.tcp.com.ar>

Aahz wrote:

#- Actually, the whole point of the Decimal class is that it's easy to
#- implement.  Once we agree on the API and semantics, converting to C
#- should be not much harder than trivial.

And we'll have the test suite finished and people using the Decimal python
implementation.

.	Facundo





From FBatista at uniFON.com.ar  Wed Mar 31 08:24:06 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 08:26:11 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D03383811@escpl.tcp.com.ar>

Greg Ewing wrote:

#- But maybe it would still be "fast enough" doing it all
#- properly in decimal?

The issue here (I didn't solve it yet) is:

	How fast is fast enough?

.	Facundo

From dave at boost-consulting.com  Wed Mar 31 08:32:19 2004
From: dave at boost-consulting.com (David Abrahams)
Date: Wed Mar 31 08:40:31 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org>
Message-ID: <uk71116sc.fsf@boost-consulting.com>

"Terry Reedy" <tjreedy@udel.edu> writes:

> <decorator>      # almost as easy, or
> <<decorator>>
>
> Is there near precedent in other languages that I don't know of?

I want to caution against these.  Turning </>, which already parse as
un-paired, into angle brackets caused embarrassing parsing problems for
C++.  Now such things as 

      A::foo<X> y;

have to be written as

      A::template foo<X> y;
         ^^^^^^^^-----------keyword disambiguates parse

in some cases.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com


From FBatista at uniFON.com.ar  Wed Mar 31 09:47:06 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 09:49:26 2004
Subject: [Python-Dev] Decimal test cases finished
Message-ID: <A128D751272CD411BC9200508BC2194D03383816@escpl.tcp.com.ar>

People:

I finished the test cases of the Decimal module. The file is
test_Decimal.py:

http://cvs.sourceforge.net/viewcvs.py/python/python/nondist/sandbox/decimal/
test_Decimal.py

This should test all the specifications of PEP 327 (which I submitted to
David Goodger asking him how to call for a review).

When the PEP gets accepted, I'll review the test cases and then I'll start to
work on the Decimal module itself.

Please, if you're interested in Decimal, review the test cases. Any feedback
will be very welcome.

Thank you!

.	Facundo





From guido at python.org  Wed Mar 31 10:39:57 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 10:40:42 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 11:36:32 +0100."
	<2m3c7pjob3.fsf@starship.python.net> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org> 
	<2m3c7pjob3.fsf@starship.python.net> 
Message-ID: <200403311539.i2VFdvO12990@guido.python.org>

> Seems like it would be painful to implement.

Not at all.  Stay tuned. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 31 10:42:32 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 10:42:43 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 10:12:39 +0200."
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org> 
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se> 
Message-ID: <200403311542.i2VFgWf13002@guido.python.org>

> > Taking the prefix position as given for the moment, why overload list
> > literal syntax versus something currently illegal and meaningless?  
> > Such as
> >
> > <decorator>      # almost as easy, or
> 
> Yes. This looks better and will make it more clear that it's a special 
> case. Otherwise the decorators will look too decoupled from the 
> function.

Why does <...> look better than [...]?  To me, <...> just reminds me
of XML, which is totally the wrong association.

There are several parsing problems with <...>: the lexer doesn't see <
and > as matched brackets, so you won't be able to break lines without
using a backslash, and the closing > is ambiguous -- it might be a
comparison operator.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From jack at performancedrivers.com  Wed Mar 31 10:53:21 2004
From: jack at performancedrivers.com (Jack Diederich)
Date: Wed Mar 31 10:53:35 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302258.i2UMwfp10162@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<200403302258.i2UMwfp10162@guido.python.org>
Message-ID: <20040331155321.GA15543@performancedrivers.com>

My use cases are at the bottom, but a couple comments first

On Tue, Mar 30, 2004 at 02:58:41PM -0800, Guido van Rossum wrote:
> > The most obvious issues I see with this is are that:
> >   - grepping for "def func" won't show any sign that its decorated, and 
> 
> Seems rather minor -- there's only so much you can cram on a single
> line.
see below, in my app I would never hit the 80 char mark w/ inline decorators.


> > Neither of these are fatal flaws, but I think I still prefer:
> > 
> >     def func(args) [decorator]:
> 
> Which hides 'classmethod' behind the more voluminous stuff, and that's
> my main gripe.

I don't think 'classmethod' is more important than the function def,
def func(cls) [staticmethod]: pass
is obviously wrong by convention and very easy to spot by eye or with 
a future version of pychecker.

def [classmethod] func(cls): pass
I read this as redundant, and grepping (eye and machine) just got much harder.

Here is my use case, this is a production application written in python.
It isn't a framework so there aren't many instances of decorators.

27k lines of python
371 classes
1381 functions (includes class and module functions)

function decorators,
  staticmethod 12
  classmethod 4
  memoize 1

class decorators
  Factory.register 50
  run_nightly 45

Here are some typical use cases out of that set done in my preferred style
of decorators last.

-- begin samples --

# staticmethods are almost always factory methods
class Contact(Entity):
  def new_by_email(email) [staticmethod]: pass
class CollectFactory(object):
  def class_ob(db_id, form_id) [staticmethod]: pass

# but sometimes helpers in a namespace
class Db(object):
  def client_database_name(client_id) [staticmethod]:
    return 'enduser_%d' % (client_id)

# classmethods are a mixed bag
class DataSet(object):
  table_name = 'foo'
  def drop_table(cls) [classmethod]: pass
  def create_table(cls) [classmethod]: pass

class Collect(object):
  def validate_field_trans(cls) [classmethod]: pass
  """this is run in the test suite, it just makes sure any derivative 
     classes are sane
  """

# and one memoize
cache_here = {}
def days_of_week(day_in_week) [memoize(cache_here)]:
  """for the passed in day, figure out what YEARWEEK it is and return a list 
     of all the days in that week.
  """

# the class decorators all register with a factory of
# their appropriate type.  I currently do this with metaclasses
class NewUsers(DataSet) [run_nightly]: pass # 45 of these

# note, not all DataSet derivatives run_nightly, the current way
# I get around this while using metaclasses is
class NewUsers(DataSet):
  run_nightly = 0 # run_nightly, 1/4 of DataSets set this to false
    
CF = CollectFactory
class Newsletter(Collect) [CF.register(db=4, form_id=3)]: pass # about 50 of these
# see CollectFactory.class_ob() above

-- end samples --

There are still some spots where I will continue to use metaclasses
instead of class decorators, namely where I want to keep track of all the leaf
classes.  But usually I want to decorate classes on a case-by-case basis.

These numbers are only representative of one application.
SPARK or other framework type things are going to use decorators a lot
more pervasively.  The number of people writing applications has to
be much higher than the number of people writing frameworks.

I'll use decorators whatever they look like, but they aren't more important to
me than the function definition.  And I don't want to have to relearn how to
grep for them.  So I prefer

def foo(self, bar) [decorator1]: pass

In my 27 thousand lines of python this will ALWAYS fit on one line and it
won't change how I eye-grep or machine-grep for functions.

-jackdied, BUFL (Benign User For Life)



From ark-mlist at att.net  Wed Mar 31 11:04:12 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 11:04:15 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEGCJOAB.tim.one@comcast.net>
Message-ID: <007801c41739$cf5b2c10$6402a8c0@arkdesktop>

> > You argued against applying the Scheme rules because that would make
> > marshalling less accurate when the unmarshalling is done on a machine
> > with longer floats.


> I said the 754 committee had that objection.

OK, but either that objection is relevant for Python or it isn't.

> This was discussed on David Hough's numeric-interest mailing list at the
> time Clinger and Steele/White published their float<->string papers, and
> "phooey" was the consensus of the 754 folks on the mailing list at the
> time. 

Interesting -- I didn't know that.  But it would make sense -- they're
probably more interested in cross-platform computational accuracy than they
are in convenience for casual uses.

> I personally don't think decimal strings are a sane way to transport
> binary floats regardless of rounding gimmicks.

Fair enough.

> > But on such a machine, 17 digits won't be good enough anyway.


> Doesn't change that 17 digits gets closer than shortest-possible:  the art
> of binary fp is about reducing error, not generally about eliminating
> error.
> Shortest-possible does go against the spirit of 754 in that respect.

That's a fair criticism.  On the other hand, maybe ?!ng is right about the
desirable properties of display for people being different from those for
marshalling/unmarshalling.

> > That's what I meant.  Rather than 0.47 from exact, I meant 0.47 from
> > the best possible.

> Well, you originally said that in response to my saying that the standard
> doesn't require perfect rounding (and it doesn't), and that the standard
> has different accuracy requirements for different inputs (and it does).
> So now I'm left wondering what your original "I thought that ..." was
> trying to get across.

I don't remember; sorry.

> > Hey, I know some people who write C programs that don't rely on the
> > platform C libraries for anything :-)

> Python would love to grab their I/O implementation then <0.8 wink>.

http://www.research.att.com/sw/tools/sfio/

I think the licensing terms are compatible with Python, if you're serious.


From ark-mlist at att.net  Wed Mar 31 11:09:13 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 11:09:16 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040331012812.GA1541@panix.com>
Message-ID: <008401c4173a$82e7fc40$6402a8c0@arkdesktop>

> I've read the whole thread, and I wanted to repeat a critical point for
> emphasis:
> 
>     This doesn't help
> 
> No matter what you do to improve conversion issues, you're still dealing
> with the underlying floating-point problems, and having watched the
> changing discussions in c.l.py since we moved to the different conversion
> system, it seems clear to me that we've improved the nature of the
> discussion by forcing people to get bitten earlier.

On the other hand, it is pragmatically more convenient when an
implementation prints a floating-point literal that was entered with a
small number of significant digits using that same number of significant
digits.

If I can enter a number as 0.1, printing that number as 0.1 does not
introduce any errors that were not already there, as proved by the fact that
reading that 0.1 back will yield exactly the same value.
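
For illustration, this is roughly what a Python 2.3-era session shows
(repr uses 17 significant digits, str uses 12, and the short form reads
back to the identical value):

    >>> 0.1                  # repr
    0.10000000000000001
    >>> print 0.1            # str
    0.1
    >>> float("0.1") == 0.1  # the short decimal round-trips exactly
    True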



From s.percivall at chello.se  Wed Mar 31 11:10:23 2004
From: s.percivall at chello.se (Simon Percivall)
Date: Wed Mar 31 11:10:28 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311542.i2VFgWf13002@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org>
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se>
	<200403311542.i2VFgWf13002@guido.python.org>
Message-ID: <EA073DB9-832D-11D8-8C75-0003934AD54A@chello.se>


On 2004-03-31, at 17.42, Guido van Rossum wrote:

>>> Taking the prefix position as given for the moment, why overload list
>>> literal syntax versus something currently illegal and meaningless?
>>> Such as
>>>
>>> <decorator>      # almost as easy, or
>>
>> Yes. This looks better and will make it more clear that it's a special
>> case. Otherwise the decorators will look too decoupled from the
>> function.
>
> Why does <...> look better than [...]?  To me, <...> just reminds me
> of XML, which is totally the wrong association.
>
> There are several parsing problems with <...>: the lexer doesn't see <
> and > as matched brackets, so you won't be able to break lines without
> using a backslash, and the closing > is ambiguous -- it might be a
> comparison operator.
>

I'm not sure <decorator> is the best solution but it sure is better 
than [decorator]. Someone learning Python will probably be profoundly 
confused when seeing that special case. It also really feels like the 
decorators are decoupled from the function. The coupling between the 
decorators and the function would perhaps not appear to be greater with 
another syntax; but it would stand out.

Unlike docstrings which are used a lot and which documentation and 
books explain early on, decorators won't be used that much. It would 
only make it harder to learn Python if someone looks through code and 
sees a decorator. Also: When explaining lists in a book you'd have to 
mention that "there are special cases which we will discuss in an 
addendum to chapter 27".

Also, it'll make it so much harder to search for decorators if they are 
lists instead of a special syntax.


From aahz at pythoncraft.com  Wed Mar 31 11:53:02 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 31 11:53:06 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <008401c4173a$82e7fc40$6402a8c0@arkdesktop>
References: <20040331012812.GA1541@panix.com>
	<008401c4173a$82e7fc40$6402a8c0@arkdesktop>
Message-ID: <20040331165302.GA12976@panix.com>

On Wed, Mar 31, 2004, Andrew Koenig wrote:
>Aahz:
>>
>> I've read the whole thread, and I wanted to repeat a critical point for
>> emphasis:
>> 
>>     This doesn't help
>> 
>> No matter what you do to improve conversion issues, you're still dealing
>> with the underlying floating-point problems, and having watched the
>> changing discussions in c.l.py since we moved to the different conversion
>> system, it seems clear to me that we've improved the nature of the
>> discussion by forcing people to get bitten earlier.
> 
> On the other hand, it is pragmatically more convenient when an
> implementation prints a floating-point literal that was entered with a
> small number of significant digits using that same number of
> significant digits.

Pragmatically more convenient by what metric?  No matter how you slice
it, binary floating point contains surprises for the uninitiated.  The
question is *WHEN* do you hammer the point home?  I've yet to see you
address this directly.

> If I can enter a number as 0.1, printing that number as 0.1 does not
> introduce any errors that were not already there, as proved by the
> fact that reading that 0.1 back will yield exactly the same value.

It's not a matter of introducing errors, it's a matter of making the
errors visible.  Python is, among other things, a language suitable for
introducing people to computers.  That's why the Zen of Python contains
such gems as

    Explicit is better than implicit.
    Errors should never pass silently.
    In the face of ambiguity, refuse the temptation to guess.

If you're going to continue pressing your point, please elucidate your
reasoning in terms of Python's design principles.
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From pje at telecommunity.com  Wed Mar 31 12:32:35 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 12:34:16 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403310038.i2V0cY910483@guido.python.org>
References: <Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
Message-ID: <5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>

At 04:38 PM 3/30/04 -0800, Guido van Rossum wrote:
> > > > Hm.  So if we reversed the order so that the outermost
> > > > decorators (such as classmethod) come first in the list, would
> > > > that sway you to relent in favor of decorators-after-arguments?
> > >
> > >Not really, because they're still hidden behind the argument list.
> >
> > Because:
> >
> > 1) it won't be on the same line if there are lots of arguments
> > 2) nobody will read past the argument list
> > 3) other
> > 4) all of the above
> > 5) none of the above
> >
> > (Not trying to change your opinion; I just think the answer to this should
> > go in the PEP.)
>
>1&2, mostly.

There appears to be a strong correlation between people who have specific 
use cases for decorators, and the people who want the last-before-colon 
syntax.  Whereas, people who have few use cases (or don't like decorators 
at all) appear to favor syntaxes that move decorators earlier.  Whether 
that means the "earlier" syntaxes are better or worse, I don't know.  <0.5 
wink>

I'll assume your intent is to prevent decorators from biting the unwary -- 
specifically people who *don't* use decorators a lot and therefore are not 
looking for them.  I will therefore focus now on the issues with the 
"previous line" syntax that may bite people, with an eye to how they might 
be fixed.


> > By the way, you didn't mention whether it's okay to put the decorators on
> > the same logical line, e.g.:
> >
> > [classmethod] def foo(bar,baz):
> >      # body goes here
> >
> > If the rationale here is that we're copying C#, I'd think that it
> > should be permissible, even though it looks a bit ugly and tempts me
> > to indent the body to align with the function name.
>
>This is much harder to do with the current parser.  (My plan would be
>to tie the list expression and the function definition together in the
>code generating phase, just like doc strings.)

Yeah, this is also now going to have to be a special case for documentation 
processing tools.  Whereas if it were part of the definition syntax, it would 
be more directly available in the parse tree.  It also seems to be working 
against the AST branch a bit, in that I would expect the decorator 
expressions to be part of the function definition node, rather than in an 
unrelated statement.  And, it's also going to be interesting to document in 
the language reference, since the grammar there is going to continue to 
diverge from the "real" grammar used by the implementation.

Another issue...  is this valid?

[classmethod]

def foo(bar,baz):
     pass

How about this?

[classmethod]
# Okay, now we're going to define something...
def foo(bar,baz):
     pass

If they *are* valid, then you can have nasty effects at a distance.  If 
they *aren't* valid, accidentally adding or removing whitespace or comments 
can silently change the meaning of the program, and *not* in a DWIMish way.

I personally would rather have the decorators required to be on the same 
logical line, and then use:

[classmethod] \
def foo(bar,baz):
     pass

for visual separation.  The backslash visually alerts that this is *not* a 
mere bare list.

I'm not a parser guru by any stretch of the imagination, but wouldn't it be 
possible to simply create a statement-level construct that was something like:

liststmt: '[' [listmaker] ']' ( funcdef | restofliststmt )

and put it where it matches sooner than the expression-based versions of 
the statement?  It seems like the main complexity would be the possibility 
of having to duplicate a number of levels of containing rules for 
'restofliststmt'.  But maybe I'm completely off base here and there's no 
sensible way to define a correct 'restofliststmt'.


From FBatista at uniFON.com.ar  Wed Mar 31 12:40:53 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 12:42:56 2004
Subject: [Python-Dev] PEP 327: Decimal data type
Message-ID: <A128D751272CD411BC9200508BC2194D0338381A@escpl.tcp.com.ar>

People:

This PEP is ready for your review.

As always, any feedback is very welcomed.

Thank you!

.	Facundo

From jtk at yahoo.com  Wed Mar 31 12:29:49 2004
From: jtk at yahoo.com (Jeff Kowalczyk)
Date: Wed Mar 31 12:50:53 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
Message-ID: <pan.2004.03.31.17.29.48.90660@yahoo.com>

Jeremy Hylton wrote:
> Another possibility that has been suggested is
> [decorator]
> def func(arg, arg):

The decorator before def spelling is familiar from C#, and
looks the nicest to me. However, I'm curious about the way
decorators differ from docstrings to explain putting one
kind of metadata shorthand before def and the other after.

[decorator]
def func(arg, arg):
    """Docstring"""
    pass

def func(arg, arg):
    [decorator]
    """Docstring"""
    pass

If there will only ever be two of these kinds of things,
and a consensus is to put them on lines before and after
the def, that would seem to be reason enough for 
decorator before def.

If someday there were other types of metadata distinct
from decorators, perhaps immutable and pertaining to
secure execution, the before-docstring spelling might
allow for more consistent additions to the shorthand.

def func(arg, arg):
    [decorator]
    (otherator)
    """Docstring"""
    pass


From neal at metaslash.com  Wed Mar 31 12:56:00 2004
From: neal at metaslash.com (Neal Norwitz)
Date: Wed Mar 31 12:56:06 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
References: <5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
Message-ID: <20040331175600.GN18810@epoch.metaslash.com>

On Wed, Mar 31, 2004 at 12:32:35PM -0500, Phillip J. Eby wrote:
> 
> I personally would rather have the decorators required to be on the same 
> logical line, and then use:
> 
> [classmethod] \
> def foo(bar,baz):
>     pass
> 
> for visual separation.  The backslash visually alerts that this is *not* a 
> mere bare list.

I agree with Phillip about the backslash.  But I don't like this
variant because it appears to operate by side-effect.  If the list had
a keyword after it, that wouldn't be as bad, but the only current
keywords I can think of (in, for) don't fit well and would overload
their meaning.  I don't really like any variants, but the original
seems the least bad to me:

        def foo(bar, baz) [classmethod]:

Some of the concerns deal with the decorators getting lost after the
arguments or they are too far away from the function name.  It seems
to me that if formatted properly, this isn't as big of a deal:

        def foo(cls, lots, of, arguments, that, will, not,
                fit, on, a, single, line) \
        [classmethod, decorate(author='Someone', version='1.2.3',
                               other='param')]:
            """The docstring goes here."""

I hope that's a pretty unrealistic case.  I think all of the proposed
variants are ugly with the definition above.  But, this may be more
reasonable:

        def foo(cls, lots, of, arguments, all, on, a line) \
               [classmethod, 
                decorate(author='Someone', version='1.2.3', other='param')]:
            """The docstring goes here."""

Writing decorators this way is the least surprising to me.  Although,
I wish there was a better alternative.

Neal

From skip at pobox.com  Wed Mar 31 13:01:46 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 31 13:02:00 2004
Subject: [Python-Dev] OpenVMS file system and UNIVERSAL_NEWLINES support
In-Reply-To: <40688D7D.3030203@laposte.net>
References: <403A351B.6060007@laposte.net> <20040223183552.GA24056@panix.com>
	<40688D7D.3030203@laposte.net>
Message-ID: <16491.1930.522356.860553@montanaro.dyndns.org>


    >>> Building from time to time Python 2.4 from CVS snapshot, I have just
    >>> noticed that all the conditional compilation against
    >>> WITH_UNIVERSAL_NEWLINES has been removed.

    >> Please file a bug report on SF so that this doesn't get lost, then
    >> post the bug number here.

    Jean-François> I have submitted a fix for this problem, bug 903339

I'll try to get to this in the next couple days.  I believe I saved the
discussion of possible solutions.

Skip

From ark-mlist at att.net  Wed Mar 31 13:09:56 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 13:09:57 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <pan.2004.03.31.17.29.48.90660@yahoo.com>
Message-ID: <000101c4174b$5fdb2540$6402a8c0@arkdesktop>

If you spell decorators this way:

	[decorator]
	def func():
		pass

then what will happen when you type [decorator] at an interactive
interpreter prompt?

	>>> [decorator]

Will it type ... and wait for you to say more?  Or will it evaluate the
single-element list whose element is the value of the variable ``decorator''
and print the result?

If it types ... and waits for you to say more, will do so for any list?  Or
will it somehow figure out which lists represent decorators?


From ark-mlist at att.net  Wed Mar 31 13:09:56 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 13:10:08 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040331165302.GA12976@panix.com>
Message-ID: <000501c4174b$60f93f70$6402a8c0@arkdesktop>

> Pragmatically more convenient by what metric?

Short output is easier to read than long output.

> No matter how you slice it, binary floating point contains surprises
> for the uninitiated.  The question is *WHEN* do you hammer the point home?
> I've yet to see you address this directly.

I haven't, because I'm unconvinced that there is a single right answer.

Decimal floating-point has almost all the pitfalls of binary floating-point,
yet I do not see anyone arguing against decimal floating-point on the basis
that it makes the pitfalls less apparent.

> > If I can enter a number as 0.1, printing that number as 0.1 does not
> > introduce any errors that were not already there, as proved by the
> > fact that reading that 0.1 back will yield exactly the same value.

> It's not a matter of introducing errors, it's a matter of making the
> errors visible.  Python is, among other things, a language suitable for
> introducing people to computers.  That's why the Zen of Python contains
> such gems as

>     Explicit is better than implicit.
>     Errors should never pass silently.
>     In the face of ambiguity, refuse the temptation to guess.

> If you're going to continue pressing your point, please elucidate your
> reasoning in terms of Python's design principles.

Beautiful is better than ugly.
Simple is better than complex.
Readability counts.

When I write programs that print floating-point numbers I usually want to
see one of the following:

	* a rounded representation with n significant digits,
	  where n is significantly less than 17
	* a rounded representation with n digits after the decimal point,
	  where n is often 2
	* the unbounded-precision exact decimal representation of the
	  number (which always exists, because every binary floating-point
	  number has a finite exact decimal representation)
	* the most convenient (i.e. shortest) way of representing the
	  number that will yield exactly the same result when read

Python gives me none of these, and instead gives me something else entirely
that is almost never what I would like to see, given the choice.  I
understand that I have the option of requesting the first two of these
choices explicitly, but I don't think there's a way to make any of them the
default.
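
(For the record, the first two can be requested explicitly today with
format strings -- what is missing is a way to make any of them the
default.  A rough sketch:)

    >>> x = 1.0 / 3.0
    >>> print "%.6g" % x     # n significant digits, here n = 6
    0.333333
    >>> print "%.2f" % x     # n digits after the decimal point, here n = 2
    0.33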

I'm not picking on Python specifically here, as I have similar objections to
the floating-point behavior of most other languages aside from Scheme (which
is not to my taste for other reasons).  However, I do think that this issue
is more subtle than one that can be settled by appealing to slogans.  In
particular, I *do* buy the argument that the current behavior is the best
that can be efficiently achieved while relying on the underlying C
floating-point conversions.

If you're really serious about hammering errors in early, why not have the
compiler issue a warning any time a floating-point literal cannot be exactly
represented?  <0.5 wink>


From walter.doerwald at livinglogic.de  Wed Mar 31 13:23:49 2004
From: walter.doerwald at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Wed Mar 31 13:23:55 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <20040331175600.GN18810@epoch.metaslash.com>
References: <5.1.1.6.0.20040330185626.024903c0@telecommunity.com>	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>	<20040330222718.GJ7833@frobozz>	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>	<1080674306.12643.76.camel@localhost.localdomain>	<200403302121.i2ULLIM09840@guido.python.org>	<20040330222718.GJ7833@frobozz>	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
	<20040331175600.GN18810@epoch.metaslash.com>
Message-ID: <406B0CB5.8080909@livinglogic.de>

Neal Norwitz wrote:

> [...]
>         def foo(cls, lots, of, arguments, that, will, not,
>                 fit, on, a, single, line) \
>         [classmethod, decorate(author='Someone', version='1.2.3',
>                                other='param')]:
>             """The docstring goes here."""
> 
> I hope that's a pretty unrealistic case.  I think all of the proposed
> variants are ugly with the definition above.  But, this may be more
> reasonable:
> 
>         def foo(cls, lots, of, arguments, all, on, a line) \
>                [classmethod, 
>                 decorate(author='Someone', version='1.2.3', other='param')]:
>             """The docstring goes here."""
> 
> Writing decorators this way is the least surprising to me.  Although,
> I wish there was a better alternative.

Why not make the def look like a function call, i.e.:

    def(classmethod,
        decorate(author='Someone', version='1.2.3', other='param')) \
       foo(cls, lots, of, arguments, all, on, a line):
       """The docstring goes here."""

Bye,
    Walter Dörwald


From Scott.Daniels at Acm.Org  Wed Mar 31 13:24:09 2004
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Wed Mar 31 13:24:20 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <E1B8j8W-0006Mm-Kj@mail.python.org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org>
Message-ID: <406B0CC9.80302@Acm.Org>

A simple issue I have with:
     [classmethod, logged, debug]
     def function(args):
         ...
Is "How do you type this into Idle?"  I realize this is not the most
important of considerations, but access to experimentation is going to
be vital.  You can always force with:
 >>> if True:
         [classmethod, logged, debug]
         def function(args):
             ...

but I wonder if we want to go that route.
-- 
-Scott David Daniels
Scott.Daniels@Acm.Org

From guido at python.org  Wed Mar 31 13:26:10 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 13:26:16 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Tue, 30 Mar 2004 23:08:05 EST."
	<406A4425.4070500@zope.com> 
References: Your message of "Tue, 30 Mar 2004 14:18:26 EST."
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org> 
	<406A4425.4070500@zope.com> 
Message-ID: <200403311826.i2VIQBc13773@guido.python.org>

I've produced a patch for my version:

python.org/sf/926860

This patch (deco.diff) patches compile.c to recognize
the following form of decorators:

[list_of_expressions]
def func(args):
    ...

The list of expressions should contain at least one
element and should not be a list comprehension,
otherwise no special treatment is taken. (An empty list
has no effect either way.)

There's a simple test suite, Lib/test/test_decorators.py.

I don't expect to have time to discuss this until tonight.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Wed Mar 31 13:33:32 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 31 13:33:59 2004
Subject: [Python-Dev] python-dev Summary for 2004-03-01 through
	2004-03-15 [rough draft]
In-Reply-To: <2mptavlr5r.fsf@starship.python.net>
References: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>
	<2mptavlr5r.fsf@starship.python.net>
Message-ID: <16491.3836.631565.430107@montanaro.dyndns.org>


    >> Ever since I first had to type Martin v. L|o_diaeresis|wis' name, I
    >> have had issues with Unicode in the summary.  

    mwh> I take it you're not /all that/ interested in learning how to do
    mwh> real time spelling in emacs? <wink>

Michael,

Brett went to UC Berkeley.  I'm sure they don't even allow Emacs to be
installed on their machines. <wink>

Skip

From martin at v.loewis.de  Wed Mar 31 13:41:01 2004
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed Mar 31 13:41:20 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <406B0CC9.80302@Acm.Org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
Message-ID: <406B10BD.3030904@v.loewis.de>

Scott David Daniels wrote:
> A simple issue I have with:
>     [classmethod, logged, debug]
>     def function(args):
>         ...
> Is "How do you type this into Idle?"  

I could, at first, not understand what you were talking about.
I tried it, and it worked just fine.

I then realized that you are talking about interactive mode,
not the IDLE source editor.

> I realize this is not the most
> important of considerations, but access to experimentation is going to
> be vital.  You can always force with:
>  >>> if True:
>         [classmethod, logged, debug]
>         def function(args):
>             ...

I suggest that the break-less solution

[classmethod, logged, debug] def function(args):

should also be supported.

Regards,
Martin


From guido at python.org  Wed Mar 31 13:41:29 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 13:42:14 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 13:09:56 EST."
	<000101c4174b$5fdb2540$6402a8c0@arkdesktop> 
References: <000101c4174b$5fdb2540$6402a8c0@arkdesktop> 
Message-ID: <200403311841.i2VIfTG13884@guido.python.org>

> If you spell decorators this way:
> 
> 	[decorator]
> 	def func():
> 		pass
> 
> then what will happen when you type [decorator] at an interactive
> interpreter prompt?
> 
> 	>>> [decorator]
> 
> Will it type ... and wait for you to say more?  Or will it evaluate the
> single-element list whose element is the value of the variable ``decorator''
> and print the result?

The latter.  You can't add a decorator to a top-level function in
interactive mode unless you put it inside an if.

Check out the behavior of the patch I posted (926860).

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 31 13:43:36 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 13:43:43 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 10:24:09 PST."
	<406B0CC9.80302@Acm.Org> 
References: <E1B8j8W-0006Mm-Kj@mail.python.org>  <406B0CC9.80302@Acm.Org> 
Message-ID: <200403311843.i2VIhaU13904@guido.python.org>

> A simple issue I have with:
>      [classmethod, logged, debug]
>      def function(args):
>          ...
> Is "How do you type this into Idle?"  I realize this is not the most
> important of considerations, but access to experimentation is going to
> be vital.  You can always force with:
>  >>> if True:
>          [classmethod, logged, debug]
>          def function(args):
>              ...
> 
> but I wonder if we want to go that route.

Since it will normally be part of a class, you shouldn't have any
problem.  I expect that toplevel functions with decorators will be
rare enough to put up with the "if True" work-around.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From walter.doerwald at livinglogic.de  Wed Mar 31 13:58:09 2004
From: walter.doerwald at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Wed Mar 31 13:58:16 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311843.i2VIhaU13904@guido.python.org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
	<200403311843.i2VIhaU13904@guido.python.org>
Message-ID: <406B14C1.6080807@livinglogic.de>

Guido van Rossum wrote:

>>A simple issue I have with:
>>     [classmethod, logged, debug]
>>     def function(args):
>>         ...
>>Is "How do you type this into Idle?"  I realize this is not the most
>>important of considerations, but access to experimentation is going to
>>be vital.  You can always force with:
>> >>> if True:
>>         [classmethod, logged, debug]
>>         def function(args):
>>             ...
>>
>>but I wonder if we want to go that route.
> 
> 
> Since it will normally be part of a class, you shouldn't have any
> problem.  I expect that toplevel functions with decorators will be
> rare enough to put up with the "if True" work-around.

This looks ugly to me. I do have top level functions that would use
decorators because those functions will be put into a class as
classmethods and this class will be put into sys.modules instead of
the original module. Replacing

def foo(cls):
    ...
foo = classmethod(foo)

with

if True:
    [classmethod]
    def foo(cls):
       ...

doesn't look that attractive to me.

Bye,
    Walter Dörwald



From aahz at pythoncraft.com  Wed Mar 31 13:58:54 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 31 13:59:07 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <000501c4174b$60f93f70$6402a8c0@arkdesktop>
References: <20040331165302.GA12976@panix.com>
	<000501c4174b$60f93f70$6402a8c0@arkdesktop>
Message-ID: <20040331185854.GE22620@panix.com>

On Wed, Mar 31, 2004, Andrew Koenig wrote:
>
> Decimal floating-point has almost all the pitfalls of binary
> floating-point, yet I do not see anyone arguing against decimal
> floating-point on the basis that it makes the pitfalls less apparent.

Actually, decimal floating point takes care of two of the pitfalls of
binary floating point:

* binary/decimal conversion

* easily modified precision

When people are taught decimal arithmetic, they're usually taught the
problems with it, so they aren't surprised.  (e.g. 1/3)
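
(For instance, with the proposed module -- spelled here the way it
eventually shipped in the stdlib as decimal, with its 28-digit default
context:

    >>> from decimal import Decimal
    >>> print Decimal(1) / Decimal(3)
    0.3333333333333333333333333333

nobody is surprised by the trailing 3s.)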
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From ark-mlist at att.net  Wed Mar 31 14:02:42 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 14:02:45 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <20040331185854.GE22620@panix.com>
Message-ID: <001101c41752$beb003e0$6402a8c0@arkdesktop>

> > Decimal floating-point has almost all the pitfalls of binary
> > floating-point, yet I do not see anyone arguing against decimal
> > floating-point on the basis that it makes the pitfalls less apparent.

> Actually, decimal floating point takes care of two of the pitfalls of
> binary floating point:

> * binary/decimal conversion

> * easily modified precision

> When people are taught decimal arithmetic, they're usually taught the
> problems with it, so they aren't surprised.  (e.g. 1/3)

But doesn't that just push the real problems further into the background,
making them more dangerous? <0.1 wink>

For example, be it binary or decimal, floating-point addition is still not
associative, so even such a simple computation as a+b+c requires careful
thought if you wish the maximum possible precision.  Why are you not arguing
against decimal floating-point if your goal is to expose users to the
problems of floating-point as early as possible?


From michel at dialnetwork.com  Wed Mar 31 13:59:33 2004
From: michel at dialnetwork.com (Michel Pelletier)
Date: Wed Mar 31 14:11:59 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <E1B8j8V-0006Mm-FD@mail.python.org>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
Message-ID: <1080759573.2043.49.camel@heinlein>

> Message: 1
> Date: Wed, 31 Mar 2004 18:10:23 +0200
> From: Simon Percivall <s.percivall@chello.se>
> Subject: Re: [Python-Dev] Re: PEP 318: Decorators last before colon

I've been following this discussion closely and I would like to voice my
opinion.

Please don't add any decorator syntax to Python, at least not yet.  All
of the proposals I have seen so far are, to be blunt, and in my opinion
of course, ugly and are getting uglier as the discussion ensues.

I see nothing wrong, at least for the present, with the status quo
decorators that follow a function or class definition.  They are
explicit, functionally equivalent, use the existing and completely
understandable syntax, and are so rarely used by only the *most*
experienced and advanced programmers that violating the beauty of the
language is unjustified.

> I'm not sure <decorator> is the best solution but it sure is better 
> than [decorator].

I think they're both bad, especially on the line preceding the
definition.

>  Someone learning Python will probably be profoundly 
> confused when seeing that special case. 

I agree.

> It also really feels like the 
> decorators are decoupled from the function. 

And to me they always were and always will be even with the help of
syntax sugar.  I can accept that, in the rarest cases that I need to use
them.  

What's worse, if any of these proposals were to be accepted, I will have
to go and look up the special syntax in those very rarest cases, instead
of just spelling it the way that seems most natural to me, as a Python
programmer.

> The coupling between the 
> decorators and the function would perhaps not appear to be greater with 
> another syntax; but it would stand out.
> 
> Unlike docstrings which are used a lot and which documentation and 
> books explain early on, decorators won't be used that much.

I agree wholeheartedly, which is why I would hate to see special syntax.

-Michel


From guido at python.org  Wed Mar 31 14:20:13 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 14:20:25 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 20:41:01 +0200."
	<406B10BD.3030904@v.loewis.de> 
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>  
	<406B10BD.3030904@v.loewis.de> 
Message-ID: <200403311920.i2VJKD814087@guido.python.org>

> I suggest that the break-less solution
> 
> [classmethod, logged, debug] def function(args):

Sorry, can't do that with the current parser.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Mar 31 14:22:06 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 14:22:15 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 20:58:09 +0200."
	<406B14C1.6080807@livinglogic.de> 
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
	<200403311843.i2VIhaU13904@guido.python.org> 
	<406B14C1.6080807@livinglogic.de> 
Message-ID: <200403311922.i2VJM6u14105@guido.python.org>

> This looks ugly to me. I do have top level functions that would use
> decorators because those functions will be put into a class as
> classmethods and this class will be put into sys.modules instead of
> the original module. Replacing
> 
> def foo(cls):
>     ...
> foo = classmethod(foo)
> 
> with
> 
> if True:
>     [classmethod]
>     def foo(cls):
>        ...
> 
> doesn't look that attractive to me.

You won't have to do that except in interactive mode.  How often do
you type functions that need decorators interactively?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From tim.one at comcast.net  Wed Mar 31 14:41:36 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 31 14:41:45 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <001101c41752$beb003e0$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCAELHJOAB.tim.one@comcast.net>

[Andrew Koenig]
> ...
> For example, be it binary or decimal, floating-point addition is
> still not associative, so even such a simple computation as a+b+c
> requires careful thought if you wish the maximum possible precision.

Not really for most everyday applications of decimal arithmetic.  People
work with decimal quantities in real life, and addition of fp decimals is
exact (hence also associative) provided the total precision isn't exceeded.
Since Decimal allows setting precision to whatever the user wants, it's very
easy to pick a precision obviously so large that even adding a billion
(e.g.) dollars-and-cents inputs yields the exact result, and regardless of
addition order.  For the truly paranoid, Decimal's "inexact flag" can be
inspected at the end to see whether the exactness assumption was violated,
and the absurdly paranoid can even ask that an exception get raised whenever
an inexact result would have been produced.
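
A sketch of that, using the spelling that eventually shipped as the stdlib
decimal module (precision, flags and traps are all per-context):

    from decimal import Decimal, getcontext, Inexact

    ctx = getcontext()
    ctx.prec = 20                  # ample for adding dollars-and-cents inputs
    total = Decimal("0")
    for i in range(100000):
        total += Decimal("0.01")   # each addition is exact at this precision
    print total                    # 1000.00, regardless of addition order
    print ctx.flags[Inexact]       # still unset: exactness was never violated
    # ctx.traps[Inexact] = True    # the truly paranoid can raise instead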

Binary fp loses in these common cases *just because* the true inputs can't
be represented, and the number printed at the end isn't even the true result
of approximately adding the approximated inputs.  Decimal easily avoids all
of that.

> Why are you not arguing against decimal floating-point if your goal
> is to expose users to the problems of floating-point as early as
> possible?

The overwhelmingly most common newbie binary fp traps today are failures to
realize that the numbers they type aren't the numbers they get, and that the
numbers they see aren't the results they got.  Adding 0.1 to itself 10 times
and not getting 1.0 exactly is universally considered to be "a bug" by
newbies (but it is exactly 1.0 in decimal).  OTOH, if they add 1./3. to
itself 3 times under decimal and don't get exactly 1.0, they won't be
surprised at all.  It's the same principle at work in both cases, but
they're already trained to expect 0.9...9 from the latter.

The primary newbie difficulty with binary fp is that the simplest use case
(just typing in an ordinary number) is already laced with surprises -- it
already violates WYSIWYG, and insults a lifetime of "intuition" gained from
by-hand and calculator math (of course it's not a coincidence that hand
calculators use decimal arithmetic internally -- they need to be
user-friendly).

You have to do things fancier than *just* typing in the prices of grocery
items to get in trouble with Decimal.


From barry at barrys-emacs.org  Wed Mar 31 14:43:43 2004
From: barry at barrys-emacs.org (Barry Scott)
Date: Wed Mar 31 14:43:56 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <fzbt2g1r.fsf@yahoo.co.uk>
References: <4060E835.50805@interlink.com.au>
	<003401c41486$2e6ec100$2fb4958d@oemcomputer>
	<fzbt2g1r.fsf@yahoo.co.uk>
Message-ID: <6.0.3.0.2.20040331204015.032ceec0@torment.chelsea.private>

At 28-03-2004 15:37, Paul Moore wrote:
>On Windows, the killer issue is the availability of pywin32. I don't
>believe you'll get much take-up of Python 2.4 by Windows users without
>a pywin32 binary release. Is there any chance of prevailing upon Mark
>to produce a 2.4-compatible binary?

Agreed, based on my limited pywin32 testing.  And a lot of my python
depends on wxPython on unix and windows.

>I'd advocate an early alpha for another reason - Windows binary
>builders will need as much time as possible to work through any issues
>with the new requirement for MSVC7.

Is that 7.0 or 7.1 aka .net 2003?  The reason I ask is that my extensions
are in C++, and 7.0 is unusable.

Barry



From bob at redivi.com  Wed Mar 31 15:04:18 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 31 15:00:29 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <1080759573.2043.49.camel@heinlein>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
Message-ID: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>

On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:

>> Message: 1
>> Date: Wed, 31 Mar 2004 18:10:23 +0200
>> From: Simon Percivall <s.percivall@chello.se>
>> Subject: Re: [Python-Dev] Re: PEP 318: Decorators last before colon
>
> I've been following this discussion closely and I would like to voice  
> my
> opinion.
>
> Please don't add any decorator syntax to Python, at least not yet.  All
> of the proposals I have seen so far are, to be blunt, and in my opinion
> of course, ugly and are getting uglier as the discussion ensues.
>
> I see nothing wrong, at least for the present, with the status quo
> decorators that follow a function or class definition.  They are
> explicit, functionally equivalent, use the existing and completely
> understandable syntax, and are so rarely used by only the *most*
> experienced and advanced programmers that violating the beauty of the
> language is unjustified.

I've been pretty quiet about this lately because the discussions have  
gone into space, largely by people who don't even have a need or desire  
for decorators, but uninformed comments like this just irk me.

Decorators *are not rare* and are *used by regular programmers* in some  
problem domains.  Yes, it takes an advanced programmer to write such a  
framework, but that doesn't mean that the syntax is useless to  
non-advanced programmers.  It's particularly applicable to applications  
using PyObjC or ctypes where the function simply will not work unless  
it's decorated with the correct type signature.  It can also  
potentially make pure python frameworks such as Twisted, ZopeX3, or  
PEAK easier to use, by moving boilerplate wrapping stuff into  
decorators.  Decorators solve a *huge* problem with the current syntax:

def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(
        takes, some, args, here):
    pass
someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = objc.selector(
    someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_,
    signature='some type signature')

versus (Guido's latest suggested syntax, which I approve of, even  
though I prefer the after-args-brackets):

[objc.selector(signature='some type signature')]
def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(
        self, some, args, here):
    pass

Please understand that just because you haven't needed them yet doesn't  
make them worthless, ugly, etc.  It has nothing to do with being an  
experienced or advanced programmer, some problem domains simply REQUIRE  
decorated functions in order to work at all.  If decorators do not make  
Python 2.4, that is another major release cycle that extensions such as  
PyObjC and ctypes will be hampered by lack of syntax... to the point  
where I'd be inclined to just fork Python (or Stackless, more likely)  
in order to get the syntax.

-bob


From ark-mlist at att.net  Wed Mar 31 15:00:42 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 15:00:43 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAELHJOAB.tim.one@comcast.net>
Message-ID: <002701c4175a$d93f0d20$6402a8c0@arkdesktop>

> Binary fp loses in these common cases *just because* the true inputs can't
> be represented, and the number printed at the end isn't even the true
> result of approximately adding the approximated inputs.  Decimal easily
> avoids all of that.

Well, some of it.  It still doesn't avoid 1E50 + 1E-50 == 1E50, for example.

> > Why are you not arguing against decimal floating-point if your goal
> > is to expose users to the problems of floating-point as early as
> > possible?

> The overwhelmingly most common newbie binary fp traps today are failures
> to realize that the numbers they type aren't the numbers they get, and
> that the numbers they see aren't the results they got.

Then how about giving a warning for every floating-point literal that cannot
be exactly represented in binary?



From FBatista at uniFON.com.ar  Wed Mar 31 15:30:01 2004
From: FBatista at uniFON.com.ar (Batista, Facundo)
Date: Wed Mar 31 15:32:03 2004
Subject: [Python-Dev] Expert floats
Message-ID: <A128D751272CD411BC9200508BC2194D0338381D@escpl.tcp.com.ar>

Andrew Koenig wrote:

#- Well, some of it.  It still doesn't avoid 1E50 + 1E-50 == 
#- 1E50, for example.

Nice example:

>>> import Decimal
>>> d1 = Decimal.Decimal('1E50')
>>> d2 = Decimal.Decimal('1E-50')
>>> d1 + d2
Decimal( (0, (1, 0, 0, 0, 0, 0, 0, 0, 0), 42L) )
>>> d1 + d2 == d1
True
>>> Decimal.getcontext().prec = 1000
>>> d1 + d2 == d1
False
>>> d1 + d2
Decimal( (0, (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1), -50L) )

Regards,

.	Facundo





From marktrussell at btopenworld.com  Wed Mar 31 15:31:56 2004
From: marktrussell at btopenworld.com (Mark Russell)
Date: Wed Mar 31 15:32:12 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311920.i2VJKD814087@guido.python.org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
	<406B10BD.3030904@v.loewis.de>
	<200403311920.i2VJKD814087@guido.python.org>
Message-ID: <1080765116.10273.9.camel@localhost>

On Wed, 2004-03-31 at 20:20, Guido van Rossum wrote:
> > I suggest that the break-less solution
> > 
> > [classmethod, logged, debug] def function(args):
> 
> Sorry, can't do that with the current parser.

Actually I think that's a good thing - it forces everyone to format
things the same way.  I was a fan of the "def foo() [decorator]" syntax
but I've changed my mind - this way has several things going for it:

	- Simple implementation
	- More or less forces one style of code layout
	- Doesn't break tools that look for "def foo(" type patterns
	- Short-circuits the endless syntax discussion :-)

Mark Russell

From pyth at devel.trillke.net  Wed Mar 31 15:37:01 2004
From: pyth at devel.trillke.net (Holger Krekel)
Date: Wed Mar 31 15:37:36 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <20040331203701.GC6361@solar.trillke>

Hi Bob, 

Bob Ippolito wrote:
> On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
> >Please don't add any decorator syntax to Python, at least not yet.  All
> >of the proposals I have seen so far are, to be blunt, and in my opinion
> >of course, ugly and are getting uglier as the discussion ensues.
> >
> >I see nothing wrong, at least for the present, with the status quo
> >decorators that follow a function or class definition.  They are
> >explicit, functionally equivalent, use the existing and completely
> >understandable syntax, and are so rarely used by only the *most*
> >experienced and advanced programmers that violating the beauty of the
> >language is unjustified.
> 
> I've been pretty quiet about this lately because the discussions have  
> gone into space, largely by people who don't even have a need or desire  
> for decorators, but uninformed comments like this just irk me.

I don't see anything particularly more uninformed here than in many
other postings.  

I agree with Michel in that we may be trying to fix a slightly inconvenient
but currently nevertheless explicit and easily understandable syntax with
something special cased. 

But is the special case of decorated functions/methods really worth adding a
special case to current syntax rules? 

> Decorators *are not rare* and are *used by regular programmers* in some  
> problem domains.  Yes, it takes an advanced programmer to write such a  
> framework, but that doesn't mean that the syntax is useless to  
> non-advanced programmers.

I have and had lots of use cases where i "decorated" functions --- but i
didn't use the syntax discussed in the decorator threads for it. 
Instead i did something like: 

    cls = transform_methods(cls) 

after the definition of a class.  Which functions are selected to be
transformed by 'transform_methods' is up to you. I used special names as
well as inspecting the first argument name (e.g. 'cls') and other
applicable ideas.  Specifically, in PyPy we use something like 

    def add_List_List(self, w_list1, w_list2):
        ...
    def add_Int_Int(self, w_list1, w_list2):
        ...

and later on just one:

    register_all(vars(), W_ListType)

(which will register the methods to some multimethod dispatcher).  It's
a matter of taste i guess if using decorators - whatever syntax - at each 
function definition would be more readable. 
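
(A minimal sketch of what such a register_all helper can do -- the real
PyPy version surely differs, and the dispatch-table layout here is
invented:)

    def register_all(namespace, w_type):
        # file every add_Foo_Bar style function under the operation name
        # in a per-type dispatch table, instead of decorating each def
        for name, func in namespace.items():
            if name.startswith("add_"):
                w_type.dispatch_table.setdefault("add", []).append(func)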

> It's particularly applicable to applications  
> using PyObjC or ctypes where the function simply will not work unless  
> it's decorated with the correct type signature.  It can also  
> potentially make pure python frameworks such as Twisted, ZopeX3, or  
> PEAK easier to use, by moving boilerplate wrapping stuff into  
> decorators.  Decorators solve a *huge* problem with the current syntax:
> 
> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(takes,  
> some, args, here):
>     pass
> someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ =  
> objc.selector(someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonge 
> r_, signature='some type signature')

I am not sure the only "good" solution here is to add special decorator 
syntax like ... 

> [objc.selector(signature='some type signature')]
> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(self,  
> some, args, here):
>     pass

for example you could try to use the first line of the docstring 
for it - maybe building on the typical way to document C-defined python
functions. e.g.:

    "(some_type_signature) -> (some_return_type)"

I am not saying straight away this is better but i doubt that the
only good solution to the above problem is to add new syntax. 

> It has nothing to do with being an  
> experienced or advanced programmer, some problem domains simply REQUIRE  
> decorated functions in order to work at all.

But do these problem domains REQUIRE special syntax? I doubt it. 

> If decorators do not make  
> Python 2.4, that is another major release cycle that extensions such as  
> PyObjC and ctypes will be hampered by lack of syntax... to the point  
> where I'd be inclined to just fork Python (or Stackless, more likely)  
> in order to get the syntax.

Nah, just work with us on PyPy to allow it to add decorator syntax
at runtime :-) 

cheers,

    holger

From guido at python.org  Wed Mar 31 15:49:14 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 15:49:22 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 12:32:35 EST."
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com> 
References: <Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com> <Your
	message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> <Your
	message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com> 
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com> 
Message-ID: <200403312049.i2VKnEi14444@guido.python.org>

> There appears to be a strong correlation between people who have
> specific use cases for decorators, and the people who want the
> last-before-colon syntax.  Whereas, people who have few use cases
> (or don't like decorators at all) appear to favor syntaxes that move
> decorators earlier.  Whether that means the "earlier" syntaxes are
> better or worse, I don't know.  <0.5 wink>

Maybe the practitioners are so eager to have something usable that
they aren't swayed as much by esthetics.

> I'll assume your intent is to prevent decorators from biting the
> unwary -- specifically people who *don't* use decorators a lot and
> therefore are not looking for them.  I will therefore focus now on
> the issues with the "previous line" syntax that may bite people,
> with an eye to how they might be fixed.

They should still be very unlikely to accidentally create one.

With proper vertical whitespace, the fact that a decorator list
(written properly) means something special should be obvious to even
the most casual observer: there should be a blank line before the
decorator list and none between it and the 'def' line.

The human brain is a lot more flexible in picking up patterns than the
Python parser; as shown many times in this discussion, most people
have no clue about the actual syntax accepted by the Python parser,
and simply copy (and generalize!) patterns they see in examples.

My goal is to help people grasp the gist of a particular construct
without having to think too much about the exact syntax; within all
sorts of other constraints of course, like being parsable with the
simple ans stupid LL(1) parser, and being easy to grep (in some
cases).

> > > By the way, you didn't mention whether it's okay to put the
> > > decorators on the same logical line, e.g.:
> > >
> > > [classmethod] def foo(bar,baz):
> > >      # body goes here
> > >
> > > If the rationale here is that we're copying C#, I'd think that it
> > > should be permissible, even though it looks a bit ugly and tempts me
> > > to indent the body to align with the function name.
> >
> >This is much harder to do with the current parser.  (My plan would
> >be to tie the list expression and the function definition together
> >in the code generating phase, just like doc strings.)
> 
> Yeah, this is also going to now have to be a special case for
> documentation processing tools.

I hadn't thought of those, but the new situation can't be worse than
before (decorations following the 'def').  Usually these tools either
use some ad-hoc regex-based parsing, which shouldn't have any problems
picking out at least the typical forms, or they (hopefully) use the
AST produced by the compiler package -- it should be possible for that
package to modify the parse tree so that decorators appear as part of
the Function node.

> Whereas making it part of the definition syntax, it's more directly
> available in the parse tree.

Definitely, but I don't see it as a showstopper.  I've now implemented
this for the traditional compile.c; I think the effort is fairly
moderate.  (My patch is smaller than Michael Hudson's.)

> It also seems to be working against the AST branch a bit, in that I
> would expect the decorator expressions to be part of the function
> definition node, rather than in an unrelated statement.  And, it's
> also going to be interesting to document in the language reference,
> since the grammar there is going to continue to diverge from the
> "real" grammar used by the implementation.

Hardly worse than the difference between theory and practice for
assignment statements or keyword parameters.

> Another issue...  is this valid?
> 
> [classmethod]
> 
> def foo(bar,baz):
>      pass

Yes.

> How about this?
> 
> [classmethod]
> # Okay, now we're going to define something...
> def foo(bar,baz):
>      pass

Yes.

> If they *are* valid, then you can have nasty effects at a distance.

Given that there really isn't much of a use case for putting a list
display on a line by itself (without assigning it to something), I
don't see this as a likely accident.  By giving only sane examples
we'll help people write readable code (I see no other way; you can't
force people to write unobfuscated code :-).

> If they *aren't* valid, accidentally adding or removing whitespace
> or comments can silently change the meaning of the program, and
> *not* in a DWIMish way.
> 
> I personally would rather have the decorators required to be on the
> same logical line, and then use:
> 
> [classmethod] \
> def foo(bar,baz):
>      pass
> 
> for visual separation.  The backslash visually alerts that this is
> *not* a mere bare list.

Ugly, and the stupid LL(1) parser can't parse that -- it needs a
unique initial symbol for each alternative at a particular point.

> I'm not a parser guru by any stretch of the imagination, but
> wouldn't it be possible to simply create a statement-level construct
> that was something like:
> 
> liststmt: '[' [listmaker] ']' ( funcdef | restofliststmt )
> 
> and put it where it matches sooner than the expression-based versions of 
> the statement?

Maybe in SPARK, but not in Python's LL(1) parser.

> It seems like the main complexity would be the possibility of having
> to duplicate a number of levels of containing rules for
> 'restofliststmt'.  But maybe I'm completely off base here and
> there's no sensible way to define a correct 'restofliststmt'.

You can assume that's the case. :)

--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Mar 31 15:51:28 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 15:51:34 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <1080765116.10273.9.camel@localhost>
References: <200403311920.i2VJKD814087@guido.python.org>
	<E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
	<406B10BD.3030904@v.loewis.de>
	<200403311920.i2VJKD814087@guido.python.org>
Message-ID: <5.1.1.6.0.20040331153953.0280c3e0@telecommunity.com>

At 09:31 PM 3/31/04 +0100, Mark Russell wrote:
>On Wed, 2004-03-31 at 20:20, Guido van Rossum wrote:
> > > I suggest that the the break-less solution
> > >
> > > [classmethod, logged, debug] def function(args):
> >
> > Sorry, can't do that with the current parser.
>
>Actually I think that's a good thing - it forces everyone to format
>things the same way.  I was a fan of the "def foo() [decorator]" syntax
>but I've changed my mind - this way has several things going for it:
>
>         - Simple implementation
>         - More or less forces one style of code layout
>         - Doesn't break tools that look for "def foo(" type patterns
>`       - Short-circuits the endless syntax discussion :-)

Um, perhaps I'm confused, but that sounds to me like a list of reasons to 
go with decorators-last.  :)

Conversely, the magic list-on-one-line, def-on-the-next is *so* implicit 
and error-prone in so many respects that it makes me want to vote against 
having decorator syntax at all.  Certainly, it's providing me with a strong 
motivation to see if I can find a way to make the current parser handle the 
"list on the same line" variation without a complete rewrite.  Perhaps a 
sneaky trick like this:

expr_stmt: testlist (augassign testlist | ('=' testlist)* | [funcdef | classdef] )

with a special case check that the testlist consists solely of a list, that 
triggers a syntax error at the funcdef or classdef point.
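
(For concreteness, a few statements that all begin with a list display,
which is why expr_stmt is where the new alternative has to hang -- names
are hypothetical, and only the last, commented-out form is new:)

    [x] = (1,)    # a list display can be an assignment target...
    [x][0]        # ...or the start of a larger expression statement...
    [x]           # ...or an expression statement by itself (discarded)
    # ...and, under the trick above, also the decorator list of a
    # definition on the same line:  [x] def f(): pass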


From bob at redivi.com  Wed Mar 31 16:01:16 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 31 15:57:13 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <20040331203701.GC6361@solar.trillke>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<20040331203701.GC6361@solar.trillke>
Message-ID: <8CF77A2E-8356-11D8-8201-000A95686CD8@redivi.com>


On Mar 31, 2004, at 3:37 PM, Holger Krekel wrote:

> Hi Bob,
>
> Bob Ippolito wrote:
>> On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
>>> Please don't add any decorator syntax to Python, at least not yet.   
>>> All
>>> of the proposals I have seen so far are, to be blunt, and in my  
>>> opinion
>>> of course, ugly and are getting uglier as the discussion ensues.
>>>
>>> I see nothing wrong, at least for the present, with the status quo
>>> decorators that follow a function or class definition.  They are
>>> explicit, functionally equivalent, use the existing and completely
>>> understandable syntax, and are so rarely used by only the *most*
>>> experienced and advanced programmers that violating the beauty of the
>>> language is unjustified.
>>
>> I've been pretty quiet about this lately because the discussions have
>> gone into space, largely by people who don't even have a need or  
>> desire
>> for decorators, but uninformed comments like this just irk me.
>
> I don't see anything particularly more uninformed here than in many
> other postings.
>
> I agree with Michel in that we may be trying to fix a slightly
> inconvenient but currently nevertheless explicit and easily
> understandable syntax with something special cased.
>
> But is the special case of decorated functions/methods really worth  
> adding a
> special case to current syntax rules?

Yes.

>> Decorators *are not rare* and are *used by regular programmers* in  
>> some
>> problem domains.  Yes, it takes an advanced programmer to write such a
>> framework, but that doesn't mean that the syntax is useless to
>> non-advanced programmers.
>
> I have and had lots of use cases where i "decorated" functions --- but  
> i
> didn't use the syntax discussed in the decorator threads for it.
> Instead i did something like:
>
>     cls = transform_methods(cls)
>
> after the definition of a class.  Which functions are selected to be
> transformed by 'transform_methods' is up to you. I used special names  
> as
> well as inspecting the first argument name (e.g. 'cls') and other
> applicable ideas.  Specifically, in PyPy we use something like
>
>     def add_List_List(self, w_list1, w_list2):
>         ...
>     def add_Int_Int(self, w_list1, w_list2):
>         ...
>
> and later on just one:
>
>     register_all(vars(), W_ListType)
>
> (which will register the methods to some multimethod dispatcher).  It's
> a matter of taste i guess if using decorators - whatever syntax - at  
> each
> function definition would be more readable.

In the case of PyObjC, the name of the function already has a very  
specific meaning (there is a 1:1 mapping to Objective C selectors) and  
cannot change to include type information.  Additionally, it is never
the case that all methods in a class are to be decorated the same way,
so it must live elsewhere.

Would you have even considered writing something as crazy as the  
aforementioned example if decorators were already in Python?  I doubt  
it.

>> It's particularly applicable to applications
>> using PyObjC or ctypes where the function simply will not work unless
>> it's decorated with the correct type signature.  It can also
>> potentially make pure python frameworks such as Twisted, ZopeX3, or
>> PEAK easier to use, by moving boilerplate wrapping stuff into
>> decorators.  Decorators solve a *huge* problem with the current  
>> syntax:
>>
>> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(takes,
>>         some, args, here):
>>     pass
>> someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = \
>>     objc.selector(someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_,
>>                   signature='some type signature')
>
> I am not sure the only "good" solution here is to add special decorator
> syntax like ...
>
>> [objc.selector(signature='some type signature')]
>> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(self,
>> some, args, here):
>>     pass
>
> for example you could try to use the first line of the docstring
> for it - maybe building on the typical way to document C-defined python
> functions. e.g.:
>
>     "(some_type_signature) -> (some_return_type)"
>
> I am not saying straight away this is better but i doubt that the
> only good solution to the above problem is to add new syntax.

Mangling *doc* strings is really stupid and is not even an option,  
IMHO.  The only reason people do this is because we don't have a  
decorator syntax.

>> It has nothing to do with being an
>> experienced or advanced programmer, some problem domains simply  
>> REQUIRE
>> decorated functions in order to work at all.
>
> But do these problem domains REQUIRE special syntax? I doubt it.

Special syntax, stupid doc string hacks, or LOTS AND LOTS of ugly  
repetition, as demonstrated.

>> If decorators do not make
>> Python 2.4, that is another major release cycle that extensions such  
>> as
>> PyObjC and ctypes will be hampered by lack of syntax... to the point
>> where I'd be inclined to just fork Python (or Stackless, more likely)
>> in order to get the syntax.
>
> Nah, just work with us on PyPy to allow it to add decorator syntax
> at runtime :-)

That'd be an option, if I could wait a few years until it's capable  
enough to do everything it needs to do.  But I can't, which is exactly  
why it's worth forking if decorator syntax doesn't happen in 2.4.  I'm  
just hoping it does, because that requires the least amount of effort  
for everyone :-)

-bob


From ark-mlist at att.net  Wed Mar 31 15:59:12 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 15:59:13 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <A128D751272CD411BC9200508BC2194D0338381D@escpl.tcp.com.ar>
Message-ID: <003f01c41763$05490850$6402a8c0@arkdesktop>

>>> import Decimal 
>>> d1 = Decimal.Decimal('1E50') 
>>> d2 = Decimal.Decimal('1E-50') 
>>> d1 + d2 
Decimal( (0, (1, 0, 0, 0, 0, 0, 0, 0, 0), 42L) ) 
>>> d1 + d2 == d1 
True 
>>> Decimal.getcontext().prec = 1000 
>>> d1 + d2 == d1 
False 
>>> d1 + d2 
Decimal( (0, (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1), -50L) )


Very nice!


From pje at telecommunity.com  Wed Mar 31 16:12:54 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 16:13:02 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403312049.i2VKnEi14444@guido.python.org>
References: <Your message of "Wed, 31 Mar 2004 12:32:35 EST."
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
	<Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
Message-ID: <5.1.1.6.0.20040331155927.0289ae70@telecommunity.com>

At 12:49 PM 3/31/04 -0800, Guido van Rossum wrote:
> > There appears to be a strong correlation between people who have
> > specific use cases for decorators, and the people who want the
> > last-before-colon syntax.  Whereas, people who have few use cases
> > (or don't like decorators at all) appear to favor syntaxes that move
> > decorators earlier.  Whether that means the "earlier" syntaxes are
> > better or worse, I don't know.  <0.5 wink>
>
>Maybe the practitioners are so eager to have something usable that
>they aren't swayed as much by esthetics.

I appreciate the aesthetics of the new syntax; it's just the little 
implementation nits that bother me.  And it just seems so un-Python to have 
a special syntax that doesn't start with an introducing keyword.


> > I'll assume your intent is to prevent decorators from biting the
> > unwary -- specifically people who *don't* use decorators a lot and
> > therefore are not looking for them.  I will therefore focus now on
> > the issues with the "previous line" syntax that may bite people,
> > with an eye to how they might be fixed.
>
>They should still be very unlikely to accidentally create one.
>
>With proper vertical whitespace, the fact that a decorator list
>(written properly) means something special should be obvious to even
>the most casual observer: there should be a blank line before the
>decorator list and none between it and the 'def' line.
>
>The human brain is a lot more flexible in picking up patterns than the
>Python parser; as shown many times in this discussion, most people
>have no clue about the actual syntax accepted by the Python parser,
>and simply copy (and generalize!) patterns they see in examples.
>
>My goal is to help people grasp the gist of a particular construct
>without having to think too much about the exact syntax; within all
>sorts of other constraints of course, like being parsable with the
>simple and stupid LL(1) parser, and being easy to grep (in some
>cases).

I previously thought these were related.  That is, I thought that keeping 
it to an LL(1) grammar was intended to make it easier for humans to parse, 
because no backtracking is required.  But, the new syntax *does* require 
backtracking, even on the same line.  Granted that humans don't literaly 
read a character stream the way a parser does, isn't this still the first 
piece of backtracking-required syntax in Python?


>I hadn't thought of those, but the new situation can't be worse than
>before (decorations following the 'def').  Usually these tools either
>use some ad-hoc regex-based parsing, which shouldn't have any problems
>picking out at least the typical forms, or they (hopefully) use the
>AST produced by the compiler package -- it should be possible for that
>package to modify the parse tree so that decorators appear as part of
>the Function node.

Good point.  Will this be true for the AST branch's AST model as well?


> > It seems like the main complexity would be the possibility of having
> > to duplicate a number of levels of containing rules for
> > 'restofliststmt'.  But maybe I'm completely off base here and
> > there's no sensible way to define a correct 'restofliststmt'.
>
>You can assume that's the case. :)

Yeah, I just spent a few minutes taking a look at the actual parser 
implementation.  :(  For some reason I thought that the simple LL(1) 
grammar meant there was also a simple recursive-descent parser.  I'd 
completely forgotten about 'pgen' et al.


From michel at dialnetwork.com  Wed Mar 31 16:09:11 2004
From: michel at dialnetwork.com (Michel Pelletier)
Date: Wed Mar 31 16:23:14 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <1080767350.2043.58.camel@heinlein>

On Wed, 2004-03-31 at 12:04, Bob Ippolito wrote:
> On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
> 
> >> Message: 1
> >> Date: Wed, 31 Mar 2004 18:10:23 +0200
> >> From: Simon Percivall <s.percivall@chello.se>
> >> Subject: Re: [Python-Dev] Re: PEP 318: Decorators last before colon
> >
> > I've been following this discussion closely and I would like to voice  
> > my
> > opinion.
> >
> > Please don't add any decorator syntax to Python, at least not yet.  All
> > of the proposals I have seen so far are, to be blunt, and in my opinion
> > of course, ugly and are getting uglier as the discussion ensues.
> >
> > I see nothing wrong, at least for the present, with the status quo
> > decorators that follow a function or class definition.  They are
> > explicit, functionally equivalent, use the existing and completely
> > understandable syntax, and are so rarely used by only the *most*
> > experienced and advanced programmers that violating the beauty of the
> > language is unjustified.
> 
> I've been pretty quiet about this lately because the discussions have  
> gone into space, largely by people who don't even have a need or desire  
> for decorators, but uninformed comments like this just irk me.

Well like the post says, it's just my opinion.

> Decorators *are not rare* and are *used by regular programmers* in some  
> problem domains.  Yes, it takes an advanced programmer to write such a  
> framework, but that doesn't mean that the syntax is useless to  
> non-advanced programmers.

I didn't say useless, I said rarely used.  I wouldn't imagine anyone
here discussing syntax that was useless.

> Please understand that just because you haven't need them yet doesn't  
> make them worthless, ugly, etc.

I didn't say worthless either.

>   It has nothing to do with being an  
> experienced or advanced programmer, some problem domains simply REQUIRE  
> decorated functions in order to work at all. 

I disagree, but I agree that there exist problem domains for which
decorators would make it *easier*.  Are satisfying these problem domains
worth new syntax?

-Michel


From s.percivall at chello.se  Wed Mar 31 16:40:33 2004
From: s.percivall at chello.se (Simon Percivall)
Date: Wed Mar 31 16:41:27 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <8CF77A2E-8356-11D8-8201-000A95686CD8@redivi.com>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<20040331203701.GC6361@solar.trillke>
	<8CF77A2E-8356-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <09FBE6A3-835C-11D8-B533-0003934AD54A@chello.se>

On 2004-03-31, at 23.01, Bob Ippolito wrote:
>>
>> But is the special case of decorated functions/methods really worth 
>> adding a
>> special case to current syntax rules?
>
> Yes.

But is it really wise to add the special syntax required to make your 
work easier (or possible) by overloading a currently perfectly valid 
(if non-sensical) statement? It's one thing that you really, really 
want a better decorator syntax; it's another thing completely that the 
new syntax is an overloaded and restricted list. That's just plain 
wrong for many reasons argued earlier.
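
(The "perfectly valid (if non-sensical)" statement in question, spelled
out -- in any current Python the list display on its own line is simply
evaluated and thrown away, so the function below is *not* decorated:)

    [classmethod]
    def foo(cls):
        pass

    print type(foo)     # <type 'function'>, not a classmethod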


From dave at boost-consulting.com  Wed Mar 31 16:40:44 2004
From: dave at boost-consulting.com (David Abrahams)
Date: Wed Mar 31 16:41:42 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
References: <000101c4174b$5fdb2540$6402a8c0@arkdesktop>
	<200403311841.i2VIfTG13884@guido.python.org>
Message-ID: <ulllgy9sz.fsf@boost-consulting.com>

Guido van Rossum <guido@python.org> writes:

>> 	>>> [decorator]
>> 
>> Will it type ... and wait for you to say more?  Or will it evaluate the
>> single-element list whose element is the value of the variable ``decorator''
>> and print the result?
>
> The latter.  You can't add a decorator to a top-level function in
> interactive mode unless you put it inside an if.

That seems like a very odd special case to me.  Is it worth it?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com


From skip at pobox.com  Wed Mar 31 16:44:44 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 31 16:44:56 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403312049.i2VKnEi14444@guido.python.org>
References: <Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com> <Your
	message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com> <Your
	message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
	<200403312049.i2VKnEi14444@guido.python.org>
Message-ID: <16491.15308.540605.409901@montanaro.dyndns.org>


    >> Yeah, this is also going to now have to be a special case for
    >> documentation processing tools.

    Guido> I hadn't thought of those, but the new situation can't be worse
    Guido> than before (decorations following the 'def').  

Except for the fact that when colorizing

    [d1, d2]
    def func(a,b,c):
        pass

"[d1, d2]" should probably not be colored the way other lists are because
semantically it's not just a list that gets discarded.

Skip

From martin at v.loewis.de  Wed Mar 31 16:50:50 2004
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed Mar 31 16:51:00 2004
Subject: [Python-Dev] Re: Timing for Py2.4
In-Reply-To: <6.0.3.0.2.20040331204015.032ceec0@torment.chelsea.private>
References: <4060E835.50805@interlink.com.au>	<003401c41486$2e6ec100$2fb4958d@oemcomputer>	<fzbt2g1r.fsf@yahoo.co.uk>
	<6.0.3.0.2.20040331204015.032ceec0@torment.chelsea.private>
Message-ID: <406B3D3A.1080506@v.loewis.de>

Barry Scott wrote:
> Is that 7.0 or 7.1 aka .net 2003? The reason I ask is that my extensions
> are in C++ and 7.0 is unusable.

We use 7.1 (aka .net 2003) for building. A number of developers got
sponsored licenses of that compiler.

Regards,
Martin


From dave at boost-consulting.com  Wed Mar 31 16:54:20 2004
From: dave at boost-consulting.com (David Abrahams)
Date: Wed Mar 31 16:55:06 2004
Subject: [Python-Dev] Re: PEP 309 re-written
References: <4043B4A5.9030007@blueyonder.co.uk>
Message-ID: <ufzboy96b.fsf@boost-consulting.com>

Peter Harris <scav@blueyonder.co.uk> writes:

> Hi
>
> The latest version of PEP 309 has been published in the usual place. I
> hope much of the woollyness (sp?) of the early versions has been
> sheared off, leaving more ..um..
> [metaphor panic!] .. mutton?
>
> I have settled on calling the whole thing "partial function
> application", 

Thank you!  The functional programming community (of which I'm
probably not really a member ;-> ) thanks you!

> broadly including object methods, classes and other callable
> objects.  I want to name the constructor partial(), because that
> name will do for now and doesn't do violence to accepted terminology
> the way curry() or closure() would.

<snip>

> Not sure what sort of feedback the PEP needs before review, so I'm
> open to any comments about the wording of the proposal and the
> usefulness or otherwise of its intent.
>
> If anyone can think of any really elegant hacks that are naturally
> expressed by partial function application I'd like to see them

There are millions, but AFAICT they're all more natural with lambda,
so...

        "I agree that lambda is usually good enough, just not
        always."

Examples, please?

        "And I want the possibility of useful introspection and
        subclassing"

Can you elaborate on that, please?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com


From bob at redivi.com  Wed Mar 31 17:01:32 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 31 16:57:29 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <09FBE6A3-835C-11D8-B533-0003934AD54A@chello.se>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<20040331203701.GC6361@solar.trillke>
	<8CF77A2E-8356-11D8-8201-000A95686CD8@redivi.com>
	<09FBE6A3-835C-11D8-B533-0003934AD54A@chello.se>
Message-ID: <F84561D9-835E-11D8-8201-000A95686CD8@redivi.com>


On Mar 31, 2004, at 4:40 PM, Simon Percivall wrote:

> On 2004-03-31, at 23.01, Bob Ippolito wrote:
>>>
>>> But is the special case of decorated functions/methods really worth 
>>> adding a
>>> special case to current syntax rules?
>>
>> Yes.
>
> But is it really wise to add the special syntax required to make your 
> work easier (or possible) by overloading a currently perfectly valid 
> (if non-sensical) statement? It's one thing that you really, really 
> want a better decorator syntax; it's another thing completely that the 
> new syntax is an overloaded and restricted list. That's just plain 
> wrong for many reasons argued earlier.

I've stated many times that my preference is the currently invalid:

def foo(args)[decorators]:
	pass

.. but I'll reluctantly accept Guido's suggestion if that's the only 
thing that's going to happen.

-bob


From tjreedy at udel.edu  Wed Mar 31 17:02:37 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed Mar 31 17:02:36 2004
Subject: [Python-Dev] Re: Expert floats
References: <20040331012812.GA1541@panix.com>
	<008401c4173a$82e7fc40$6402a8c0@arkdesktop>
Message-ID: <c4ff5m$hhu$1@sea.gmane.org>


"Andrew Koenig" <ark-mlist@att.net> wrote in message
news:008401c4173a$82e7fc40$6402a8c0@arkdesktop...
If I can enter a number as 0.1, printing that number as 0.1 does not
introduce any errors that were not already there, as proved by the fact
that
reading that 0.1 back will yield exactly the same value.

If I enter 1.1000000000000001, I am not sure I would necessarily be happy
if str() and repr() both gave the same highly rounded string representation
;-)

tjr




From ark-mlist at att.net  Wed Mar 31 17:13:23 2004
From: ark-mlist at att.net (Andrew Koenig)
Date: Wed Mar 31 17:13:23 2004
Subject: [Python-Dev] Re: Expert floats
In-Reply-To: <c4ff5m$hhu$1@sea.gmane.org>
Message-ID: <004101c4176d$6255a2b0$6402a8c0@arkdesktop>

> If I enter 1.1000000000000001, I am not sure I would necessarily be happy
> if str() and repr() both gave the same highly rounded string
> representation
> ;-)

What if you entered 1.100000000000000000000000000000001?
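
(All three spellings -- 1.1, the 17-digit version, and the longer one
above -- denote the very same IEEE double, which is the point; output
shown assumes a 2.3-era interpreter, where repr() uses 17 significant
digits:)

    print 1.1 == 1.1000000000000001                     # True
    print 1.1 == 1.100000000000000000000000000000001    # True
    print repr(1.1)     # 1.1000000000000001
    print str(1.1)      # 1.1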


From python-dev at zesty.ca  Wed Mar 31 17:44:24 2004
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Wed Mar 31 17:44:06 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311841.i2VIfTG13884@guido.python.org>
References: <000101c4174b$5fdb2540$6402a8c0@arkdesktop> 
	<200403311841.i2VIfTG13884@guido.python.org>
Message-ID: <Pine.LNX.4.58.0403311602480.18028@server1.LFW.org>

On Wed, 31 Mar 2004, Guido van Rossum wrote:
> > 	>>> [decorator]
> >
> > Will it type ... and wait for you to say more?  Or will it evaluate the
> > single-element list whose element is the value of the variable ``decorator''
> > and print the result?
>
> The latter.  You can't add a decorator to a top-level function in
> interactive mode unless you put it inside an if.

This discussion is turning in a direction that alarms me.

Guido, i beg you to reconsider.  Up to this point, we've all been
talking about aesthetics -- whether it's prettier to have the list
of decorators here or there, reads more naturally, visually suits
the semantics, and so on.

Putting the [decorator] on a separate line before the function
changes the stakes entirely.  It sets aside real functional issues
in favour of aesthetics.  Being enamoured with the way this syntax
*looks* does not justify functionally *breaking* other things in
the implementation to achieve it.

Consider the arguments in favour:

    1.  Decorators appear first.
    2.  Function signature and body remain an intact unit.
    3.  Decorators can be placed close to the function name.

These are all aesthetic arguments: they boil down to "the appearance
is more logical".  Similar arguments have been made about the other
proposals.

Consider the arguments against:

    1.  Previously valid code has new semantics.
    2.  Parse tree doesn't reflect semantics.
    3.  Inconsistent interactive and non-interactive behaviour. [*]
    4.  Decorators can be arbitrarily far away from the function name.

These are all functional arguments, with the exception of 4.  People
are pointing out that the way Python *works* will be compromised.

Putting the [decorator] on a separate preceding line:
    1. violates a fundamental rule of language evolution,
    2. makes it impossible to write an unambiguous grammar for Python, and
    3. stacks the odds against anyone trying to learn decorators.

I don't think there is any dispute about functionality.  Everyone
agrees that the interactive interpreter should reflect non-interactive
behaviour as much as possible; everyone agrees that previously valid
code should not change semantics; everyone agrees that the parse tree
should reflect the semantics.  Your responses to these points have been
of the form "Well, it's not *so* bad.  Look, we can hack around it..."

Those aren't positive arguments.  Those are excuses.

Aesthetic and functional arguments are at two entirely different scales.
To persist in finding excuses for one option because it is subjectively
good-looking, while setting aside objective functional deficiencies, is
outside the bounds of reason.

Special cases aren't special enough to break the rules.

Please, i implore you, consider a different option on this one.

Thanks for reading,



-- ?!ng


[*] Please don't say "Oh, the interactive and non-interactive modes are
inconsistent already."  That's not a reason to make it worse.  Having
the interactive interpreter close an "if:" block when you enter a
blank line doesn't make "if:" totally unusable.  But interpreting
decorators inconsistently fails in a way that the decorator doesn't
work at all, and provides no hint as to how you would make it work.

This is especially bad since decorators are a weird and wonderful
feature and people are going to have to experiment with them in the
interactive interpreter as part of the learning process.  Every such
inconsistency makes Python harder to learn, and in particular this
will make decorators especially painful to learn.


From tdelaney at avaya.com  Wed Mar 31 17:55:51 2004
From: tdelaney at avaya.com (Delaney, Timothy C (Timothy))
Date: Wed Mar 31 17:55:59 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
Message-ID: <338366A6D2E2CA4C9DAEAE652E12A1DE015A0154@au3010avexu1.global.avaya.com>

python-dev-bounces+tdelaney=avaya.com@python.org wrote:

> If you spell decorators this way:
> 
> 	[decorator]
> 	def func():
> 		pass
> 
> then what will happen when you type [decorator] at an interactive
> interpreter prompt?
> 
> 	>>> [decorator]

I've been waiting and reading all the threads before chiming in ...

At first the 

    [decorator]
    def func():
        pass

had some appeal, although I definitely didn't like it as much as:

    def func() [decorator]:
        pass

because it split the function definition up into two bits - I lose the
association with the def.

However, as these threads have progressed, it's become obvious to me
that there are too many ways in which this is a special case for me to
be happy with it. I've seen lots of objections brought up based on
existing behaviours with the construct, and each time it's been waved
away as not being common enough to worry about. These are often actual
use cases that experienced developers have - documentation tools,
editor configuration, etc.

Well, it is worrying me. Each special case is one more thing that adds
a burden to using Python. The fact that whitespace and comments are
allowed between the decorator and function definition is particularly
worrying to me - it can quite easily mask errors in code - particularly
newbie errors:

    [a]  # oops - I copied this from an interactive session and forgot to modify it.

    # This is a simple function
    def func (args):
        pass

Tim Delaney

From paul at prescod.net  Wed Mar 31 18:10:10 2004
From: paul at prescod.net (Paul Prescod)
Date: Wed Mar 31 18:10:39 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311542.i2VFgWf13002@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>	<1080674306.12643.76.camel@localhost.localdomain>	<200403302121.i2ULLIM09840@guido.python.org>	<c4dbo8$ddh$1@sea.gmane.org>
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se>
	<200403311542.i2VFgWf13002@guido.python.org>
Message-ID: <406B4FD2.1040506@prescod.net>

Guido van Rossum wrote:

>...
> 
> Why does <...> look better than [...]?  To me, <...> just reminds me
> of XML, which is totally the wrong association.

I vote for << >>. The brackets and parens have too many meanings in 
Python already. <<staticmethod, foobar(baz)>> looks more like French 
than XML. ;)

  Paul Prescod



From mike.spam.filter at day8.com.au  Wed Mar 31 18:26:39 2004
From: mike.spam.filter at day8.com.au (Mike Thompson)
Date: Wed Mar 31 18:26:51 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403311922.i2VJM6u14105@guido.python.org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org>
	<406B0CC9.80302@Acm.Org>	<200403311843.i2VIhaU13904@guido.python.org>
	<406B14C1.6080807@livinglogic.de>
	<200403311922.i2VJM6u14105@guido.python.org>
Message-ID: <c4fk3i$ujb$1@sea.gmane.org>

Guido van Rossum wrote:

>>This looks ugly to me. I do have top level functions that would use
>>decorators because those functions will be put into a class as
>>classmethods and this class will be put into sys.modules instead of
>>the original module. Replacing
>>
>>def foo(cls):
>>    ...
>>foo = classmethod(foo)
>>
>>with
>>
>>if True:
>>    [classmethod]
>>    def foo(cls):
>>       ...
>>
>>doesn't look that attractive to me.
> 
> 
> You won't have to do that except in interactive mode.  How often do
> you type functions that need decorators interactively?
> 

Wouldn't a small addition to your syntax make all these (special case) 
problems disappear and, perhaps, go some way to making the syntax more 
readable (particularly for beginners)::

     as: [classmethod]
     def func(args)

I'm a relative newbie to Python (12 months, but many, many years
elsewhere), which means I'm only qualified to comment on what a newbie
would find confusing, and I can assure you an isolated list on a line by
itself, or having to read about "if True:" work-arounds, would be
remarkably confusing in a language where so much else happens just the
way you would expect.

If the decorator list is to go first and on the line above, then the 
'as:' syntax just looks so right to my newbie eyes and sensibilities. 
It even reads fairly well for the oft-given 'classmethod' and 
'staticmethod' use cases.

--
Mike



From michel at dialnetwork.com  Wed Mar 31 18:18:35 2004
From: michel at dialnetwork.com (Michel Pelletier)
Date: Wed Mar 31 18:31:39 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 107
In-Reply-To: <E1B8mhR-0000as-Gw@mail.python.org>
References: <E1B8mhR-0000as-Gw@mail.python.org>
Message-ID: <1080775114.2043.159.camel@heinlein>

On Wed, 2004-03-31 at 12:52, python-dev-request@python.org wrote:
> Message: 6
> Date: Wed, 31 Mar 2004 12:49:14 -0800
> From: Guido van Rossum <guido@python.org>
> Subject: Re: [Python-Dev] PEP 318: Decorators last before colon
> To: "Phillip J. Eby" <pje@telecommunity.com>
> Cc: python-dev@python.org
> Message-ID: <200403312049.i2VKnEi14444@guido.python.org>
> 
> > There appears to be a strong correlation between people who have
> > specific use cases for decorators, and the people who want the
> > last-before-colon syntax.  Whereas, people who have few use cases
> > (or don't like decorators at all) appear to favor syntaxes that move
> > decorators earlier.  Whether that means the "earlier" syntaxes are
> > better or worse, I don't know.  <0.5 wink>
> 
> Maybe the practitioners are so eager to have something usable that
> they aren't swayed as much by esthetics.

The current system with no syntax is equally usable, what's gained
functionally?  This argument was and is often used against interface
syntax; it's a strong argument.  In the case of interfaces the argument
was against one nine letter keyword that could not possibly be confused
with a list or anything else.  This syntax is riskier.  

> The human brain is a lot more flexible in picking up patterns than the
> Python parser; as shown many times in this discussion, most people
> have no clue about the actual syntax accepted by the Python parser,
> and simply copy (and generalize!) patterns they see in examples.

Or like me, they just do what Emacs tells 'em, which does argue for
decorators of any or no syntax.  Javadoc coming before a method is
another precedent in favor of something coming before, but these are
without run-time function.

> > It also seems to be working against the AST branch a bit, in that I
> > would expect the decorator expressions to be part of the function
> > definition node, rather than in an unrelated statement.

Another discussion point occurred to me regarding interfaces and
projects that use them heavily like Zope, Twisted, PEAK etc.  Has the
decorator syntax as proposed been evaluated in the light of these
interfaces (and any future native language support for them), whose
methods have no body to interpose between the definition and decorators
as they exist now?  I've seen the "Large Body" argument used several
times in defense of the decorator syntax being before or above the
definition.

-Michel


From guido at python.org  Wed Mar 31 18:32:51 2004
From: guido at python.org (Guido van Rossum)
Date: Wed Mar 31 18:33:00 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: Your message of "Wed, 31 Mar 2004 15:10:10 PST."
	<406B4FD2.1040506@prescod.net> 
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org>
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se>
	<200403311542.i2VFgWf13002@guido.python.org> 
	<406B4FD2.1040506@prescod.net> 
Message-ID: <200403312332.i2VNWpe15013@guido.python.org>

> > Why does <...> look better than [...]?  To me, <...> just reminds me
> > of XML, which is totally the wrong association.
> 
> I vote for << >>. The brackets and parens have too many meanings in 
> Python already. <<staticmethod, foobar(baz)>> looks more like French 
> than XML. ;)

<<...>> has the same practical problems as <...>: no automatic line
breaking, >> is ambiguous.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From nas-python at python.ca  Wed Mar 31 18:43:12 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Wed Mar 31 18:43:21 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <20040331234312.GA14993@mems-exchange.org>

On Wed, Mar 31, 2004 at 03:04:18PM -0500, Bob Ippolito wrote:
> On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
> >Please don't add any decorator syntax to Python, at least not yet.  All
> >of the proposals I have seen so far are, to be blunt, and in my opinion
> >of course, ugly and are getting uglier as the discussion ensues.

I agree with Michel.  The decorator syntax being discussed looks
ugly.  I think it would be okay if the set of valid decorations were
limited to 'classmethod', 'staticmethod' and maybe a few others.
Allowing more general expressions seems to be asking for abuse.

> Decorators solve a *huge* problem with the current syntax:
> 
> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(takes,
>         some, args, here):
>     pass
> someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = \
>     objc.selector(someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_,
>                   signature='some type signature')

I would be happier if there was an easier way for you to do what you
want _without_ introducing new syntax to the language.  For
example, what if '_' was bound to the last defined function?  You
could then do something like this:

  def someObjectiveCSelector_itsReallyLong_(takes, some, args, here):
      pass
  objc.selector(_, signature='some type signature')

That looks pretty nice and is even shorter to type than the proposed
syntax.

> Please understand that just because you haven't need them yet doesn't  
> make them worthless, ugly, etc.

I don't think Michel is saying they are worthless.  However, the
proposed syntax is highly contentious.  It would be good if there
was a short term solution that wouldn't require new syntax.  That
would give Guido and the Python community time to figure out the
best syntax.

  Neil

From bob at redivi.com  Wed Mar 31 19:08:44 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Mar 31 19:04:43 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <20040331234312.GA14993@mems-exchange.org>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<20040331234312.GA14993@mems-exchange.org>
Message-ID: <BDB89D90-8370-11D8-8201-000A95686CD8@redivi.com>


On Mar 31, 2004, at 6:43 PM, Neil Schemenauer wrote:

> On Wed, Mar 31, 2004 at 03:04:18PM -0500, Bob Ippolito wrote:
>> On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
>>> Please don't add any decorator syntax to Python, at least not yet.   
>>> All
>>> of the proposals I have seen so far are, to be blunt, and in my  
>>> opinion
>>> of course, ugly and are getting uglier as the discussion ensues.
>
> I agree with Michel.  The decorator syntax being discussed looks
> ugly.  I think it would be okay if the set of valid decorations were
> limited to 'classmethod', 'staticmethod' and maybe a few others.
> Allowing more general expressions seems to be asking for abuse.
>
>> Decorators solve a *huge* problem with the current syntax:
>>
>> def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(takes,
>>         some, args, here):
>>     pass
>> someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = \
>>     objc.selector(someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_,
>>                   signature='some type signature')
>
> I would be happier if there was an easier way for you to do what you
> want _without_ introducing new syntax to the language.  For
> example, what if '_' was bound to the last defined function?  You
> could then do something like this:
>
>   def someObjectiveCSelector_itsReallyLong_(takes, some, args, here):
>       pass
>   objc.selector(_, signature='some type signature')
>
> That looks pretty nice and is even shorter to type than the proposed
> syntax.

No, that is not correct.. it must rebind the name... it doesn't mutate  
the function, it returns a selector descriptor.  It's also easy to lose  
once you have functions that do things other than pass.  It really  
ought to be in close proximity to the def statement, if it's not just  
an extension to the def syntax.

-bob


From pedronis at bluewin.ch  Wed Mar 31 19:09:59 2004
From: pedronis at bluewin.ch (Samuele Pedroni)
Date: Wed Mar 31 19:05:01 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE015A0154@au3010avexu1.glob
	al.avaya.com>
Message-ID: <5.2.1.1.0.20040401015840.03371880@pop.bluewin.ch>


>
>Well, it is worrying me. Each special case is one more thing that adds a 
>burden to using Python. The fact that whitespace and comments are allowed 
>between the decorator and function definition is particularly worrying to 
>me - it can quite easily mask errors in code - particularly newbie errors:
>
>     [a]  # oops - I copied this from an interactive session and forgot to 
> modify it.
>
>     # This is a simple function
>     def func (args):
>         pass

I'm not invested into this in any way, but the variations

+[classmethod]
def f(cls):
   pass

or

-[classmethod]
def f(cls):
   pass

are syntactically valid today but, unlike a plain [...], correspond to
run-time errors.

/[classmethod]
def f(cls):
   pass

*[classmethod]
def f(cls):
   pass

even better are syntax errors but are really ugly.
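
(A quick check of the run-time-error claim, for the record -- the unary
operator fails as soon as the line executes, instead of being silently
thrown away like a bare list:)

    +[classmethod]      # raises TypeError for unary '+' on a list
    def f(cls):
        pass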

I could live with the +[...] form, as I can live

[...]
def f(...):

or

def f(...) [...]:

it's really a matter of "practicality beats purity" and a 
compromise/balance issue.

The limits of the parser aren't helping either in this case. 


From nas-python at python.ca  Wed Mar 31 19:15:31 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Wed Mar 31 19:15:39 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <BDB89D90-8370-11D8-8201-000A95686CD8@redivi.com>
References: <E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<20040331234312.GA14993@mems-exchange.org>
	<BDB89D90-8370-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <20040401001531.GA16367@mems-exchange.org>

On Wed, Mar 31, 2004 at 07:08:44PM -0500, Bob Ippolito wrote:
> On Mar 31, 2004, at 6:43 PM, Neil Schemenauer wrote:
> >  def someObjectiveCSelector_itsReallyLong_(takes, some, args, here):
> >      pass
> >  objc.selector(_, signature='some type signature')
> >
> >That looks pretty nice and is even shorter to type than the proposed
> >syntax.
> 
> No, that is not correct.. it must rebind the name... it doesn't mutate  
> the function, it returns a selector descriptor.

You can do that via sys._getframe().  Ugly but better, IMHO, than
adding half-baked syntax to the language.

  Neil

From pje at telecommunity.com  Wed Mar 31 19:16:06 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 19:16:14 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <20040331234312.GA14993@mems-exchange.org>
References: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
Message-ID: <5.1.1.6.0.20040331184627.01e86a10@telecommunity.com>

At 06:43 PM 3/31/04 -0500, Neil Schemenauer wrote:
>On Wed, Mar 31, 2004 at 03:04:18PM -0500, Bob Ippolito wrote:
> > On Mar 31, 2004, at 1:59 PM, Michel Pelletier wrote:
> > >Please don't add any decorator syntax to Python, at least not yet.  All
> > >of the proposals I have seen so far are, to be blunt, and in my opinion
> > >of course, ugly and are getting uglier as the discussion ensues.
>
>I agree with Michel.  The decorator syntax being discussed looks
>ugly.  I think it would be okay if the set of valid decorations were
>limited to 'classmethod', 'staticmethod' and maybe a few others.
>Allowing more general expressions seems to asking for abuse.

It sounds like a lot of people's "use" translates to your "abuse"; e.g. 
Bob's use cases would certainly have to be considered abuse, since they 
wouldn't be part of some fixed set defined by Python.


> > Decorators solve a *huge* problem with the current syntax:
> >
> > def someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_(takes,
> >         some, args, here):
> >     pass
> > someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = \
> >     objc.selector(someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_,
> >                   signature='some type signature')
>
>I would be happer if there was an easier way for you to do what you
>want _without_ introducing new syntax to that language.  For
>example, what if '_' was bound to the last defined function?  You
>could then do something like this:
>
>   def someObjectiveCSelector_itsReallyLong_(takes, some, args, here):
>       pass
>   objc.selector(_, signature='some type signature')
>
>That looks pretty nice and is even shorter to type than the proposed
>syntax.

And it doesn't do what's needed, since you left out the rebinding of the 
function name.  That is, you would have to do:

someObjectiveCSelector_yesTheyCanBeThisLong_sometimesLonger_ = \
    objc.selector(_, signature='some type signature')

So only one typing of the name is saved.

Finally, it doesn't address the issue of wanting to call these things out 
*ahead* of the function's code, as part of the definition header, so that a 
person who is reading for an overview can notice it and take it in.


>I don't think Michel is saying they are worthless.  However, the
>proposed syntax is highly contentious.  It would be good if there
>was a short term solution that wouldn't require new syntax.

There's no such thing, since it's specifically a syntactical change that's 
desired.  Obviously it is *possible* to use decorators today.  The desired 
change is that they be easy for someone to see when they are skimming 
through code.

Ugly or not, *all* of the proposed syntaxes that don't involve creating an 
extra suite have the practical effect of improving the semantic readability 
of decorator use over today's Python.  In addition, they also have the 
practical effect of making decorator use more convenient, since the two 
extra repetitions of the function or class name go away.

I suspect this is part of why there is such disagreement on the subject of 
decorators: people who make heavy use of them today know exactly what 
problems today's syntax has from both the readability and writability 
viewpoints.  Whereas people who do not use them don't get why limiting 
their use to fixed subsets, or single decorators, or any number of other 
ideas just negates the usefulness of having a syntax in the first 
place.  Or, like Michel, they don't see a point to having a syntax at 
all.  Well, that's certainly understandable, and if light users' objection 
was "we're going to have to read your stinkin' decorators so we want 'em 
really visible", I would totally understand and support that direction.

But, instead, a lot of the syntax directions and arbitrary restrictions and 
"why do we need a syntax" aren't really helpful for heavy users of 
decorators.  We know what we want, and MWH already implemented a patch some 
time ago that does exactly what we want.  I think most of us are willing to 
compromise to Guido's syntax if that's what it takes to get a syntax that 
meets our primary goals, even if leery about the potential 
consequences.  (Because even though most people learn by example, Python's 
appearance of having consistent and sensible rules is what initially hooks 
many of us on using it.)

But, there's really no point in arguing that no decorator syntax is 
needed.  That would be like arguing that there is no point to having 'for' 
loops because you can emulate them with 'while'!  Given that Python 2.2 
introduced the usefulness of decorators, and 2.3 was a "no new syntax" 
release, Python 2.4 is the logical release to add a decorator syntax.  And 
now, therefore, is the time to get a syntax approved.

Personally, if I'd realized that the original PEP 318 syntax 
(last-before-colon) was so controversial, I'd have been campaigning for it 
a lot sooner than now.  I somehow assumed that it was a "done deal" as far 
as having tentative BDFL support from way back.


From pje at telecommunity.com  Wed Mar 31 19:23:24 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 19:23:29 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 107
In-Reply-To: <1080775114.2043.159.camel@heinlein>
References: <E1B8mhR-0000as-Gw@mail.python.org>
	<E1B8mhR-0000as-Gw@mail.python.org>
Message-ID: <5.1.1.6.0.20040331191627.01f10870@telecommunity.com>

At 03:18 PM 3/31/04 -0800, Michel Pelletier wrote:
>On Wed, 2004-03-31 at 12:52, python-dev-request@python.org wrote:
> > Message: 6
> > Date: Wed, 31 Mar 2004 12:49:14 -0800
> > From: Guido van Rossum <guido@python.org>
> > Subject: Re: [Python-Dev] PEP 318: Decorators last before colon
> > To: "Phillip J. Eby" <pje@telecommunity.com>
> > Cc: python-dev@python.org
> > Message-ID: <200403312049.i2VKnEi14444@guido.python.org>
> >
> > > There appears to be a strong correlation between people who have
> > > specific use cases for decorators, and the people who want the
> > > last-before-colon syntax.  Whereas, people who have few use cases
> > > (or don't like decorators at all) appear to favor syntaxes that move
> > > decorators earlier.  Whether that means the "earlier" syntaxes are
> > > better or worse, I don't know.  <0.5 wink>
> >
> > Maybe the practitioners are so eager to have something usable that
> > they aren't swayed as much by esthetics.
>
>The current system with no syntax is equally usable, what's gained
>functionally?

It's not equally usable.  It is 1) considerably more verbose, particularly 
in Bob's use case of long function names, and 2) hard to spot by a reader 
who's skimming to get an overview of a class or module.


>Another discussion point occurred to me regarding interfaces and
>projects that use them heavily like Zope, Twisted, PEAK etc.  Has the
>decorator syntax as proposed been evaluated in the light of these
>interfaces (and any future native language support for them), whose
>methods have no body to interpose between the definition and decorators
>as they exist now?  I've seen the "Large Body" argument used several
>times in defense of the decorator syntax being before or above the
>definition.

I have seen virtually no use of decorators on interface methods in the 
frameworks you mention.  In fact, I can't recall ever having seen a single 
use of decorators on interface methods.  That's probably simply because the 
cost of using decorators is too high to waste on anything that's not part 
of the implementation, and the documentary value of decorators using 
today's syntax is poor compared to adding text to the method's docstring.


From fumanchu at amor.org  Wed Mar 31 19:30:31 2004
From: fumanchu at amor.org (Robert Brewer)
Date: Wed Mar 31 19:32:32 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
Message-ID: <DE1CF2B4FEC4A342BF62B6B2B334601E561F91@opus.amorhq.net>

Ka-Ping Yee wrote:
> This discussion is turning in a direction that alarms me.
> 
> Putting the [decorator] on a separate line before the function
> changes the stakes entirely.  It sets aside real functional issues
> in favour of aesthetics.  Being enamoured with the way this syntax
> *looks* does not justify functionally *breaking* other things in
> the implementation to achieve it.
> 
> Consider the arguments in favour:
> 
>     1.  Decorators appear first.
>     2.  Function signature and body remain an intact unit.
>     3.  Decorators can be placed close to the function name.
> 
> These are all aesthetic arguments: they boil down to "the appearance
> is more logical".  Similar arguments have been made about the other
> proposals.
> 
> Consider the arguments against:
> 
>     1.  Previously valid code has new semantics.
>     2.  Parse tree doesn't reflect semantics.
>     3.  Inconsistent interactive and non-interactive behaviour. [*]
>     4.  Decorators can be arbitrarily far away from the function name.

Agreed, although I won't use such alarmist phrases. ;)

Notice that the form:

decorate:
    dec1
    dec2
def foo(arg1, arg2, ...):
    pass

gives all of the benefits you mentioned without incurring any of the
"arguments against", except possibly #4. The PEP itself gives the
"arguments against" this form as:

1. "The function definition is not nested within the using: block making
it impossible to tell which objects following the block will be
decorated"

Somewhat of an oblique argument, to which the simple answer is: there
must be a def right after it, in exactly the same manner that a try:
must be followed by an except: or finally:

2. An argument which only applies if foo is inside the decorate block. I
don't advocate that.
3. Another.
4. "Finally, it would require the introduction of a new keyword."

Yup. Not a bad thing for such a powerful tool, IMO.


Robert Brewer
MIS
Amor Ministries
fumanchu@amor.org

From nas-python at python.ca  Wed Mar 31 19:39:13 2004
From: nas-python at python.ca (Neil Schemenauer)
Date: Wed Mar 31 19:39:18 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <5.1.1.6.0.20040331184627.01e86a10@telecommunity.com>
References: <97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<5.1.1.6.0.20040331184627.01e86a10@telecommunity.com>
Message-ID: <20040401003913.GB16367@mems-exchange.org>

On Wed, Mar 31, 2004 at 07:16:06PM -0500, Phillip J. Eby wrote:
> It sounds like a lot of people's "use" translates to your "abuse";
> e.g.  Bob's use cases would certainly have to be considered abuse,
> since they wouldn't be part of some fixed set defined by Python.

Yup.  It seems to me that there are a number of different use cases
and they all want to try to use the same new syntax.

> [Binding _ to the last function definition] doesn't do what's
> needed, since you left out the rebinding of the function name.

That's possible without new syntax.  Some example code:

    import sys

    def myclassmethod(func):
        # Look one frame up to find the namespace (class body or module)
        # in which 'func' was just defined; this relies on that frame
        # exposing a real, writable dict as f_locals.
        frame = sys._getframe(1)
        name = func.func_name
        if name in frame.f_locals:
            namespace = frame.f_locals
        elif name in frame.f_globals:
            namespace = frame.f_globals
        else:
            raise NameError, 'cannot find %r' % name
        # Rebind the name to the wrapped version in place.
        namespace[name] = classmethod(func)

    class A:
        def something():
            pass
        myclassmethod(something)
        print something

It's not pretty but it might prevent people from developing RSI
while Guido works out what new syntax is best.

> Finally, it doesn't address the issue of wanting to call these
> things out *ahead* of the function's code, as part of the
> definition header, so that a person who is reading for an overview
> can notice it and take it in.

That's true.

  Neil

From michel at dialnetwork.com  Wed Mar 31 19:50:32 2004
From: michel at dialnetwork.com (Michel Pelletier)
Date: Wed Mar 31 19:59:49 2004
Subject: [Python-Dev] Re: Python-Dev Digest, Vol 8, Issue 107
In-Reply-To: <5.1.1.6.0.20040331191627.01f10870@telecommunity.com>
References: <E1B8mhR-0000as-Gw@mail.python.org>
	<E1B8mhR-0000as-Gw@mail.python.org>
	<5.1.1.6.0.20040331191627.01f10870@telecommunity.com>
Message-ID: <1080780631.2043.194.camel@heinlein>

On Wed, 2004-03-31 at 16:23, Phillip J. Eby wrote:

> >The current system with no syntax is equally usable, what's gained
> >functionally?
> 
> It's not equally usable.  It is 1) considerably more verbose, particularly 
> in Bob's use case of long function names, and 2) hard to spot by a reader 
> who's skimming to get an overview of a class or module.

Both true.  Perhaps I should have said functionally equal.  The proposed
syntax is more usable from a syntax perspective, I agree, but offers no
more "use" once you have the object in hand, so to speak.  The
aesthetics is another matter I've already voiced my opinion on.

> >Another discussion point occurred to me regarding interfaces and
> >projects that use them heavily like Zope, Twisted, PEAK etc.  Has the
> >decorator syntax as proposed been evaluated in the light of these
> >interfaces (and any future native language support for them), whose
> >methods have no body to interpose between the definition and decorators
> >as they exist now?  I've seen the "Large Body" argument used several
> >times in defense of the decorator syntax being before or above the
> >definition.
> 
> I have seen virtually no use of decorators on interface methods in the 
> frameworks you mention.  In fact, I can't recall ever having seen a single 
> use of decorators on interface methods.  That's probably simply because the 
> cost of using decorators is too high to waste on anything that's not part 
> of the implementation,

I think there must be *some* decorators valid for interfaces.  What
about proposed decorators like "synchronized"?  Are these part of the
interface?  Or something equivalent to Java's "throws", arguably a
decoration and arguably part of a method's interface.  What about
decorations that can *never* be used in an interface, like
"classmethod"?  Would an error be raised if you tried to decorate an
interface method with a classmethod decorator? 
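
Just to make the question concrete, here is one entirely hypothetical
way such a marker might be recorded today on a body-less interface
method (the names are made up, not any framework's real API):

    def synchronized(func):
        # purely hypothetical marker -- records a requirement on the method
        func.synchronized = True
        return func

    class ISomething:
        def process(data):
            """Process data; implementations must hold a lock."""
        process = synchronized(process)

Is that marker part of ISomething's contract, or an implementation
detail that leaked into it?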

I don't think any of these things refute or validate the need for
special syntax, I just wonder if it's been thought about enough, and
that goes way beyond the syntax.

-Michel


From s.percivall at chello.se  Wed Mar 31 20:30:15 2004
From: s.percivall at chello.se (Simon Percivall)
Date: Wed Mar 31 20:30:45 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <200403312332.i2VNWpe15013@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<c4dbo8$ddh$1@sea.gmane.org>
	<2CF5B513-82EB-11D8-BD7A-0003934AD54A@chello.se>
	<200403311542.i2VFgWf13002@guido.python.org>
	<406B4FD2.1040506@prescod.net>
	<200403312332.i2VNWpe15013@guido.python.org>
Message-ID: <20DF71E4-837C-11D8-B533-0003934AD54A@chello.se>

On 2004-04-01, at 01.32, Guido van Rossum wrote:
>>> Why does <...> look better than [...]?  To me, <...> just reminds me
>>> of XML, which is totally the wrong association.
>>
>> I vote for << >>. The brackets and parens have too many meanings in
>> Python already. <<staticmethod, foobar(baz)>> looks more like French
>> than XML. ;)
>
> <<...>> has the same practical problems as <...>: no automatic line
> breaking, >> is ambiguous.

So is automatic line breaking really necessary? Why not just wrap the 
actual decorators inside parentheses if you need to? Like:

<<(long_decorator(attrs={'author':'somebody',
                          'frequency':'daily'}),
    another_decorator,
    one_more_decorator)>>

<(long_decorator(attr={'author':'somebody',
                        'frequency':'daily'}),
    another_decorator,
    one_more_decorator)>

Well ... it's not pretty.

As for the ambiguity: If '[...]' can be handled for that special case, 
can't '<...>' or '<<...>>' also be handled? They would be much less 
ambiguous for the human interpreter.


From skip at pobox.com  Wed Mar 31 21:42:39 2004
From: skip at pobox.com (Skip Montanaro)
Date: Wed Mar 31 21:42:54 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <338366A6D2E2CA4C9DAEAE652E12A1DE015A0154@au3010avexu1.global.avaya.com>
References: <338366A6D2E2CA4C9DAEAE652E12A1DE015A0154@au3010avexu1.global.avaya.com>
Message-ID: <16491.33183.102141.327504@montanaro.dyndns.org>


On the advisability of

    [decorator]
    def func():
        pass

vs. 

    def func() [decorator]:
        pass

I'm beginning to think Guido has staged an elaborate April Fool's joke.

<hopeful wink>

Skip

From jeremy at alum.mit.edu  Wed Mar 31 21:58:44 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed Mar 31 22:00:54 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <5.1.1.6.0.20040331155927.0289ae70@telecommunity.com>
References: <Your message of "Wed, 31 Mar 2004 12:32:35 EST."
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
	<Your message of "Tue, 30 Mar 2004 19:17:14 EST."
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<Your message of "Tue, 30 Mar 2004 18:17:55 EST."
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<Your message of "Wed,
	31 Mar 2004 08:27:19 +1000." <20040330222718.GJ7833@frobozz>
	<Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
	<20040330222718.GJ7833@frobozz>
	<5.1.1.6.0.20040330181110.01ea7880@telecommunity.com>
	<5.1.1.6.0.20040330185626.024903c0@telecommunity.com>
	<5.1.1.6.0.20040331115806.02d60a60@telecommunity.com>
	<5.1.1.6.0.20040331155927.0289ae70@telecommunity.com>
Message-ID: <1080788324.22892.80.camel@localhost.localdomain>

On Wed, 2004-03-31 at 16:12, Phillip J. Eby wrote:
> At 12:49 PM 3/31/04 -0800, Guido van Rossum wrote:
> >I hadn't thought of those, but the new situation can't be worse than
> >before (decorations following the 'def').  Usually these tools either
> >use some ad-hoc regex-based parsing, which shouldn't have any problems
> >picking out at least the typical forms, or they (hopefully) use the
> >AST produced by the compiler package -- it should be possible for that
> >package to modify the parse tree so that decorators appear as part of
> >the Function node.
> 
> Good point.  Will this be true for the AST branch's AST model as well?

If decorators get added, that's my plan.

Jeremy



From greg at cosc.canterbury.ac.nz  Wed Mar 31 22:16:01 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 22:16:45 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <008401c4173a$82e7fc40$6402a8c0@arkdesktop>
Message-ID: <200404010316.i313G1Do016951@cosc353.cosc.canterbury.ac.nz>

Andrew Koenig <ark-mlist@att.net>:

> On the other hand, it is pragmatically more convenient when an
> implementation prints the values of floating-point literals that were
> entered with a small number of significant digits using that same
> number of significant digits.

But "significant digits" is a concept that exists only
in the mind of the user. How is the implementation to
know how many of the digits are significant, or how
many digits it was originally entered with?

And what about numbers that result from a calculation,
and weren't "entered" at all?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From tjreedy at udel.edu  Wed Mar 31 22:19:53 2004
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed Mar 31 22:19:51 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
References: <E1B8j8W-0006Mm-Kj@mail.python.org><406B0CC9.80302@Acm.Org>	<200403311843.i2VIhaU13904@guido.python.org><406B14C1.6080807@livinglogic.de><200403311922.i2VJM6u14105@guido.python.org>
	<c4fk3i$ujb$1@sea.gmane.org>
Message-ID: <c4g1oh$qns$1@sea.gmane.org>


"Mike Thompson" <mike.spam.filter@day8.com.au> wrote in message
news:c4fk3i$ujb$1@sea.gmane.org...
>      as: [classmethod]
>      def func(args)

I think I like this better than the bare list as a prefix.  as: suites
could be explained as an optional prefix suite for functions just as else:
suites are optional suffix suites for if/while/for.   Hmmm...

Terry J. Reedy




From pje at telecommunity.com  Wed Mar 31 22:32:20 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 22:26:34 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <20040401003913.GB16367@mems-exchange.org>
References: <5.1.1.6.0.20040331184627.01e86a10@telecommunity.com>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<E1B8j8V-0006Mm-FD@mail.python.org>
	<1080759573.2043.49.camel@heinlein>
	<97B52378-834E-11D8-8201-000A95686CD8@redivi.com>
	<5.1.1.6.0.20040331184627.01e86a10@telecommunity.com>
Message-ID: <5.1.0.14.0.20040331222524.0278dd40@mail.telecommunity.com>

At 07:39 PM 3/31/04 -0500, Neil Schemenauer wrote:
>On Wed, Mar 31, 2004 at 07:16:06PM -0500, Phillip J. Eby wrote:
> > [Binding _ to the last function definition] doesn't do what's
> > needed, since you left out the rebinding of the function name.
>
>That's possible without new syntax.  Some example code:
>
>[snip]
 >
>It's not pretty but it might prevent people from developing RSI
>while Guido works out what new syntax is best.

Actually, one use case for the syntax that isn't possible now, even with 
frame hacking like your example, is generic functions.  That is, I'd like 
to do something like:

def foo(bar,baz) [generic(int,int)]:
     # code to foo two integers

def foo(bar,baz) [generic(str,int)]:
     # code to foo a string and an integer

etc.  The idea here is that the object returned by 'generic()' looks to see 
if the existing definition of 'foo' is a generic function, and if so, adds 
the current function to the generic function's definition, and then returns 
the generic function.

This use case isn't possible with *any* syntax today, without renaming 
either the generic function or the individual functions, plus twice as many 
repetitions of the names.
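
To make that concrete, here is a rough sketch of what I have in mind
(nothing from PEAK; the GenericFunction class and the frame inspection
are just assumptions of mine about one way to do it):

    import sys

    class GenericFunction:
        """Dispatches to an implementation chosen by the argument types."""
        def __init__(self):
            self.implementations = {}          # tuple of types -> function

        def add(self, types, func):
            self.implementations[types] = func

        def __call__(self, *args):
            impl = self.implementations.get(tuple(map(type, args)))
            if impl is None:
                raise TypeError('no implementation for %r' % (args,))
            return impl(*args)

    def generic(*types):
        def decorate(func):
            # Under the proposed syntax, the name being defined is still
            # bound to its previous value when this runs, so an existing
            # generic function can be found, extended, and returned to be
            # rebound to the name.  (Frame inspection is just one way to
            # do the lookup -- an assumption on my part.)
            existing = sys._getframe(1).f_locals.get(func.__name__)
            if not isinstance(existing, GenericFunction):
                existing = GenericFunction()
            existing.add(types, func)
            return existing
        return decorate

With the proposed syntax, the two 'foo' definitions above would simply
accumulate into one generic function.  Today, by contrast, you end up
writing something like:

    foo = GenericFunction()

    def foo_int_int(bar, baz):
        return bar + baz                   # code to foo two integers
    foo.add((int, int), foo_int_int)

    def foo_str_int(bar, baz):
        return bar * baz                   # code to foo a string and an integer
    foo.add((str, int), foo_str_int)

    foo(2, 3)                              # 5
    foo('ab', 3)                           # 'ababab'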


From pje at telecommunity.com  Wed Mar 31 22:39:05 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Mar 31 22:33:15 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <c4g1oh$qns$1@sea.gmane.org>
References: <E1B8j8W-0006Mm-Kj@mail.python.org> <406B0CC9.80302@Acm.Org>
	<200403311843.i2VIhaU13904@guido.python.org>
	<406B14C1.6080807@livinglogic.de>
	<200403311922.i2VJM6u14105@guido.python.org>
	<c4fk3i$ujb$1@sea.gmane.org>
Message-ID: <5.1.0.14.0.20040331223313.03868a60@mail.telecommunity.com>

At 10:19 PM 3/31/04 -0500, Terry Reedy wrote:

>"Mike Thompson" <mike.spam.filter@day8.com.au> wrote in message
>news:c4fk3i$ujb$1@sea.gmane.org...
> >      as: [classmethod]
> >      def func(args)
>
>I think I like this better than the bare list as a prefix.  as: suites
>could be explained as an optional prefix suite for functions just as else:
>suites are optional suffix suites for if/while/for.   Hmmm...

'as' is not currently a keyword, and making it so would break any programs 
that use it as a name.

On the bright side, if it *were* made a keyword, then this would be possible:

as [classmethod] def func(*args):
     pass

Really, of the current "before the def" proposals, I think I like Samuele's:

*[classmethod]
def func(*args):
     pass

approach the best.  The '*' seems to say "special note, pay attention".  :)


From tim.one at comcast.net  Wed Mar 31 22:36:00 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 31 22:36:17 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <200404010316.i313G1Do016951@cosc353.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENLJOAB.tim.one@comcast.net>

[Andrew Koenig]
>> On the other hand, it is pragmatically more convenient when an
>> implementation prints the values of floating-point literals that were
>> entered with a small number of significant digits using that same
>> number of significant digits.

[Greg Ewing]
> But "significant digits" is a concept that exists only
> in the mind of the user. How is the implementation to
> know how many of the digits are significant, or how
> many digits it was originally entered with?
>
> And what about numbers that result from a calculation,
> and weren't "entered" at all?

The Decimal module has answers to such questions, following the proposed IBM
decimal standard, which in turn follows long-time REXX practice.  The
representation is not normalized, and because of that is able to keep track
of "significant" trailing zeroes.  So, e.g., decimal 2.7 - 1.7 yields
decimal 1.0 (neither decimal 1. nor decimal 1.00), while decimal 2.75 - 1.65
yields decimal 1.10, and 1.0 and 1.10 have different internal
representations than decimal 1 and 1.1, or 1.00 and 1.100.  "The rules" are
spelled out in detail in the spec:

    http://www2.hursley.ibm.com/decimal/
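
In terms of the Decimal class itself (a quick sketch, assuming the
string constructor and str() behaviour of the module as it's shaping
up):

    from decimal import Decimal

    Decimal('2.7') - Decimal('1.7')      # Decimal('1.0') -- not 1, not 1.00
    Decimal('2.75') - Decimal('1.65')    # Decimal('1.10')

    # 1.0 and 1.00 are equal in value but distinct in representation:
    Decimal('1.0') == Decimal('1.00')    # True
    str(Decimal('1.0'))                  # '1.0'
    str(Decimal('1.00'))                 # '1.00'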


From barry at python.org  Wed Mar 31 22:41:00 2004
From: barry at python.org (Barry Warsaw)
Date: Wed Mar 31 22:41:01 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403302121.i2ULLIM09840@guido.python.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<1080674306.12643.76.camel@localhost.localdomain>
	<200403302121.i2ULLIM09840@guido.python.org>
Message-ID: <1080790860.21196.23.camel@geddy.wooz.org>

On Tue, 2004-03-30 at 16:21, Guido van Rossum wrote:
> > Another possibility that has been suggested is
> > 
> > [decorator] 
> > def func(arg, arg):
> 
> And one that I currently favor.  I'm out of bandwidth to participate
> on a msg-by-msg basis, but perhaps folks can see if they can come to
> terms with this solution?

I don't like it.  It already has a meaning (albeit fairly useless) and
it doesn't seem obvious from just looking at it that the decorator is
connected to the following method.  It doesn't taste Pythonic to me.

-Barry



From tim.one at comcast.net  Wed Mar 31 22:51:13 2004
From: tim.one at comcast.net (Tim Peters)
Date: Wed Mar 31 22:51:29 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <002701c4175a$d93f0d20$6402a8c0@arkdesktop>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENNJOAB.tim.one@comcast.net>

[Tim]
>> Binary fp loses in these common cases *just because* the true inputs
>> can't be represented, and the number printed at the end isn't even
>> the true result of approximately adding the approximated inputs.
>> Decimal easily avoids all of that.

[Andrew Koenig]
> Well, some of it.  It still doesn't avoid 1E50 + 1E-50 == 1E50, for
> example.

It's not common for newbies to use exponential notation, and neither is it
common "for most everyday applications of decimal arithmetic" (which I was
talking about, in part of the context that got snipped) to have inputs
spanning 100 orders of magnitude.

If you know that *your* app has inputs spanning 100 orders of magnitude, and
you care about every digit, then set Decimal precision to something
exceeding 100 digits, and your sample addition will be exact (and then 1E50
+ 1E-50 > 1E50, and exceeds the RHS by exactly 1E-50).  That's what the
"easily" in "easily avoids" means -- the ability to boost precision is very
powerful!
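
A sketch of that precision knob (again assuming the context interface
the module is growing):

    from decimal import Decimal, getcontext

    getcontext().prec = 28                    # modest precision: the tiny
    Decimal('1E50') + Decimal('1E-50')        # term is rounded away, so the
                                              # sum equals Decimal('1E+50')

    getcontext().prec = 110                   # more than the 101 digits the
    big = Decimal('1E50') + Decimal('1E-50')  # exact sum needs
    big > Decimal('1E50')                     # True
    big - Decimal('1E50')                     # Decimal('1E-50'), exactly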

> ...
> Then how about giving a warning for every floating-point literal that
> cannot be exactly represented in binary?

Well, I'm sure that pissing off everyone all the time would be a significant
step backwards.

BTW, so long as Python relies on C libraries for float<->string conversion,
it also has no way to know which floating-point literals can't be exactly
represented anyway.


From barry at python.org  Wed Mar 31 23:03:24 2004
From: barry at python.org (Barry Warsaw)
Date: Wed Mar 31 23:03:25 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <Pine.LNX.4.58.0403311602480.18028@server1.LFW.org>
References: <000101c4174b$5fdb2540$6402a8c0@arkdesktop>
	<200403311841.i2VIfTG13884@guido.python.org>
	<Pine.LNX.4.58.0403311602480.18028@server1.LFW.org>
Message-ID: <1080792204.21196.37.camel@geddy.wooz.org>

On Wed, 2004-03-31 at 17:44, Ka-Ping Yee wrote:

> This discussion is turning in a direction that alarms me.

Thank you Ka-Ping, well put.

-Barry



From bac at OCF.Berkeley.EDU  Wed Mar 31 23:03:39 2004
From: bac at OCF.Berkeley.EDU (Brett C.)
Date: Wed Mar 31 23:04:08 2004
Subject: [Python-Dev] python-dev Summary for 2004-03-01 through	2004-03-15
	[rough draft]
In-Reply-To: <16491.3836.631565.430107@montanaro.dyndns.org>
References: <Pine.SOL.4.58.0403200747340.23908@death.OCF.Berkeley.EDU>	<2mptavlr5r.fsf@starship.python.net>
	<16491.3836.631565.430107@montanaro.dyndns.org>
Message-ID: <406B949B.6030604@ocf.berkeley.edu>

Skip Montanaro wrote:

>     >> Ever since I first had to type Martin v. L|o_diaeresis|wis' name, I
>     >> have had issues with Unicode in the summary.  
> 
>     mwh> I take it you're not /all that/ interested in learning how to do
>     mwh> real time spelling in emacs? <wink>
> 
> Michael,
> 
> Brett went to UC Berkeley.  I'm sure they don't even allow Emacs to be
> installed on their machines. <wink>
> 

=)  The joke of it all is that the intro CS course teaches you how to 
program in Emacs.  At least they use Solaris, so some heritage is there.

-Brett

From aahz at pythoncraft.com  Wed Mar 31 23:10:05 2004
From: aahz at pythoncraft.com (Aahz)
Date: Wed Mar 31 23:10:12 2004
Subject: [Python-Dev] Re: PEP 318: Decorators last before colon
In-Reply-To: <Pine.LNX.4.58.0403311602480.18028@server1.LFW.org>
References: <000101c4174b$5fdb2540$6402a8c0@arkdesktop>
	<200403311841.i2VIfTG13884@guido.python.org>
	<Pine.LNX.4.58.0403311602480.18028@server1.LFW.org>
Message-ID: <20040401041004.GB25782@panix.com>

On Wed, Mar 31, 2004, Ka-Ping Yee wrote:
>
> Special cases aren't special enough to break the rules.
> 
> Please, i implore you, consider a different option on this one.

+1
-- 
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"usenet imitates usenet"  --Darkhawk

From greg at cosc.canterbury.ac.nz  Wed Mar 31 23:19:56 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 23:20:12 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <000501c4174b$60f93f70$6402a8c0@arkdesktop>
Message-ID: <200404010419.i314JuV8017044@cosc353.cosc.canterbury.ac.nz>

Andrew Koenig <ark-mlist@att.net>:

> Decimal floating-point has almost all the pitfalls of binary
> floating-point, yet I do not see anyone arguing against decimal
> floating-point on the basis that it makes the pitfalls less
> apparent.

But they're not the pitfalls at issue here. The pitfalls at
issue are the ones due to binary floating point behaving
*differently* from decimal floating point.

Most people's mental model of arithmetic, including floating
point, works in decimal. They can reason about it based on
their experience with pocket calculators. They don't have
any experience with binary floating point, though, so any
additional oddities due to that are truly surprising and
mysterious to them.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 31 23:24:12 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 23:24:31 2004
Subject: [Python-Dev] Expert floats
In-Reply-To: <000501c4174b$60f93f70$6402a8c0@arkdesktop>
Message-ID: <200404010424.i314OCLf017051@cosc353.cosc.canterbury.ac.nz>

Andrew Koenig <ark-mlist@att.net>:

> Python gives me none of these, and instead gives me something else
> entirely that is almost never what I would like to see, given the
> choice.

Er, you do realise this only happens when the number
pops out in the interactive interpreter, or you use
repr(), don't you?

If you convert it with str(), or print it, you get
something much more like what you seem to want.
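
For example, with a 2.3-era interpreter (repr() uses 17 significant
digits, str() uses 12):

    >>> 0.1
    0.10000000000000001
    >>> print 0.1
    0.1
    >>> str(0.1)
    '0.1'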

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From greg at cosc.canterbury.ac.nz  Wed Mar 31 23:30:16 2004
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed Mar 31 23:30:22 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200403311826.i2VIQBc13773@guido.python.org>
Message-ID: <200404010430.i314UGCj017063@cosc353.cosc.canterbury.ac.nz>

Guido:

> This patch (deco.diff) patches compile.c to recognize
> the following form of decorators:
> 
> [list_of_expressions]
> def func(args):
>     ...

I need a reality check here. Are you saying *this* is
what you currently favour?

I hate this. It's unspeakably horrible. Please tell me
I'm having a nightmare...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

From fdrake at acm.org  Wed Mar 31 23:30:11 2004
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed Mar 31 23:30:36 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <1080790860.21196.23.camel@geddy.wooz.org>
References: <Pine.LNX.4.58.0403300136280.18028@server1.LFW.org>
	<200403302121.i2ULLIM09840@guido.python.org>
	<1080790860.21196.23.camel@geddy.wooz.org>
Message-ID: <200403312330.11544.fdrake@acm.org>

Regarding

    [decorator]
    def func(arg, arg):
        # stuff...

On Wednesday 31 March 2004 10:41 pm, Barry Warsaw wrote:
 > I don't like it.  It already has a meaning (albeit fairly useless) and
 > it doesn't seem obvious from just looking at it that the decorator is
 > connected to the following method.  It doesn't taste Pythonic to me.

Whether or not we're arbiters of what's Pythonic, this syntax is quite sad, 
though I'll grant that it's better than variations along the lines of

    decorate:
        decorator
    def func(arg, arg):
        # stuff...

I think Phillip Eby's observation that people who want to use decorators want 
something different is quite telling.  I'm with Phillip in preferring

    def func(arg, arg) [decorator]:
        # stuff...


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Zope Corporation


From jeremy at alum.mit.edu  Wed Mar 31 23:47:06 2004
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed Mar 31 23:48:59 2004
Subject: [Python-Dev] PEP 318: Decorators last before colon
In-Reply-To: <200404010430.i314UGCj017063@cosc353.cosc.canterbury.ac.nz>
References: <200404010430.i314UGCj017063@cosc353.cosc.canterbury.ac.nz>
Message-ID: <1080794825.22892.113.camel@localhost.localdomain>

On Wed, 2004-03-31 at 23:30, Greg Ewing wrote:
> Guido:
> 
> > This patch (deco.diff) patches compile.c to recognize
> > the following form of decorators:
> > 
> > [list_of_expressions]
> > def func(args):
> >     ...
> 
> I need a reality check here. Are you saying *this* is
> what you currently favour?
> 
> I hate this. It's unspeakably horrible. Please tell me
> I'm having a nightmare...

There's got to be a better way for people to share their preference on
matters like these.  Maybe we should create a Wiki page where people can
sign up for or against any particular syntax.  It would be easier to
keep track of than trying to read all the messages in this thread.

Jeremy

PS Unspeakably horrible is a minor objection on the Internet.  It's not
really bad unless it's untype-ably bad.